#aihype

5 posts · 3 participants · 0 posts today

"What I believe is happening is that reporters are taking OpenAI's rapid growth in revenue from 2023 to 2024 (from tens of millions a month at the start of 2023 to $300 million in August 2024) to mean that the company will always effectively double or triple revenue every single year forever, with their evidence being "OpenAI has projected this will be the case."

It's bullshit! I'm sorry! As I wrote before, OpenAI effectively is the generative AI industry, and nothing about the rest of the generative AI industry suggests that the revenue exists to sustain these ridiculous, obscene and fantastical projections. Believing this — and yes, reporting it objectively is both endorsing and believing these numbers — is engaging in childlike logic, where you take one event (OpenAI's revenue grew 1700% from 2023 to 2024! Wow!) to mean another will take place (OpenAI will continue to double revenue literally every other year! Wow!), consciously ignoring difficult questions such as "how?" and "what's the total addressable market of Large Language Model subscriptions exactly?" and "how does this company even survive when it "expects the costs of inference to triple this year to $6 billion alone"?"
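
A back-of-the-envelope sketch of what the quoted projection implies, in Python. The $300 million/month figure is from the post; the $1 trillion total-addressable-market ceiling is my own illustrative guess, not a number from the article:

# What "doubles revenue every single year forever" implies, starting from
# the ~$3.6B/year run rate suggested by the quoted $300M/month (August 2024).
# The $1T market ceiling below is an illustrative assumption, not data.
annual_revenue = 3.6e9
tam_guess = 1e12

year = 2024
while annual_revenue < tam_guess:
    year += 1
    annual_revenue *= 2  # the perpetual-doubling assumption
    print(f"{year}: ${annual_revenue / 1e9:,.0f}B")
# Under that assumption the company passes $1 trillion/year before 2034,
# which is the "how?" and "what's the TAM?" question in numeric form.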

wheresyoured.at/reality-check/

Ed Zitron's Where's Your Ed At · Reality Check
I'm sick and god-damn tired of this! I have written tens of thousands of words about this and still, to this day, people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws.

"On a related note, the failures of ML systems are silent. There are no glaring red lights to signal a flawed prediction. However, telling managers that they are dealing with a predictive technology that outputs probabilities, leading to many invisible failures, will not yield any consulting gigs. Watching the AI magic vanish tends to kill the mood in boardrooms.

My goal with this essay is not to dampen managers’ curiosity about ML. These are fascinating technologies with the potential to do many interesting things. Unfortunately, this curiosity is often exploited by external consultancies, “AI experts”, and other false prophets out there. Part of the problem lies in comparisons to “digital-first” organizations. Companies like Airbnb, Netflix, and Uber are frequently cited as successful examples of ML in action. These companies rely heavily on predictive algorithms in their daily operations, but unlike traditional businesses, they do not depend on physical assets; instead, they operate on digital platforms. Traditional organizations trying to replicate these companies are likely to fail because digital-first companies use fundamentally different business models and assets than conventional organizations.

Rather than simply advising managers to ask fundamental questions about their intent and motivations for using machine learning (such as, “Do you have a specific problem to solve with ML?” and “What would the return on investment be?”), I prefer to frame this critique to be within the broader societal discourse surrounding AI (see, e.g., Lindgren 2023). Even if an organization can overcome the technical, cultural, and business barriers to implementing ML, the crucial question remains: is it even worth it?"
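
The "silent failures" point is easy to make concrete. A minimal sketch, with an invented predict_churn function standing in for any fitted model (nothing here is from the paper):

import math

def predict_churn(customer):
    # Stand-in for any fitted model: it always returns a probability,
    # never an exception, no matter how wrong it is.
    score = 0.8 * customer["complaints"] - 0.3 * customer["tenure_years"]
    return 1 / (1 + math.exp(-score))  # logistic squash to [0, 1]

# A long-tenured customer with no logged complaints who is, in fact,
# about to leave for a competitor:
customer = {"complaints": 0, "tenure_years": 10}
print(f"P(churn) = {predict_churn(customer):.2f}")  # ~0.05, confidently wrong

# No red light, no stack trace. The failure only becomes visible later, in
# the business outcome, which is exactly the invisibility described above.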

link.springer.com/article/10.1

SpringerLink · Why machine learning in the wild is a rare species - AI & SOCIETY
#AI #GenerativeAI #ML

HORRIBLY irresponsible reporting from KQED on so-called "AI therapy". While they do talk about a couple of horror stories and mention the privacy issues, they then call them "edge cases" and never critically examine the claims of the LLM hypesters; i.e., they take at face value that AI is actually doing some kind of therapy.

kqed.org/news/12037213/how-saf

/cc @emilymbender @alex

KQED · How Safe is AI Therapy?
By Lesley McClurg

"One hint that we might just be stuck in a hype cycle is the proliferation of what you might call “second-order slop” or “slopaganda”: a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash.

That AI companies are actively patronising and fanning a cottage economy of self-described educators and influencers to bring in new customers suggests the emperor has no clothes (and six fingers).

There are an awful lot of AI newsletters out there, but the two which kept appearing in my X ads were Superhuman AI run by Zain Kahn, and Rowan Cheung’s The Rundown. Both claim to have more than a million subscribers — an impressive figure, given the FT as of February had 1.6mn subscribers across its newsletters.

If you actually read the AI newsletters, it becomes harder to see why anyone’s staying signed up. They offer a simulacrum of tech reporting, with deeper insights or scepticism stripped out and replaced with techno-euphoria. Often they resemble the kind of press release summaries ChatGPT could have written."

ft.com/content/24218775-57b1-4

Financial Times · AI hype is drowning in slopaganda
By Siddharth Venkataramakrishnan

Announcing
AITRAP,
The AI hype TRAcking Project

Here:
poritz.net/jonathan/aitrap/

What/why:
I keep a very random list of articles about AI, with a focus on hype, ethics, policy, teaching, IP law, some of the CS aspects, etc., now up to 1000s of entries.

I decided to share, in case anyone is interested; I'm thinking of people who like @emilymbender, @alex, & @davidgerard. If there is a desire, I'll add a UI to allow submission of new links, commentary, hashtags.

www.poritz.net · AITRAP -- AI hype Tracking Project

MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.

It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.

When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.

I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.

But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?

Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."
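
The two approaches Mitchell describes map onto concrete practice. A toy sketch of the contrast, using a random stand-in network rather than a real model (the probing technique is real; every number and name here is illustrative):

import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(10, 32))  # stand-in for trained weights
W_out = rng.normal(size=(32, 2))

def forward(x):
    h = np.tanh(x @ W_hidden)  # internal representation
    return h, h @ W_out        # hidden state, plus output "behavior"

x = rng.normal(size=(100, 10))
hidden, logits = forward(x)

# Behavioral evaluation: look only at the outputs (logits), e.g. score them
# on a benchmark. Probing: train a simple linear classifier on the *hidden*
# states to ask whether some property of the input is represented inside
# the network, regardless of what the behavior looks like.
labels = (x[:, 0] > 0).astype(int)               # property: sign of feature 0
w, *_ = lstsq(hidden, labels * 2.0 - 1.0, rcond=None)
acc = ((hidden @ w > 0) == labels).mean()
print(f"linear probe accuracy on hidden states: {acc:.2f}")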

blog.citp.princeton.edu/2025/0

CITP Blog · A Guide to Cutting Through AI Hype: Arvind Narayanan and Melanie Mitchell Discuss Artificial and Human Intelligence
Last Thursday’s Princeton Public Lecture on AI hype began with brief talks based on our respective books. The meat of the event was a discussion between the two of us and with the audience. A lightly edited transcript follows. AN: You gave the example of ChatGPT being unable to comply with […]

"My core theses — The Rot Economy (that the tech industry has become dominated by growth), The Rot-Com Bubble (that the tech industry has run out of hyper-growth ideas), and that generative AI has created a kind of capitalist death cult where nobody wants to admit that they're not making any money — are far from comfortable.

The ramifications of a tech industry that has become captured by growth are that true innovation is being smothered by people that neither experience nor know how (or want) to fix real problems, and that the products we use every day are being made worse for a profit. These incentives have destroyed value-creation in venture capital and Silicon Valley at large, lionizing those who are able to show great growth metrics rather than creating meaningful products that help human beings.

The ramifications of the end of hyper-growth mean a massive reckoning for the valuations of tech companies, which will lead to tens of thousands of layoffs and a prolonged depression in Silicon Valley, the likes of which we've never seen.

The ramifications of the collapse of generative AI are much, much worse. On top of the fact that the largest tech companies have burned hundreds of billions of dollars to propagate software that doesn't really do anything that resembles what we think artificial intelligence looks like, we're now seeing that every major tech company (and an alarming amount of non-tech companies!) is willing to follow whatever it is that the market agrees is popular, even if the idea itself is flawed.

Generative AI has laid bare exactly how little the markets think about ideas, and how willing the powerful are to try and shove something unprofitable, unsustainable and questionably-useful down people's throats as a means of promoting growth.
(...)
In short, reality can fucking suck, but a true skeptic learns to live in it."

wheresyoured.at/optimistic-cow

Ed Zitron's Where's Your Ed At · The Phony Comforts of AI Optimism
A few months ago, Casey Newton of Platformer ran a piece called "The phony comforts of AI skepticism," framing those who would criticize generative AI as "having fun," damning them as "hyper-fixated on the things [AI] can't do." I am not going to focus too hard on this blog, in […]

“Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these [AI] tools they are building. Right now, the use cases…are marginal.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #aihype


“…language models can fundamentally be described as supercharged autocomplete tools, prone to returning incorrect information because they are skilled at creating a facsimile of a human-written sentence—something that looks like an acceptable response—but chatbots are not doing any critical “thinking.””
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#llm #llms #hallucinations #ai #aihype
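
The "supercharged autocomplete" description can be shown with the crudest possible version of the same mechanism, a bigram model (my toy illustration, not from the article; real LLMs are vastly larger but share the predict-the-next-token objective):

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which: a bigram table, autocomplete at its crudest.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def autocomplete(word, n=5):
    out = [word]
    for _ in range(n):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])  # always take the likeliest continuation
    return " ".join(out)

# Prints "the cat sat on the cat": fluent-looking output with no notion of
# truth, and no "thinking", anywhere in the mechanism.
print(autocomplete("the"))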


“Klarna…garnered a lot of attention and investor excitement after its CEO said last year it would replace many of its employees with AI… The CEO recently walked that claim back… Klarna had replaced a basic phone tree system with AI, which…may have resulted in customers quitting chats out of frustration.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#klarna #aichatbots #ai #aihype


“…eventually there needs to be actual demand on the other side for the products being built or else these companies will crash and burn.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #aihype


“Microsoft is the primary backer of OpenAI, whose CEO Sam Altman has long stoked nebulous fears about AI taking over the world. Critics often say Altman’s fear mongering is primarily intended to place himself at the centers of power…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#microsoft #openai #altman #ai #aihype


“…stop spreading hype about a so-called “artificial general intelligence” that could replace humans in most tasks. Nadella said essentially that it will not happen, and either way is an unnecessary distraction when the industry needs to get practical…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #agi #aihype