Understanding the AI scam


Around 8 years ago, an engineering company wanted to bring me in to help with hiring and employee morale. The company was asking employees to design software that would ultimately replace them. The CEO couldn’t understand why people weren’t more excited about the opportunity.

As the CEO argued, there are endless problems to solve. Why not let the machines take this one, allowing the engineers to move on to another interesting problem?

But the employees were already doing satisfying work that they were trained to do. Moving onto another problem felt like a risk, not a benefit, to them.

I think about that experience a lot. Although I could see both sides of the argument, I declined the project. Perhaps I could have shifted the employees’ perspective. But as AI has developed over the intervening years, I think those employees saw what was coming more clearly than most.

If we look beyond the marketing hype promoted by Big Tech, we see that AI is neither capable nor designed to deliver a future where humans happily move from one interesting problem to another. I think a lot of people, perhaps even that CEO, have been purposely misled.

AI, at least in its current usage, is being used to exploit, eliminate, and degrade at the expense of both workers and consumers.

That’s why I call it the AI scam.

Generative AI is not “smart” nor is it “learning”

The scam rests on the belief that these systems are intelligent or even “superintelligent.” But as Katie Mack explains, large language models like ChatGPT don’t know or understand what they are “learning.” The only “logic” they are applying is probabilistic.

Imagine the only language you know is English and you’re given a huge dataset of Chinese characters to study without an English translation. Based on patterns in the data, you’re able to respond to queries, sometimes correctly, even though you still have no idea what the characters themselves mean.

That's essentially what the AI is doing. What it's really good at is recognizing patterns in text and data that has, it should be emphasized, been previously created by humans. This (along with occasional outright plagiarism) is what allows it to appear smart.
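You can see the principle in a toy sketch. This is not how large language models work internally (they use neural networks over billions of parameters, not word counts), but the core move is the same: predict the next piece of text from statistical patterns, with no model of what anything means. The corpus and function names here are invented for illustration.

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a corpus,
# then "write" by emitting the most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most frequent follower; the model has no idea what a "cat" is.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- purely because it's the most common pattern
```

The output looks plausible only because the training text was written by humans; the "model" itself understands nothing, which is exactly the point.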

This lack of what we would traditionally consider intelligence doesn’t mean the technology isn’t useful. But it does limit what AI can do well. I mean, depending on the model used, 2.4 to 16.5% of the time AI can’t even correctly summarize a text document, a task where all the information it needs is right there in the thing you gave it. Not only does it miss key points, but it sometimes adds information that doesn’t exist in the original document.

These are called hallucinations. As the New York Times recently explained, AI bots “do not — and cannot — decide what is true and what is false.” But the bots rarely admit when their pattern recognition fails. In fact, when you call them on their lies, they often double down.

This flaw might be mildly funny when it leads to a major newspaper publishing a must-read list that includes AI-hallucinated book titles. It’s a lot less funny when AI summarizes your doctor visit and adds imagined medical treatments to your records, making it harder to get affordable health insurance.

I’m stressing this point because 1) it’s like a good magic trick—even when you know how it works, the trick is very believable and 2) companies across industries are adopting an “AI First” strategy, in spite of the very real shortcomings I’ve discussed.

You might think this issue is something the tech companies, who are getting hundreds of billions of dollars in investment money, have firmly in hand. But researchers aren’t sure what causes hallucinations or how to eliminate them. Newer versions of the technology are actually getting worse.

Moreover, research shows that even the latest “reasoning” based models completely fall apart when tasks get even a tiny bit complex. For example, generative AI models can solve a Towers of Hanoi puzzle when the number of discs is low. But when the number of discs rises above eight, they fail, even when you give them a simple algorithm that will allow them to solve the puzzle.
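To appreciate how damning that is, note that the "simple algorithm" in question is a textbook few-liner. Here's one standard recursive version (the peg names are just labels for illustration):

```python
def hanoi(n, source, target, spare, moves):
    # Classic recursion: move n-1 discs out of the way, move the largest
    # disc, then stack the n-1 discs back on top of it.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255 moves, i.e. 2**8 - 1
```

Eight discs take only 255 moves, and this handful of lines solves any number of discs. A system that can't follow it when handed the procedure is doing pattern matching, not reasoning.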

To summarize: AI is only smart in the sense of pattern recognition, which means it isn’t surprising that it falls apart at the slightest bit of complexity.

Surprising ways the scam impacts our future

Dario Amodei, the CEO of Anthropic, has accused other AI companies and the government of hiding how AI will change the economy. In an interview with Axios, he claimed generative AI, like the kind his company produces, “could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.”

Despite this warning, Amodei justifies his company’s products on the promise AI will cure cancer. Or, as a tech-funded futurist quoted by The Guardian claims, the technology will lead to “an era of super-abundance.” An abundance of what isn’t mentioned.

These assertions would be funny if they weren’t all part of the scam.

You see, the tech industry is playing both sides of the argument. On the one hand, AI is supposedly so powerful that it is unstoppable (a word the sector loves to lean on). Who needs cancer research by actual scientists when you have AI, amiright?!

On the other hand, AI’s hallucinations and limitations might lull you into complacency. No way it can do my job, you might think.

Unfortunately, despite what the media tells you, corporate leaders aren’t imagining that AI can do whatever a human can do, but better. What they’re figuring out is that AI can do a lot more than a human can do, but worse. And by their estimates, that might make them a lot more money.

Most of us believe that a market economy will constantly push companies to improve their products, lest they lose their customers to competitors. But widespread adoption of AI across sectors means that, even as products and services get worse, it's increasingly difficult to find an AI-free alternative. Newcomers who might fill that gap will find they can’t match the price points of AI-first companies, and customers with shrinking earnings can’t afford to pay more.

It’s a bit like how monopolies work, but in this case, it’s the widespread adoption of a technology that creates similar market conditions.

This transformation is already well under way. Brian Merchant, a former LA Times tech columnist, has been collecting stories on the impact of AI on the jobs of workers from various industries, starting with software developers. It’s not just layoffs, although there are plenty of those stories too. Employees share that working alongside AI typically means educated, talented humans do less creative work, even while putting in longer hours and watching the quality of their product degrade.

This means a lot of people lose their jobs, and those who don’t are left with menial, soul-sucking work that pays less. This is the real reimagining of the future that AI promises.

But the negative impact of AI goes far beyond our work, including:

  1. Data centers use enormous amounts of electricity and water at a time when both should be conserved
  2. The data centers are also a source of pollution (noise, air, and water); nearby residents complain, "I can't breathe"
  3. Chatbots are being specifically marketed as a solution to loneliness, even though research shows that people who use AI when they are lonely feel worse afterwards
  4. It’s likewise being marketed directly to kids, undermining their education and encouraging cognitive laziness
  5. It can lead to psychoses—resulting in divorces, suicide attempts, and arrests—even among those with no history of mental health problems
  6. It contributes to the spread of disinformation
  7. It often amplifies harmful racial, gender, and ethnic stereotypes
  8. It further enables and promotes both corporate and government surveillance

Any one of these issues would be cause for serious concern, not to mention potential regulation. The idea that we’d willingly and enthusiastically rush toward increased, unfettered use of the technology is, well, crazy.

Sarah O’Connor with the Financial Times notes that the messaging around AI resembles high-pressure sales. There’s a reason for that.

Again, I don’t mean we should never employ AI. I have personal examples where AI has been genuinely useful. And I have good friends who work in the industry. But we can't let these kinds of interactions and personal connections blind us to the real harm these tools can (and already do) cause.

We are, in this moment, being swindled. Every time you use these tools, inside or outside of work, you normalize them. You contribute to the FOMO that drives their increased adoption. You devalue your own skills and creativity and worth.

It’s possible that corporate leaders will abandon their AI-first strategy all on their own. That they will see the damage they are wreaking. Certainly the pushback against the technology is growing.

But that’s not an excuse to sit back and see what happens. The stakes are too high and the harm is already here.

The good news? The future is never inevitable. A sudden and widespread refusal to use AI chatbots, even just for personal tasks, would send a powerful message: Any tool must work for humanity. So say we all.

Everyday Bright

“Jen is the most curious person I’ve ever met.” —My (favorite) former boss

Scientist, coach, and catalyst for change. My bi-weekly newsletter helps lifelong learners and leaders unlock human potential, in themselves and others, so they can do the best work of their lives (and enjoy it).
