Why Human Intelligence Thrives Where Machines Fail

Last week, on January 5, OpenAI CEO Sam Altman blogged about Artificial General Intelligence, the next shoe to drop in the AI revolution. “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

He announced in his post that OpenAI would be moving beyond “iteratively putting great tools in the hands of people.” Instead, “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.”

I guess giving humans tools to do what we do is just not that exciting any more. Helping editors and writers conduct research, or programmers save time writing code, well, that’s just making humans look good, and the petaflops of processing power in the new data centers are more ambitious than that. They want to perform human tasks and catch up to us, with human-like reasoning.

Remember the old maxim “Give a man a fish, and he’ll eat for a day. Teach a man to fish, and he’ll eat for a lifetime.” Well, artificial intelligence not only ate the fish, but it’s set on outfishing us. It doesn’t want to give us power tools like skillsaws and nail guns to make our jobs easier. It wants to design and build the house, then collect the real estate commission on the sale.

Once AI is able to catch up with us in reasoning skills, the next step is Artificial Superintelligence (ASI), in which computing technology is smarter than we are. Ostensibly, in a science fiction moment, the lines will cross and Earth will achieve the Singularity, with machines surpassing human intelligence in almost every way.

Google futurist Ray Kurzweil defined the concept in his 2005 bestseller, The Singularity Is Near, as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Projecting the tipping point as 2029, now just a four-year education away, the optimistic Kurzweil considered the era in which machines can outthink humans as “neither utopian nor dystopian.”

Well, maybe. A rogue Google Gemini chatbot reportedly turned malevolent on a college student in November, saying, “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

Okay.

One of the humans who has deep insights into the march of artificial intelligence—and its limitations—is former Silicon Valleyite and computer scientist Erik J. Larson. Now based in Austin and affiliated with The University of Texas, he posts regularly about AI to his Substack and is the author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021, Belknap Press). He shared this article from his Substack with Metro. —Dan Pulcrano

The Noise Behind AI, Already

I’ll be honest: I’m increasingly concerned about where AI-driven automation is heading. Every week, I get bombarded with job offers on LinkedIn, WhatsApp and email. Some are from big-name companies; others from startups with “the perfect role” for me. Lately, it’s harder to tell if the offers are genuine. Are real people behind this? Or AI?

Much of today’s discussion simply assumes that AI is smart and getting smarter in a way that will either replace us or make us superhuman. The problem is, well, that’s not what’s happening. While we worry about (very real) issues with trust and bias, we’re ceding huge philosophical and cognitive space to the systems that we, after all, built. It’s frankly stupid. That’s why I’m writing about it here—to clear it up.

The concept of fat tails—significant outlier events that a normal distribution says should almost never happen—should be at the center of our conversation about AI. Yes, you’ve likely heard “bell curve” objections to machine learning-based AI before. But it’s not enough to grasp the idea of statistical averages; what matters is everything the averages leave out.

I have colleagues who, hearing this, immediately launch into a discussion of new AI that will capture the outliers. Fine, sure, but they’re missing the point about intelligence, so their theories will likewise be somewhat facile (sorry, it’s true).

Outliers—the ones in the fat tails—aren’t just occasional serendipity (though they’re that too); they’re precisely where intelligence actually happens. The world isn’t an average, and those weird distributions create the very environment in which natural intelligence operates. It’s a bit ironic, and sad, that we’re looking to “bell curve” machines for the future of intelligence, when optimizing for the bell curve is the one sure bet to fail.
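
Here is a toy sketch in Python, with numbers invented purely for illustration, of how differently “extreme” events show up under a thin-tailed bell curve and a fat-tailed distribution:

    # Toy illustration: count "extreme" events under thin tails vs. fat tails.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    threshold = 6.0  # an arbitrary "outlier" cutoff, in units of the normal's standard deviation

    normal_draws = rng.standard_normal(n)          # thin tails: the bell curve
    fat_tail_draws = rng.standard_t(df=3, size=n)  # fat tails: extremes are far more likely

    print("bell curve outliers:", int(np.sum(np.abs(normal_draws) > threshold)))    # essentially 0
    print("fat-tail outliers:  ", int(np.sum(np.abs(fat_tail_draws) > threshold)))  # thousands

The bell curve says such events essentially never happen; the fat-tailed world produces thousands of them in the same number of draws.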

Fortuna Isn’t an Algorithm

Here’s a quick and probably well-known example from the history of science. In 1928, a young Scottish bacteriologist named Alexander Fleming returned to his lab after a vacation to find something unexpected growing in one of his petri dishes.

Mold had contaminated the culture of Staphylococcus bacteria he’d been working on. But instead of discarding it, Fleming noticed something unusual: the bacteria around the mold were being destroyed. Fleming didn’t throw it away. In one of the greatest serendipity moments in modern science, he inferred a causal interaction from what appeared to be a mistake, and discovered penicillin, the first true antibiotic.

The reason we humans can feel bullish about such proud moments is simple, yet we almost never zero in on it: machines don’t interact dynamically with their environment the way biological intelligence does. Fleming’s discovery wasn’t just solving a problem—it was the result of constant interaction with his surroundings and inferences based on wholly unexpected observations.

BEYOND FORMULAS: Humans are deeply embedded in our environments—a constant feedback loop of interaction that gives us a perpetual advantage. (Photo: Rolf H. Nelson, Wikimedia Commons)

The whole point of large language models (LLMs) is to give the statistically best answer, which is to say, the expected one. Fortuna, or chance, is embedded in human cognition in a way that machines, reliant on predetermined data, simply can’t replicate. This point is of enormous significance, because it suggests that the most important observation about intelligence is exactly the one our linear thinking obscures.
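
As a rough illustration (not how any particular model is actually built), greedy decoding over a made-up table of next-token probabilities always returns the most expected option:

    # Toy sketch: greedy decoding picks the highest-probability option every time.
    # The candidate strings and probabilities are invented for illustration only.
    next_token_probs = {
        "the usual answer": 0.62,
        "a plausible answer": 0.30,
        "a surprising leap": 0.08,  # the fat-tail option never wins under greedy selection
    }

    greedy_choice = max(next_token_probs, key=next_token_probs.get)
    print(greedy_choice)  # -> "the usual answer", every single time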

Let me state this point a different way. We don’t have a “black box” intelligence that simply replays prior training. Our brain learns dynamically, interacting with the environment in ways that lead to constant, unpredictable opportunities for insight. Einstein is a shopworn example but still makes the point grandly: he rethought physics while contemplating time from the back of a train, gazing at a clock receding behind him. What’s the point of that if you’re optimizing some function on data?

These moments—what the Romans called Fortuna, or what we might call luck or chance—are not just nice-to-haves but integral to our intelligence. Sometimes major insights come from dreams—as with Kekulé’s discovery of the benzene ring—and sometimes they come because someone dropped some milk on the floor, or your mom is visiting, or what have you. Almost nothing of any consequence comes from regurgitating a dataset. Be wary of false prophets: AI isn’t heading for “AGI.”

The cognitive difference is difficult to overstate. Human intelligence emerges because we are deeply embedded in our environments—a constant feedback loop of interaction that gives us a perpetual advantage. The machine model, no matter how well trained, doesn’t operate within this dynamic system. It isn’t learning in real time, and it isn’t inferring from outliers; it’s inferring from the best fit.
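
A toy example, again with invented numbers, shows how an ordinary least-squares “best fit” smooths a Fleming-style anomaly into the background instead of treating it as a signal:

    # Toy sketch: a least-squares line treats one anomalous point as noise to be averaged away.
    import numpy as np

    x = np.arange(50, dtype=float)
    y = 2.0 * x + 1.0   # the regular, expected pattern
    y[30] = 200.0       # one Fleming-style surprise (the "expected" value would be 61)

    slope, intercept = np.polyfit(x, y, deg=1)   # ordinary least-squares fit
    print(round(slope, 2), round(intercept, 2))  # roughly 2.07 and 1.98: the fit barely moves
    print(round(slope * 30 + intercept, 1))      # roughly 64 at x=30, nowhere near the observed 200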

I use LLMs to spit out facts and figures that don’t come to my mind—I’m not a calculator. I don’t use them to say something interesting. The more I interact with today’s AI, the more I realize we’re not much further along—thinking about real intelligence—than decades ago. We’re still messing around with machines and shit-talking ourselves.

Abductive Reasoning and Dynamic Thinking

A similar dynamic played out during the London cholera outbreak of the 1850s. At the time, most believed cholera spread through “miasma”—bad air. The physician John Snow saw that the outbreak in Soho clustered around a single water pump. Hmmm. Snow made an abductive leap by inferring that the water, not the air, was spreading the disease—the dataset everyone was using, as it were, was focused on the air. His investigation led to the removal of the pump’s handle, halting the outbreak and drastically improving our understanding of disease transmission.

Snow’s breakthrough didn’t come from data alone. In fact, except in the most trivial sense, it didn’t come from data at all.

The Bottom Line

I’m perpetually having conversations about AI—about how it works now and what it might become. The discussion typically assumes AI is on some unstoppable cognitive trajectory, and that we now need to turn our gaze to things like bias, trust and data ethics.

Sure, I get that. We need systems we can trust. But we’re all missing the eight-hundred-pound gorilla in the room: true intelligence is found by moving away from larger datasets and away from statistical norms. Yes, there are statistical norms, and we make use of them in inference. It’s not that such inferences are non-existent but rather that they tell us very little about what we’re trying to understand: intelligence.

We know neural networks can handle patterns that crystallize in large enough datasets. Unfortunately, that entire exercise has very little to do with AGI in the first place. Good luck with that. Silver lining: since people are pretty disastrously bad at discerning patterns in mountains of data, AI will always play a role in our broader cognitive story.

We’ve built these systems to optimize the world as we know it. But the world we know is just the start. When will researchers stop obsessing over training data and start talking about the one thing that makes us us: the ability to handle what we haven’t seen before? Until then, AI systems are playing catch-up—or rather, pretending to catch up—to a game we’ve been playing since day one.
