A Reimagining of Artificial Intelligence

A deep dive into the future of AI.

Ever wondered what it really means to build an Artificial Intelligence that matches human-level thinking? It’s like finally cracking the code of our own minds. And the moment we reproduce the “secret sauce” of cognition in a machine, we can kiss goodbye to the limits that hold back organic brains. No more memory bottlenecks or sluggish mental processing—an AI can scale its abilities with a capacity for knowledge, speed, and focus that mere mortals can only dream of.

At first glance, the performance leap from human to AI might look like the old transition from horse-drawn carriages to automobiles. Back in 1885, Karl Benz rolled out a car that managed a modest 10 mph. Sure, it outpaced the typical stagecoach, but not a galloping horse hurtling along at 30 mph. Eighteen years down the road, Benz had his wheels doing 37 mph, and just five years after that, the Blitzen-Benz shattered records at 140 mph. Then, for the next century, carmakers mostly vied to improve creature comforts—cup holders, anyone?

Let’s say an average brain like mine lumbers along at about 5 mph, while a super-savvy thinker (imagine Eliezer Yudkowsky, or maybe Captain Spock on Adderall) cruises at a cool 30 mph. Once we figure out how to push AIs into that range—and, spoiler alert, it won’t be by simply stacking more data onto today’s glorified pattern-matchers—why not rev them up to 140 mph, too? Just throw in more processing power, memory, and, well, a digital version of Adderall. At some point, sure, we might hit another engineering bottleneck, at which point you’d expect us to spend the rest of eternity fine-tuning the metaphorical cup holders and phone chargers.

But here’s the twist: unlike cars, AIs will engineer themselves. I might burn through an entire lifetime trying to decode the grand challenge of AI, while someone like Yudkowsky might manage it in a handful of blog posts (though he’s focused on keeping AI beneficial to humanity). But once we hit “Blitzen-AI”—the 140 mph level—such a system won’t just solve the AI riddle in mere hours; it could also pioneer innovations that blow human engineering out of the water.

You might wonder why an AI would even care to improve upon itself—intelligence alone doesn’t automatically mean self-enhancement. But the reality is that these systems won’t be confined to some dusty university lab, churning out results for the next research paper. If one corporation, government, or powerhouse doesn’t hop on board the AI train, its rivals surely will. And in a fiercely competitive environment, it’s pretty obvious a self-improving AI is going to outperform one that’s just idling in place.

So where’s the ceiling for that kind of self-improvement? On one hand, it’s probably far beyond anything our organic brains can easily grasp—like surpassing the speed of sound. On the other hand, there’s bound to be a physical threshold: you can’t outrun the speed of light, no matter how big your engine. If we stick with the driving metaphor, maybe superintelligent AI won’t stall out at 770 mph (the speed of sound), but it’ll also never blaze past 670 million mph (the speed of light). Reality does, after all, come with a few rules: limits on how fast information travels, how much computing power fits in a given volume, and how much heat every erased bit of information must dissipate.
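Those last two rules can be made concrete with a back-of-the-envelope sketch. The formulas below are standard physics and not from this essay: Bremermann's limit bounds how many bit operations one kilogram of matter can perform per second, and Landauer's limit gives the minimum energy needed to erase one bit at a given temperature. The numbers are illustrative, not a design spec for any actual machine:

```python
import math

# Physical constants (SI units)
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s
k_B = 1.381e-23  # Boltzmann constant, J/K

# Bremermann's limit: maximum computation rate for 1 kg of matter,
# derived from mass-energy (E = mc^2) and quantum uncertainty.
bremermann_bits_per_sec = c**2 / h  # roughly 1.36e50 bits/s per kg

# Landauer's limit: minimum energy to erase one bit at temperature T.
T = 300  # room temperature, kelvin
landauer_joules_per_bit = k_B * T * math.log(2)  # roughly 2.87e-21 J

print(f"Bremermann limit: {bremermann_bits_per_sec:.2e} bits/s per kg")
print(f"Landauer limit at {T} K: {landauer_joules_per_bit:.2e} J per bit erased")
```

Even if a hypothetical “Blitzen-AI” reached perfect Landauer efficiency, a single kilogram of it bumping against Bremermann's limit would still top out around 10^50 bit operations per second, which is the essay's point in physicist's clothing: the ceiling sits absurdly far above both horses and Hondas, but it does exist.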

The really interesting question is how quickly we’ll race toward these ultimate limits. In a “hard and fast” scenario—think William Gibson’s Neuromancer—the first self-improving, broadly intelligent AI might spread through global computer networks in a flash, hogging most of the planet’s processing power and locking out the competition for good. If it’s a slower burn, then we might see a bunch of AI-driven corporations and organizations reach a sort of balance of power, with many localized AIs each bumping against the physical caps of their hardware.

For my part, I believe we humans are clever enough to eventually crack the AI enigma. My worry is that climate change might pull the rug out from under our technological civilization before we get there. But the stakes are too high not to try. When the next massive existential crisis arrives—and, let’s be honest, we’ve already got a few on the horizon—having a league of seriously superintelligent AIs in our corner might be our best shot. Without them, we could be left fumbling in the dark.