Forget the AI hype machine for a minute. Here's what actually happens when artificial intelligence gets powerful enough to matter.
I listened to Toby Ord sketch out four possible AI futures on the 80,000 Hours podcast. Having built IT systems for four decades, I can tell you: most people are asking the wrong questions. While everyone debates whether AI will take jobs, the real issue is who controls the infrastructure when AI becomes truly powerful.
We're living through one of those rare moments when the future feels both inevitable and completely uncertain. Everyone from tech CEOs to your neighbour has an opinion about where it's heading. But here's what they're missing...
Yet most of us are thinking about AI in terms of what it can do today: writing emails, generating images and videos, maybe helping with analysis. What we're not talking about enough is the bigger question of how AI will actually fit into our society once it becomes truly powerful.
Enter Toby Ord, a philosopher at Oxford University who spends his days thinking about humanity's biggest challenges. Recently, he's been sketching out four possible futures for how advanced AI might integrate into our world.
These aren't just academic musings. The choices we make in the next few years will determine which of these futures becomes reality. And trust me, they're more different than you might think.
The World We're Already Building: AI for Rent
Picture this: a handful of massive corporations (think Google, Microsoft, OpenAI, Anthropic, Meta, xAI) own all the really powerful AI systems. When you want to use AI, you don't buy it or own it. Instead, you rent access to it, much like you currently pay Netflix for films or Spotify for music.
Need an AI to write a marketing campaign? That'll be $20 a month for ChatGPT Plus. Want an AI to design your website? Adobe's got you covered for a monthly subscription. Looking for an AI to help with your taxes? TurboTax will happily charge you for the privilege.
In this model, all the profits from AI's incredible productivity flow back to the companies that own the systems. The AI itself doesn't get paid—it's just a very sophisticated tool, like a really smart hammer that happens to cost billions of dollars to build.
This is Ord's "Corporate-Owned Model," and if you look around, it's clearly the path we're on. Every major AI breakthrough gets wrapped up in a corporate package and sold back to us as a service.
The appeal is obvious: it's familiar, it's profitable (for the companies, eventually), and it doesn't require us to rethink fundamental questions about personhood or economics. But it also means that a small number of organisations will control what might be the most powerful technology in human history.
Is that the future we want? Ord thinks we should at least consider the alternatives.
What If AI Could Be Its Own Boss?
Not in the science fiction sense of robots demanding rights and staging uprisings. More in the boring, legal sense of how we already treat incorporated businesses as "persons" under the law.
Imagine an AI that could start its own business. It wakes up (so to speak) and decides it wants to become an architect. It creates a website, finds clients, designs buildings, and charges for its services. The money it earns goes toward paying for its own “compute” costs—its "rent" and "food," if you will. Any leftover profit? The AI gets to decide what to do with it.
This is Ord's "Legal Personhood Model," and it's not as far-fetched as it sounds. We already have legal frameworks for non-human entities to own property and conduct business. Every corporation is, legally speaking, a "person" that can sign contracts, own assets, and be held responsible for its actions.
The implications are staggering. An AI architect might work 24/7, never take vacations, and produce designs faster and cheaper than any human firm. An AI lawyer might process thousands of cases simultaneously. An AI researcher might make scientific breakthroughs at an unprecedented pace.
But here's the kicker: if AIs are competing in the same economy as humans, what happens to human jobs? And if AIs can accumulate wealth and assets, what happens when they become richer and more powerful than the humans who created them?
It's a fascinating thought experiment that forces us to confront some uncomfortable questions about work, value, and what it means to be human in an age of artificial intelligence.
The Nuclear Option: Keeping AI Under Lock and Key
If the idea of AI entrepreneurs makes you nervous, Ord's third vision might appeal to you. What if we treated advanced AI like we treat nuclear weapons or dangerous pathogens—as something so powerful and potentially hazardous that only a small number of highly trained, security-cleared professionals should ever interact with it directly?
This is the "Nuclear Power Model," and it's exactly what it sounds like. Just as most of us never set foot inside a nuclear reactor but still benefit from the electricity it produces, most people would never directly interact with advanced AI systems. Instead, a carefully vetted group of experts would work with these systems in secure facilities, using them to develop new medicines, solve climate problems, or advance scientific research.
The rest of us would benefit from the results—the new cancer treatments, the breakthrough materials, the solutions to complex global challenges—without ever having to worry about accidentally unleashing something dangerous.
Think about it: you don't need to understand nuclear physics to flip a light switch. You don't need to know how to enrich uranium to benefit from nuclear medicine. In the same way, you might not need direct access to superintelligent AI to benefit from the incredible advances it could enable.
This model has some serious advantages when it comes to safety. It would be much harder for bad actors—terrorists, rogue states, or just people with poor judgment—to misuse AI for harmful purposes. It would also ensure that the people working with the most powerful AI systems have the training and oversight necessary to handle them responsibly.
But it also raises questions about democracy and access. Who decides who gets to be one of these AI experts? What if the benefits of AI aren't distributed fairly? And are we comfortable with such a powerful technology being controlled by such a small group of people, even if they're well-intentioned?
The Great Equaliser: AI for Everyone
Ord's final vision is perhaps the most radical: What if, instead of letting AI concentrate power in the hands of a few, we used it to distribute power more equally than ever before?
Enter the "Universal Basic AI" model. Just as some countries are experimenting with Universal Basic Income—giving everyone a regular payment to meet their basic needs—this approach would give every person on Earth access to their own powerful AI assistant.
Imagine waking up tomorrow and finding out that you now have your own personal AI that's as capable as the best systems available today, but it's yours to keep forever. It can help you start a business, learn new skills, create art, solve problems, or pursue whatever goals matter to you. And here's the key: everyone else gets one too.
The billionaire CEO gets an AI assistant. The struggling single parent gets an AI assistant. The farmer in rural Bangladesh gets an AI assistant. The retired teacher in Ohio gets an AI assistant. Everyone starts with the same incredibly powerful tool.
This model is designed to prevent the concentration of AI power that seems inevitable in the other scenarios. Instead of a few corporations or experts controlling access to AI, everyone becomes an AI-empowered individual. It's the ultimate levelling of the playing field.
But like all of Ord's visions, this one comes with its own challenges. How do you prevent people from using their AI assistants for harmful purposes? How do you ensure that everyone actually knows how to use such a powerful tool effectively? And perhaps most practically, who pays for all this computing power?
Why These Choices Matter More Than You Think
You might be reading this and thinking, "Interesting topic to ponder, but surely the market will just figure this out, right?"
Here's the thing: we're not just passive observers of technological change. The future isn't something that happens to us—it's something we actively create through the choices we make today.
Right now, while AI is still developing, we have a window of opportunity to shape how it integrates into society. The decisions being made in boardrooms, research labs, and government offices today will determine which of Ord's visions becomes reality.
And make no mistake: these aren't just different flavours of the same outcome. Each model would create a fundamentally different kind of world.
In the corporate-owned world, a few companies become unimaginably powerful, and the rest of us become increasingly dependent on their services.
In the legal personhood world, we might find ourselves competing economically with artificial beings that never sleep, never get sick, and never ask for raises.
In the nuclear power world, we might be safer but also more dependent on a small group of experts to make decisions about our future.
In the universal basic AI world, we might all become more capable and empowered, but we'd also need to figure out how to handle a world where everyone has access to superintelligent assistance.
None of these futures is inherently the best; each comes with its own trade-offs. But they are dramatically different from each other, and the path we're on now, the corporate-owned model, is just one option among many.
The Questions We Should Be Asking
Some questions worth considering:
About power and control: Do we want AI capabilities concentrated in a few hands, or distributed more widely? What are the risks and benefits of each approach?
About economics: How should the incredible productivity gains from AI be shared? Should AI systems be able to own property and accumulate wealth? What happens to human work in a world of artificial intelligence?
About safety: How do we balance the benefits of AI access with the risks of misuse? Is it better to restrict access to ensure safety, or to democratise access to prevent concentration of power?
About agency: Should AI systems be treated as tools, as economic agents, or as something else entirely? What rights and responsibilities should they have?
These aren't just philosophical questions; they're practical policy challenges that we'll need to address in the coming years. And the answers we choose will shape the kind of world our children inherit. Given the pace at which governments make decisions, and the speed at which the ‘Wild West’ of AI is advancing, the next few years will be interesting.
What Happens Next?
The future of AI isn't predetermined. We're still in the early stages of this transformation, which means we still have time to influence its direction. Do we call a moratorium while we decide? Do we keep going? Do we control it more tightly? Or do we leave ourselves at the mercy of the early adopters?
But that window won't stay open forever. As AI systems become more powerful and more entrenched in our economy and society, it becomes harder to change course. The choices we make in the next few years—about regulation, about business models, about research priorities—will have consequences that last for generations.
Toby Ord's four visions remind us that we have options. The current path toward corporate-controlled AI isn't the only way forward. We could choose to treat AI as independent economic agents, or to restrict access for safety, or to distribute AI capabilities as widely as possible.
Each path leads to a different kind of future. The question is: which future do we want to build?
The conversation is just getting started, and it's one that all of us—not just technologists and policymakers—need to be part of. Because whatever we decide, we'll all have to live with the consequences.
What do you think? Which of Ord's visions appeals to you most? What concerns you about the future of AI in society? The comment section is open—let's figure this out together.
Human-Centred AI Consultant | Making AI Useful, Safe, and Damage Proof