Who is AGI For?
Quotes and Promises that Ignore Human History
In 1997, I was just the right age to think that the movie Contact was awesome. I saw it at a small theater, since torn down, that stood just a few feet from Interstate 70 in the small town where I grew up. Jodie Foster, Tom Skerritt, Matthew McConaughey…I was all in. I really FELT it when [spoiler alert] Jodie Foster’s character returned from her trip through the wormhole only to find out that, from the vantage point of her colleagues, her pod had dropped harmlessly through the core of the machine. Nothing she saw was recorded or could be corroborated by anyone else.
As I’ve gotten older, I’ve reflected on the story. At the beginning of the film, Jodie Foster’s character, a research scientist with the Search for Extraterrestrial Intelligence (SETI) project, discovers a radio signal coming from space, near the star Vega. Once decoded, the message provides instructions for building a machine. A machine for what purpose? No one knew, but they had to build it to find out.
Trillions of dollars and one extremist bombing later, Jodie Foster finds herself in the pod at the center of the machine’s enormous spinning rings. From her perspective, she journeys through a wormhole and encounters an extraterrestrial that takes the form of her father. From the perspective of those on Earth, nothing happens. Whether Jodie Foster experienced what the filmgoers saw is never made clear; that’s for you to decide. But there’s an interesting lesson in that 1997 film for the development of artificial general intelligence (AGI).
In the movie, governments, played by untrustworthy-looking actors in plain black suits, get involved early. There is uncertainty about what the signal is and what it is for, so defense agencies step in, showcasing the tension between scientific research and national defense. Once the scientists prove the signal decodes into plans for a machine, government partnerships are formed and construction begins at a cost of trillions of dollars in 1997 money. But don’t worry: they build two, using ambiguous Japanese subcontractors for trillions of dollars more, on the off chance that someone blows up the first one.
What strikes me today about Contact is that the story centers on spending trillions of dollars on something we didn’t understand. In the film, James Woods’s character (a government official) asks whether the machine could be a bomb, or a way for a lot of hostile aliens to come pouring out and do damage. In the movie he’s portrayed as a buzzkill, but these are valid questions.
Twenty-nine years later, we are doing something very similar with AGI. Companies like OpenAI, Meta, Google, and Perplexity are demanding trillions of dollars in data center and compute construction, and the power plants to fuel them, to build something that none of us, not even the executives in charge of the companies building it, understand. This article looks at quotes from those very executives and at the popular justifications for the growth in AGI, and asks who benefits. If the world is being asked to spend trillions of dollars, we should know what we are buying, unlike Carl Sagan’s fictionalized world of Contact.
Economic Growth and Workforce
Displacing workers with AI is a current issue and not exclusive to AGI. Some AGI proponents say that a future AGI will simply “do the work” while humans are able to sit back and collect the cash. OpenAI CEO Sam Altman said it this way:
I think a lot of customer service jobs, a lot of data entry jobs get eliminated pretty quickly. Some people won’t work for sure. I think there are people in the world who don’t want to work and get fulfillment in other ways, and that shouldn’t be stigmatized either.
The second part of that quote is an excellent cover for the first part. The admission by many that AGI will eliminate jobs is being turned into the idea that everyone on Earth will sit back and live in leisure while the AGI creates economic value for us. Such assertions ignore nearly all of human economic history and a good bit of human psychology.
First, removing work from humans is not necessarily what humans themselves want. For millennia, humans have sought to reduce hard physical labor, inventing tools and machines to do that labor for them. But humans have not necessarily sought to do NOTHING. Many people derive pleasure from making an impact, thinking critically, and being productive. While Altman is right that we should not stigmatize people who find fulfillment in ways other than their employment, we should equally not be actively building to remove that choice from them. Neither Sam Altman nor any other tech CEO speaks for the desires of billions of humans. A for-profit company is attempting to decide the future of work while building a system that it does not itself understand. Not everyone wants to spend their entire life letting AI work on their behalf; not everyone even thinks that sounds good. The toll on human psychology from such a future is difficult for us to grasp today, and by the time we do, it may be too late.
Second, the idea that AGI will create so much value that humanity, everywhere, will be able to sit back and enjoy the spoils is, frankly, nonsense. Coming from some of the biggest capitalists in the world, this idea is awfully communist.
The logic goes that AGI will be able to create so much economic value that all humans will be able to not work and pursue other areas of fulfillment. Sounds great, but this means ALL humans…all religions, all races, ALL. Since when has humanity EVER been interested in distributing its combined wealth equally?
If not equally, the theory only works if all humans get at least some minimum of the profits from our AGI economic machine. This leaves some real questions like:
Who determines the minimum?
What is the tax structure?
Why do some people get the minimum and others get more?
For as much as I would personally love to believe this could be true, it ignores all of human economic history. There will be then, as there are now, companies and individuals who feel they should have more. That feeling leads to conflict, and we will find ourselves in an economic world not so different from today’s, except with more humans who aren’t being productive.
Just Build More
Much like in the movie Contact, 2025 and early 2026 have been dominated by the demand to just keep building, even though we aren’t sure what we are building. Announced at the beginning of the Trump 2.0 Administration, the first data center in the Stargate program opened in Texas in late 2025. OpenAI has $1.4 trillion in commitments to build data centers against only $20 billion in annual recurring revenue. Disparities like this leave many observers asking whether we are in an AI bubble, a popular sentiment in late 2025 and early 2026. What is certain is that the companies building toward AGI are investing HUGE amounts of money in infrastructure for an ambiguous goal. They assure us that something great is coming, but what exactly isn’t clear.
If we are building trillions of dollars of AGI-supporting infrastructure, we need to make sure that we are getting what we want from that investment.
The ambiguity of what AGI is stands in stark contrast to the very unambiguous capital flowing into it. If we aren’t sure what AGI is, what it will do, or how it will affect us, what are we building? You can scour the internet for quotes from tech CEOs for an answer, but you are unlikely to find one. What you will find is perhaps the most honest quote of all, from Google CEO Sundar Pichai:
The biggest risk could be missing out.
That’s right. FOMO.
Pichai goes on to say:
We must not let our own bias for the present get in the way of the future.
I agree, but right now we are letting our bias for the future get in the way of the present. As we build more and more and spend trillions, we aren’t asking what this ambiguous AGI is really for. What are we building, and to what end? Superintelligence? Can anyone tell me what that really means? The answer we get instead is that we are in the midst of an AI boom cycle and should feel lucky to be experiencing it.
It is in the boardrooms and on earnings calls that imaginations are running wild, constructing a narrative of a future in which no humans work in some kind of global communist utopia.
The danger is not “missing out” or our “bias for the present.” The danger is constructing a narrative about the future in our imaginations that does not align with the realities of our world and our societies. This is not a path to tech utopia. It is a path to global conflict and a mental health crisis at scale.
Who is it for?
The lengths to which technology companies are going to achieve AGI are well documented. This publication has written extensively about the idea of Sovereign AGI from multiple angles. What has always been unclear is what the goal of an AGI would be. We seem to know we want it, and we’ve built companies and poured trillions of dollars into it, but who is it for? Tech giants would tell you it is for the benefit of humanity. They follow those comments with demands for more energy than our planet has ever produced, more investment than some sovereign economies, and a utopian future that ignores all previous human history. So, who benefits?
In the short term, the pursuit of AGI is benefiting the companies pursuing it. For multiple practical reasons, it is not clear whether what they are pursuing is even possible, but they take the cash nonetheless. And many people do not wish to be displaced from their work by AI or AGI.
It is also not clear whether AGI will be made available for use by everyone, free of charge. What seems more likely is that the current sovereign AI trend will continue and governments will have control of AGI, nationalized for their own purposes. Even today, the direct utility of foundational models to individual humans or communities is not clear. A lack of rigorous stress testing creates real potential for harm to humans, which will only scale as AI moves toward AGI.
Private companies can do what they wish with their capital. But when companies invest trillions of dollars toward a single, ill-defined goal that will supposedly transform humanity, some explanation is due. What is clear is that the pursuit of AGI is not for humans and communities but for the continued profit of those in the AI industry. That, too, would be no shock: a company or group of companies seeking to create and maintain the economic conditions that benefit them is entirely consistent with human history.
Ignoring that is an example of letting our future bias impact our present.