DOE Says We Need More Power for Our AI
Moonshot AI and DeepSeek might challenge our assumptions
Who remembers the 1990s family sitcom Home Improvement? Fictional tool-demonstration show host Tim "the Toolman" Taylor would treat audiences each week to an over-the-top demonstration of a tough- and manly-sounding tool from the Binford line. His practical, well-informed co-host Al would often beg Tim to heed his warnings, but Tim always had a singular demand. You remember it.
More power!
Tim Allen and Richard Karn were entertaining appointment-viewing audiences well before the artificial intelligence (AI) models we know today, but if you listen to their dialogue now, they may as well have been talking about AI.
A US Department of Energy (DOE) report released in July 2025 may as well have been written by the Toolman himself, prior of course to some sage advice from his enigmatic neighbor Wilson. The report says that the US needs more power. How much more? 100 gigawatts during peak hours. The report cites the closure of major power plants and the growing reliance on “intermittent” sources of power, but it includes an important distinction. According to the report, fully HALF of the 100 gigawatts of newly required peak capacity is directly attributable to data centers. What’s more, the report specifically cites the growth of AI and the need to “win the AI arms race” as the cause of 50 gigawatts of that new demand. The report claims:
…that blackouts could increase 100 times in 2030 if the US continues to shutter reliable power sources and fails to add additional firm capacity.
An increase in blackouts of that magnitude across the US would be devastating without question. But the report is saying that HALF the new capacity required is due to data centers and the imperative to support AI innovation. We should pause for a moment and talk to Wilson over the fence.
We are making a big assumption about the continued development of globally significant AI models: chiefly, that they will require an unending supply of compute and data to grow.
We should also not ignore that we are potentially building AI in a way that makes blackouts more frequent for parts of the US. Is that how we want to build AI? Does it reflect reality?
The DOE report is important, but we need to go a level deeper and question some assumptions. No doubt our energy grid needs attention. Much of our critical infrastructure does. When it comes to the AI part of this discussion, we should ask whether we are Tim in this situation, demanding more power, or Al, urging caution and practicality.
Correlation, Causation, or Neither
Nothing kicks off your second presidential term like announcing $500 billion in investment in the US. President Trump had this moment flanked by the heads of OpenAI, Softbank, and Oracle. It was a total power move. The investment was to build data centers and associated power plants to give America the advantage it needed to compete globally in AI. Say it with me…MORE POWER!
This is the kind of approach that would make the Toolman himself proud. But much like Al Borland, we should question the instinct to apply more power to get a job done.
We are building AI models and the data centers to support them under the assumption that the key to winning the “AI arms race” is having enough compute to do so. The DOE said so explicitly in its report. The assumption is understandable at a moment when every model builder is chasing larger and larger training datasets and more and more model parameters. But what do a trillion, or two trillion, model parameters actually buy you? Does the required compute capacity return what we need from our AI models to be successful?
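To make the question concrete, here is a quick back-of-the-envelope calculation (my numbers, not from the DOE report), assuming the common practice of storing each weight in 2 bytes (fp16/bf16). It estimates the memory needed just to hold a model's raw weights, before a single token is processed:

```python
# Back-of-the-envelope: memory required just to store model weights.
# Assumes 2 bytes per parameter (fp16/bf16), a common serving precision.
BYTES_PER_PARAM = 2

def weight_memory_tb(num_params: int) -> float:
    """Terabytes of memory needed to hold the raw weights alone."""
    return num_params * BYTES_PER_PARAM / 1e12

for params in (1_000_000_000, 100_000_000_000, 1_000_000_000_000, 2_000_000_000_000):
    print(f"{params / 1e9:>6.0f}B parameters -> {weight_memory_tb(params):.3f} TB of weights")
```

At roughly 2 TB for a trillion parameters, the weights alone would span dozens of today's 80 GB accelerators, and that is before the far larger energy cost of training and serving the model.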
The Chinese AI company Moonshot AI released a trillion-parameter model called Kimi K2 in July 2025. Immediately after the release, testers set about comparing Kimi K2 to the big US-based models on performance benchmarks. On LiveCodeBench, testers reported an accuracy score of 53.7% for Kimi K2 against GPT-4.1’s 44.7%. On the MATH-500 test, Kimi K2 scored 97.4% to GPT-4.1’s 92.1%. Even more, Moonshot AI seems to have done this at a fraction of the massive compute burn OpenAI is famous for, showing innovations in training methods that point toward…MORE EFFICIENCY! It doesn’t have quite the same ring to it, but Al approves.
The DOE report tells us that we are innovating our way right into a series of blackouts and that we must keep going so that we win an alleged AI arms race. But the already stalled Stargate investment and the recent innovations of DeepSeek and Moonshot AI tell another side of the story. AI innovation is not synonymous with more compute and more chips. There are innovators out there creating new training methods and figuring out ways to build accurate AI without the “world leading” hardware. It will be a real irony if the net result of the US’s chip sanctions is that a generation of Chinese AI developers creates the next line of AI models that definitively disproves the more-compute thesis.
Efficiency as Strength
Efficient code is hardly a new concept. The more efficiently you can code an application, the faster it will run and the less computational power it will require from your laptop or phone. The same is true when building AI, but for some reason we don’t act like it. When you have nearly unlimited compute available to you, you can afford to be less efficient. When you don’t have all the compute in the world and the best GPUs, you must innovate. That’s the world many Chinese AI developers live in, and it shows in their releases. This is not a commentary on whether there are security risks associated with Chinese AI models, but the testing of DeepSeek R1 and Moonshot AI’s Kimi K2 shows real innovations in efficiency and training that should be catching the eyes of Stargate investors.
Americans are always impressed by bigness, and AI is no exception. We see models with hundreds of billions or trillions of parameters and think they must be better. We look at larger and larger training datasets and assume that they result in more accuracy. Those assumptions lead us to restart Three Mile Island to power our data centers, and they lead the DOE to conclude that we need 50 gigawatts of additional peak-time capacity just to power new data centers or risk a 100X increase in blackouts by 2030. But what if those assumptions are wrong?
Instead of building for a trillion more parameters, we should be looking at the parameters we already have and figuring out which ones contribute most to accuracy. We should fine-tune those parameters and create specialist AI instead of generalist AI with low accuracy ratings. Maybe future models will not need trillions of parameters but will need low billions, or even millions, of highly accurate, finely tuned parameters. This may furrow the brows of some AI innovators, but the proof is in the numbers. Chinese AI is being built without the best hardware, and innovations in training and computational efficiency are following. It is not hard to imagine that those innovations will soon lead to more precision in individual parameters and potentially to models that are both more accurate and smaller. Is every parameter out of a trillion necessary for the accurate functioning of the model? Clearly not, since the accuracy of LLMs is still in question.
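One concrete, deliberately simplified version of "figure out which parameters matter" is magnitude pruning: rank the weights by absolute value and zero out the smallest ones, then fine-tune what survives. The sketch below is an illustration of that general technique, not anything Moonshot AI or DeepSeek has published, and the weights are random stand-ins:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes.

    In a real workflow, the surviving weights would then be fine-tuned
    to recover any accuracy lost to pruning.
    """
    pruned = weights.copy()
    k = int(weights.size * sparsity)
    if k > 0:
        # Indices of the k smallest-magnitude weights, in flattened order.
        smallest = np.argsort(np.abs(weights), axis=None)[:k]
        pruned.ravel()[smallest] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(1000,))            # stand-in for one layer's weights
pruned = magnitude_prune(w, sparsity=0.9)
print(f"weights kept: {np.count_nonzero(pruned)} of {w.size}")
```

A model pruned this way does the same forward pass with a tenth of the meaningful weights; whether accuracy holds is exactly the empirical question the specialist-model argument above turns on.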
AI without the Blackouts
The DOE put it in no uncertain terms. The US grid needs 100 gigawatts of new peak capacity, half of which is the result of data centers. The report similarly does not mince words regarding AI, saying that continued building of energy capacity is imperative if the US is to compete in the AI arms race. These conclusions assume that the demand for compute will increase without end, as will the demand for top-end GPUs. They ignore the fact that market disruptors are not all that interested in how things are. They are looking at how they can stand out, and they are standing out. Consumers can make their own choice about whether they would rather use a US-based LLM or a Chinese LLM, and aligning your technology choices with your values is an important factor when choosing products. But whether you use these models or not, and whether they align with your values or not, the engineering does not lie.
GPT-4.1 was developed with the luxury of large compute resources and the best GPUs, and it still tested below Kimi K2 on accuracy. Do we really think that all OpenAI needs is more data centers? And are we ready to cause blackouts to build them?
If we are in an AI arms race, we need to be smarter, not just looser with our cash, than those we are racing against. Our future AI innovations should not depend on the constant construction of new data centers, because that is the very definition of a lack of resilience. Instead of aiming our resources at the next trillion parameters, we should be refining and fine-tuning our models and figuring out which parameters are the most impactful to our goals. Maybe a future model will remove parameters, and maybe that model will be more accurate on certain tasks, but will the US market reward such efficiency plays?
After all, we all laughed at Tim more than Al and maybe that’s what China is doing as well.