Is artificial intelligence (AI) increasing our demand for electricity on power grids? And if it is, can we deploy solar power fast enough to keep up with demand, or will we need to build new coal or nuclear power plants?

Some analysts are predicting a rapid increase in data center electricity use due to AI, large enough to impinge on our ability to transition to clean energy. That is unlikely to occur, because AI is rapidly becoming much more energy efficient and will soon be running mostly on cellphones and other distributed computing platforms (like cars), not exclusively in data centers. We are already building far more solar than coal or nuclear power, because solar is much more affordable to build. We can quickly and easily build many more solar arrays with battery backup, anywhere in the world, but we can’t easily build any new coal or nuclear power plants.

The reason I’m writing this insight is to put in context recent studies about how much electricity data centers are using. If you’ve been reading the news, you might think that AI will drive up electricity demand so much that we’ll be forced to build more coal and nuclear power plants. A report released by the United States Department of Energy in December 2024 found that “total data center electricity usage climbed from 58 terawatt hours (TWh) in 2014 to 176 TWh in 2023 and estimates an increase between 325 to 580 TWh by 2028.”

Assuming that report is accurate, over that five-year period the United States will need to supply data centers with an additional 30 to 80 TWh of electricity each year. That growth in demand will very likely put a damper on calls to shut down coal and nuclear power plants. But consider that the United States is expected to add an average of 40 GW of new solar capacity per year, and with that new capacity we can generate an additional 40 to 80 TWh of clean electricity every year.
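
The conversion from new capacity to annual generation is easy to check. A sketch of that arithmetic follows; the capacity factors are my assumptions, chosen to span typical ranges for U.S. solar sites, not figures from the report:

```python
# Back-of-the-envelope check: TWh generated per year by new solar capacity.
# The capacity factors below are assumptions spanning typical U.S. solar sites.
HOURS_PER_YEAR = 8760

def annual_twh(new_gw: float, capacity_factor: float) -> float:
    """TWh generated each year by `new_gw` of capacity at `capacity_factor`."""
    return new_gw * capacity_factor * HOURS_PER_YEAR / 1000  # GWh -> TWh

low = annual_twh(40, 0.114)   # cloudier sites
high = annual_twh(40, 0.228)  # sunnier sites
print(f"{low:.0f} to {high:.0f} TWh per year")  # roughly 40 to 80
```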

With that much additional solar power coming online every year, we had expected to be able to shut down all of our coal power plants quickly and start planning to retire our nuclear fleet. Now it looks like we’ll need to keep running our existing coal and nuclear power plants a bit longer, unless we decide to accelerate our transition to solar power. There’s no real reason we couldn’t increase our investment in solar power and start installing 80 GW or 100 GW of new solar power per year here in the United States. China installed 277.17 GW of solar in 2024, and our two countries are physically about the same size.

The real question isn’t what will happen in the next 5 years, but what will happen in the next 30. As we build more powerful AI, it’s much more likely that its net effect over the long term will be to reduce, rather than increase, our demand for electricity on power grids, and to help, rather than hinder, our transition to clean energy.

Observation #1: AI Is Rapidly Becoming More Energy Efficient

AI is software that converts input tokens to output tokens—for example, submit a prompt of language tokens to a chatbot, get a response of language tokens—so the best way to think about AI’s energy efficiency is tokens per joule. We can use pricing as a proxy for energy, since electricity is a significant component of the cost to provide AI.
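
To make that proxy concrete, here is a rough bound on tokens per joule derived from pricing. Every number below is an assumed illustrative figure, not a measurement:

```python
# Rough bound on tokens per joule, derived from pricing.
# All three inputs are assumptions for illustration, not measured figures.
price_per_m_tokens = 1.00   # dollars per million tokens (assumed)
electricity_share = 0.20    # fraction of price paying for electricity (assumed)
grid_price_per_kwh = 0.10   # dollars per kWh (assumed)

kwh_per_m_tokens = price_per_m_tokens * electricity_share / grid_price_per_kwh
joules_per_token = kwh_per_m_tokens * 3.6e6 / 1e6  # 1 kWh = 3.6e6 joules
tokens_per_joule = 1 / joules_per_token
print(f"{joules_per_token:.1f} J/token, {tokens_per_joule:.2f} tokens/J")
```

As efficiency improves and prices fall, the same arithmetic yields more tokens per joule.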

The cost to use AI models varies from a few cents to a few dollars per million tokens—and it’s falling fast. According to Sam Altman, CEO of OpenAI, “The cost to use a given level of AI falls about 10x every 12 months.” The primary reason for this rapid price drop is a rapid increase in energy efficiency: per joule of energy, newer AI systems can process more tokens than older AI systems.
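
A sustained 10x annual drop compounds quickly. Here is what it does to a price over three years; the $2.00 starting price per million tokens is an assumption for illustration:

```python
# What a sustained 10x-per-year price drop looks like.
# The $2.00 starting price per million tokens is an assumption.
price = 2.00
for year in range(1, 4):
    price /= 10
    print(f"after year {year}: ${price:.4f} per million tokens")
```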

Implication: Lots of Intelligence From Very Little Energy

Anyone who is seriously considering building a new coal or nuclear power plant is thinking thirty years into the future. If the cost of AI keeps dropping by ten times per year, and cost keeps tracking energy use, then in thirty years AI will be 10^30 times more energy efficient. At that rate of improvement, AI would be effectively free, since it would use so little energy.

Whether anything in the universe can improve by a factor of ten per year for thirty years is an open question. Most likely, efficiency gains will continue, albeit at a slower pace. I am betting (and investing my money on the idea) that AI will become “just” a billion times (10^9) more energy efficient by 2055.
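
That bet can be sanity-checked with simple arithmetic: a billion-fold gain over thirty years requires only about a doubling of efficiency every year, far gentler than the current 10x pace:

```python
# What annual improvement does a billion-fold (1e9) gain over
# thirty years actually require?
annual_multiplier = 10 ** (9 / 30)
print(f"about {annual_multiplier:.2f}x per year")
# For comparison, 10x per year sustained for thirty years would be 10**30.
```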

Just so we’re clear: there are good reasons to delay the retirement of existing thermal power plants, so that we don’t run into electricity constraints in the next couple of years. But I’m betting that we will not need any new coal or nuclear power plants for AI, because in thirty years we’ll be able to get from a single solar module the amount of intelligence that one nuclear power plant gives us today.

Observation #2: Energy-Efficient AI Can Run On Your Phone

You can already get large language models that run entirely on your phone. According to the Pew Research Center, 97% of Americans under the age of 50 (and 98% of those under the age of 30) own a cellphone, and many of today’s phones are powerful enough to run some kinds of AI.

Energy efficiency and batteries have made smartphones possible. The Apollo 11 guidance computer that put men on the moon in 1969 could do 12,250 floating-point operations per second, using 55 watts. An Apple iPhone 12 made in 2020 can do 11 trillion (11,000,000,000,000) operations per second, using 3.9 watts. You can charge your iPhone with a very small solar module.
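
Dividing each machine's throughput by its power draw puts the comparison in the units that matter here, operations per joule:

```python
# Ops-per-joule comparison using the figures quoted above.
apollo_ops, apollo_watts = 12_250, 55
iphone_ops, iphone_watts = 11_000_000_000_000, 3.9

apollo_ops_per_joule = apollo_ops / apollo_watts
iphone_ops_per_joule = iphone_ops / iphone_watts
ratio = iphone_ops_per_joule / apollo_ops_per_joule
print(f"iPhone 12 does about {ratio:.1e}x more operations per joule")
```

The gap works out to roughly ten billion times more operations per joule in about fifty years.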

Implication: Intelligence Will Be Distributed

It’s true that some AI models don’t yet fit on your phone—but you’d be betting against fifty years of evidence if you think we won’t be able to make smartphones a lot smarter very soon. The iPhone 16 features A18 chips capable of 35 trillion operations per second. The race is on to improve AI so that those 35 trillion operations can be put to better use every second with smarter software.

AI inference (using a trained AI model) will migrate from hundreds of data centers to billions of smartphones, which can be recharged from solar power anywhere on Earth. In other words, instead of using your phone to talk to a chatbot running in a data center somewhere, you'll talk to a chatbot running directly on your phone, which you can recharge by plugging into a wall outlet or by using a solar charger.

It’s also true that creating new models, rather than using existing models, can’t be done on smartphones and will require data centers. But when future AI models become much smarter, we don’t know precisely what their electricity requirements will be (or even what physical substrates they will use). With better algorithms and curated data, AI models will be able to improve themselves, using much less energy in the future than some people are predicting today.

Observation #3: New Power Plants Are Solar

You may have read announcements about using coal and nuclear power to run data centers. Locking in power supply from an existing power plant is the low-risk strategy to expand or build a new data center today. But we’re not seeing new coal and nuclear power plants being built—the deals are all about who gets to use the power from existing plants. New power plant construction is almost entirely solar now.

Implication: Future Data Centers Will Run on Solar

Solar power plants are now so affordable that recent policy attempts to slow their deployment by reversing solar incentives can’t stop more from being built. While today’s energy-wasteful data centers are sited next to existing electrical infrastructure so they can take grid power, tomorrow’s energy-efficient data centers will be sited where sunlight is plentiful. By co-locating a solar array, battery system, and data center, you can use less grid power. It is becoming more economical to make your own electricity from sunlight than to buy it from the grid.

Hypothesis #1: AI Will Become Much More Useful

Okay, so much for observations and implications. Now here’s my first hypothesis: During this decade, AI will become much smarter and more useful. A useful thing we’ve been able to do with AI is “distill” more AI. That means we can use AI to take a big model that can do something like simultaneous translation, but doesn’t fit on a cellphone, and squeeze it down so it fits on a cellphone and still works well.

The self-improvement capability of AI will pick up steam in the next few years. Besides shrinking itself down so it can run on cellphones and cars and other distributed devices, AI will help make itself smarter. Then it will enable us to run everything more efficiently.

Hypothesis #2: Efficient Processors Will Replace Inefficient Processors

If you just count projected shipments of processing units—whether central processing units, graphical processing units, or tensor processing units—you’d think that electricity use is about to skyrocket, because lots of CPUs, GPUs, and TPUs are being sold. But most of these units will replace older processing units, which will be retired rather than continue to be operated.

Nvidia is the current industry leader for the chips that run AI. Their GB200 Grace Blackwell chip is twenty-five times more energy efficient for AI inference than the previous generation chip, the Hopper GPU. They note this improvement in efficiency is not new: “Over the last eight years, Nvidia GPUs have advanced a whopping 45,000x in their energy efficiency running large language models.”

Imagine you own a data center. You pay annual real estate tax on your building and land, and you have an electrical service that can supply only a limited amount of power to your data racks. Just by swapping out boards carrying Nvidia Hopper GPUs for new boards with the GB200 Grace Blackwell superchip, you can sell twenty-five times more computing services from the same facility. With zero increase in your facility cost, you’ve increased your potential sales revenue twenty-five-fold. The Rubin architecture from Nvidia, coming in 2026-2027, will be the next leap forward in compute power. Watch for the Vera Rubin.
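
The upgrade math can be sketched as follows. Every number except the twenty-five-times efficiency figure is a made-up placeholder:

```python
# Illustrative upgrade economics for a power-limited data center.
# Every number except the 25x efficiency figure is a made-up placeholder.
site_power_mw = 10.0            # fixed electrical service (assumption)
revenue_per_mwh_hopper = 100.0  # $/MWh of compute sold on old chips (assumption)
efficiency_gain = 25            # GB200 vs. Hopper for inference (Nvidia's claim)

revenue_per_mwh_gb200 = revenue_per_mwh_hopper * efficiency_gain
hours = 24 * 365
old_annual = site_power_mw * hours * revenue_per_mwh_hopper
new_annual = site_power_mw * hours * revenue_per_mwh_gb200
print(f"same facility, revenue multiplier: {new_annual / old_annual:.0f}x")
```

Because the power budget and facility costs are fixed, the revenue multiplier tracks the efficiency gain directly.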

Now, what happens if you don’t upgrade your chips? You will struggle to compete on price or performance with competitors that are twenty-five times more efficient than you. You can’t afford to leave a bunch of boards running last year’s chips—they take up valuable rack space that could be running this year’s chips and making a lot more money for you. The only reason to keep running chips that cost twenty-five times more to operate is if you just can’t get the latest and greatest chips.

This dynamic is why Nvidia is selling its chips like crazy, and why its market capitalization shot up from $100 billion to about $3.25 trillion in the last four years.

Hypothesis #3: Solar Power Can Scale Better Than Coal or Nuclear

Coal and nuclear power have four inherent drawbacks compared to solar power that they cannot overcome:

  1. Coal and uranium are not free, whereas sunlight is free.

  2. Coal and nuclear power plants require cooling systems and working fluids such as water or molten salt, whereas solar arrays do not.

  3. Coal and nuclear power plants work better the bigger they are, whereas solar works just as well at any size.

  4. Coal and nuclear power turbines have moving parts, whereas solar modules do not.

These are inherent drawbacks of coal and nuclear power, due to physics. In contrast, two significant limitations of solar power today can be improved with the help of AI:

  1. AI can help with power controls, so solar arrays capture more energy from sunlight, and equipment can use more of that energy with fewer conversion losses.

  2. AI can accelerate material science to improve both solar modules and batteries.

In other words, AI can help make solar work a lot better, but it can’t make coal or nuclear power work that much better. That’s another reason to bet on more solar in a world with more AI.

Summary

In short, our clean energy future looks brighter with AI in the picture.