Powerful AI

(This was originally written as an assignment for my master’s studies. I thought it might be interesting; read at your own risk, etc.)

Are we offsetting or accelerating the green transition?

Prices for green energy are coming down [1], and if that trend continues, the price of reliable green energy may fall below that of combined-cycle gas generation (currently one of the cheapest options [2]). However, if the demand added by new AI workloads continues to pressure electricity prices upwards, the marginal advantage of green over conventional energy may remain an issue. That is to say, even if it is profitable to build a new solar plant and the solar industry is booming, as long as it is also profitable to build a new gas turbine plant, the gas turbine manufacturers will keep building those too. In this sense, even if AI energy demand accelerates the construction of new green power, it simultaneously offsets the green transition by keeping conventional generation attractive.
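To make the offsetting logic concrete, here is a toy sketch with entirely made-up prices and costs. The point is only that as long as demand growth keeps the market price above the cost of both technologies, both keep getting built:

```python
# Toy illustration of the offsetting argument; all numbers are invented.
price = 70.0                          # hypothetical market price, $/MWh
lcoe = {"solar": 45.0, "gas": 60.0}   # hypothetical levelized costs, $/MWh

for tech, cost in lcoe.items():
    margin = price - cost
    if margin > 0:
        print(f"build more {tech}: margin {margin:.0f} $/MWh")

# Both lines print. Solar's larger margin does not stop gas expansion
# while AI demand growth keeps the price above gas's cost.
```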

I personally believe that a combination of government action, corporate goodwill (yes, it does exist) and market forces will lead to satisfactory progress, but none of this is set in stone and we must carefully monitor it.

The way of the car or the plane?

How will our models look in the future, and how will they be used? Will they go the way of the plane: big, expensive, built by one of two companies, where you rent a seat and get where you want to go? Or will they be more like cars: small, personalizable, affordable, ubiquitous? This question matters. For all the faults of air travel, the incentive structure of large, expensive planes encourages extreme fuel efficiency for cost-saving reasons. From 2002 to 2018, aircraft fuel efficiency rose 71% [3], compared with 29% for ground vehicles over the same period [4]. One could attribute this to the auto industry being more mature, but the general mechanism whereby economies of scale encourage greater efficiency is well known.

This means that if the norm ends up being a small number of efficient, cloud-based AI vendors optimizing their models to serve the vast majority of GenAI needs, we may be better off energy-wise than if everyone has a personal GenAI, constantly finetuned and running on dedicated hardware. There will likely be a balance of both possibilities, but it is worth noting that the former is currently the only relevant case.

Will “agents” make this much, much worse?

Right now, if you want to use AI, there is usually a human involved. You head over to ChatGPT and ask it a question, or maybe you receive an email that an LLM summarises. Perhaps an artist uses Photoshop’s AI inpainting tools. Essentially, for most current inference tasks, GenAI is done under supervision. Here, humans’ short attention spans actually work in our favor for once. In an effort to improve inference speeds (and lower costs), model makers have developed new methods, most notably quantization, to do more with less. This incentive aligns with reducing energy consumption, and the gains are not attributable to improving hardware alone [5]. However, looking forward, this may not remain the case.
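To make “quantization” concrete: store and compute with lower-precision numbers, cutting the bytes per weight and the memory traffic that dominates inference cost. Below is a minimal sketch of symmetric int8 weight quantization in PyTorch; the matrix size is arbitrary, and real systems use more elaborate schemes (per-channel scales, calibration, quantization-aware training):

```python
import torch

# Symmetric per-tensor int8 quantization of one weight matrix: keep
# int8 values plus a single float scale, and dequantize on use (or run
# the matmul directly in int8 on supporting hardware).
w = torch.randn(4096, 4096)                  # float32 weights: ~64 MB
scale = w.abs().max() / 127.0                # map the largest weight to 127
w_q = (w / scale).round().clamp(-127, 127).to(torch.int8)  # int8: ~16 MB

w_dq = w_q.float() * scale                   # approximate reconstruction
print("max abs error:", (w - w_dq).abs().max().item())
```

Roughly a quarter of the bytes per weight means less memory to move per token, which is where much of the inference energy goes.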

“AI agents” is a poorly defined term, but for the purposes of this discussion I will focus on the common attribute that they involve little or even no human “presence”. In essence, they are allowed to operate in the background. Devin is a “software engineer” AI system [6] that you can only communicate with on Slack. You ask it to do something and it goes off on its own, sometimes for days, to try to achieve what you asked. No doubt it would be nicer for this process to be faster, but the pressure to optimize it is missing the component of human impatience.
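The background-agent pattern is essentially an inference loop, as in the sketch below. Every name here is a hypothetical stand-in (this is not how Devin is actually built); the point is that each iteration is a full model call, and the only brake on runtime, and therefore on energy use, is whatever step budget the operator remembers to set:

```python
from typing import Callable

def run_agent(
    task: str,
    llm_step: Callable[[list[str]], tuple[str, str]],  # history -> (tool, arg)
    tools: dict[str, Callable[[str], str]],
    max_steps: int = 200,
) -> str | None:
    """Hypothetical unsupervised agent loop; no human watches it run."""
    history = [task]
    for _ in range(max_steps):
        tool, arg = llm_step(history)     # one full inference pass per step
        if tool == "finish":
            return arg
        history.append(tools[tool](arg))  # feed the observation back in
    return None  # step budget exhausted; without one, a stuck agent runs on
```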

Further still, if we give new chain-of-thought reasoning models like OpenAI’s o3 more compute at inference time, they tend to give better answers [7]. If you are running these models in the background anyway, why not give them the “best” chance of coming up with a good answer? And given that these agents sometimes get stuck in loops [6], which is bad from both a cost and an energy-use perspective, spending extra compute per step to avoid that failure mode looks easy to justify.
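One simple form of trading compute for answer quality is self-consistency sampling: ask the model several times and take a majority vote. A minimal sketch, where generate is a hypothetical stand-in for any LLM call and n is the knob that multiplies inference energy:

```python
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for one full LLM call returning an answer."""
    raise NotImplementedError  # plug in a real model API here

def self_consistency(prompt: str, n: int = 16) -> str:
    # n full forward passes: roughly n times the energy of answering
    # once, in exchange for a more reliable answer.
    answers = [generate(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```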

In summary, agents may bring about a significant change in what is prioritized in model development, and to my mind it will not be in favor of energy efficiency. That could couple with changes in how the average model is used, with more personalized finetuning negating the oft-used argument that “you only need to train them once”. That this new energy usage is bad is premised on the assumption that new demand will be partially filled by conventional energy; but as discussed above, how much green capacity gets built depends not just on its cost, but on its comparative advantage.

References

[1] https://www.irena.org/Publications/2024/Sep/Renewable-Power-Generation-Costs-in-2023

[2] https://www.goldmansachs.com/pdfs/insights/goldman-sachs-research/the-push-for-the-green-data-center/aidatacenters.pdf

[3] https://www.sciencedirect.com/science/article/pii/S1352231020305689

[4] https://www.lendingtree.com/auto/fuel-efficiency-study/

[5] https://openai.com/index/ai-and-efficiency/

[6] https://www.answer.ai/posts/2025-01-08-devin.html

[7] https://arcprize.org/blog/oai-o3-pub-breakthrough