
Confronting the AI Energy Drain


AI technology is one of the defining developments of our era. We can use voice commands to turn our lights on and off. We can ask programs to generate images of nearly any scenario: a fox standing in a Norwegian forest, a celebrity kissing another celebrity they’ve never even met. And we can demand that they generate entire articles based on a few directives. More consequentially, these programs now control everything from airplane trajectories to how power is distributed — with varying levels of input from human operators.  

The enormous utility of these programs comes at a cost. AI is hungry; its appetites rival those of other disruptive technologies such as blockchain. Increasing concern has been raised about how much power AI can be expected to consume in the near future. We already know that the data centers powering our current digital ecosystem are placing ever-greater demands on the power grid.

And the costs do not stop there: AI also draws on natural resources and human labor, as a 2018 visual essay on the subject illustrated.

Researchers are now investigating how we can make AI green, or at least greener. Hundreds of papers have been published on the subject in the last several years, some reporting energy savings of up to 115% relative to their chosen baselines. Proposals range from adjusting hardware to writing more efficient code to relocating the data centers that power this disruptive new technology.


But are these solutions tenable? And how interested is the industry in implementing them? One group of researchers found that only 23% of papers published on the subject included direct industry involvement. The rest were simply laboratory experiments.  

Here, InformationWeek plumbs the literature on green AI and offers insights from David Beach, datacom market segment manager for power connector firm Anderson Power, and Gisela Glandt, vice president of virtual power plants for AutoGrid, a company that offers green energy grid software solutions.  

Why AI Is So Power Hungry 

AI makes unique demands on the power grid because of the complexity of the computation behind the seemingly miraculous results we now take for granted. It often requires graphics processing units (GPUs), central processing units (CPUs), and tensor processing units (TPUs), and GPUs can draw roughly four times as much power as CPUs.

“There are not as many iterative calculations in most operating schemes as you would find in AI,” Beach explains. “The more iterations that you are putting through a processor, the more time it takes. Obviously, the more time you’re spending, the more power it uses. In some cases, they’re running parallel operations.” 



AI programs that use giant data sets and run enormous numbers of experiments to obtain a more accurate model are known as “red AI,” the opposite of green AI. Companies are generally not forthcoming about how much energy they put into developing their models, and there is significant controversy over whether red AI or green AI is preferable. Some claim that the intensive computation behind red AI ultimately yields more efficient models, while others argue that efficiency should be the goal from the outset.

There are three phases to AI development: data engineering, training, and inference. Data engineering organizes information so that the model can interpret it. Training teaches the model to use that information to produce useful results. Inference is using the trained model to distill useful responses to queries. The latter two processes are especially energy intensive.

Sources are divided on which process consumes more. Some hold that training takes the most power, since it involves pushing enormous volumes of data through memory and processors; one expert suggests that training ChatGPT may have used up to a billion times more power than answering a single query does. Historical data from Google, however, suggests the inverse split: roughly 60% of its machine learning power went toward inference and 40% toward training. And the sheer volume of ChatGPT queries may mean that running the model now takes more power than training it did.


These ratios are contingent upon what exactly the AI is expected to do. If it is assigned a narrow task, it will likely use less power. If it is expected to provide answers to a broad array of queries, it will likely use more. 

The Carbon Demands of AI 

It is challenging to separate the power demands of AI programs from those of other digital technologies, especially because AI is now integrated into so many basic online services. And the carbon footprints of data centers are not limited to the power they use day to day.

The power used in their construction, and in the manufacture of the servers they house, also plays into the equation. Some 50% of the carbon footprint of an AI model may stem from the resources used to produce the hardware it relies on, so any estimate of AI power use is bound to be rough at best. Still, an individual hyperscale data center draws around 20 to 50 megawatts, enough to power nearly 40,000 private homes. Driven at least partly by the increasing uptake of AI, hyperscale capacity is likely to triple in the next six years.

AI alone could account for some 4% of global power use by 2030, and by one estimate the emissions associated with AI computation have increased a hundredfold since 2012.

These calculations depend heavily on the hardware used, the time required to train a model, and the carbon intensity of the grid the model draws from; a grid supplied by sustainable sources such as solar or wind is far less carbon intensive. Researchers have found that models trained in heavily fossil-fuel-reliant industrial powers such as the United States and China are more carbon intensive than those trained in countries whose grids lean more on renewables.
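To make the grid effect concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption rather than a measurement: the training-energy figure is hypothetical, and the grid intensities are rough order-of-magnitude values.

```python
# Back-of-the-envelope: the same training run emits very different
# amounts of CO2 depending on the grid powering it. All figures are
# illustrative assumptions, not measured values.

TRAINING_ENERGY_KWH = 1_000_000  # hypothetical energy for one training run

# Rough grid carbon intensities in kg CO2 per kWh (illustrative)
GRID_INTENSITY = {
    "coal-heavy grid": 0.80,
    "mixed fossil/renewable grid": 0.40,
    "hydro- and wind-heavy grid": 0.03,
}

for grid, kg_per_kwh in GRID_INTENSITY.items():
    tonnes = TRAINING_ENERGY_KWH * kg_per_kwh / 1000
    print(f"{grid}: ~{tonnes:,.0f} tonnes of CO2")
```

At these assumed intensities, the cleanest grid cuts emissions more than twentyfold relative to the dirtiest, broadly consistent with the thirtyfold difference researchers describe below.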

Researchers who have compared various models against the grids they rely on suggest that the model with the longest training time could have cut its emissions roughly thirtyfold had it drawn on the least carbon-intensive grid in their analysis. Grid information is, however, sometimes difficult to obtain, depending on the region in which the model is developed; in some countries, only vague national estimates of grid mix are available.

The power demanded by AI technologies alone is believed to have quadrupled every year over the past decade and a half. NVIDIA, the dominant manufacturer of AI servers, is expected to be shipping some 1.5 million units a year within the next three years. According to one of the most attention-grabbing papers on the subject, those servers alone could consume up to 85.4 terawatt-hours of power every year, more than many small countries use. A McKinsey report suggests that data center power demands will reach 35 gigawatts by the beginning of the next decade.
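The 85.4 terawatt-hour figure can be roughly sanity-checked with simple arithmetic. In the sketch below, the per-server draw of about 6.5 kilowatts, in the vicinity of a DGX-class system running continuously, is our assumption, not a number from the article:

```python
# Rough reconstruction of the 85.4 TWh estimate. The per-server draw
# is an assumption (roughly a DGX-class system running around the clock).
servers = 1_500_000          # annual shipment figure cited above
kw_per_server = 6.5          # assumed continuous power draw per server
hours_per_year = 8_760

twh_per_year = servers * kw_per_server * hours_per_year / 1e9  # kWh -> TWh
print(f"~{twh_per_year:.1f} TWh per year")  # ~85.4 TWh
```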

The energy required for different AI tasks varies substantially; generating text, for example, is significantly less energy intensive than generating an image. Still, training a single large language model may emit as much as 300,000 kilograms of carbon dioxide, roughly what five cars generate over their entire lifespans.

Carbon is not the only concern. The escalating water use required to support AI has also received attention. Microsoft's water consumption rose some 34% in 2022 as the company leaned further into AI, and estimates suggest that AI may demand as much as 6.6 billion cubic meters of water by 2027.

“The resources required — power and water — are really going to suffer. We have to find a solution for this,” Glandt exhorts. 

How to Make AI More Sustainable 

A wide variety of solutions to the AI power drain have been proposed, from shrinking chip sizes to moving data centers to areas with sustainable grids to designing more efficient AI algorithms. These solutions are in varying stages of implementation, though the industry remains tight-lipped about how it plans to handle the demands it will surely face in the coming years.

Estimating the power a model is likely to use is an essential first step. Tools such as CodeCarbon and the Experiment Impact Tracker give AI designers a way to calculate how much energy training their models will consume and suggest ways to reduce the resulting carbon footprint. The accuracy of such tools has been questioned, though: they appear to account for only a fraction of the carbon emissions created by a given model.
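For a sense of how such a tool fits into a workflow, here is a minimal sketch using the open-source CodeCarbon Python library. The project name and the dummy train_model() workload are placeholders of our own; the tracker estimates emissions from hardware power draw and regional grid data, subject to the accuracy caveats above.

```python
# Minimal CodeCarbon sketch (pip install codecarbon).
from codecarbon import EmissionsTracker

def train_model():
    # Placeholder for the training loop being profiled.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="green-ai-demo")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```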

Some of the solutions are tangible and easily understood. For example, servers often rely on two power cords, but reducing their input to a single cord may make them more efficient.

Another approach is to site the inevitably expanding array of data centers in regions that have established renewable power grids, or that require less power in the first place because lower ambient temperatures reduce the need for cooling. Training a model in a more carbon-neutral region may reduce emissions by as much as two thirds. Scandinavia, with its emphasis on renewables and its chilly climate, has emerged as a major player. Still, these locational solutions have problems of their own: it takes more power to push data from distant locations to large population hubs. That is mostly an issue once AI products are deployed; training them in remote locations remains feasible.

Running data centers on DC power may also improve efficiency, Beach suggests. Power typically enters a data center as AC, is then converted to DC, and then back to AC. “If we make everything run on DC after we bring it into the data center, we don’t have to do some of these conversions and therefore we get some efficiency back,” he explains. Anderson Power designs connectors that can manage both AC and DC power with a smaller physical footprint. 
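A toy calculation shows why eliminating conversion stages matters. The 95% per-stage efficiency below is an assumption for illustration; real converters vary:

```python
# Illustrative conversion-loss arithmetic; the 95% per-stage
# efficiency is an assumption, and real hardware varies.
stage_efficiency = 0.95

three_stages = stage_efficiency ** 3  # AC->DC->AC, then the server supply
one_stage = stage_efficiency ** 1     # DC distribution after one rectification

print(f"Three conversions: {three_stages:.1%} of power delivered")  # ~85.7%
print(f"One conversion:    {one_stage:.1%} of power delivered")     # 95.0%
```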

Designing smaller, more efficient chips may also have an impact, though we appear to be approaching a plateau. Moore's law holds that the number of transistors that fit on a single chip doubles every couple of years, and Dennard scaling holds that as transistors shrink, their power density stays roughly constant. We may be reaching the limits of both principles, but pushing them further may help make AI more sustainable, at least in the short term. More tightly integrating processing and memory hardware is also likely to help.

Other solutions are a bit more obscure. Some AI scientists urge the development of algorithms that calculate more efficiently and rely on more refined data sets, thereby using less energy; increasing speed and decreasing memory use are the key levers. Algorithms can often be made to use less energy with little loss of accuracy: one study found that inference energy for one program could be cut by 77% by reducing accuracy from 94.3% to 93.2%. Removing redundant data points can shorten processing time, and inference can likewise be made more efficient by pruning redundant neural network nodes.
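One common form of such node and weight removal is magnitude pruning. The PyTorch sketch below is illustrative only: the toy model and the 30% sparsity level are arbitrary choices, not parameters from the study cited above.

```python
# Magnitude pruning sketch: zero out low-importance weights so
# inference does less work. Model and sparsity level are illustrative.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Remove the 30% of weights with the smallest magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros / total:.0%} of parameters are now zero")
```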

Deploying algorithms more precisely will probably help as well: using smaller, more energy-efficient models for narrow tasks rather than reaching for large ones. Lessons learned in larger models can be transferred to smaller ones and vice versa, and challenges such as the Low-Power Image Recognition Challenge have emerged to accelerate work on more efficient models.
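That large-to-small transfer is often done through knowledge distillation, in which a compact "student" model learns to mimic a larger "teacher." A minimal sketch of the classic soft-target loss follows; the function name and temperature value are illustrative:

```python
# Knowledge distillation loss sketch (the soft-target formulation).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
```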

And software solutions that manage the various energy demands of a particular product have been developed as well. AutoGrid, for example, offers what is known as a virtual power plant (VPP), which balances supply and demand in a way that maximizes use of sustainable power.  

“We’re offering a very powerful piece of software to manage all the distributed energy resources that are out there,” Glandt explains. 


Edge computing offers further promise: by pushing inference out to individual devices rather than relying on data centers, AI can run with reduced latency and less load on centralized infrastructure.

Consumer pressure may also help drive greater AI efficiency. If consumers become aware of how much power the products they rely on are using and demand that manufacturers reduce that load, companies will likely respond. And at least some consumers may grow more reluctant to use AI casually when they don't need it. Certainly, there will always be people who will ask an AI to generate an image of Brad Pitt making out with Cary Grant without a second thought. But customers more attentive to the environmental impact of their choices will be inclined to put pressure on the companies they patronize.

Can AI Make Itself More Sustainable? 

Some AI proponents have suggested that the technology is capable of self-correcting if properly applied.  

Smart grids hold some early promise. Because AI can harvest billions of data points from the various components of a given grid, it may facilitate more efficient use of available power resources. And these optimization programs are not necessarily as energy hungry as the models they aim to compensate for.

“The solution to this optimization problem relies on machine learning,” Glandt claims. “But it’s not the same as the deep neural networks that drive ChatGPT. [These programs are] much less power hungry.” 

By determining when sustainable resources such as wind and solar are most likely to be available, given natural variations such as weather-driven swings in sunlight, AI can tell data centers when to expect that power and when not to. That allows operators to draw power during periods of lower demand, avoiding conflicts with other industries and sidestepping penalties such as time-of-use tariffs, which bill energy consumed during peak periods at higher rates.
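A toy version of this kind of carbon-aware scheduling appears below: given an hourly forecast of grid carbon intensity, a deferrable batch job is shifted into the cleanest window. The forecast values and job length are invented for illustration.

```python
# Toy carbon-aware scheduler. Forecast values (gCO2 per kWh, hourly)
# and the job length are invented for illustration.
forecast = [420, 410, 395, 380, 350, 300, 250, 190,   # hours 0-7
            140, 120, 110, 115, 130, 160, 210, 270,   # hours 8-15
            330, 380, 420, 440, 450, 445, 435, 425]   # hours 16-23

JOB_HOURS = 4  # length of the deferrable batch job

def cleanest_start(intensity, duration):
    """Start hour that minimizes total carbon intensity over the window."""
    starts = range(len(intensity) - duration + 1)
    return min(starts, key=lambda s: sum(intensity[s:s + duration]))

start = cleanest_start(forecast, JOB_HOURS)
print(f"Run the job starting at hour {start}")  # hour 9 in this toy forecast
```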

Providers can also use these tools to forecast availability, selling the power that sustainable resources are likely to generate before it is actually produced.

AI can also improve maintenance regimens in data centers, allowing failing equipment to be identified before it causes a problem and interrupts energy supply. Assets can be replaced before they become a significant energy drain. 

Beach sees this as a possibility but suggests that work remains to be done. “Perhaps AI, regulating some of the energy management across industries across a variety of applications pays for itself and offers an overall reduction. But we still have to get to that point,” he says. 

Industry and Regulatory Action 

The ethical obligations of AI producers remain something of a gray area, and there is little transparency from them as to exactly how much energy they are expending.

Many legislative efforts to regulate AI have emerged, though they largely fail to address energy demands. The OECD AI Principles and the European Union's AI Act attempt to guide the development of AI but are largely silent on how much energy it is acceptable to use.

As Beach points out, some of the energy drain may be checked by the capacity limits of local grids. “You just may not get the permit to build a data center in a given region if they can’t support it from a power standpoint,” he explains.

Researchers have pointed out that regulations are likely to become quickly outdated given how rapidly this technology is accelerating. And designing effective regulation would require a deep understanding of a highly complex and dynamic technological landscape. AI technology is not discrete and self-contained — it has already spread its tendrils into nearly every economic sector.  

Lobbyists are sure to slow the progress of any further legislation. In the meantime, pressure from ethicists and consumers, along with the efforts of industry players who care about how their work affects the environment, will be our only safeguards against AI's ravaging of the energy grid. Perhaps, if we're lucky, AI will decide to solve the problem on its own.





