
Hello World

AI Environmental Equity: “It’s Not Easy Being Green”

A conversation with Shaolei Ren

Photo illustration by Gabriel Hongsdusit/The Markup. Photographs by luismmolina, AerialPerspective Images, and sinology

From our work at The Markup, we know that adoption of technology so often means that some bear the costs while others enjoy conveniences. We challenge technology, and those who use it for decision-making, to grapple with that reality and to do better. 

AI is no different. While we’re well-versed in algorithmic bias and inequity in our work at The Markup, the environmental inequity of technology is an area we’re starting to explore.

A few months ago, I spoke with Shaolei Ren, an associate professor of computer science at the University of California, Riverside, and his team about their research into the secret water footprint of AI. More recently, Ren and his team studied how AI’s environmental costs fall disproportionately on some regions, so I spoke with him again to dig into those findings.

His team, which includes UC Riverside Ph.D. candidates Pengfei Li and Jianyi Yang, and Adam Wierman, a professor in the Department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology, looked into a path toward more equitable AI through what they call “geographical load balancing.” Specifically, this approach attempts to “explicitly address AI’s environmental impacts on the most disadvantaged regions.”

Ren and I talked about why it’s not easy being green and what tangible steps cloud service providers and app developers could take to reduce their environmental footprint. 

This interview has been edited for length and clarity.

Shaolei Ren. Credit: UC Riverside

Nabiha Syed: In the public conversation around AI so far, we don’t often consider environmental impacts. For folks new to the space, can you tell us about AI’s environmental impacts? 

Shaolei Ren: AI models, especially large generative models like GPT-3, are typically trained on large clusters of power-hungry servers housed in warehouse-scale data centers. For example, even after adopting the industry’s best practices to curb AI’s resource usage, AI models at Google have accounted for about 10 to 15 percent of the company’s total energy consumption. As a result, AI has a huge hidden environmental cost. Even putting aside the environmental toll of chip manufacturing and the noise pollution of running AI servers, training a single large language model like GPT-3 can easily consume hundreds of megawatt-hours of electricity, generate many tons of carbon emissions, and evaporate hundreds of thousands of liters of clean freshwater for cooling. As the AI industry booms, concerns about AI’s environmental costs are also growing, especially in marginalized communities that often rely on coal-based energy sources and/or are vulnerable to extended droughts.

Syed: Obviously these environmental concerns hit some regions harder than others—drought-ridden places, for example. Your research proposes mitigating that harm through an “equity-aware” approach. Can you explain how it works?


Ren: AI’s environmental costs have significant local and regional impacts. For example, thermal-based electricity generation produces local air pollutants, discharges pollution into water bodies, and generates solid wastes (possibly including hazardous wastes). Elevated carbon emissions in an area may increase local ozone, particulate matter, and premature mortality. Staggering water consumption can further stress limited local freshwater resources and worsen megadroughts in regions like Arizona. 

While the whole world is enjoying the benefits of AI, the negative environmental costs are borne by local communities where the AI models are trained and deployed. Even worse, AI’s environmental costs are not evenly distributed and can be disproportionately higher in certain (sometimes already-disadvantaged) regions than in others. This has raised serious concerns about AI’s environmental inequity, which is stealthily emerging but not yet as widely known as AI’s algorithmic inequity (e.g., prediction biases against certain groups or individuals).

International organizations such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organization for Economic Co-operation and Development (OECD) have explicitly called for efforts to address AI’s environmental inequity in order to support healthy and responsible development of AI. The AI Now Institute compared the uneven regional distribution of AI’s environmental costs to “historical practices of settler colonialism and racial capitalism” in its 2023 Landscape report.

The great news is that AI models can be trained and deployed in different data centers, which allows us to do a lot to address AI’s environmental inequity by flexibly and equitably distributing its regional environmental impact. Compare this with freeways: air pollution from freeway traffic harms nearby communities, and it’s hard to reroute traffic once a freeway is built. But we can easily redistribute AI workloads across data centers based on real-time local information, such as the current share of coal-based energy sources and water efficiency. By moving AI workloads from one data center to another, we also move AI’s environmental costs around, which can make its regional environmental impacts more balanced.

The key idea is that we can explicitly minimize the most significant negative environmental impacts (e.g., local impacts of water and carbon footprints) among all the data centers by optimizing which data centers we use and when.
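To make the min-max idea concrete, here is a toy sketch in Python (my illustration, not the team’s actual formulation). If each data center’s footprint grows linearly with the load it receives, minimizing the worst regional footprint means splitting work in inverse proportion to each center’s environmental intensity:

    def equity_aware_split(demand, intensities):
        """Split `demand` units of AI work across data centers so that the
        worst regional footprint (intensity * load) is as small as possible.

        With linear footprints, the min-max optimum equalizes every center's
        footprint, so each center's share is proportional to the inverse of
        its environmental intensity (e.g., liters of water or grams of CO2
        per unit of work). Toy model: no capacity or latency constraints.
        """
        inverse = [1.0 / c for c in intensities]
        total = sum(inverse)
        return [demand * w / total for w in inverse]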

Syed: Equity is one of those phrases that often escapes a shared operating definition. How did you choose to define it here?

Ren: The definition of equity varies across contexts and is a field of study in its own right. Our goal is not to blindly equalize AI’s regional environmental costs, which could artificially elevate the footprints in otherwise advantaged regions and provide a false sense of equity. Instead, we aim to minimize AI’s highest regional environmental cost—reducing AI’s impact on the most affected regions. We can also consider proportional equity by weighing an area’s environmental cost against the total capacity of its data centers, since a larger data center generally has a larger environmental impact than a smaller one.

Syed: Do existing carbon- and water-saving approaches inadvertently amplify environmental inequity—and how so?

Ren: Some early studies focus on minimizing electricity costs and/or total latency. For example, if one data center has a lower real-time energy price than the others, more workloads will be routed to it, even though that can disproportionately increase its environmental cost. More recent approaches minimize the total environmental footprint across all regions, but that doesn’t mean all regions are treated equitably. Imagine two data centers where, for the next hour, one is twice as water-efficient as the other. To minimize the total water footprint, all workloads would be routed to the more water-efficient data center for that hour—but such aggressive “exploitation” is certainly unfair. For a more equitable distribution, we might instead schedule two-thirds of the workloads to the efficient data center and one-third to the other.
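Ren’s two-center example falls right out of the toy sketch above: give one center half the water intensity of the other, and equalizing footprints yields exactly the two-thirds/one-third split.

    # Center A evaporates half as much water per unit of work as center B.
    loads = equity_aware_split(demand=1.0, intensities=[0.5, 1.0])
    # loads == [0.667, 0.333]
    footprints = [c * x for c, x in zip([0.5, 1.0], loads)]
    # footprints == [0.333, 0.333]: equal, and the worst-case footprint is
    # well below the 0.5 that center A would bear if it took everything.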


Of course, the real problem is much more challenging, as the environmental costs are more than just water footprints, and there are additional regional concerns and constraints, such as whether a region is already environmentally disadvantaged, workload scheduling, and latency requirements, that we must take into account.

Syed: Is there a tradeoff between environmental equity and performance? Put differently, how do you think about latency when it comes to switching between geographically distant data centers?

Ren: There can be a tolerable tradeoff between latency and environmental equity for AI inference (basically, when a model is actively being used to make predictions), but geographical load balancing is a fairly mature technology that AI systems can adopt with minimal impact on latency. Even routing traffic between Mexico and Hong Kong wouldn’t noticeably affect the experience for average users.

For AI training, we don’t expect any significant performance impact, as training is even more flexible and typically doesn’t face deadlines as strict as inference does. Also, we don’t have to move a single training job back and forth between multiple data centers; we just need to balance the AI system’s overall long-term regional environmental impacts.

Syed: You describe some of the practical challenges in optimizing for equity. What are they, and how might we work around them?

Ren: When we dynamically schedule AI workloads in real time, we can’t possibly know all the future information, such as workload demands and water and carbon efficiency, and we also need to maintain a certain level of AI model performance and quality. To address this, we can leverage machine learning predictions to estimate future water and carbon efficiency and workload demands, but the estimates will probably be noisy. We have a separate line of work that uses noisy machine learning predictions to improve decision quality.
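One crude way to picture the prediction piece (again, my illustration; the group’s learning-augmented algorithms are far more sophisticated) is to hedge a noisy forecast of next-hour carbon or water intensity against a pessimistic baseline before handing it to the scheduler:

    def hedged_intensity(forecast, recent_observations, trust=0.7):
        """Blend an ML forecast of next-hour environmental intensity with a
        pessimistic baseline drawn from recent observations. `trust` sets
        how much weight the (possibly noisy) forecast gets; lowering it
        bounds the damage a bad prediction can do to the decision.
        """
        baseline = max(recent_observations)
        return trust * forecast + (1.0 - trust) * baseline

The hedged estimates, rather than the raw forecasts, would then feed the equity-aware allocation step.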

Syed: How should cloud service providers incorporate your research? What about end users and app developers?

Ren: Our research is among the first to address AI’s environmental inequity. A potential way to incorporate it into existing AI workload schedulers is to add an equity cost to the equation, or to assign a total environmental footprint target to each region, as part of how a company optimizes its AI workload management. App developers can make their AI models more computationally efficient to reduce AI’s burden on the environment. And AI’s environmental cost is real but often hidden from end users, who may want to avoid wasteful usage of AI.
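As a thought experiment, a per-region footprint target could be as simple as this hypothetical gatekeeper (the names and structure are mine, not from the paper): once a region exhausts its footprint budget for the period, it stops receiving jobs until the budget resets.

    def pick_center(centers, used, budget, intensity):
        """Route the next job to the lowest-intensity data center whose
        running regional footprint is still under its period budget.
        `used`, `budget`, and `intensity` map each center to a number; a
        real scheduler would also weigh latency, capacity, and queue depth.
        """
        eligible = [c for c in centers if used[c] + intensity[c] <= budget[c]]
        if not eligible:        # every budget exhausted this period
            eligible = centers  # degrade gracefully rather than drop jobs
        choice = min(eligible, key=lambda c: intensity[c])
        used[choice] += intensity[choice]
        return choice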

It’s not easy to achieve environmental equity for AI, just as it’s not easy being green. We hope that our work can make the research community and the general public aware of AI’s emerging environmental inequity. When we build sustainable AI, let’s not forget about environmental equity.


Earlier this week, The Markup co-published a piece with Grist on how AI is hurting the climate in non-obvious ways, including through the carbon emissions Shaolei talked about. The piece also makes clear that it’s not all bad: AI is already helping climate scientists with their research. At the end of the day, AI’s climate impact (like many other things) depends on who is using it and how.

Thanks for reading.

Nabiha Syed
Chief Executive Officer
The Markup
