Sustainable AI: Environmental Impact and Governance

May 28, 2025

Executive Summary White Paper

Overview: The sustainability of artificial intelligence has become a dual imperative, encompassing both the environmental footprint of AI systems and the policy frameworks needed to govern them responsibly. This executive summary distills key insights across two pillars – (1) Environmental Impacts of AI (energy use, carbon emissions, hardware lifecycle) and (2) Policy & Governance Frameworks (regulations, mandates, procurement standards) – with a global perspective (EU, US, and Global South comparisons). It provides a visual-forward synthesis, highlighting critical data points and scenarios (a potential “Green AI” future vs. a “Grey AI” status quo) to inform decision-makers in government and industry.

AI’s Growing Environmental Footprint: Current research reveals that AI’s carbon and resource footprint is exploding. By 2030, AI workloads could account for ~3–3.6% of global greenhouse gas emissions – on par with the entire aviation industry. Similarly, AI’s electricity consumption is projected to reach about 3.5% of global power demand by 2030 – roughly double the annual electricity consumption of a country such as France. Training a single large AI model can emit hundreds of thousands of kilograms of CO₂; one study equated training a big language model to ~300,000 kg CO₂ – roughly 125 round-trip flights between New York and Beijing. Each use of these models incurs further costs: an average generative AI query consumes 10× more energy than a standard Google search, and producing a 100-word response with GPT-4 can use ~0.5 liters of water for cooling. The lifecycle impact extends beyond energy and carbon into water usage and e-waste. Data centers supporting AI may guzzle an estimated 4.2–6.6 billion cubic meters of water annually by 2027 – exceeding 50% of the UK’s total water use. Meanwhile, rapid hardware upgrades contribute to mounting electronic waste (global e-waste is projected to reach 74.7 Mt by 2030, nearly double 2014 levels, with only ~17% recycled). These trends paint a stark “Grey AI” scenario: unchecked AI growth would intensify climate change and resource strain.

Toward “Green AI” Futures: Encouragingly, a sustainable “Green AI” pathway is feasible. Advances in hardware and efficiency are already yielding gains – for instance, Google’s latest AI accelerators achieved a 3× improvement in compute carbon intensity (CO₂ per unit of compute) compared to previous generations. Clean energy adoption can mitigate operational emissions: over a 6-year lifecycle, roughly 70–90% of AI hardware’s carbon footprint comes from electricity consumption (vs. manufacturing), so powering AI with renewables can dramatically cut emissions. If paired with smart policies and design choices, AI could even become an enabler of sustainability (e.g. AI-optimized energy grids and efficiency gains in other sectors have the potential to offset 5–10% of global emissions by 2030). The Executive Summary White Paper will feature charts and infographics contrasting these scenarios – illustrating, for example, AI’s projected energy/carbon trajectory in a “Grey AI” baseline (high emissions) vs. a “Green AI” scenario where efficiency measures and renewable energy flatten the curve. It will highlight the urgent actions needed to bend the curve: from greener data center designs and algorithmic efficiency to robust governance that aligns AI development with climate goals.

Sustainability Scorecard / Framework (Draft)

Purpose: To provide stakeholders with a clear scorecard for evaluating AI systems on sustainability criteria. This draft framework compares AI models, data center setups, or AI-powered products across standardized metrics in three dimensions: Energy, Ethics, and Scalability. The goal is a practical tool for policymakers and builders to rate and compare the sustainability of AI systems at a glance.

Draft Layout: The scorecard could be presented as a table or dashboard comparing several AI systems (e.g. GPT-4, Small Open-Source Model, Efficient Vision AI). Columns would list Energy Use (kWh/query, CO₂e), Hardware & Data Center (efficiency, PUE), Lifecycle Emissions (manufacturing + operation), Ethical Compliance (Yes/No or score), Scalability (e.g. supports load without linear cost increase). Each system gets a color-coded rating per column, facilitating quick visual comparison. This framework will allow institutions to set sustainability benchmarks (e.g. minimum efficiency standards for AI procurement) and enable engineers to identify improvement areas (e.g. if their system is “Red” on carbon, they might prioritize optimization or cloud migration to green data centers).
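As an illustrative sketch of how the scorecard’s color-coding could work in practice (every metric name, threshold, and example system below is a hypothetical placeholder, not measured data), the rating logic might be prototyped as:

```python
# Illustrative prototype of the draft sustainability scorecard's color coding.
# All metrics, thresholds, and systems are hypothetical placeholders.

# Thresholds per metric: (green_max, amber_max); anything above amber_max is "red".
# Lower is better for every metric in this toy example.
THRESHOLDS = {
    "energy_kwh_per_1k_queries": (1.0, 5.0),
    "lifecycle_kgco2e_per_year": (10_000, 100_000),
    "datacenter_pue": (1.2, 1.5),
}

def rate(metric: str, value: float) -> str:
    """Map a raw metric value to a green/amber/red rating."""
    green_max, amber_max = THRESHOLDS[metric]
    if value <= green_max:
        return "green"
    if value <= amber_max:
        return "amber"
    return "red"

def scorecard(system: dict) -> dict:
    """Build a color-coded scorecard row for one AI system."""
    return {metric: rate(metric, value) for metric, value in system["metrics"].items()}

if __name__ == "__main__":
    small_model = {
        "name": "Small Open-Source Model",
        "metrics": {
            "energy_kwh_per_1k_queries": 0.4,
            "lifecycle_kgco2e_per_year": 8_000,
            "datacenter_pue": 1.15,
        },
    }
    print(scorecard(small_model))
    # all three metrics fall in the "green" band for this hypothetical system
```

Keeping thresholds in a single table, as here, is what would let institutions publish a benchmark (e.g. a minimum procurement standard) simply by fixing the numbers, while engineers read a “red” cell as a pointer to the metric needing optimization.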

Case Studies & Ecosystem Map (Draft)

Profiles of Pioneering Efforts: This section will highlight real-world case studies of organizations and initiatives leading the way toward sustainable AI, as well as a draft ecosystem map visualizing key players and their relationships. By showcasing concrete examples, it grounds the discussion in practice and illustrates the growing network of “sustainable AI” champions across tech, policy, and academia.

Ecosystem Map (Visual): We envision a visual map that charts the landscape of sustainable AI players and efforts globally, with key players clustered by category.

Lines or arrows can indicate collaborations (for example, a line connecting Microsoft and ADNOC/Masdar for their joint report on AI in energy, or connecting G42 (UAE) with Kenyan regulators for the geothermal data center). The ecosystem map provides a snapshot of the multi-faceted movement toward sustainable AI, showing who is doing what and how they interrelate. It helps policymakers see potential partners and model programs, and helps industry players identify coalitions and guidelines to join.

Open Database / Dashboard (Conceptual Mock-up)

Vision: We propose an open-access Sustainable AI Dashboard – a conceptual tool that aggregates data on AI systems’ environmental and ethical performance, allowing users to filter and compare by various criteria. This deliverable sketches the layout and features of such a dashboard, which could serve researchers, regulators, and engineers as a one-stop transparency platform.
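The filter-and-compare behavior the dashboard would need can be sketched in a few lines. This is a conceptual mock-up only: the record fields, systems, and values below are invented placeholders, not real measurements.

```python
# Conceptual sketch of the proposed Sustainable AI Dashboard's core
# filter-and-compare logic. All records below are invented placeholders.

RECORDS = [
    {"system": "Large LLM", "region": "US", "kwh_per_query": 0.0029, "renewable_pct": 60},
    {"system": "Small LLM", "region": "EU", "kwh_per_query": 0.0003, "renewable_pct": 90},
    {"system": "Vision Model", "region": "EU", "kwh_per_query": 0.0011, "renewable_pct": 75},
]

def filter_records(records, **criteria):
    """Keep records matching every criterion.
    A criterion is either an exact value or a (min, max) range tuple."""
    def matches(rec):
        for field, want in criteria.items():
            value = rec[field]
            if isinstance(want, tuple):
                lo, hi = want
                if not (lo <= value <= hi):
                    return False
            elif value != want:
                return False
        return True
    return [r for r in records if matches(r)]

def rank_by(records, field, descending=False):
    """Sort records for side-by-side comparison on one metric."""
    return sorted(records, key=lambda r: r[field], reverse=descending)

if __name__ == "__main__":
    # Example: a regulator inspects EU-hosted systems, most efficient first.
    for rec in rank_by(filter_records(RECORDS, region="EU"), "kwh_per_query"):
        print(rec["system"], rec["kwh_per_query"])
```

In a real deployment the in-memory list would be replaced by a queryable open dataset, but the user-facing operations – filter by jurisdiction or threshold, then rank on a sustainability metric – would stay the same.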

Critical Insights Memo – Counterintuitive Findings

Briefing Note: In the course of this research, several counterintuitive or overlooked insights emerged. This memo highlights a few key findings that defy conventional wisdom or often escape notice, to inform strategic discussions among policymakers and AI developers:

  1. Operational Emissions vs. Hardware Emissions – the Dominant Factor: It’s often assumed that manufacturing AI hardware (chips, servers) is the primary source of AI’s carbon footprint. In reality, operational energy use dominates. Over a typical AI hardware lifespan, 70–90% of total emissions come from electricity consumed during model training and inference, far outpacing the embodied emissions of manufacturing. Insight: Prioritizing clean energy and efficiency in operation will yield greater carbon reductions than focusing only on greener manufacturing. (This doesn’t mean hardware production is trivial, but it means running AI on a coal-powered grid is far worse than the one-time emissions to make the hardware.)

  2. The Hidden Water Footprint: Discussions on AI’s sustainability focus heavily on electricity and carbon, but water usage is a critical and often overlooked piece of the puzzle. Data centers require vast amounts of water for cooling and also draw on it indirectly via electricity generation (for thermoelectric plants). Projections show AI’s water consumption reaching billions of cubic meters annually in just a few years. To put this in perspective, AI’s water use by 2027 could exceed half of the UK’s total annual water usage. Insight: Water scarcity may become a limiting factor for AI growth in certain regions. Strategies like advanced cooling (air cooling, liquid cooling with recirculation), siting data centers near abundant water or using non-potable sources, and improving energy efficiency (thereby drawing less water for power plants) will be increasingly important.

  3. E-Waste and Hardware Lifecycles – A Growing Challenge: AI’s rapid progress drives short upgrade cycles for hardware (GPUs, TPUs, etc.), which can lead to a significant electronic waste problem. Globally, e-waste is on track to nearly double from 2014 to 2030, yet recycling systems are not keeping up (over 80% of e-waste is not formally recycled). The pursuit of ever-more-powerful AI chips could exacerbate this, as old hardware gets decommissioned. Insight: Extending hardware lifetimes and improving recyclability is an often under-prioritized aspect of sustainable AI. Circular economy principles (refurbishment, component reuse, material recycling) need to be integrated into AI hardware procurement and decommissioning. Policymakers might consider incentives or requirements for tech companies to handle e-waste responsibly – for instance, take-back programs or minimum recycled content in new devices.

  4. Transparency vs. Market Disincentives – The EU Dilemma: The EU’s aggressive stance on AI governance includes demanding energy transparency from AI providers (e.g. requiring disclosure of resource use for foundation models). While this is intended to spur accountability, it has had a paradoxical early effect: some companies have expressed reluctance to deploy certain AI services in Europe, citing strict and unpredictable requirements. In fact, concerns arose that mandatory reporting of energy usage (Annex IX of the AI Act draft) might expose proprietary data or simply deter companies unwilling to share such details. Insight: Policymakers must balance the need for transparency with creating a level playing field globally. If only one jurisdiction requires detailed carbon reporting, companies might geo-fence advanced AI away from that market, potentially reducing local innovation. International coordination, phasing requirements in gradually, or protecting sensitive information (while still holding companies accountable) could help mitigate this risk. The bigger picture: global standards for AI sustainability disclosures would prevent regions with stricter rules from being at a competitive disadvantage.

  5. Not All AI is Equal – Task Variance in Energy Use: It’s easy to speak of AI’s footprint in general terms, but a granular look reveals huge variability depending on the type of AI task. Counterintuitively, some seemingly simple AI tasks can have outsized impacts. For instance, generating images with AI (e.g. using diffusion models for art) can be far more carbon-intensive per output than generating text. A recent analysis found the most carbon-intensive image model emits the equivalent of driving a gasoline car ~4 miles for every 1,000 images generated, whereas an efficient text model emits as little as the equivalent of a few millimeters of driving per 1,000 sentences. Likewise, an LLM answering a single query might use 10× the energy of a search, but if that LLM replaces a very long process or heavy human effort, the comparison changes. Insight: Efficiency opportunities lie in tailoring AI solutions to the task. High-volume simple queries might be best served by smaller, specialized models that are much more efficient, whereas large general models should be reserved for tasks that truly require them. This also suggests that AI architects should consider “right-sizing” models to the problem – a form of sustainable design that avoids using a sledgehammer (giant model) for a nail (simple task).

  6. Global Inequities – Resource Extraction vs. Benefits: There is a sustainability paradox in the global AI ecosystem: many resource impacts of AI are externalized to the Global South, even as AI benefits accrue mostly to wealthier nations. For example, minerals for AI hardware (like cobalt, lithium) are largely mined in African and Latin American countries; data centers are increasingly built where power is cheap (sometimes emerging economies), and e-waste often ships to developing countries for disposal. These countries thus face environmental degradation risks while often benefiting minimally from AI-driven economic growth. Insight: Sustainable AI must incorporate environmental justice. International policy should support technology transfer and capacity building so that Global South countries can participate in AI value creation, not just bear its costs. Notably, some countries are responding: Brazil’s draft AI law makes sustainable development a guiding principle of AI governance, and the UAE is advocating for global AI sustainability standards, partly to address cross-border impacts. Recognizing these asymmetries is the first step to addressing them; e.g., by including Global South voices in setting AI standards and by investing in mitigation (like funding e-waste recycling facilities in the regions affected).
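Insight #1 above can be made concrete with a back-of-envelope calculation. Every input below is an assumed placeholder chosen for illustration (only the 6-year lifecycle figure comes from the report itself), but it shows why operational electricity dominates embodied manufacturing emissions on a fossil-heavy grid:

```python
# Back-of-envelope check of insight #1: over a multi-year lifespan, operational
# electricity usually dominates an AI server's carbon footprint.
# All numbers are assumed placeholders for illustration, except the 6-year
# lifecycle, which the report cites.

EMBODIED_KGCO2E = 1_500        # assumed manufacturing ("embodied") emissions per server
AVG_POWER_KW = 1.2             # assumed average draw of a loaded AI server
LIFESPAN_HOURS = 6 * 365 * 24  # 6-year lifecycle
GRID_KGCO2E_PER_KWH = 0.35     # assumed grid carbon intensity

operational = AVG_POWER_KW * LIFESPAN_HOURS * GRID_KGCO2E_PER_KWH
total = operational + EMBODIED_KGCO2E
share = operational / total

print(f"operational: {operational:,.0f} kgCO2e ({share:.0%} of lifecycle total)")
# With these assumptions, operation accounts for over 90% of the total.
# A cleaner grid (lower kgCO2e/kWh) shrinks that share - which is exactly why
# powering AI with renewables cuts lifecycle emissions so sharply.
```

Varying `GRID_KGCO2E_PER_KWH` in this sketch reproduces the report’s point from both directions: a coal-heavy grid pushes the operational share toward the top of the 70–90% range and beyond, while a near-zero-carbon grid makes embodied manufacturing emissions the remaining priority.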

These critical insights remind us that a holistic view is required – one that spans energy, water, materials, geopolitics, and beyond. They will inform the detailed findings and recommendations in the full report.

Policy & Design Recommendations

Finally, we present tailored, actionable recommendations for different stakeholder groups, bridging high-level policy measures with on-the-ground design practices. Building sustainable AI is a shared challenge requiring coordination between policymakers, institutional leaders, AI engineers, designers, and startup founders. Below, we break down guidance for these groups:

For Policymakers & Institutional Stakeholders (e.g. EU Commission, UN, Government Agencies):

For AI Industry Builders (Engineers, Designers, Startup Founders):

In Conclusion:

The pursuit of sustainable AI is not a one-time effort but a continuous journey of innovation, policy evolution, and cross-sector cooperation. By implementing the above recommendations, policymakers can create an enabling environment that aligns AI advancement with global climate goals, while engineers and entrepreneurs can drive technical solutions that make AI not only smarter but also cleaner. The balance of “Green AI vs. Grey AI” futures will be determined by actions taken now: whether we succeed in fostering AI that augments human well-being and respects planetary boundaries, or whether AI’s unchecked growth exacerbates sustainability challenges. This comprehensive research and its modular deliverables aim to equip all stakeholders with the knowledge and tools to steer us toward the former – a future where AI is an ally in achieving environmental sustainability, not an adversary.
