AI laws overlook environmental damage – here’s what needs to change
By integrating sustainability into AI laws, the planet can be better safeguarded alongside AI’s rapid expansion.
More than 200 laws have been developed to regulate AI in more than 100 countries. Many of them focus on issues such as privacy, bias, disinformation, security and cybersecurity rather than the environmental consequences of AI.
AI is an energy-intensive and thirsty industry, responsible for substantial greenhouse gas emissions, pollution and loss of nature. These impacts arise partly from the manufacture and use of energy-, carbon- and water-intensive computer chips, called graphics processing units (GPUs), to train AI models, as well as from growing volumes of e-waste.
My research into the regulatory responses to AI in the EU and the UK highlights how laws often ignore the environmental implications of this technology. The lack of stringent obligations in AI law and policy is concerning.
There are environmental consequences at every stage of the AI lifecycle, from the manufacture of AI hardware and the training of AI models, through deployment and use, to the disposal of AI hardware.
The manufacture of components relies on the extraction of rare earth elements. This can contaminate soil and water, pollute the air and lead to loss of nature and forest habitats. Training AI models is incredibly energy- and water-intensive. A team of researchers estimated in 2025 that training GPT-3 – a large language model released by OpenAI in 2020 – consumed around 700,000 litres of freshwater for electricity generation and cooling of data centres.
Even though AI models are becoming more energy efficient, as models become larger and AI proliferates, overall energy consumption and associated emissions are rising. And the energy consumed in the use of AI, including to generate text or images, vastly outweighs that used during training.
However, it’s difficult to accurately measure the environmental effects of AI, partly due to the lack of transparency of technology companies.
When the EU’s AI Act came into force on August 1 2024, it was the “world’s first comprehensive law” on AI. The AI Act acknowledges some of AI’s environmental consequences. It also requires that “AI systems are developed and used in a sustainable and environmentally friendly manner”.
It outlines that AI providers must disclose information on “known or estimated energy consumption data of the model”. But while promising, this information only needs to be provided when requested by the AI Office, which has been established within the European Commission.
Industrial cooling towers in data centres require vast amounts of water.
Further measures include preparing codes of conduct to assess and minimise “the impact of AI systems on environmental sustainability”. But this is not compulsory. Overall, the AI Act is intentionally anthropocentric. It states that: “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human wellbeing.”
The UK has no AI-specific legislation. AI is currently only regulated by existing laws. The UK government’s 2023 white paper on AI regulation, which proposes a regulatory framework for AI, doesn’t prioritise sustainability at all. Although the white paper acknowledges that AI can contribute to technologies to respond to climate change, it does not specifically address any environmental risks:
The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to … sustainability. These are important issues to consider … but they are outside of the scope of our proposals for a new overarching framework for AI regulation.
A transparent future?
More transparency starts with AI developers being required to disclose how much energy and water are consumed, how much carbon is emitted, which rare earth elements are extracted and how much plastic is used during the AI production process.
This data would provide a baseline from which appropriate targets and limits could be set for energy efficiency, carbon emissions and water use, improving the sustainability of AI.
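The arithmetic behind such a baseline is simple: emissions are energy consumed multiplied by the carbon intensity of the grid supplying it. A minimal sketch, using invented placeholder figures rather than any real disclosure:

```python
# Minimal sketch: turning a disclosed energy figure into a carbon baseline.
# The grid intensities and training energy below are illustrative
# placeholders, not real company disclosures.

def carbon_emissions_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Estimate CO2 emissions (kg) from energy use and grid carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: the same hypothetical training run on two different grids.
training_energy_kwh = 1_000_000  # illustrative disclosed figure
coal_heavy_grid = 0.8            # kg CO2 per kWh (illustrative)
low_carbon_grid = 0.05           # kg CO2 per kWh (illustrative)

print(carbon_emissions_kg(training_energy_kwh, coal_heavy_grid))  # 800000.0
print(carbon_emissions_kg(training_energy_kwh, low_carbon_grid))  # 50000.0
```

The same disclosed energy figure yields very different emissions depending on the grid, which is why disclosure alone, without location and grid-mix context, gives regulators an incomplete picture.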
Several proposals have been made for how reduced carbon emissions and water consumption could practically be achieved, such as training AI models on less carbon-intensive energy grids or in less water-intensive data centres.
Warnings about environmental effects could tell consumers how much carbon dioxide is emitted or water consumed for each query. In addition, an AI labelling system could mirror the EU’s existing energy efficiency labelling schemes, which clearly indicate the energy efficiency of appliances, ranking them from most energy-efficient (dark green) to least energy-efficient (red).
Proposals include an AI “energy star” rating system and a social and environmental certification system. This would help consumers to make informed choices about which AI systems to use or whether AI should be used at all. Tax incentives and funding incentives could also encourage tech firms to make more sustainable choices.
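The labelling idea above could work much like the appliance scheme: map a measured figure, such as energy per query, onto a small set of bands from best to worst. A minimal sketch, where the thresholds are invented for illustration since no official AI labelling bands exist:

```python
# Illustrative sketch of an AI "energy label", loosely mirroring the EU
# appliance scheme's letter bands. The Wh-per-query thresholds are
# invented for illustration; no official AI labelling bands exist.

def energy_label(wh_per_query: float) -> str:
    """Map energy per query to a letter band, best (A) to worst (G)."""
    bands = [(0.1, "A"), (0.5, "B"), (1.0, "C"), (2.0, "D"),
             (5.0, "E"), (10.0, "F")]
    for threshold, letter in bands:
        if wh_per_query <= threshold:
            return letter
    return "G"

print(energy_label(0.3))   # B
print(energy_label(12.0))  # G
```

A real scheme would need standardised measurement rules behind the number, just as appliance labels rely on standardised test procedures, but the consumer-facing output could be this simple.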
By integrating sustainability into AI laws through these types of measures, the planet can be better safeguarded alongside AI’s rapid expansion.
The Conversation
https://theconversation.com/ai-laws-overlook-environmental-damage-heres-what-needs-to-change-279047