A lot has been said about the EU AI Act. AI companies operating in the EU need to understand the implications of this legal framework. For us, the most important question is: does the AI Act actually help make the AI industry green? We dissected it line by line, and this is what we know about the environmental considerations of the AI Act.

What is the AI Act
The EU AI Act is the world’s first comprehensive, risk-based legal framework for artificial intelligence. It was approved in 2024 and will enter full application in 2026. The regulation bans unacceptable uses of AI such as social scoring, heavily regulates high-risk AI systems, and introduces transparency requirements for general-purpose AI models. Its primary goal is to ensure safety and the protection of fundamental rights.
The official EU AI Act website provides useful information and a compliance checker that helps organisations understand how the regulation affects them. The full legal text is available here. The AI Act will apply from 2 August 2026, so we must be prepared.
However, an important question remains: Will the AI Act make the AI industry environmentally sustainable? (Spoiler alert: the answer might disappoint you).
What does the AI Act say about sustainability
The EU AI Act recognises the environmental impact of artificial intelligence, although sustainability is not one of its central regulatory pillars. Instead, sustainability appears mainly through requirements related to documentation, reporting, and transparency.
The regulation introduces reporting and documentation obligations intended to improve AI systems’ resource performance. These include reducing the consumption of energy and other resources over the lifecycle of high-risk AI systems, as well as encouraging energy-efficient development of general-purpose AI models. Environmental considerations also appear in Annex XIII.
When determining whether a general-purpose AI model has systemic capabilities or impact, the European Commission considers several criteria, including:
- The number of parameters of the model
- The quality or size of the dataset, for example measured through tokens
- The amount of computation used for training the model, measured through floating point operations or estimated through training cost, training time, or energy consumption
These criteria implicitly acknowledge that the scale of model development is closely linked to environmental impact.
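The compute criterion is the most concrete of the three: the Act presumes a general-purpose model has systemic risk when its cumulative training compute exceeds 10^25 floating point operations (Article 51). As a rough sketch, the widely used back-of-envelope approximation of roughly 6 FLOPs per parameter per training token can be checked against that threshold; the model figures below are illustrative assumptions, not real model data.

```python
# The AI Act presumes systemic risk for general-purpose models whose
# cumulative training compute exceeds 10^25 FLOPs (Article 51).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def estimated_training_flops(num_parameters: float, training_tokens: float) -> float:
    """Back-of-envelope estimate: ~6 FLOPs per parameter per training token."""
    return 6 * num_parameters * training_tokens


def presumed_systemic_risk(num_parameters: float, training_tokens: float) -> bool:
    """True when the estimated training compute crosses the Act's threshold."""
    return estimated_training_flops(num_parameters, training_tokens) >= SYSTEMIC_RISK_FLOP_THRESHOLD


# Illustrative figures only: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))
```

Note that the threshold is expressed in raw FLOPs, so training time, hardware efficiency, and energy use are only indirect proxies for it, which is exactly why the Act lists them as alternative estimation routes.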
What does this imply for AI companies
There are diverging opinions on whether the AI Act benefits the European AI ecosystem. There is no doubt that the regulation comes from a positive intention to ensure AI is safe and socially responsible. However, concerns have been raised about whether increased compliance requirements may reduce Europe’s competitiveness and limit the development of innovative AI solutions. Some critics fear this could lead to European organisations relying more heavily on AI services developed outside the EU.
Much has already been written about these broader economic implications. In this article, we focus on a specific aspect: the implications of the regulation for energy reporting and sustainability in AI development.
The sustainability-related obligations primarily affect high-risk AI systems and general-purpose models.
The AI Act provides for voluntary codes of practice that aim to promote sustainability. According to Article 95, these codes should support the voluntary application of requirements across AI systems and define objectives and performance indicators to assess and minimise environmental impact. These include energy-efficient programming and techniques for the efficient design, training, and use of AI systems.
In practice, the codes of practice published in July 2025 were less comprehensive than many practitioners expected. On documentation, however, the regulation is clear: “technical documentation must be securely stored for at least ten years and made available to regulators and downstream users upon request. Public disclosure of such information is encouraged in order to promote transparency.”
An interactive guide to the codes of practice is available at https://code-of-practice.ai/. However, the codes of practice do not provide concrete guidance on how organisations should measure or report energy efficiency or environmental sustainability indicators.
For anyone with even a little experience reporting sustainability metrics, it is well known how difficult it is to produce reliable and comparable measurements without standardised methodologies. It is also unclear how energy-related information stored in non-public documentation will effectively drive the adoption of energy-efficient AI technologies.
A first step, but not enough
Any initiative that introduces sustainability into AI governance is valuable. The AI Act represents an important first step by acknowledging that environmental impact is part of responsible AI development. However, in its current form, the regulation risks creating additional compliance overhead for AI development teams without necessarily producing measurable sustainability improvements.
Private reporting requirements alone are not enough to encourage resource optimisation. We believe the EU has an opportunity to step up its game by focusing on two key areas: 1) technical guidance for reporting and 2) development processes that enable energy efficiency.
Provide meaningful technical guidance
Organisations need practical and reliable methodologies that enable them to measure and report AI energy consumption consistently. Clear indicators, measurement protocols, and verification mechanisms are essential for sustainability reporting to have real impact.
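As an illustration of the kind of indicator such guidance could standardise, operational energy and emissions can be estimated from hardware power draw, runtime, datacentre overhead (PUE), and grid carbon intensity. The sketch below is a minimal example of this approach; every figure in it (GPU count, power draw, PUE, grid intensity) is an assumption chosen for illustration, not a recommended default.

```python
def training_energy_kwh(gpu_count: int, avg_power_watts: float,
                        hours: float, pue: float) -> float:
    """Facility-level energy: IT load (GPUs) scaled by datacentre PUE."""
    return gpu_count * avg_power_watts * hours * pue / 1000.0


def training_emissions_kg(energy_kwh: float, grid_kg_co2e_per_kwh: float) -> float:
    """Operational emissions using a location-based grid carbon intensity."""
    return energy_kwh * grid_kg_co2e_per_kwh


# Illustrative assumptions: 64 GPUs averaging 400 W for 240 hours,
# a PUE of 1.2, and a grid intensity of 0.25 kg CO2e/kWh.
energy = training_energy_kwh(64, 400, 240, 1.2)
emissions = training_emissions_kg(energy, 0.25)
print(f"Energy: {energy:.0f} kWh, emissions: {emissions:.0f} kg CO2e")
```

Even this toy calculation shows why standardisation matters: two organisations reporting the same training run can differ substantially depending on whether they include PUE, which grid-intensity dataset they use, and whether they count average or peak power.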
Promote lightweight development processes that enable energy efficiency
AI practitioners need accessible development processes that encourage resource-efficient AI design. Sustainable development should not require significant additional expertise or excessive costs. Existing frameworks such as the Green Software Practices™ already provide valuable inspiration and practical guidance.
Green AI should be an asset, not a burden
AI companies should be able to adopt sustainable development practices without sacrificing competitiveness or passing additional costs to customers. Many sustainability improvements can be implemented through low-overhead engineering practices that simultaneously reduce operational costs and environmental impact.
At present, energy efficiency requirements in the AI Act may be perceived primarily as compliance costs. While this increases demand for sustainability support and certification services, we strongly believe that Green AI should become a strategic advantage rather than a regulatory obligation.
Looking ahead
The AI Act includes a commitment to review the recommended approaches for reporting and analysing sustainability indicators. The first review is expected by 2028, followed by updates every four years. This review process offers an opportunity to strengthen sustainability requirements and introduce clearer technical standards.
We remain optimistic that future iterations of the regulation will provide stronger guidance and create measurable environmental impact across the AI ecosystem.
How GreenSeal can help
GreenSeal.dev supports software and AI companies in designing, measuring, and certifying sustainable digital products. Our goal is to make Green AI practical, measurable, and economically viable.
If you want to better understand the environmental footprint of your AI systems, implement sustainable development practices, or prepare for upcoming regulatory requirements, feel free to schedule a free call with us. We are always happy to discuss how to help organisations build sustainable AI products.