Google Signs EU AI Code Of Practice, Here’s What That Means

The EU AI Code of Practice is a voluntary agreement created to guide how general-purpose AI is built and released in Europe. It sits alongside the Artificial Intelligence Act, which entered into force in August 2024.

The Code applies to general-purpose AI models, the kind that can handle a wide range of tasks, such as ChatGPT or image generators. It covers how companies handle copyright, label AI-generated content, and design systems to avoid illegal outputs. Companies that sign are expected to be following the AI Act closely and may face fewer inspections; those that decline can expect closer scrutiny.

The rules take effect on 2 August 2025. AI tools already in use will have two years to meet the requirements. Anything launched after that must follow the rules straight away. The Code is part of a bigger plan to make AI safer and more responsible, while keeping businesses active in the European market.

Kit Cox, CTO and Founder of Enate, said: “The EU AI Act is Europe’s response to protecting citizens from the worst excesses of AI. But many organisations still have a clear opportunity to use AI within the rules to do useful work, particularly where the real risk to humans is boredom from manual work or burnout from overwork, not harm caused by rogue AI.

“The Act reminds us that we’re at an awkward stage where the technology is promising, but early adopters may face unknown risks. It’s worth remembering that laws and compliance protocols are typically designed to catch the outliers, e.g. edge cases where AI might behave unpredictably, cause harm, or be used irresponsibly. Most organisations aren’t operating in those extreme scenarios and can safely make progress without pushing those boundaries.

“The Act is mostly about making sure AI doesn’t take over tasks where human judgment still matters. When people use AI as a tool, rather than handing over full control, that kind of collaboration is unlikely to raise legal concerns.”

Who Has Signed The Code?

Google has agreed to sign the Code. Kent Walker, president of global affairs at Alphabet, said the company will join other developers in supporting it. He said the Code has improved since early drafts and now fits better with Europe’s economic plans.

Google sees value in staying involved. The company expects AI to help boost the European economy by 8% a year by 2034, which would mean around €1.4 trillion annually. Google says the Code helps create some stability in how rules are applied across the region.

Even as it signs, Google has been vocal about its concerns. It says slow approval timelines and rules that push companies to expose trade secrets could hurt European AI work. It also points to parts of the Code that sit uneasily with current copyright law, which could cause friction with other legal systems.

Who Has Refused And Why?

Meta has chosen not to sign. The company, which owns Facebook and Instagram, believes the rules go too far and could block useful AI work. Meta has said it does not agree with how the rules were drawn up and wants more room to experiment.

Other groups have pushed back as well. Some rightsholders think the Code weakens copyright law by allowing AI models to learn from protected content without proper limits. These groups say the Code was rushed and did not fully consider how content owners would be affected.

The European Commission plans to release a full list of signatories on 1 August 2025. This will show which companies are willing to work within the EU framework and which are keeping their distance.

Is The Code Slowing AI Development?

Google believes it is. In its public statement, the company said that the current direction may slow down how AI gets built and shared in Europe. It is worried that developers may take their work to regions with fewer delays and less red tape.

Under the AI Act, tools classed as high risk must meet extra conditions. This includes systems used in medical devices, education, law enforcement and migration control. These tools must be added to an EU database and assessed both before and after they enter the market. They also need clear human oversight, and users must be able to file complaints with national authorities.

Generative AI tools like ChatGPT are not considered high risk, but they still have to follow strict rules. These tools must clearly disclose when content has been generated by AI. Developers must publish summaries of any copyrighted material used during training. They also need to build their systems in ways that prevent them from producing illegal outputs.

Some companies say these rules are costly and hard to manage. Even with regulatory sandboxes designed to help smaller businesses test their systems, the extra work and long waiting times may put developers off building tools in the EU.