Expert Comment: What Do The EU’s New AI Laws Mean For Businesses?

Yesterday, European lawmakers passed the most detailed legislation to date on artificial intelligence (AI), establishing new regulations for AI developers and companies across Europe.

The law passed with 523 votes in favour, 46 against and 49 abstentions. The Act is expected to be rolled out in May, though the exact timing has not yet been confirmed.

The landmark legislation will introduce stricter rules around how AI can be used by businesses. These rules will apply not only to companies across the EU's 27 member states, but also to any company whose software is used in the EU – including companies based in the US.

But what does this mean for companies? We asked the experts.

 

Our Experts:

 

  • Keith Fenner, SVP and GM EMEA at Diligent
  • Curtis Wilson, Staff Data Scientist at the Synopsys Software Integrity Group
  • Marcus Evans, Partner and European Head of Data Privacy at Norton Rose Fulbright
  • Bruna de Castro e Silva, AI Governance Specialist at Saidot

 

For any questions, comments or features, please contact us directly.

 

Keith Fenner, SVP and GM EMEA at Diligent

 
“With the EU AI Act now endorsed by all 27 EU Member States and approved by the EU lawmakers today, the onus is now on British and Irish businesses to prepare for compliance.

“The potential for hefty fines – up to €35 million or 7% of global turnover for breaches – means that becoming and remaining compliant is increasingly important. To best prepare, GRC professionals should build and implement an AI governance strategy. This will involve mapping, classifying and categorising the AI systems they use or have in development, based on the risk levels in the framework.

“Next, business leaders and GRC professionals will need to perform gap assessments to evaluate whether current policies and regulations on privacy, security, and risk can be applied to AI. The aim is to establish a strong governance framework, encompassing both in-house and third-party AI solutions.

“But compliance is just the tip of the iceberg. To truly thrive in this new era, UK/Irish business leaders need to reimagine their approach to AI. This means finding the right balance between innovation and regulation.”
 

Curtis Wilson, Staff Data Scientist at the Synopsys Software Integrity Group

 
“The greatest problem facing AI developers is not regulation, but a lack of trust in AI. For an AI system to reach its full potential it needs to be trusted by the people who use it. Internally, we have worked hard to build this trust using rigorous testing regimes, continuous monitoring of live systems and thorough knowledge sharing sessions with end users to ensure they understand where, when and to what extent each system can be trusted.

“Externally though, I see regulatory frameworks, like the EU AI Act, as an essential component to building trust in AI. The strict rules and punishing fines will deter careless developers, and help customers be more confident in trusting and using AI systems.

“The Act itself is mostly concerned with regulating high-risk systems and foundational models. However, many of the requirements already align with data science best practices such as risk management, testing procedures and thorough documentation. Ensuring that all AI developers are adhering to these standards is to everyone’s benefit.”
 


 

Marcus Evans, Partner and European Head of Data Privacy at Norton Rose Fulbright

 
“It is now crucial that businesses create and maintain a robust AI governance programme to make the best use of any AI technology and ensure compliance with the new regime.

“Businesses can expect more detail in the coming months on the specific requirements, as the EU Commission establishes and staffs the AI Office, and begins to set standards and provide guidance on the Act.

“The first obligations in the AI Act will come into force this year and others over the next three years, so companies need to start preparing as soon as possible to ensure they do not fall foul of the new rules.”

 

Bruna de Castro e Silva, AI Governance Specialist at Saidot

 
“The EU AI Act continues its unstoppable march as Europe shows that it is ready to set a responsible pace of innovation for AI. This is the culmination of extensive research, consultations, and expert and legislative work, and we’re glad that the first major regulation around AI is founded on a solid risk-based approach, which is pragmatic, impact-based, and crafted following years of industry consultation.

“The Act will ensure that AI development prioritises the protection of fundamental rights, health, and safety while maximising the enormous potential of AI. This legislation is an opportunity to set a global standard for AI governance, addressing concerns while fostering innovation within a clear responsible framework.

“While some seek to present any AI regulation in a negative light, the final text of the EU AI Act is an example of responsible and innovative legislation that prioritises technology’s impact on people. When the EU AI Act comes into force, 20 days after its publication in the official journal, it will enhance Europe’s position as a leader in responsible AI development, establishing a model for the rest of the world to follow.”
 

For any questions, comments or features, please contact us directly.