The EU’s Artificial Intelligence Act comes into force today, 1 August 2024. The Act is a set of regulations that places AI systems into categories according to the level of risk they pose, and it is designed to manage how AI impacts different industries. It is the first law of its kind anywhere in the world: no other nation has yet gone to this extent.
The AI Act affects all companies using AI technology, whether they are developing new systems or maintaining existing ones. From the enforcement date, companies have set transition periods, ranging from six months for the strictest prohibitions to 36 months for certain high-risk systems, to align their operations with the new regulations, depending on the risk category of the AI they use in their day-to-day operations.
How The Act Will Impact Businesses And SMEs
This legislation could significantly change how businesses use AI, with a focus on staying compliant and ethical. According to the EU, about 85% of AI firms fall under the “minimal risk” category, so the changes they see will be modest. Those in higher risk categories, however, may have to prove the integrity of how they use and train their AI.
Although some argue that the new regulations may dampen innovation in the AI sector, EU officials maintain that the Act is meant to support AI development while protecting user interests and security more proactively than before. Companies are encouraged to set up internal governance structures to review and manage their AI applications, for example by keeping an inventory of their AI systems, as sketched below.
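To make the tiering concrete, here is a minimal sketch of how a company might record its AI systems against the Act’s four risk tiers in an internal inventory. The `AISystem` class and the obligation summaries are illustrative assumptions, paraphrased for this example rather than taken from the legal text.

```python
from dataclasses import dataclass

# The Act's four risk tiers, ordered from most to least restricted.
# Obligation summaries are paraphrased here, not legal text.
RISK_TIER_OBLIGATIONS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high": "conformity assessment, documentation and oversight required",
    "limited": "transparency obligations (e.g. disclose AI use)",
    "minimal": "no new obligations; voluntary codes of conduct",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # one of the keys in RISK_TIER_OBLIGATIONS

    def obligations(self) -> str:
        return RISK_TIER_OBLIGATIONS[self.risk_tier]

# Example: a customer-service chatbot typically falls under limited risk.
chatbot = AISystem("support-bot", "customer service chat", "limited")
print(f"{chatbot.name}: {chatbot.obligations()}")
```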
Experts’ Views On The New Act
We’ve gathered experts to share their views on how the Act will impact the region across industries, as a way to start the conversation. They offer differing takes on what the Act will mean for businesses, along with other insights:
Our Experts
Steve Lester, CTO, Paragon
Sebastian Gierlinger, VP of Engineering, Storyblok
Steve Bates, Chief Information Security Officer, Aurum Solutions
Hugh Scantlebury, CEO and Founder, Aqilla
Sridhar Iyengar, Managing Director, Zoho Europe
Charlie Bromley-Griffiths, Senior Legal Counsel, Conga
Julian Mulhare, Managing Director, EMEA, Searce
Greg Hanson, GVP of EMEA North, Informatica
Dr Ellison Anne Williams, Founder and CEO, Enveil
Caroline Carruthers, Chief Executive, Carruthers and Jackson
Steve Lester, CTO, Paragon
“The EU AI Act will reshape how businesses operate, especially for UK companies engaging with the EU market. Compliance with the Act’s requirements is not optional for businesses; it applies to any AI systems that affect EU citizens or markets.
“Businesses must prioritise transparency and ethical AI practices. For customer communications, this means clearly disclosing when AI is used and ensuring that targeting and personalisation strategies adhere to the Act’s guidelines. The prohibitions on practices like biometric categorisation require a re-evaluation of existing AI strategies to align with ethical standards.
“Companies should ensure they conduct thorough audits of their AI systems, invest in staff training on AI ethics, and establish robust governance frameworks. While these steps can be challenging, compliance will improve customer trust and provide a competitive edge in the EU market.
“It’s all about maintaining trust with EU customers, showing them that responsible AI development can go hand in hand with risk management, compliance, and ethical AI practices.”
Sebastian Gierlinger, VP of Engineering, Storyblok
“The EU AI Act has established a framework of best practices to govern the use of AI systems. As the AI Act comes into force on 1 August 2024, experts have called for an international agreement on the use of copyright-protected content to train AI models, covering measures such as watermarking AI-generated content that includes significant human input, intellectual property (IP) protection for AI-generated content, enforced liability for infringements when creating content with Gen AI, and a remuneration scheme for rightsholders.
“The emergence of generative AI has raised concerns over copyright law in the EU and US. The EU AI Act, although not intended specifically for copyright law, provides a legal framework to assign and categorise responsibilities to providers and users of AI systems. These include publicly disclosing the use of copyright-protected training data.
“The EU AI Act includes a transparency requirement for “publishing summaries of copyrighted data used in training.” Some get-outs allow for data mining of copyrighted works in instances such as use by research institutions. This is not considered a viable defence for AI companies with public and commercial generative AI systems. But while big tech puts pressure on governments to hold off on legislation, AI systems continue to train on copyrighted content.
“Many AI companies have assumed that they’re allowed to use whatever content they want from the web and have hit out against governing policy as detrimental to growth and innovation. Mustafa Suleyman, CEO of Microsoft AI, said as much in an interview with CNBC at the Aspen Ideas Festival. To back this up, last month the Chamber of Progress, a tech industry coalition whose members include Apple, Meta, and Amazon, launched a campaign to defend the fair use of copyrighted works to train AI systems.
“The AI Act will introduce limited exceptions for text and data mining and recognise the importance of balancing copyright protection with promoting research and innovation. It acknowledges the need for proportionality in compliance requirements for startups and SMEs.
“The AI Act requires transparency from providers, ensuring accountability and enforcement of copyrights. AI companies will be required to provide comprehensive information about the datasets used.
“With the implementation of the AI Act, companies must develop a comprehensive AI policy that serves as a framework for responsible and transparent AI deployment. It is important to have an AI policy that ensures that the technology is used ethically, legally, and effectively.”
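As an illustration of the transparency point Gierlinger raises, the sketch below shows one way a provider might keep a machine-readable summary of its training datasets for disclosure. The schema and field names are hypothetical assumptions; the Act requires a summary of copyrighted training data but does not prescribe a format like this.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record for one training dataset. The EU AI Act requires
# a summary of copyrighted training data but does not mandate this schema.
@dataclass
class DatasetRecord:
    name: str
    source: str
    licence: str
    contains_copyrighted_material: bool
    notes: str = ""

def publish_summary(records: list[DatasetRecord]) -> str:
    """Serialise the dataset inventory as a JSON summary for disclosure."""
    return json.dumps([asdict(r) for r in records], indent=2)

records = [
    DatasetRecord("news-corpus-2023", "licensed publisher feed",
                  "commercial licence", True,
                  "rightsholder remuneration agreed"),
    DatasetRecord("internal-support-tickets", "first-party data",
                  "proprietary", False),
]
print(publish_summary(records))
```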
Steve Bates, Chief Information Security Officer, Aurum Solutions
“The Act is a positive step towards improving safety around the use of AI, but legislation isn’t a standalone solution. Many of the Act’s provisions don’t come into effect until 2026, and with this technology evolving so rapidly, legislation risks becoming outdated by the time it actually applies to AI developers.
“Notably, the Act does not require AI model developers to provide attribution for the data sources used to build models, leaving many authors of original material unable to assert and monetise their rights over copyrighted material. Alongside legislative reform, businesses need to focus on educating staff on how to use AI safely, where it should and shouldn’t be deployed, and identifying targeted use-cases where it can boost productivity.
“AI isn’t a silver bullet for everything. Not every process needs to be overhauled by AI and in some cases, a simple automation process is the better option. All too often, firms are implementing AI solutions just because they want to jump on the bandwagon. Instead, they should think about what problems need to be solved, and how to do that in the most efficient way.”
Hugh Scantlebury, CEO and Founder, Aqilla
“Trying to regulate the technology right now is like trying to control the high seas or bring law and order to the Wild West. For an AI regulation to be effective, it would have to be global—and such an agreement seems unlikely any time soon. If just one region regulates AI and establishes a “safe framework,” developers will just go elsewhere to continue their work. And that’s before we consider those already based outside the EU. Would a global agreement stop state-sponsored or independent developers in countries like Russia, China, Iran, and North Korea?
“The birth of AI is second only to the foundation of the Internet in terms of its power to fundamentally alter our lives—and some people even compare it to the discovery of fire. But hyperbole aside, AI is still in its infancy, and we have only scratched the surface of what it could achieve. So, right now, no one is in a position to legislate—and even if they were, AI is developing at such a pace that the legislation wouldn’t keep up.”
Sridhar Iyengar, Managing Director, Zoho Europe
“The EU AI Act is a welcome roadmap for the future of AI, putting guardrails in place to promote its safe and trustworthy development. This is especially true given the fast pace of its evolution during the past 18 months. AI has increasingly become a central part of business operations, automating tasks such as data analysis, forecasting and customer services, and giving businesses a competitive edge, but that cannot come at the expense of trust and safety.
“Additional safety measures on AI models and systems, particularly those deemed to be high risk, are crucial to protect businesses and their customers. Implementing robust business policies for the use of AI, alongside the guidance of the EU AI Act, will enable greater agility for organisations to react to market trends and serve customers more effectively.
“Our Digital Health Study highlighted that 46% of UK respondents wanted increased regulation from the Government to protect businesses from the threat of AI, so we also hope to see the UK promoting safety measures around AI and considering its own forms of localised safeguards.”
Charlie Bromley-Griffiths, Senior Legal Counsel, Conga
“The legal countdown has started. The EU AI Act will come into force as of Thursday 1st August, and will be fully applicable within 24 months. From a compliance perspective, businesses need to move fast. Many organisations still need to educate and train their AI systems, and this is very much reliant on their own internal data architecture. Companies need to ensure all data is accurate and readily available, and that their current AI applications are compliant with the new regulation.
“The Act is characterised by a risk-based approach, which is also reflected in the structure of the transitional periods. It refers to any AI system within the EU that is ‘on the market’ and affects both AI providers (vendors) and deployers (the organisations using these systems). Some businesses initially concerned with the scope of the requirements labelled it as a ‘minefield’ that needs to be unravelled carefully.
“For example, the Act categorises different types of AI according to risk, split into four tiers: unacceptable, high, limited and minimal risk. Systems presenting only limited risk are subject to ‘very light transparency obligations’, whereas high risk systems are subject to a set of requirements and ‘obligations’ to gain access to the EU market. Others have said that, whilst we need to reduce risks and be more transparent, there’s a risk of stifling innovation through added complexity.
“Nevertheless, moving forwards, organisations will need to be scrupulous with their data management and ensure they have measures in place to comply with the new law and the evolving regulatory landscape. Given the pace at which this technology has evolved, it’s likely that more regulatory shifts are on the horizon, especially as political bodies continue to map out these laws and better understand the technology and its capabilities. As such, organisations should remain proactive, ensuring they meet current requirements while positioning themselves to adapt to future legislation.
“There is no doubt that enterprises have big projects ahead of them. All businesses will have to review their operations to ensure all current and future AI applications are compliant. This will be particularly challenging for those organisations that have rushed their AI programmes and adopted multiple solutions across their business over the last year. The penalties for non-compliance are severe: companies could face fines of up to 35 million euros or seven percent of their global annual revenue, whichever is higher, and different tiers apply depending on the violations as defined by the Act itself.”
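For a sense of scale, the worked example below applies the penalty ceiling described above: for the most serious violations, the cap is the higher of a fixed amount and a percentage of worldwide annual turnover. The figures reflect the top tier; the Act’s lower tiers use smaller caps, which this sketch does not model.

```python
# Penalty ceiling for the top violation tier: the HIGHER of a fixed
# cap and a percentage of worldwide annual turnover. Lower tiers in
# the Act use smaller caps (not modelled here).
FIXED_CAP_EUR = 35_000_000
TURNOVER_PCT = 0.07

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for the top violation tier."""
    return max(FIXED_CAP_EUR, TURNOVER_PCT * annual_turnover_eur)

# A firm with EUR 2bn turnover: 7% (EUR 140m) exceeds the fixed cap.
print(f"EUR {max_fine(2_000_000_000):,.0f}")   # EUR 140,000,000
# A firm with EUR 100m turnover: the EUR 35m fixed cap applies instead.
print(f"EUR {max_fine(100_000_000):,.0f}")     # EUR 35,000,000
```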
Julian Mulhare, Managing Director, EMEA, Searce
“With the EU AI Act starting this week, businesses need to understand their new obligations to remain compliant and avoid crippling fines. Compliance with copyright laws and transparency is crucial for both general-purpose AI systems, like chatbots, and generative AI models. Detailed technical documentation and clear summaries of training data, especially for GenAI models, will be necessary.
“To remain agile, companies need modular AI processes for easy updates – avoiding a complete overhaul. A dedicated team and budget for AI maintenance are essential here. As AI becomes increasingly integrated, it will impact all business areas. Investing in compliance infrastructure, enhancing documentation and transparency, and instilling robust cybersecurity measures will be imperative to mitigate financial risks and align with regulatory standards. Now, for the UK and Europe, this is the only way businesses can continue to leverage the benefits of AI while ensuring ethical standards are met.
“Lastly, given the pessimism around Europe’s AI regulatory measures, regulators must strive to continuously evolve and collaborate with tech experts to ensure safe, equitable and innovative AI deployment so that the EU doesn’t fall behind.”
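As a sketch of the “modular AI processes” Mulhare describes, the example below keeps each pipeline stage behind a small interface, so a model or compliance filter can be swapped for a compliant alternative without overhauling the whole pipeline. All class and function names here are hypothetical illustrations, not a real API.

```python
from typing import Protocol

# Each stage sits behind a small interface, so one component can be
# replaced without touching the rest of the pipeline.
class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class ComplianceFilter(Protocol):
    def check(self, text: str) -> bool: ...

class EchoModel:
    def generate(self, prompt: str) -> str:
        return f"[draft] {prompt}"

class LengthFilter:
    def check(self, text: str) -> bool:
        return len(text) < 1000  # stand-in for a real policy check

def run_pipeline(model: TextModel, fltr: ComplianceFilter, prompt: str) -> str:
    draft = model.generate(prompt)
    return draft if fltr.check(draft) else "blocked by policy"

# Swapping EchoModel for another TextModel needs no pipeline changes.
print(run_pipeline(EchoModel(), LengthFilter(), "summarise this contract"))
```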
Greg Hanson, GVP of EMEA North, Informatica
“The clock is now ticking for AI compliance. And while it will still take time for the codes of practice and guidelines that underpin the EU’s AI Act to be released, businesses need to start planning and preparing now.
“The phased approach that the EU has favoured gives most companies up to a year to ensure they have introduced mechanisms to integrate AI responsibly into their business operations. However, many are still encountering roadblocks with the adoption of AI: 43% of UK businesses that have adopted AI say AI governance is the main obstacle, closely followed by AI ethics (42%).
“It will take time to get the responsibility, guardrails, and controls around AI to the right place as its use evolves. But as a starting point, organisations should focus on protecting the integrity of AI systems by ensuring the foundations and controls for AI tools are robust. They need to have full transparency of the data used to train AI models. Organisations need to measure, correct and report on the quality of data fed into AI models to ensure decisions are being made on trusted, well governed data. And they need to understand the decisions AI models are making and why.”
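As one way to act on Hanson’s point about measuring and reporting on the quality of data fed into AI models, the sketch below profiles a dataset’s completeness and duplication before it reaches a model. The fields, metrics and thresholds are illustrative assumptions, not requirements from the Act.

```python
# Minimal sketch of the "measure and report" step: profile a dataset's
# completeness and duplication before it is fed into a model.
def data_quality_report(rows: list[dict], required: list[str]) -> dict:
    total = len(rows)
    complete = sum(all(r.get(f) not in (None, "") for f in required) for r in rows)
    unique = len({tuple(sorted(r.items())) for r in rows})
    return {
        "rows": total,
        "completeness": complete / total if total else 0.0,
        "duplicate_rate": 1 - unique / total if total else 0.0,
    }

rows = [
    {"customer": "A", "spend": 120},
    {"customer": "B", "spend": None},   # incomplete record
    {"customer": "A", "spend": 120},    # duplicate record
]
report = data_quality_report(rows, required=["customer", "spend"])
print(report)  # flag datasets that fall below agreed quality thresholds
```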
Dr Ellison Anne Williams, Founder and CEO, Enveil
“The EU’s AI Act is another significant milestone in promoting the responsible, safe and secure adoption and implementation of AI across organisations and industries. It also emphasises the importance of data privacy in shaping the future of AI innovation.
“The US and EU have set a precedent for other regions to follow by laying the foundations for organisations to address risks, uphold privacy and prioritise security when using AI. In time, I hope to see more global collaboration, new frameworks emerge and greater support for Privacy Enhancing Technologies that will further enable organisations to safely and securely leverage AI to enable transformative business benefits and innovation, while safeguarding data privacy and security at the same time.”
Caroline Carruthers, Chief Executive, Carruthers and Jackson
“AI is moving incredibly fast. To control how it’s being used and implemented, legislation has to keep up with innovation, but up until now, it has lagged behind.
“The EU AI Act does a great job of thinking through best conduct in every specific scenario, but I would have liked to see more high-level regulation shared more quickly, as I think the industry required some broader guardrails following the release of ChatGPT, arguably the watershed moment in the current global AI race.
“It’s a welcome development, but the industry can’t determine whether the EU AI Act has been a success just yet, as it hasn’t been rolled out in real-world applications. Inevitably, as businesses engage with the Act, they will push the boundaries of the regulation in ways lawmakers didn’t expect. It’s only after the Act has been integrated into businesses that regulators can determine whether they have successfully controlled the areas they set out to control, or whether the law is having unintended consequences, such as making some processes harder.
“For example, GDPR entered into force in 2018, but it’s only now that it has been in force for a few years that we can look back and evaluate its overall success. It’s likely with the EU AI Act that there will also be a period of iteration after it comes into force, where organisations begin to understand what it actually means in practice.
“The challenge ahead of us is that AI is an incredibly powerful tool, and we need legislation in place that helps establish the infrastructure around it so we can take full advantage of its capabilities. Going forward, legislation and innovation will go hand in hand, because as AI develops, regulation will have to evolve alongside it.”