
How will the EU AI Act impact AI innovation?

The EU AI Act is the world’s first comprehensive attempt to regulate AI. But its impact on AI development will vary massively from industry to industry. Here’s what you need to know...


With sweeping regulation set to reshape the way we think about AI, now’s the time for a cross-industry mindset shift around the processes behind responsible AI and technology.

For years, ‘move fast and break things’ has been the battle cry of big tech. And with that mindset proving to be the surest route to innovation, regulations are often seen as nothing more than bothersome speed bumps. 

But what if we move so fast that the thing getting broken can’t be fixed? And what if that thing is society itself?

That’s exactly the worry some experts have about AI, which is why it’s now incumbent on governments, industries, and organisations to work together to legislate against possible risks through the right AI regulation.

The EU’s proposed AI Act is aiming to put those guardrails in place, but what does it mean for the companies putting artificial intelligence to use? And – crucially – how can they standardise the processes behind more transparent AI?   

The AI Act: time to act 

The EU AI Act, which is currently under review, has been designed to identify and preemptively mitigate the risks that AI systems and generative models pose to people.

‘For a long time, AI wasn’t regulated at all. Now governments, industries, and diverse organisations must work together to legislate against possible risks’. 

For the most part, the risks in question centre on human rights violations – which is why the Act targets practices like AI-powered social scoring and behavioural manipulation. But is it too little too late?

‘This is essentially a reaction to the rapid development that we've been seeing in AI over the past year’, says Dr Lisa Falco, Lead Data & AI Consultant at Zühlke. ‘For a long time, AI hasn’t been regulated at all, so companies could just bring out any product without any type of control – and we’ve already started seeing some pretty bad consequences of putting these algorithms out in the wild.  

‘So the purpose here is to enforce the responsible application of AI; it’s a way of guaranteeing that whatever product you implement doesn’t have unintended consequences, and that you have the greatest possible understanding of its potential risks’.

The only problem? AI is moving fast, and – as always – legislation is on the back foot.  

‘Things are definitely behind in this regard’, Lisa says. ‘But it's tough because nobody actually knows the consequences of this technology. Not even the companies that are bringing out AI applications like ChatGPT, for instance, know what their long-term consequences will be. And you can’t legislate for something that doesn’t yet exist’. 

Taking stock and moving forwards 

The impact of upcoming AI regulation will vary massively from industry to industry. Lisa, for example, works with a range of Zühlke clients in the medtech industry, where all-encompassing rules and safety measures are the norm: 

‘If something’s considered a medical device, whether it uses AI or not, that device has to follow regulations. You need to clarify the intended use of the product, prove that it’s safe, and show that your intended use is being fulfilled. So that’s already a way in which we’re used to working’. 

Even medically focused consumer technology has to follow suit here. The Apple Watch, for example, has secured FDA clearance for some of its features because, as Lisa puts it, ‘they can have potentially life-threatening consequences if they’re wrong’.

For companies well versed in that world, the AI Act won’t prove to be game-changing. Others won’t be so lucky, though – and they need to start planning now if they want their use of AI to comply with the EU’s proposition. 

‘This has a big impact on quality assurance’, says Zühlke’s Michael Denzler, who works with clients in the consumer and corporate technology sector.  

‘With AI development, even though many companies were already working with “best practice” principles, they still need to grasp which AI solutions are impacted. If you have high-risk applications (as the EU defines them), then that’s tricky to solve’, he says. ‘So it’s prompting big discussions around security, and how you can adapt with quality assurance, capability gap analysis, and legal changes’.
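To make that classification step concrete, here’s a minimal sketch of the kind of internal triage helper a team might write – a simplification loosely based on the Act’s draft risk tiers. The domain lists and function name are illustrative assumptions, not official tooling or an exhaustive reading of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers, loosely based on the draft EU AI Act."""
    UNACCEPTABLE = "prohibited practice (e.g. social scoring)"
    HIGH = "high-risk: conformity assessment and QMS required"
    LIMITED = "limited risk: transparency obligations apply"
    MINIMAL = "minimal risk: no new obligations"

# Illustrative, non-exhaustive examples of Annex III-style high-risk domains.
HIGH_RISK_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_scoring",
    "employment_screening",
    "credit_scoring",
    "law_enforcement",
}

PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

def triage(use_case: str, interacts_with_humans: bool) -> RiskTier:
    """Rough first-pass triage of an AI use case; legal review still needed."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED  # e.g. chatbots must disclose they are AI
    return RiskTier.MINIMAL

print(triage("employment_screening", interacts_with_humans=True))  # RiskTier.HIGH
```

Even a rough helper like this forces the conversation Michael describes: which of our applications land in the high-risk bucket, and what obligations does that trigger?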

‘You need to change how you work in-house’, Lisa agrees. ‘You now need to have a quality management system if you develop with AI – you need to have the right processes, people, data controls, and data governance in place’.
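What that quality management system tracks will vary by company, but at a minimum it means keeping an auditable, per-model record of intended use, data provenance, and validation evidence. The sketch below shows one hypothetical shape for such a record; the field names are our assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    """One hypothetical shape for a per-model quality-management entry."""
    model_name: str
    version: str
    intended_use: str              # what the Act asks you to clarify up front
    risk_tier: str                 # e.g. the output of a triage step
    training_data_sources: list[str] = field(default_factory=list)
    validation_results: dict[str, float] = field(default_factory=dict)
    responsible_owner: str = ""    # a named person, not just a team
    last_reviewed: date | None = None

record = ModelGovernanceRecord(
    model_name="cv-screening-assistant",
    version="1.3.0",
    intended_use="Rank CVs for recruiter review; never auto-reject.",
    risk_tier="HIGH",
    training_data_sources=["internal_hr_2019_2023", "synthetic_augmentation_v2"],
    validation_results={"auc": 0.87, "demographic_parity_gap": 0.03},
    responsible_owner="jane.doe@example.com",
    last_reviewed=date(2024, 1, 15),
)
```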

Surprisingly, though, these discussions aren’t slowing things down. For companies that can adapt their processes quickly, the pace of new technology is still eclipsing any potential stall in innovation.

‘The AI Act is definitely drawing a lot of attention and resources’, Michael adds, ‘but it's slowing things down less than the generative AI push is actually accelerating them. So right now AI use is still growing faster than ever’. 

Ultimately, adapting to risk-management rules should be about more than just box-checking. It should, in theory, come from a place of standardised best practice.

Achieving that is down to every individual company, but what the AI Act can do is lay an effective foundation that stops bad actors from letting AI run before it can crawl… 

A bedrock of transparency  

As even the EU itself recognises, the AI Act is far from perfect; there’s actually a section on the Act’s website dedicated to its potential pitfalls and loopholes.

‘These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life’, the EU concedes, adding that the law’s inflexibility is a big Achilles heel: ‘If in two years’ time, a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as “high-risk”’.

The point, then, isn’t for the Act to be a definitive rulebook in perpetuity, but to push companies to implement a responsible foundation for future AI development.

‘Once you have that foundation’, Lisa says, ‘there’s no reason why your product development should slow down significantly. It will present some hurdles, but it also adds safeguards. Bringing anything to market has potential reputational or legal consequences, after all – the AI Act is really just an incentive to prevent that from happening in this new space’. 

‘The key to all this is transparency, which really means being able to validate and explain the data and models you’re using in an AI application – and that you can clearly interpret the results for the good of the end user’.  

‘You have to be transparent about the fact that you're interacting with an AI, but at Zühlke we'd encourage going further – that whenever you implement an AI solution, you work to validate, understand and relay the data going into, and coming out of, any generative model you’re using’. 
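As a rough illustration of that discipline, here’s a minimal sketch of a wrapper that discloses the AI interaction and records what goes into, and comes out of, a generative model for later validation. The call_model placeholder stands in for whatever model API you actually use; it, and the log format, are assumptions made for the sake of the example.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

DISCLOSURE = "You are interacting with an AI system."

def call_model(prompt: str) -> str:
    """Placeholder for your real generative model API call."""
    return f"(model response to: {prompt})"

def transparent_generate(prompt: str, user_id: str) -> str:
    """Call the model, but record what went in and what came out."""
    response = call_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,          # what went into the model
        "response": response,      # what came out, kept for later validation
    }))
    return f"{DISCLOSURE}\n\n{response}"

print(transparent_generate("Summarise my contract.", user_id="u-123"))
```

The design choice here is simply that disclosure and logging happen at a single choke point, so no call to the model can bypass them.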

That thinking is what’s led to the formation of Zühlke’s responsible AI framework, a multi-layered process we’re currently helping a range of clients adopt in order to future-proof themselves against risk and regulatory ire. It’s a framework spanning human, data, model, and sustainability vectors, effectively covering any and every AI application eventuality.

‘The key thing here is that you don't want to block positive innovation’, Lisa says of the Zühlke approach. ‘You just want to lay a foundation that bakes AI responsibility into your everyday planning and processes’. 

To learn more, check out our framework for responsible AI, or get in touch to explore how you can create new value with responsible and human-centred AI applications.


Dr. Lisa Falco

Lead Data & AI Consultant

Lisa Falco is passionate about the positive impact that AI and machine learning can bring to society. She has more than 15 years of industry experience in medical applications of data science and has helped bring several AI-driven medtech products to market. Lisa has a PhD in Biomedical Image Analysis from EPFL, Switzerland, and an MSc in Engineering Physics from Chalmers, Sweden.
