
One step at a time: how to build the future of AI regulation

As AI tech advances rapidly, we need to consider how we regulate an ever-evolving environment. How can regulation support innovation while preventing unintended bad consequences? And will the European AI Act go far enough? 


With AI technology changing faster than many people can keep up with, we need to consider how we can regulate an ever-evolving environment… 

In May 2023, the Center for AI Safety issued a statement warning that artificial intelligence poses an innate ‘risk of extinction’ to humanity, adding that mitigating this risk should become an immediate global priority. The statement was signed by ‘Godfather of AI’ Dr. Geoffrey Hinton and Sam Altman, chief executive of OpenAI, among a host of others in the field.  

But are these claims exaggerated, or are we in serious trouble? And if it’s the latter, how do you regulate AI in a way that fosters innovation and commercial competition without dooming our species in the process? 

Most folks will already know that AI regulation is a thorny subject, but it’s not an impossible one to untangle; we just need to look at past precedents, today’s progress, and where we can improve both our readiness and our willingness to adapt… 

Why is AI regulation so critical? 

Statements like the one from the Center for AI Safety are certainly headline-makers, but what’s driving this debate? 

The general consensus is that as AI continues to proliferate, so does the need for tighter guardrails. But what’s interesting is that regulatory, moral, and societal needs are closely interconnected. 

'Standards are there to ensure the AI will not have unintended bad consequences further down the line'.

From the regulatory perspective, legislation is about managing AI implementation – making sure it’s developed according to quality standards. But those standards are ultimately there to ensure the AI will not have unintended bad consequences further down the line. So the issue of regulatory control goes hand-in-hand with moral and societal concerns.  

Some influential thinkers in this space are already saying ‘we need to look at this’ without being legally forced to – and the closest analogy here is in how companies deal with their ESG policies today. We all know we’re living in an environment we can’t continue to exploit, so many companies are working to do better for society simply because it’s the right thing to do. AI is no different; businesses are already beginning to regulate themselves because they want to be responsible citizens. 

AI regulation must be viewed through a human lens that accounts for the moral and societal dimension. This holistic view is a good foundation for creating guidelines around the use and application of emerging AI technology. But the proof is in the pudding, so it’s worth us taking a look at where things currently stand… 

Building the barriers: the European AI Act 

Currently, the best hope we have for AI regulation that will meaningfully land is the European Union’s proposed AI Act, which is being voted on at the time of writing. The Act looks specifically at risk.  

If, for example, you’re using AI to power an app that shows someone how different kinds of makeup might look, that’s not terribly risky. But other use cases categorised as ‘high risk’ will need to follow strict standards – for example, where AI is used to progress job candidates after scanning their CVs, or where discounts are applied based on people’s credit scores.   

'In terms of AI regulation that will meaningfully land, the EU AI Act is the best hope we have'.

So that’s really the first thing any AI regulation needs to do: assess risk to the end consumer. There is a problem, though. Whilst the EU has a proven track record with regulations like the GDPR, what’s really important with AI is that the repercussions for missing the mark need to be incredibly tough. Some nations and businesses, for example, have routinely preferred to pay GDPR’s relatively small fines rather than change their processes.  

And that’s a danger. The old adage of asking for forgiveness rather than permission simply can’t apply to AI; the stakes are too high. 

The answer, as the EU AI Act puts it, is to bake conformity in at a process level. Here, the presumption is that any new high-risk AI tool needs to be certified – and the road to certification is to follow a set process.  

In a way, that model almost removes the actual technology from the equation; even if we see incredible advances in the future, businesses will still have to follow this same duty of care in bringing their products to market.  

If you get that right, you’ve regulated the corpus, rather than the corporation. 

A matter of willingness and readiness 

It’s worth thinking about how regulation and business needs intersect in this space – because that’s arguably where the biggest issues lie in AI control right now.  

'Outside the EU, the two largest stumbling blocks in both government and commercial interests are willingness and readiness'.

Typically, commercial entities don’t like being controlled, and that ethos rubs up against the need for responsible AI. Being open, transparent, and sharing information as part of a wider data ecosystem is essential for solving complex environmental and societal issues, but big tech likes its existing position. Just think about environmental issues: for every company that’s genuine in its ESG promises, there are several actively engaged in greenwashing. That’s why AI regulation needs to work at the ‘process’ level – to counter that inclination. 

From a governmental standpoint, the EU is demonstrating strong willingness here because it has always been a leader in corporate and citizen protection – and it’s five years ahead of the curve as a result.  

Nations like the US and China need to play catch-up, but the issue Stateside is regulatory fragmentation between the states. Most Americans would say they’re in favour of stronger AI regulation, but no single group can organise to make it happen: different legislative departments are looking at AI regulation from different angles, and even then, tech industry lobbies ensure it’s not a legislative priority. 

So just how much traction those departments can get – and how loud those voices can be – remain open questions.  

Brick by brick 

OK, so that’s willingness. But what about readiness? Can any regulation keep pace with the runaway progress of modern AI technology? 

'The EU AI Act is a great first step, but we need to continue to be aware of the problem, and evolve our approach accordingly'.

The simple answer is: one step at a time. The regulatory frameworks we now have in other fields didn’t appear overnight. Responsible AI is human-centred and ethical, but it’s also explainable and transparent – and achieving all of those goals takes time. It’s something that happens brick by brick, rather than wholesale.  

Even the EU AI Act’s website has a section dedicated to its own foibles. That shows both a willingness and a readiness to admit imperfection, and an understanding that things will need to change further down the line. 

From the EU Data Act to the EU AI Act, we’re in the early days of data and AI regulation. The AI Act is a great first step, but we’ll need to stay alert to the problem and evolve our approach accordingly.  

That’s something you can only do in increments, and only with a sense of willingness and readiness on the part of every stakeholder.  

Speak to us today about how our ISO-accredited strategists, scientists, and engineers can help you create new value at scale with the right data strategy and AI solutions.