Data Strategy

AI predictions: how to prepare for AI-empowered business

Big changes are coming in 2024, with artificial intelligence set to play an even more prominent role in our lives. Here we explore our top AI predictions plus some of the tangible steps you can take to become an AI-empowered business.


It may be the topic du jour, but AI is much more than a flash in the pan. In 2024, the evolution of artificially intelligent software will continue to rock the boat – for organisations, governments, and the public alike. 

So, what’s coming down the line? How will the technology evolve over the next 12 months? And what might this mean for businesses that need to work with and alongside AI? 

We’ve identified five AI predictions for 2024 – plus the steps your organisation can take to raise the bar with artificial intelligence. 

Five AI predictions for 2024 

1. Responsible AI becomes a business imperative with the AI Act  

At the core of the EU’s fledgling AI Act is a set of rules and processes designed to stop more sinister uses of the technology – like biometric categorisation systems, behavioural manipulation, and social scoring.  

The Act requires any business using AI to self-declare the risk levels of those systems, with fines for those who misrepresent their products.  

There’s no regulatory body for this, so businesses must get to grips with AI regulation quickly – and start codifying what responsible AI means for their organisation. 

Of course, the timetable for compliance will be staggered. Some organisations will be expected to comply sooner than others based on categories and risk levels laid out in the draft Act.  

‘The incoming AI Act brings much needed guidance, but businesses must move quickly to codify what responsible AI means for their organisation’. 

The Act’s impact on business innovation will also vary widely from one industry to the next. For highly regulated sectors like medtech, where extensive safety measures are already the norm, the AI Act won’t mean a great deal of change. And the likes of embedded systems in medical devices will reportedly have much longer to comply.    

Ultimately, the EU AI Act brings much needed guidance around responsible AI development. It provides more clarity on the subject than we've ever had before, helping organisations ensure the AI systems they’re using and developing don’t have unintended harmful consequences. 

The regulation should help rather than hinder AI innovation for organisations that adopt responsible AI frameworks and bake transparency and ethical practice into their innovation and AI development processes. 

Best practice here will be to fully audit your AI use and put processes in place that adhere to the AI Act at every step of the development and production of AI-based applications.  

At Zühlke, we’ve been helping clients cement responsible AI practices. You can explore how to develop and scale your platforms, products, and processes in a human-centred and responsible way with our four-part responsible AI framework.  

2. Generative AI rewrites the rulebook on software development 

Fire up the latest version of ChatGPT and it’s hard not to marvel at just how far we’ve come with large language models (LLMs) in the space of a year. And the fact that these models are publicly available.  

But while it seems a bit trite to say ‘this is just the beginning’, there’s one field where that’s precisely the case: the field of software development. 

Generative AI is already getting good at spitting out basic code. But 2024 will be the year in which AI truly redefines how software development works. Smarter, more robust LLMs – built directly into commercial products like Microsoft’s Copilot – will reshape the entire software development field, along with how it’s taught. 


What's imperative here though, is that we succeed in combining the strengths of humans and machines in the software development process.  

Artificial intelligence may write parts of the code. But to really improve efficiency and effectiveness, we have to ensure that humans are always in the loop. That’s why we need to start thinking of AI as the tool, rather than the solution. Adobe Photoshop didn’t replace designers, for instance – it just made their work much more powerful. 

‘2024 will be the year in which AI redefines how software development works ... The challenge will be finding the optimum combination of humans and machines’. 

That human part of the equation is going to be key to managing AI use in customer-facing contexts too. Call centres that use solutions based on large language models, for example, will need to find ways to give customers and regulators confidence that those models don’t contain bias or inaccuracies. Or, in Chevrolet’s case, that customers don’t use AI chatbots to their own ends. 

Ultimately, this is about finding an optimum combination of humans and machines, with the right safeguards in place to create real benefits for business and society – while preventing any rogue deployments.  

3. Data lineage holds AI content to account 

The misinformation and disinformation space is, unfortunately, only likely to become busier and more complex in 2024. Increasingly, we’ll all need to become vigilant when looking at any piece of media – whether it’s text, imagery, or video – and thinking about its lineage.  

‘The trend here will be in the ability to differentiate primary and secondary data, with the key question being around verifying the history and point of origin of anything we consume and share’. 

The 2024 US election race is likely to heat this up, for obvious reasons. Photos, videos, written articles, and the data that links them will all need to be verified. If we can’t ascertain the lineage of this data, then it can’t really be trusted.  

This will also have a compound effect on large language models and generative AI. What happens, for example, if the small number of behemoths who own these models train them on faulty, secondary data?  

Most organisations don’t have anywhere near the resources needed to own and train language models themselves. So the onus is on those tech giants to train models in the ‘right’ way, avoid ‘black box’ systems, and enable citizens and business users to understand the model’s predictions.  

For businesses using these models, the best course of action is to adopt a responsible AI framework that facilitates explainable and interpretable AI – and helps you demonstrate and share data lineage, from source to sea. 
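One simple, widely used building block for demonstrating lineage is a cryptographic fingerprint: the publisher of a dataset records a hash of the original content, and anyone downstream can check that the copy they received matches it. The sketch below is a minimal, hypothetical illustration of that idea using Python's standard library – it is not taken from any specific lineage framework, and the sample data is invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies this content."""
    return hashlib.sha256(data).hexdigest()

# A publisher records the fingerprint alongside the original content
# (hypothetical sample data for illustration only).
original = b"2024-01-15,press-release,v1"
recorded = fingerprint(original)

# A consumer can later verify that the copy they received is unmodified...
received = b"2024-01-15,press-release,v1"
assert fingerprint(received) == recorded

# ...and any tampering, however small, changes the fingerprint entirely.
tampered = b"2024-01-15,press-release,v2"
assert fingerprint(tampered) != recorded
```

A real lineage scheme layers more on top – signatures to prove who published the fingerprint, and a chain of records tracing each transformation back to the primary source – but content hashing is the verification step at its core.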

4. Green computing steps into the limelight  

With great power comes great responsibility. That’s set to be the battle cry of climate-conscious AI computing solutions in 2024, as the proliferation of useable products brings with it an explosion of computing power requirements. 

‘This growth will result in a ‘hockey stick’-shaped leap on energy consumption charts. The raw power consumption that’s required from server farms to power AI solutions will come into sharp relief’. 

The climate impact of cloud computing is already bubbling up in public interest. But as we see some of 2023’s proof-of-concept products translate into 2024’s operational rollouts, their electricity needs will need to become a more obvious part of corporate ESG responsibility programmes. 


Not every AI application or solution will be inherently positive from an environmental standpoint. But AI stewardship – making sustainability part of your responsible AI framework – will enable you to focus on the environmental impact of AI solutions from the outset. 

Best practices include opting for a green cloud provider and aiming for solutions that are resource sensitive, with efficient use of energy and hardware. 

These are decisions that should be thought about upfront and on an ongoing basis, rather than as an afterthought. 

5. Businesses clarify their AI aspirations and strategies 

Gen-AI-empowered business was very much in an experimental phase in 2023, with many organisations trialling proofs of concept for internal and external ChatGPT use cases.  

‘This experimental phase will continue into 2024, with many businesses unable to deploy generative AI at scale’. 

In the next 12 months, some organisations will struggle to turn their AI prototypes into reliable, secure, scalable, and human-centric solutions that deliver ongoing value. And they might need to reset their AI aspirations when it comes to implementation. 

Why? Because some companies still lack the robust data foundation that’s needed to reap the rewards of AI – from defining a holistic and human-centred AI strategy, to implementing the right data platform, capabilities, culture, safety controls, and more. 

Many organisations lack – or will struggle to define – the processes required to productise or ‘operationalise’ AI. For others, AI-empowered business is still a distant aspiration.  

The hard truth is that, if you’re still struggling to convert your data into business value, you’ll struggle to reap the rewards of AI.  

How to meet 2024’s AI opportunities 

So, these are our AI predictions for 2024 – and the challenges and opportunities we foresee. But how can your organisation prepare for these changes? 

‘In a nutshell, our advice is to prioritise transparency at a process level and focus on getting the data ‘basics’ right – from trust and access to adopting a responsible AI framework’. 

In 2024, being open and crystal clear about the source and use of data is a business imperative. It’s becoming critical for helping people understand how technology works – and how, at every level, it’s been built in a human-centred, principled, and responsible way.  

For your business, this could mean working directly with lobbyists and regulatory bodies, sharing data openly to navigate antitrust and privacy concerns, and being prepared to explain your processes at every step. 

That level of data transparency will increasingly become a legal requirement. It also makes competitive business sense in a world where trust fosters adoption.  

But having proper data with clear lineage is not the only imperative. To maximise value and accelerate innovation, you need to develop AI based on a robust responsible AI foundation. And accelerate your journey towards becoming a data-empowered organisation. 

Speak to us today about how you can create new value at scale with the right data strategy, platform, and human-centred AI solutions.