Tech Tomorrow Podcast
Transcript: Can executives balance AI innovation with societal responsibility?
DAVID ELLIMAN
Hello and welcome to Tech Tomorrow. I'm David Elliman, Chief of Software Engineering at Zühlke. Each episode we tackle a big question to help you make sense of the fast-changing world of emerging tech. Today I am joined by a very special guest, Lord Clement-Jones, who is a Liberal Democrat peer and the party's digital spokesperson in the House of Lords.
A former chair of the House of Lords AI Select Committee and co-chair of the All-Party Parliamentary Group on AI, he is a leading advocate for responsible digital policy. He is also the author of the 2024 book, Living with the Algorithm: Servant or Master?. Today, he is going to draw on his extensive experience to help me answer the question: can executives balance AI innovation with societal responsibility?
Welcome, Lord Clement-Jones. For those who do not know you, could you provide a brief introduction and share a bit about your background?
LORD CLEMENT-JONES
Hi, David. I'm absolutely delighted to reconnect. I suppose you could say I'm a recovering general counsel of a major law firm, but the main reason I'm on your podcast is that I'm a former chair of the AI Select Committee in the House of Lords, co-founder and co-chair of the AI All-Party Parliamentary Group, and, on an international basis, I've had quite a lot to do with the work of the Council of Europe, the OECD and so on, in terms of trying to set international standards in this whole area.
DAVID ELLIMAN
I just wondered if your legal background shapes your perspective on AI regulation.
LORD CLEMENT-JONES
Yes, it does. People often think of lawyers as being all about being risk averse, but I've been brought up, I think, in a corporate lawyering school of finding solutions, if you like, as a former company secretary, legal director and commercial lawyer. And that is exactly my motivation as far as AI is concerned.
I am a techno-optimist, but I am very keen on making sure that there is an appropriate level of regulation, which means that people have the confidence to adopt AI solutions, but also that the people they work with, whether it is B2C or B2B, have trust in the use of those technologies, including AI.
So, I have never seen regulation as, in a sense, the enemy of innovation. I have always seen it as the friend, potentially the enabler, of innovation, because it creates consistency and certainty for business if you get it right. Now, how, and what form, regulation should take is a very important debate.
DAVID ELLIMAN
You touched on many of these things in your book, Living with the Algorithm. What inspired you to write it and what did you hope to achieve with it?
LORD CLEMENT-JONES
Well, it was partly frustration. You know, if you listen to some politicians, especially under the previous government - and under this government the message has not really changed - what you hear is precisely that point about regulation being the enemy of innovation.
You know, people without business experience run up entirely the wrong tree, in my view. They do not look carefully at what can stimulate innovation, which is certainty about standards, and what basically hinders it, which is far too much bureaucracy, red tape and inappropriate regulation.
So, I was anxious to set out my stall before regulation came down the track, to demonstrate what kind of regulation would be useful. And I also wanted to preach a particular gospel to businesses who are not necessarily AI developers but adopters. I wanted to tell them that they needed to get their act together and that AI was a fantastic opportunity to re-examine their values, because I am a great believer in business for a purpose.
And you know, I think every business needs to define what it is for, what it is about, what it is trying to do. And quite often, you sit within a societal context that is starkly illustrated by how you deal with AI. Are you just going to fire everybody and institute AI instead of your employees? Or are you going to make your employees' lives rather better when they are doing their jobs by giving them AI to assist them?
I mean, that's a bit of a black and white kind of approach, but you know, you need to decide exactly how AI is going to be used, what kind of benefit it's going to have, how you're going to build public trust in it from your consumers and your business customers, and so on. And then you go on to talk about the role of the board and accountability and all those kinds of areas which corporate governance manuals, including those produced by the IoD, are now meant to advise you on.
DAVID ELLIMAN
And what would you say is the biggest risk that boards face if they do not take into account societal responsibility with respect to their use of AI?
LORD CLEMENT-JONES
I think if you start simply thinking it's all about innovation, the move fast and break things type thing, you're going to lose trust amongst your employees. If, for instance, you suddenly start bringing in AI for performance purposes or AI for recruitment, and employees think they're being surveilled on a 24-hour basis, that's a pretty good killer of trust.
And then, if your customers think that they are simply the object of an algorithm being used to assess their spending habits and everything else, then again, that is going to breed mistrust.
Now, of course, we all know there are benefits from using AI, because you can actually serve your customers and businesses with more specificity about their particular requirements. But if you do not explain, for instance, what you are doing, or if you have data that is biased and you are discriminating against certain customers, then that is going to lead to a huge loss of trust.
DAVID ELLIMAN
So, we have to look at this from a practical point of view. What can boards do to introduce AI tools that are helpful and trusted rather than disruptive and mistrusted? Look, there are four things that actually work in practice.
First, mandate explainability and human oversight from day one. Do not ask what AI can do, ask how people will work alongside it and how they will use it.
Secondly, pilot ruthlessly before scaling. Deploy with volunteer early adopters and collect performance data and trust indicators. If your pilot users will not recommend it, you are not actually ready.
Third, invest as heavily in change management as in technical implementation. The best AI architecture means nothing if people reject it. And fourth, establish real governance: cross-functional committees with actual oversight that include the people who will use these tools daily. Trust comes from transparency about both capabilities and limitations.
To make sound decisions, you need a clear ethical framework to guide you.
LORD CLEMENT-JONES
Well, I think they can start by understanding what they mean by an ethical framework. By that, we mean the principles that were set out by the OECD as far back as 2019.
You know, we are talking about transparency, we are talking about freedom from discrimination, we are talking about benefit as opposed to detriment. We're talking about a number of different aspects which you can call ethics, or you can call rights, but they are principles that, if you introduce a new technology, it would be extremely sensible for the corporate reputation of a business to adopt.
It may be that people are slightly allergic to using the word ethics, but frankly, if they talked about operational principles or something, I would be perfectly happy because they are almost exactly the same thing.
Why did people adopt ESG historically? Well, this is what companies need to do. They need to demonstrate that they care for the environment, they care for society, and they care for corporate governance, and AI absolutely is the spur to that. And this is what boards need to understand, and they need to absolutely understand that they are responsible, and they need to be in the driving seat.
DAVID ELLIMAN
It is almost like the critical thinking that's required to navigate ethics as a topic is required at board level to negotiate what a meaningful ethical framework actually is, what it means, and their relationship with the people that they serve, and indeed their brand and their perception in the market.
LORD CLEMENT-JONES
Completely. And in terms of the board skillset, this is exactly the sort of thing that they need to ask themselves. Do we have the skills at board level to understand this stuff and understand what we need to do in the face of this new technology?
I mean, it is incredibly fast moving. But how many boards really have people who understand what the hell is going on? It may be that the biggest companies do, and we've seen that they've been the most effective adopters, but SMEs don't, and it is a big risk for any business to plunge into something without understanding what the unintended consequences could be or what the risks are.
DAVID ELLIMAN
It is almost like responsible tech and ESG reporting: it is always seen as a sort of separate add-on, rather than being just as important as the financial results, for example.
LORD CLEMENT-JONES
Yes. Regrettably, that is the case. Those who have fully adopted this kind of ESG approach will tell you that they now see it as an integral part of the business. The usual thing used to be that it was the first bit to bear any budget cuts and stuff like that.
DAVID ELLIMAN
So, from a practical point of view, then, what can Boards and Executive Committees do to make sure they are ethically implementing AI tools?
LORD CLEMENT-JONES
There is no one size fits all for this. You have to have an oversight mechanism for your adoption of critical AI into the business. And you have to work through what the adoption means. I mean, if it is back office, it is one thing. If it is something inside a consumer product, then you have to start thinking very carefully about things like transparency, data use and so on.
But then, companies have a great deal of experience in this. We have had the data protection laws for quite some period of time. We are used to having to examine products, Internet of Things products for instance, in terms of their impact on data sharing and personal data and so on.
So, I do not think that this should be seen as impossible, and the quality of data is something that every business should be absolutely concerned about.
I mean, relying on historical data that is biased is pretty useless when you are trying to recruit people. If all you are doing is recruiting people in the image of everybody who has gone before, that cannot be a sensible route forward.
So, I would say that AI writes these issues large in that sense, but the issues are not necessarily completely different. Compliance with regulation is something that every business has to come to terms with. But what this does mean, because the trust aspects are so big, bigger than with almost any previous technology, is that this has to be done at board level. And everybody who has ever commented on this will tell you that the adoption of AI is a strategic issue for the Board, basically.
DAVID ELLIMAN
So, one aspect that interests me is the nature of AI. AI obviously has been around for many decades, and yet in the public discourse of the last few years we tend to think of generative AI and AI as synonymous, because it has captured both investors' and the public's imagination to some extent.
Some fields of AI are effectively non-deterministic: we do not always know what is going on in terms of the decisions that they make. With a traditional algorithm, we can say, here is the algorithm, here are some inputs, and while we might not know what the outputs will be, given the data and the parameters, if we were to run some tests we would know why it came to the decision that it did.
And yet in generative AI, we potentially see emergence happening. We do not really know why it made the decision that it did. It might not be repeatable. So, the whole explainable AI movement that was and is popular tends to become a little bit difficult. So, the only answer that we have at the moment is that humans have to be involved. I just wondered if you had any thoughts on that.
LORD CLEMENT-JONES
Well, I think that ethics by design is something that is absolutely crucial. And if I was talking to a board, I would talk about upfront design to a much greater extent than people tend to talk about.
One of our problems is, you say to the CTO or the CIO, look, go out and procure something that will do the job. Whereas actually, that means that if you are not careful, you are going to find a black box type of solution. And that is the last thing you want.
It's really important that everybody understands exactly what's going in and how it's weighted in terms of decision making, for instance, if it is some kind of AI that makes a decision, or even before it gets to the human in the loop, so to speak.
So you have to understand upfront how it is being designed, and whether you are buying it off the shelf or whatever, you have to assure yourself about how it was designed by the vendor you are buying it from. That sort of upfront design is the only way you are going to do this. Retrofitting ethics into an AI model or an AI tool is impossible.
DAVID ELLIMAN
We know that retrofitting an ethical framework into AI systems is incredibly difficult. It is like trying to retrofit safety features into a building after construction: expensive and often impossible.
And here is the thing, ethical considerations need to be baked into foundational decisions from the start. Senior leaders need to be looking out for certain things during procurement of these AI tools.
Leaders need to become intelligent customers by asking vendors uncomfortable questions they will not naturally volunteer answers to. Demand specifics on training data: not just claims like internet-scale data, but exactly what was used and how bias was addressed. Ask about performance across different groups, not just average accuracy.
A system that is highly accurate on average but fails catastrophically for certain demographics is worse than a simpler system that fails gracefully. Require explainability: if you cannot audit the system, you cannot govern it responsibly. Understand what control you have: can you adjust behaviour if outputs are unacceptable? And look at vendors' ethical track records. If they cannot or will not answer these questions, walk away.
As with any change, one of the biggest challenges is managing how teams or Boards respond.
LORD CLEMENT-JONES
Well, I think you have got to set yourself a whole set of to-dos, really. First of all, I think you have got to have digital literacy on your board. And if you have got that, then you can have a sensible discussion about strategy. Then I think, you have got to work out, where AI is actually going to benefit your business.
And then you have got to have an internal impact assessment, basically, as to what the impact of that AI would be. Given that you are pretty sure there are benefits to be gained, what are the other aspects? What are the risks? How are you going to monitor how it progresses? What kind of data are you going to be using? How transparent are you going to be with the people who are using it in the business, or the customers who are seeing it embedded in their product, or the clients who are seeing it in their product? What regulation is out there, and can you be ahead of the game?
All of that, I think, is really important, and you have got to consider incorporating it into your KPIs. You know, if you're going to have milestones in your project, you've got to make sure that this isn't just about implementation; it is about reporting back on compliance with the standards that you set right at the very beginning. And it should be at a high level.
Whether it is the Board, or the audit and risk committee reporting to the Board, or some new oversight mechanism that you are setting up if it is critical AI, these things are really, really important.
DAVID ELLIMAN
Indeed. One of the issues that people like me commonly find is that talking to and understanding the Board's needs, and maybe suggesting new ways of thinking or doing things, is one challenge, but then there's a much bigger challenge when it hands off into the Executive Committees and the execution side of things.
Things can become sort of diluted and lost in the everyday of their current commitments. And it is incredibly difficult to try and push these things through into a reality.
LORD CLEMENT-JONES
You have got to have reporting mechanisms that are set out very clearly in advance.
DAVID ELLIMAN
Yes. One thing I loved about your book, when I read it last year, was the servant or master subtitle. It strikes me that if you have to balance this tension between innovation, accountability, governance and so on, then there needs to be some sort of roadmap to that.
It strikes me as though normally you come up with something, you go to a company, you say, here is this great idea. It is all about value creation or optimization or whatever. And you just sort of work through scenarios in which to employ it, to execute it, and so forth.
Yet with this, you have this constant tension that you have to make sure that you are using technology to serve not just the needs of the company, but indeed the people that your company serves. And I think that is an amazing insight from the book. I just wondered if you would like to elaborate on that a bit.
LORD CLEMENT-JONES
Well, it is a general statement: that is what technology should do. It should serve us. We should not become, in a sense, the slave of it. And you know, what worries me is people thinking that innovation has merit all by itself, irrespective of whether it benefits us. And that is why, in a sense, the word progress is quite a useful thought, because progress implies it is for human benefit.
That is why I am a techno-optimist, because I do think that if humans are going to have a future, we have to have technology under our control, rather than simply being propelled forward and thrown on the scrapheap as humans while machines take over.
I do not believe that, and I think that we can harness AI for our benefit. Whether we will be able to do that by the time Artificial General Intelligence comes along, I don't know, and that's why I think at this moment we need to make sure that AI incorporates the best rather than the worst, so that we're not overtaken by it at a later stage, without any regard for our interests.
I mean, we're going to go through a very rough ten years or so, as AI takes away an awful lot of the entry-level jobs and we all learn to discover what kind of new jobs are going to be out there. What are the skills we need to acquire? What university courses should we all be doing? What do professionals do when there are no entry-level jobs? What is going to happen out there if the only jobs available are going to be those for plumbers and electricians, because that work cannot be done by AI?
We have a whole bunch of societal issues. So actually what we need to do, in my view, is at least lay some ground rules now, so that at least we've got some idea of what the future holds in terms of our ability to control AI models and AI tools, and that they are being used for our benefit.
DAVID ELLIMAN
Responsible AI rests on three pillars: purpose, proportionality, and accountability.
Purpose means AI serves clearly defined human needs, not technology for its own sake. I have seen too many organisations deploy AI because competitors are doing it, without articulating what problem they are solving.
Proportionality means matching AI autonomy to decision stakes. Low stakes decisions like content recommendations can be highly automated, but high stakes ones like loan approvals need human decision makers.
And accountability means clear ownership. Every AI system needs identifiable humans who understand it, monitor it, and can intervene when needed.
So, what kind of information or skills do boards need access to, to make this possible? Boards need a balanced portfolio.
First, foundational AI literacy for all members. Secondly, at least one board member with genuine technical credentials who can interrogate the claims. Third, access to independent technical advisors for deep dives on specific systems. And fourth, structured reporting that surfaces AI health indicators: not just business metrics, but fairness scores, incident reports, and trust measures.
And finally, direct engagement with affected stakeholders. The most effective boards treat AI governance like they do financial governance: as a specialized domain requiring ongoing education and structured oversight.
Lord Clement-Jones has his own thoughts on the roadmap to responsible AI use, not just in business but across society.
LORD CLEMENT-JONES
We are in a battle currently. You and I have not talked about AI and copyright and training using copyrighted content, and so on, but that is an illustration of the difficulty of getting a message through to this government in particular: that they should not simply allow the destruction of our creative industries for the sake of training new AI models largely developed in the United States.
There is a big misunderstanding out there, a lack of understanding, about the rights and wrongs of this, quite honestly, and what the consequences will be if the right decisions are not taken.
So, I mean, my techno-optimism is conditional upon regulation coming in, and also transparency requirements for training and so on. I am not overly optimistic that this particular government yet gets it, but I have some faith in the powers of persuasion of the forces of light, so to speak, going forward.
But there is no guarantee that this government is not just going to simply continue to listen to U.S. big tech who have no interest in this kind of regulation at all. They have turned from being philanthropic in their outlook to being pretty bluntly commercial really. Now, you know, our experience with the online safety legislation has been that they pushed very hard against that.
So, it took a long time, but we got there in the end, and we are still debating some of the finer points and we are still trying to make it more robust. And I think we can do the same with AI regulation, but I think it will be even more difficult but, we really must try and get there.
DAVID ELLIMAN
So, given everything we have spoken about, do you think that executives can balance AI innovation with societal responsibility?
LORD CLEMENT-JONES
Absolutely. And there is a huge amount of self-interest involved in here, you know, even before you start talking about regulation. I have been talking about AI governance for a very long time, and it struck me that the first thing that should happen is that the corporate governance structures should be right before we even start thinking about regulation.
And of course, I am thinking not so much about the developers, where I do think that you need pretty careful regulation. I was thinking really more about the adopters, and of course once you have got retail AI systems like ChatGPT or Claude or whatever that can be used by your employees, your corporate governance becomes even more important.
And then of course, you are not just talking about trust as far as your consumers or your customers are concerned. You are talking about trust of your employees. So, as a board you have got a huge responsibility to make sure that you get it right when you start thinking about AI adoption.
DAVID ELLIMAN
So, Lord Clement-Jones, thank you so, so much for this. This has been fantastic.
LORD CLEMENT-JONES
Great stuff. Thank you very much, David. It has been a great conversation. Thank you.
DAVID ELLIMAN
Thank you so much.
So, can executives balance AI innovation with societal responsibility? I think the answer is yes, but here is the thing: we need to reframe the question. It implies a trade-off, as if innovation and responsibility somehow oppose each other.
From 20 years of delivering complex transformations, I have learned this is a false choice. Done right, responsibility actually accelerates sustainable innovation. Think pragmatically: systems designed with ethical constraints avoid costly rework, reputational damage, and regulatory penalties.
Organisations that earn trust through responsible AI gain permission to innovate further. The executives who succeed do not balance these forces; they integrate them. They recognize that irresponsible innovation is not innovation at all. It is technical debt that will eventually come due.
Thanks for listening to Tech Tomorrow brought to you by Zühlke. If you want to know more about what we do, you can find links to our website and more resources in this episode's show notes. Until next time.