Charting the UK’s role in the AI era
AI is everywhere – but is it delivering? Our panel explores how regulation must evolve to unlock its full potential

In 30 seconds
AI adoption is growing in UK businesses, but behind closed doors, many business leaders admit productivity gains remain limited
As an adopter and not a generator of AI technologies, Britain is being held back by regulatory gaps and fragmented government responses
Our panel, led by LBS Professor of Strategy and Entrepreneurship Michael G. Jacobides, calls for right-sized, human-led regulation that empowers innovation while safeguarding society
“On this sweltering day, I think hot air is an apt metaphor to open with,” quips Michael G. Jacobides, Sir Donald Gordon Professor of Entrepreneurship, Innovation and Strategy at London Business School.
In a packed dining room in the Houses of Parliament, Michael is introducing the panel discussion on what policymakers need to know about AI to an audience of parliamentarians, business leaders and academics. The purpose of the event, he outlines, is to examine the current state – and future trajectory – of AI adoption and regulation in the UK: “What is hype, and what is reality?”
Michael, who has helped set up LBS’s new initiative on Data Science and Artificial Intelligence, stresses that while the rhetoric surrounding AI has reached fever pitch, real-world transformation remains elusive. “If you read LinkedIn, and you listen to the excitement of people on the corporate side, you would think that we're already living on another planet,” he observes, “that the improvements are massive. But if you start looking at productivity, and if you speak behind closed doors at C-suite levels, people will whisper, has anyone really seen the benefits yet?”
“The dirty answer,” he continues, “is mostly not. There are very few things we can see as serious end-to-end transformations.”
"Has anyone really seen the benefits yet? The dirty answer is mostly not"
Michael G. Jacobides
Discourse versus reality
This disconnect between popular discourse and on-the-ground reality is echoed by Dr Erin Young, Head of Innovation and Technology Policy at the Institute of Directors (IoD). Drawing on recent survey data from the IoD’s 20,000 members, she reports that while half of member organisations are using AI in one form or another – often to boost productivity or automate administrative tasks – concerns are widespread.
“What really struck me,” Erin notes, “was that even when we were asking members about the benefits [of AI], they were very, very quick to talk about the risks.” She describes how the ethical, social and environmental implications related to wide-scale adoption of these technologies came up time and time again. IoD members also expressed concern about the reliability of AI systems and the risk of “hallucinations”, particularly in Large Language Models (LLMs). (Hallucinations are where the system generates information that is factually incorrect, misleading, or entirely fabricated, but presents it with confidence and fluency.)
Respondents also expressed a sense of disillusionment with overblown promises made by AI vendors. “We found a huge pushback against the kind of generalised hype that we see, oftentimes, starting from Silicon Valley, versus what members are actually finding in terms of tangible impact and ROI in their business,” she explains.
The policy lag
When it comes to Government, Michael suggests, there’s plenty of well-intentioned work ongoing, but it’s siloed, with some departments focusing on encouraging AI and others on setting boundaries around what constitutes allowable activity. The result is a lack of consistency, with legislation lagging behind.
Lord Chris Holmes, a Conservative Peer whose Private Member’s Bill on AI regulation is currently being debated in the House of Lords, has been championing a human-centred approach to AI: seizing the opportunities that AI offers while mitigating harm at the same time. The UK’s current regulatory landscape is woefully inadequate, he argues, warning that “where we are now is very much a non-touch approach to AI,” carried forward from the previous government.
“That raises questions around consistency, around clarity, around optimisation and around certainty, be you an investor, an innovator or an individual,” he asserts. “The reality is, if we’re going to seize the opportunities for the UK, for business, for government, for citizens, for our communities, then action is needed on the regulatory front.”
Lord Holmes calls for right-sized, adaptive regulation – framed in the tradition of English common law – that is light-touch but far-reaching, focused not only on safeguarding but also on enabling innovation.
“The most important clause in my bill,” he emphasises, “is around public engagement. Because this is impacting people already – in social, economic, psychological, democratic ways – oftentimes without them even knowing.”
“This is already impacting people – often without them even knowing”
Lord Chris Holmes
The next geopolitical frontier
Bringing a global lens to the discussion, Nikolaus Lang, Director of BCG’s Henderson Institute and Head of BCG’s Geopolitics Institute, outlines the shifting geopolitics of AI. “I think geopolitics of tech is the next frontier of geopolitics,” he claims. Of the world’s most advanced LLMs, 65 of the top 75 come from just two countries – the US and China. These AI-generating superpowers craft legislation to nurture their own domestic model champions.
But countries like Britain, Nikolaus argues, must prioritise frameworks that enable adoption and meaningful integration into industry and public services.
“We’re going to see a shift from ‘tech-oriented’ to ‘use-oriented’ regulation,” he predicts. “I think that evolution is something that we’ll see in the years to come; application-focused and much more flexible regulation.”
“We're going to see a shift from 'tech-oriented' to 'use-oriented' regulation”
Nikolaus Lang
A moment of reckoning
AI is already shaping how people work, interact and are evaluated, often invisibly – although the extent to which it’s making a tangible impact is still up for debate. Meanwhile, the UK’s current policy landscape remains fragmented and inconsistent, with multiple departments and agencies pursuing uncoordinated initiatives.
“I think we need a much greater stake in the ground as to what the Government believes the UK's role and opportunity is within this space,” contends Lord Holmes. “The Government’s leadership could best be demonstrated through the right codification, the right legislation, the right regulation. An AI regulation bill would be good for innovation, good for investment, good for citizens, good to optimally position the UK in this space.”
“This will only work if we conceive of AI as human-led technologies, in human hands. We decide, we determine, we choose.”
In Britain, where productivity growth has been sluggish for over a decade, the stakes are high. Michael calls for a deeper understanding of how AI interacts with organisational structures and strategy, not just the technology itself, drawing on his recent project with the IoD. “We need to go beyond the hype,” he insists, “and try to understand how AI transforms organisations, and also how regulation can support this. Only then can we really drive productivity.”
“We need to go beyond the hype – and try to understand how AI transforms organisations”
Michael G. Jacobides
As the recent BHI / Evolution Ltd paper Michael co-authored with Nikolaus Lang notes, regulation has a key role to play in this regard. “The problem is that, left to its own devices, regulatory and administrative inertia will end up stalling progress. If we are bold, though, and build a responsive, geopolitically aware, modular and use-focused framework with flexible application, regulation can be a staircase to success, not just a guardrail,” says Michael.