Artificial intelligence (AI) and machine learning are a major part of the global digital economy, making them vital for all organizations and governments to master. With the UK professing a global digital outlook, its strength in AI reflects its ability to compete confidently on the world stage; so it's important to analyse whether the fundamentals are in place for the UK to go even further and take pole position.
In April 2022, the Westminster eForum (WEF) held a conference billed as “Next steps for Artificial Intelligence in the UK — the National AI Strategy, market development, regulatory and ethical frameworks, and priorities for societal benefit”. The WEF, which has no policy agenda of its own, enabled a discussion of the UK government’s AI strategy, with speakers representing the AI Council, industry, research institutes and education.
The UK’s National AI Strategy was unveiled in September 2021, six months after the AI Council published its “AI Roadmap” report that outlined 16 recommendations to help develop Britain’s strategic direction for AI. The AI Council, an independent expert committee, has members from industry, the public sector and academia, and works to support the growth of AI in the UK.
The National AI Strategy outlines a 10-year plan that represents a step-change for AI in the UK. It recognizes that maximizing AI’s potential will increase resilience, productivity, growth and innovation in the private and public sectors. Although the government’s strategy was welcomed by speakers at the WEF, it was clear from discussions that the UK still had more to do to strengthen its position if it’s to become an AI superpower. Several considerations and concerns were highlighted, with skills and education, trust and regulation sitting at the top of people’s minds.
Same old story: addressing the UK skills gap
AI skills are crucial if the UK wants to roll out its strategy quickly; but a recent Microsoft report on UK AI skills showed that British organizations were less likely than the global average to be classed as "AI pros", at 15% versus 23%. It also reported that only 17% of UK employees were being re-skilled for AI, compared with 38% globally, raising concerns that the UK faces a skills gap that could leave its businesses struggling to keep up with global competitors.
As adoption of AI looks to more than double in the next 12 months, so too will the demand for AI skills — the challenge will be finding and training the talent needed to close this gap. A labour market report by the AI Council showed 100,000 unfilled job postings in AI and data science, with almost 50% of companies saying that job applicants lacked the technical skills needed. Organizations will need to proactively minimize their own talent gaps by providing training to their existing workforce, ensuring they can fully exploit every investment made in digital transformation.
The UK AI Council's road map concluded that Britain needs to significantly scale up its programmes at all levels of education if it's to secure a pipeline of skilled new entrants to the workforce. But Rokhsana Fiaz, mayor of the London Borough of Newham, claimed that the UK lacks coherent career pathways, professional standards and equitable opportunities for people seeking careers in AI and data. Local government initiatives are being undertaken in Newham to address the issue, but there's also a call for the country to generate more practical AI skills nationally. A programme built on collaboration between the government, industry leaders in AI and digital training providers, and delivered using a cloud-native approach, would make such training accessible to all in the UK.
Can a sector-based approach lead to AI regulation?
Regulation of AI is vital, and responsibility lies both with those who develop it and those who deploy it. But according to Matt Hervey, head of AI at law firm Gowling WLG, the reality is that there's a lack of people who understand AI, and consequently a shortage of people who can develop regulation. The UK does have a range of existing legislative frameworks governing data protection, but it lags behind the EU, where regulations have already been proposed to categorize all AI systems. British companies doing business in the EU will most likely need to comply with EU law if it's at a higher level than our own.
In this rapid digital technology market, the challenge is always going to be the speed at which developments are made. With a real risk that AI innovation could get ahead of regulators, it’s imperative that sensible guard rails are put in place. This points to the UK’s sector-based approach, with an emphasis on industries most likely to drive innovation such as finance, automotive, transport and healthcare. Since a fundamental part of law and regulation is how the safety of AI systems is assessed, legislation will be very much determined by their specific application in different sectors. This approach could see spill-over effects, such as sector-specific regulations being widened to apply to different industries.
Building trust to promote innovation
Concerns about risk, including ethical risk, are major blocks to AI innovation in industry. Research from the Centre for Data Ethics and Innovation (CDEI) claimed that over 25% of medium-to-large businesses identified uncertainty on how to establish ethical governance as a barrier to innovation. The CDEI leads the UK government’s work on enabling trustworthy innovation in data and AI.
Industry can't innovate unless ethical risks are addressed and data governance that fosters public trust is developed. In 2021 the CDEI developed an algorithm transparency standard, helping public sector organizations provide clear information about the algorithmic tools they use and why they're using them. Currently being piloted, its success could be a big factor in building public trust in data, and subsequently in AI-driven technologies.
The things independent bodies and governments do well
The next big focus for the AI Council is on ambitious programmes that address the climate crisis, health, defence and science. AI can be used to massively improve the efficiency of production, building more-resilient and adaptable energy systems and helping deliver operations with net zero emissions. A recent study by Microsoft and PwC estimated that AI-powered environmental applications could cut greenhouse gas emissions by up to 4% by 2030 and contribute up to 4.4% to GDP.
Such an outcome, while beneficial for the planet, also offers the potential of creating 3.8 million new jobs. Lessons learned from the pandemic show AI’s capacity for improving health outcomes for patients and freeing up staff time. The defence industry is seeing more AI companies collaborate with the UK Ministry of Defence, GCHQ and other partners.
It would seem the vital role the AI Council can play — aside from ensuring the government continues to scan the horizon for new opportunities — is in providing the framework that encourages collaboration between academia, business and end users. Equally important is making sure that the fundamentals are in place, and that systems and processes are properly designed and delivered. The council, together with the government, is perhaps better placed than private sector industries to gather the widest array of stakeholders. The strength of such consortia lies in getting people working beyond existing boundaries and organizational structures to build new relationships, networks and common languages — playing directly to AI's problem-solving strengths.
The US and China are racing ahead in delivering large-scale foundational AI models, with the danger of leaving the UK trailing behind if it stays reliant on outdated solutions. Prioritizing an innovative and flexible approach, and making the right calls on AI regulation and governance, could carve out a new path between existing and forthcoming regulations from the US, EU and China.
Partnering for progress
Although the UK government's strategy has made progress, there's still some way to go. Much of the ambitious talk, at what still feels like a very early stage of the game, seems slightly disconnected from what the industry is actually doing. Suppliers from most sectors are already well on their AI journey, building technical tools and developing new solutions to sell. One question we must ask is why more companies innovating in AI aren't represented on the AI Council; if industry is already so entrenched in developing AI solutions, its practical knowledge and services could provide the valuable insights needed to charge forward.
There was a strong feeling during the WEF conference — almost a call to arms — for stakeholders to come together to help promote the UK's National AI Strategy. We'll be watching with increasing interest to see who was listening ahead of the next conference, titled "Adoption of AI Technologies".
A version of this blog was published on 17 June in Computer Weekly.