Talk of AI Dangers Overlooks Current Harms

Artificial intelligence (AI) is the latest tech buzzword to penetrate the public consciousness following the launch of generative AI tools like ChatGPT and Stable Diffusion. Disruptive technology like AI inevitably prompts a debate on the effect it could have on society: this cycle goes as far back as the invention of the printing press — and probably further, but how would we know, without the printing press?

These debates lead to prognostication, with the outlandishness of the predicted outcomes scaling with how poorly the technology in question is understood. That prognostication, in turn, often leads to regulation limiting the circumstances in which a disruptive technology can be used.

Typically, the people objecting to a technology aren’t the people building it. Dramatic exceptions exist, J. Robert Oppenheimer among them, but they are edge cases at best. That is what makes the recent open letter from the Center for AI Safety so alarming. Dozens of scientists, including the founders of OpenAI and Anthropic, as well as OpenAI CEO Sam Altman, Bill Gates and, for some reason, the musician Grimes, have signed a statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

This is a fine objective: no one is in favour of extinction, pandemics or nuclear war, and it’s reasonable to take steps to regulate AI. Yet despite the professed concern, Mr Altman warned that OpenAI “will try to comply, but if we can’t comply, we will cease operating” in Europe, in response to proposed EU regulation requiring transparency about the data used to train AI models such as ChatGPT. Mr Altman walked this back days later, but the contrast between equating AI risk with nuclear war and threatening to leave the EU over regulation strains the credibility of both statements.

Even taken at face value, the statement is faulty. Inflating the risks of AI to the level of a pandemic or nuclear war is actively counterproductive: it minimizes the negative effects that the uncritical use of AI is already known to have on marginalized groups.

For example, AI can be used to reinforce systems of discrimination, making it more difficult for women, racial or ethnic minorities and people with disabilities to find employment, and, in the context of law enforcement, it can lead to racial profiling. These aren’t new concerns: Harvard Business Review reported on the former in 2019, and MIT Technology Review reported on the latter in 2020. Shifting the narrative to human extinction makes it likely that these real and present problems will be brushed aside.

My characterization is deliberate: the problem lies in the uncritical use of AI. The output of an AI model or program isn’t thought; it isn’t conscious; it isn’t self-aware. Taken charitably, large language models are extremely robust pattern-matching systems. Taken less charitably, the technology is “spicy autocomplete”. If a human operator doesn’t verify the output they receive from AI, the resulting harm can ruin lives and livelihoods.

This isn’t a rejection of AI: the preceding statement remains true if you replace “AI” with “a computer program”. The British Post Office scandal is proof of what can happen when bureaucracies uncritically accept the output of a computer program: over a decades-long saga, the output of a defective computer accounting system was used as evidence in the criminal prosecution of hundreds of sub-postmasters for theft and false accounting. By 2021, 45 of those convictions had been overturned or quashed.

Nor does the presence of AI necessarily reduce the risk. In the Dutch childcare benefits scandal, for example, an AI-generated risk profile led authorities to penalize tens of thousands of families over false accusations of fraud.

So, how can effective regulation be created for AI?

If regulators focus on an unlikely extinction scenario, the resulting laws, perhaps written in conjunction with the entrenched suppliers of AI technologies mentioned above, are likely to be ineffective at preventing the negative societal outcomes of AI. Even worse, they could introduce arduous licensing requirements that limit the ability of start-ups to enter the market.

Calls for a temporary suspension of AI development are effectively unenforceable, as Margrethe Vestager, the European Commissioner for Competition, has noted. Such calls can also serve competitive purposes: Elon Musk, a signatory of a March 2023 campaign to pause AI development for six months, was reportedly acquiring thousands of the GPUs needed to train AI models and recruiting AI researchers from other firms for a newly formed start-up intended to build them.

It’s impractical to expect regulation to limit the development of AI, but regulating how AI can be used is an effective lever in some cases. For example, the current draft of the EU AI Act would ban the use of AI for mass biometric surveillance and predictive policing, which would be a positive outcome. Requiring companies to disclose when AI is used to assess mortgage eligibility, process job applications or make other financially consequential decisions would likewise show the public how companies’ use of AI affects their lives and provide potential avenues for appeal or redress.

Ultimately, there’s no regulatory silver bullet that will predict and pre-empt the societal harms that the misuse of AI could cause. Avenues for regulating the use of AI to produce propaganda and disinformation are narrow, particularly if the user is a hostile foreign government. Lawmakers will need to act swiftly as the technology evolves and to rely on multiple sources, including those with no financial stake in AI, for advice and information on its development.