Press "Enter" to skip to content

Alarmed tech leaders call for AI research pause

An open letter calling for a pause on the development of advanced artificial intelligence (AI) systems has divided researchers. The letter, released early last week and signed by the likes of Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, advocates for a 6-month moratorium to give AI companies and regulators time to formulate safeguards to protect society from the technology's potential risks.

AI has galloped along since the launch last year of the image generator DALL-E 2, from the Microsoft-backed company OpenAI. The company has since released ChatGPT and GPT-4, two text-generating systems, to frenzied acclaim. The ability of these so-called “generative” models to mimic human outputs, combined with the speed of adoption—ChatGPT reportedly reached more than 100 million users by January, and major tech companies are racing to build generative AI into their products—has caught many off guard.

“I think many people’s intuitions about the impact of technology aren’t well calibrated to the pace and scale of [these] AI models,” says letter signatory Michael Osborne, a machine learning researcher and co-founder of AI company Mind Foundry. He is worried about the societal impacts of the new tools, including their potential to put people out of work and spread disinformation. “I feel that a 6-month pause would … give regulators enough time to catch up with the rapid pace of advances,” he says.

The letter, released by a nonprofit organization called the Future of Life Institute, rankles some researchers by invoking far-off, speculative harms. It asks, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Sandra Wachter, an expert in technology regulation at the University of Oxford, says there are many known harms that need addressing today. Wachter, who didn’t sign the letter, says the focus should be on how AI systems can be disinformation engines, persuading people of incorrect, potentially libelous information; how they perpetuate systemic bias in the information they surface to people; and how they rely on the invisible labor of workers, often toiling under poor conditions, to label data and train the systems.

Privacy is another emerging concern, as critics worry that systems could be prompted to exactly reproduce personally identifiable information from their training sets. Italy’s data protection authority banned ChatGPT on 31 March over concerns that Italians’ personal data are being used to train OpenAI’s models. (An OpenAI blog post says, “We work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems.”)

Some technologists warn of deeper security threats. Planned ChatGPT-based digital assistants that can interface with the web and read and write emails could offer new opportunities for hackers, says Florian Tramèr, a computer scientist at ETH Zürich. Already, hackers rely on a tactic called “prompt injection” to trick AI models into saying things they shouldn’t, like offering advice on how to carry out illegal activities. Some methods involve asking the tool to roleplay as an evil confidant, or act as a translator between different languages, which can confuse the model and prompt it to disregard its safety restrictions.

Tramèr worries the practice could evolve into a way for hackers to trick the digital assistants through “indirect prompt injection”—by, for example, sending someone a calendar invitation with instructions for the assistant to export the recipient’s data and send it to the hacker. “These models are just going to get exploited left and right to leak people’s private information or to destroy their data,” he says. He says AI companies need to start warning users of the security and privacy risks, and do more to address them.

OpenAI appears to be growing more alert to security risks. OpenAI President and co-founder Greg Brockman tweeted last month that the company is “considering starting a bounty program” for hackers who flag weaknesses in its AI systems, acknowledging that the stakes “will go up a *lot* over time.”

However, many of the problems inherent in today’s AI models don’t have easy solutions. One vexing issue is how to make AI-generated content identifiable. Some researchers are working on “watermarking”—creating an imperceptible digital signature in the AI’s output. Others are trying to devise means of detecting patterns that only AI produces. However, recent research found that tools that slightly rephrase AI-produced text can significantly undermine both approaches. As AI begins to sound more human, the authors say, its output will only become harder to detect.

Other elusive safeguards include ones to prevent systems from generating violent or pornographic images. Tramèr says most researchers are simply applying after-the-fact filters, teaching the AI to avoid “bad” outputs. He believes these issues need to be remedied before training, at the data level. “We need to find better ways of curating the training sets of these generative models to remove sensitive data altogether,” he says.

The pause itself seems unlikely to happen. OpenAI CEO Sam Altman didn’t sign the letter, telling The Wall Street Journal that the company has always taken safety seriously, and regularly collaborates with the industry on safety standards. Microsoft co-founder Bill Gates told Reuters the proposed pause won’t “solve the challenges” ahead.

Osborne believes governments will need to step in. “We can’t rely on the tech giants to self-regulate,” he says. The Biden administration has proposed an AI “Bill of Rights” designed to help businesses develop safe AI systems that protect the rights of U.S. citizens—but the principles are voluntary and nonbinding. The European Union’s AI Act, which is expected to come into force this year, will apply different levels of regulation depending on the level of risk. For example, policing systems that aim to predict individual crimes are considered unacceptably risky, and are therefore banned.

Wachter says that a 6-month pause appears arbitrary, and that she is leery of banning research. Instead, “we need to go back and think about responsible research and embed that type of thinking very early on,” she says. As a part of this, she says companies should invite independent experts to hack and stress test their systems before rolling them out.

She notes the people behind the letter are heavily immersed in the tech world, which she thinks gives them a narrow perspective on the potential risks. “You really need to talk to lawyers, to people who do ethics, to people who understand economics and politics,” she says. “The most important thing is that those questions are not decided among tech people alone.”

Source: Science Mag