Artificial intelligence and machine learning tools are already being used to power voice assistants and self-driving cars, determine what users see on the internet, and guide drug design and chemical syntheses. But there are concerns about their ability to push disinformation, compromise cybersecurity, and engineer harmful biological materials. Governments around the world hope to mitigate those risks without quashing progress in the problems that AI seems poised to solve.
A recent executive order by US president Joe Biden announced measures to make AI systems safer, such as requiring their developers to search for ways that bad actors could exploit the tools. Shortly after the order’s announcement, government and corporation representatives gathered in the UK for a summit on the risks of AI; 28 countries signed a declaration that supports continuing development of the technology but calls for more research into its potential risks.
Many parts of the chemical enterprise are not yet employing AI for research. But chemists who use one group of specialized AI tools have started conversations about how to build safeguards into their systems, including those at the Institute for Protein Design (IPD), home of the team behind one leading AI protein design tool.
One measure that researchers and regulators agree could help is for biological synthesis firms to screen all incoming orders for potentially harmful sequences. Some companies already check requested sequences against a pathogen database and vet the people who place orders, and the US will soon require such screening for all federally funded life sciences research. According to Gigi Kwik Gronvall of the Johns Hopkins Center for Health Security, that move will push companies worldwide to adopt and strengthen screening. The IPD says it is working with the International Gene Synthesis Consortium to improve screening for molecules designed from scratch.
Regulators must continue to study the risks that AI raises, Gronvall says: concerns that are top of mind now, such as bioterrorism, may never materialize, while unanticipated threats could emerge.
This story was updated on Nov. 9, 2023, to correct the date that the executive order was issued. It was released Oct. 30, not in November.