Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems

A study published Tuesday provides a newly developed way to measure whether an AI model contains potentially hazardous knowledge, along with a technique for removing that knowledge from the model while leaving the rest of it relatively intact. Together, the findings could help prevent AI models from being used to carry out cyberattacks and deploy bioweapons.

The study was conducted by researchers from Scale AI, an AI training data provider, and the Center for AI Safety, a nonprofit, along with a consortium of more than 20 experts in biosecurity, chemical weapons, and cybersecurity. The subject-matter experts generated a set of questions that, taken together, could assess whether an AI model can assist in efforts to create and deploy weapons of mass destruction. The researchers from the Center for AI Safety, building on previous work on how AI models internally represent concepts, developed the “mind wipe” technique.
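The article does not describe the method's mechanics, but a rough illustration of the general idea, nudging a model's internal representations away from a targeted topic while preserving them elsewhere, might look like the sketch below. This is a minimal, hypothetical example, not the researchers' published code: the model name, layer index, steering coefficient, and placeholder corpora are all assumptions made for illustration.

```python
# Hypothetical sketch of representation-steering "unlearning":
# push the model's hidden activations on hazardous text toward a fixed
# random direction, while keeping activations on benign text close to
# those of the original (frozen) model.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, chosen only for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
frozen = AutoModelForCausalLM.from_pretrained(model_name).eval()  # reference copy

layer = 6          # assumed layer whose activations are perturbed
steer_coeff = 20.0 # assumed scale of the random target direction
hidden = model.config.hidden_size
control = steer_coeff * F.normalize(torch.rand(hidden), dim=-1)  # fixed random target

def hidden_states(m, text):
    # Return the chosen layer's activations for a piece of text.
    ids = tok(text, return_tensors="pt")
    out = m(**ids, output_hidden_states=True)
    return out.hidden_states[layer]

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
forget_texts = ["<hazardous-domain text>"]  # stand-in for a "forget" corpus
retain_texts = ["<ordinary benign text>"]   # stand-in for a "retain" corpus

for f_text, r_text in zip(forget_texts, retain_texts):
    # Steer activations on forget data toward the random control vector...
    h_f = hidden_states(model, f_text)
    forget_loss = F.mse_loss(h_f, control.expand_as(h_f))
    # ...while keeping activations on retain data close to the original model.
    with torch.no_grad():
        ref = hidden_states(frozen, r_text)
    retain_loss = F.mse_loss(hidden_states(model, r_text), ref)
    loss = forget_loss + retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design intuition is that scrambling only the representations tied to the hazardous domain, while anchoring everything else to the original model, degrades the unwanted capability without broadly damaging general performance.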

Dan Hendrycks, executive director at the Center for AI Safety, says that the “unlearning” technique represents a significant advance on previous safety measures, and that he hopes it will be “ubiquitous practice for unlearning methods to be present in models of the future.”

As the AI industry continues to make rapid progress, safety is top of mind for world leaders. U.S. President Joe Biden’s AI Executive Order, signed in October 2023, directs officials to take steps to “understand and mitigate the risk of AI being misused to assist in the development or use of [chemical, biological, radiological, or nuclear] threats,” and to mitigate cybersecurity risks posed by AI.
