Tharin Pillay and Harry Booth
Chatbots are not the only AI models to have advanced in recent years. Specialized models trained on biological data have similarly leapt forward, and could help to accelerate vaccine development, cure diseases, and engineer drought-resistant crops. But the same qualities that make these models beneficial introduce potential dangers. For a model to be able to design a vaccine that is safe, for instance, it must first know what is harmful.
That is why experts are calling for governments to introduce mandatory oversight and guardrails for advanced biological models in a new policy paper published Aug. 22 in the peer-reviewed journal Science. While today’s AI models probably do not “substantially contribute” to biological risk, the authors write, future systems could help to engineer new pandemic-capable pathogens.
“The essential ingredients to create highly concerning advanced biological models may already exist or soon will,” write the authors, who are public health and legal professionals from Stanford School of Medicine, Fordham University, and the Johns Hopkins Center for Health Security. “Establishment of effective governance systems now is warranted.”