
9 November 2023

How AI Is Shaping Scientific Discovery

Sara Frueh

Physicist Mario Krenn sees artificial intelligence as a muse — a source of inspiration and ideas for scientists. It’s a description born from his past research and his current work at the Max Planck Institute for the Science of Light, where he and his colleagues develop AI algorithms that can help them learn new ideas and concepts in physics.

His efforts began years ago, when a research team Krenn was part of struggled to come up with an experiment that would let them observe a specific type of quantum entanglement. Suspecting that the team’s intuition was getting in the way, Krenn developed a computer algorithm that could design quantum experiments.

“I let the algorithm run, and within a few hours it found exactly the solution that we as human scientists couldn’t find for many weeks,” he said. Using the blueprint created by the computer, his colleagues were able to build the setup in the laboratory and use it to observe the phenomenon for the first time.
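Krenn’s published systems aren’t reproduced here, but the basic loop they embody (propose a candidate arrangement of optical components, simulate it, score it against the target state, keep the best) can be sketched in a few lines. Everything below, from the component toolbox to the score() stand-in and the plain random search, is a hypothetical toy rather than the group’s actual code:

```python
import random

# Hypothetical toolbox of optical "components"; in a real system each
# would transform a quantum state inside a physics simulation.
TOOLBOX = ["beam_splitter", "hologram", "mirror", "dove_prism"]

def score(setup):
    """Stand-in for simulating a setup and checking how close it comes
    to producing the target entangled state (fitness in [0, 1])."""
    rng = random.Random(hash(tuple(setup)))  # deterministic toy "physics"
    return rng.random()

def random_search(target=0.9999, max_len=8, trials=100_000):
    """Sample random component sequences and keep the best one found.
    Real systems also add a simplification step that prunes setups
    down to a form humans can interpret."""
    best_setup, best_score = None, -1.0
    for _ in range(trials):
        setup = [random.choice(TOOLBOX)
                 for _ in range(random.randint(1, max_len))]
        s = score(setup)
        if s > best_score:
            best_setup, best_score = setup, s
        if best_score >= target:
            break
    return best_setup, best_score

setup, fitness = random_search()
print(f"best setup found: {setup} (fitness {fitness:.4f})")
```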

In a subsequent case, the algorithm overcame a barrier by reviving a long-forgotten technique and applying it in a new context. The scientists were immediately able to generalize this idea to other situations, and they wrote about it in a paper for Physical Review Letters.

“But, if you think about it, none of the core authors of this paper came up with the idea that is described in the paper,” said Krenn. “The idea came completely, implicitly from the machine. We were just analyzing what the machine has done.”

Krenn was among the speakers at a recent two-day meeting hosted by the National Academies that looked at the present and future of AI in advancing scientific discovery.

AI is advancing science in a range of ways — identifying meaningful trends in large datasets, predicting outcomes based on data, and simulating complex scenarios, said National Academy of Medicine President Victor Dzau in his welcoming remarks. As the technology develops, it may acquire the ability to carry out independent investigations.

“As we envision AI for the future and using it to do independent scientific inquiry, there’s a lot to consider,” said Dzau. “We have to be very careful about understanding the potential of [emerging technologies] possibly affecting society in many different ways … cost, access, equity, ethics, and privacy.” He noted that ongoing committees at NAM are exploring potential impacts in such areas.

Already speeding science

AI is accelerating research on complex neurodegenerative diseases like Alzheimer’s disease and Parkinson’s disease, explained Steven Finkbeiner, a senior investigator at the Gladstone Institutes.

When his team began using AI to analyze images of cells, “one of the very first things that surprised a lot of the biologists in my group was how rich their data might be, and it may contain information that basically we can’t see as humans, or have overlooked,” he said.

His team employed a deep-learning algorithm to try to identify the point at which a cell becomes destined to die — something human scientists have struggled to do, and a key endpoint in understanding neurodegenerative diseases. After being trained with 23,000 examples, the team’s deep-learning network was able to identify changes in the cell nucleus that could predict with high accuracy which cells were destined to die.
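As a rough illustration of the kind of model involved (not the network Finkbeiner’s team actually used), a small convolutional classifier can map an image crop of a nucleus to a probability that the cell will later die. The architecture, input size, and stand-in data below are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy binary classifier: grayscale nucleus crop in, "will this cell
# die?" logit out. All sizes are arbitrary illustrative choices.
class CellFateNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for "destined to die"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CellFateNet()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for the
# ~23,000 labeled examples mentioned above (label 1 = cell later died).
images = torch.randn(8, 1, 64, 64)           # batch of nucleus crops
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```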

Finkbeiner’s team is now using deep learning to identify even earlier changes in a cell that predict its eventual death — early enough that intervening in the process may eventually be possible.

Amy McGovern, a professor at the University of Oklahoma, explained how AI is being applied to meteorology. Initially, AI was used to correct biases in existing weather prediction models, which can improve forecasts and save lives and property.

“Now we are using it to try to improve our foundational understanding of the science of specific events,” she said. For example, researchers are using AI to generate synthetic storms and identify new precursors to tornadoes. Tornadoes are rare enough that real ones alone don’t generate enough data to inform that effort.
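One generic way to synthesize examples of a rare event is a generative adversarial network; the article doesn’t say which models the storm researchers use, so the sketch below is only a toy GAN with made-up sizes and random tensors standing in for real radar fields:

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to synthetic 8x8 "storm fields";
# the discriminator learns to tell them apart from (stand-in) real ones.
LATENT, FIELD = 16, 8 * 8

generator = nn.Sequential(
    nn.Linear(LATENT, 64), nn.ReLU(),
    nn.Linear(64, FIELD), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(FIELD, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),  # logit: real vs. synthetic
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(32, FIELD)  # stand-in for scarce real-storm data

for step in range(200):
    # Discriminator step: separate real fields from generated ones.
    fake = generator(torch.randn(32, LATENT)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: produce fields the discriminator accepts as real.
    fake = generator(torch.randn(32, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Once trained, the generator supplies extra "storms" to augment the
# handful of real ones.
synthetic = generator(torch.randn(1, LATENT)).view(8, 8)
print(synthetic.shape)
```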

Autonomy in the future?

Going forward, AI will likely be developed to go beyond tasks like identifying patterns in data and designing experiments. Speakers explored whether there will eventually be “AI scientists” that are able to act independently and also partner with human scientists.

Doing so would require AI scientists to have the capacity to perform the core competencies of human scientists, explained Yolanda Gil, principal scientist at the University of Southern California’s Information Sciences Institute. These include not only tasks like gathering and analyzing data but also a reflection process — what’s a good hypothesis to work on? — and the creativity to come up with new paradigms and ideas. “These are big challenges for AI,” said Gil.

Hiroaki Kitano, CEO of Sony AI, explained his proposal for the Nobel Turing Challenge — to build, by 2050, AI systems that can autonomously make major discoveries at the level of those worthy of a Nobel Prize. “Can AI form a groundbreaking concept that will change our perception?” he asked.

It’s both a challenge and a question, Kitano said. “If we manage to build a system like that, is it going to behave like the best human scientists, or does it show a very different kind of intelligence? Are we going to find an alternative form of scientific discovery that is something very different from what we do today?”

Navigating ethical dilemmas

Deborah Johnson, professor emeritus of engineering and society at the University of Virginia, expressed concern about the use of the words “autonomy,” “autonomous,” and “AI scientist,” because they seem to distance human scientists from responsibility for the AI systems they create and any negative impacts that result. “I worry that this is going to lead to a deflection of accountability and responsibility for what happens.”

Johnson was on a panel that explored ethical and societal issues that AI research raises — including how the benefits it yields can be distributed widely rather than reserved for a few.

“Much of the investment and excitement in the areas I work in — in medical artificial intelligence — is about pushing frontiers,” said Glenn Cohen, deputy dean of Harvard Law School. “It’s taking the work of top dermatologists or top brain surgeons and making it even better, helping people who already have access to very high-quality oncology survive longer.”

While that’s great, Cohen continued, much of AI’s benefit lies in its ability to democratize expertise — taking the expertise of average doctors and scaling it up so that it’s available to people in rural areas and around the world. Right now, investment, intellectual property, and funding incentives don’t match that ethical goal, and we need to think seriously about how to restructure them, he said.

Vukosi Marivate, ABSA UP Chair of Data Science at the University of Pretoria, said that governance of AI is a team sport; ethical decisions and responsibility shouldn’t rest solely with AI developers and scientists. Society should have a voice in what the expectations for limits on these technologies should be.

“And for society to have a voice, they must understand what is going on,” said Marivate. “It can’t just be that you have these discussions about societal impact, and then society’s not there.” AI developers and scientists should not be making decisions on their own that affect other people broadly, he said.

Moderator Bradley Malin, a professor at Vanderbilt University, emphasized the need to set up an ongoing process to reason about AI-related societal and ethical issues as they inevitably, unpredictably emerge. “These dilemmas are going to arise, and it’s probably unlikely that we’re going to know all of them beforehand.”
