Can AI Really Make Scientific Discoveries?

The Sakana AI system can make scientific discoveries in the area of machine learning in a fully automated way. But do its claims stack up, and would an army of AI scientists really be good news for science?

Scientific discovery is one of the most sophisticated human activities. It requires understanding existing knowledge, identifying significant gaps, formulating research questions, designing and conducting experiments, and analyzing and interpreting results. But can a process this complex be automated?

Recently, Sakana AI Labs announced the creation of an “AI scientist” – an artificial intelligence system that can make scientific discoveries in the area of machine learning in a fully automated way. Using generative large language models (LLMs) like those behind ChatGPT and other AI chatbots, the system can brainstorm, select a promising idea, code new algorithms, plot results, and write a paper summarizing the experiment and its findings, complete with references.

Sakana claims its system can undertake the complete lifecycle of a scientific experiment at a cost of just US$15 per paper – less than the price of a scientist’s lunch. But do these claims stack up? And even if they do, would an army of AI scientists churning out research papers with inhuman speed really be good news for science?

The Language of Science

A lot of science is done in the open, and almost all scientific knowledge has been written down somewhere. Millions of scientific papers are freely available online in repositories such as arXiv and PubMed. LLMs trained on this data capture the language of science and its patterns. It is therefore not surprising that a generative LLM can produce something that looks like a good scientific paper – it has ingested many examples that it can imitate.

“The ability of LLMs to judge the quality of research is also an open question. My own work shows LLMs are not great at judging the risk of bias in medical research studies, though this too may improve over time.” - Karin Verspoor

The Problem of Interestingness

What is less clear is whether an AI system can produce an interesting scientific paper. Crucially, good science requires novelty – but novelty alone does not make a result interesting.

Scientists don’t want to be told about things that are already known. Rather, they want to learn new things, especially new things that are significantly different from what is already known. This requires judgment about the scope and value of a contribution.

The Sakana system tries to address interestingness in two ways. First, it “scores” new paper ideas for similarity to existing research (indexed in the Semantic Scholar repository). Anything too similar is discarded.
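Sakana’s actual filter queries the Semantic Scholar index; purely as an illustration of this kind of similarity gate, here is a minimal sketch that substitutes bag-of-words cosine similarity for whatever retrieval and embedding machinery the real system uses (the `is_novel` helper and the 0.8 threshold are hypothetical):

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words term counts over lowercased word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_novel(idea: str, prior_abstracts: list[str], threshold: float = 0.8) -> bool:
    """Keep an idea only if it is sufficiently dissimilar from all prior work."""
    idea_vec = bow(idea)
    return all(cosine_similarity(idea_vec, bow(p)) < threshold
               for p in prior_abstracts)
```

An idea that nearly restates an indexed abstract scores close to 1.0 and is discarded; one with little lexical overlap passes through. The real difficulty, as the article notes, is that dissimilarity is a crude proxy for interestingness.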

Second, Sakana’s system introduces a “peer review” step – using another LLM to judge the quality and novelty of the generated paper. Here again, there are plenty of examples of peer review online on sites such as openreview.net that can guide how to critique a paper. LLMs have ingested these, too.

AI May Be a Poor Judge of AI Output

Feedback is mixed on Sakana AI’s output. Some have described it as producing “endless scientific slop”. Even the system’s own review of its outputs judges the papers weak at best. This is likely to improve as the technology evolves, but the question of whether automated scientific papers are valuable remains.

AI Tools to Support Scientists, Not Replace Them

AI researchers have been developing systems to support science for decades. Given the huge volumes of published research, even finding publications relevant to a specific scientific question can be challenging.

Specialized search tools make use of AI to help scientists find and synthesize existing work. These include the above-mentioned Semantic Scholar, but also newer systems such as Elicit, Research Rabbit, scite, and Consensus.

Text mining tools such as PubTator dig deeper into papers to identify key points of focus, such as specific genetic mutations and diseases, and their established relationships. This is especially useful for curating and organizing scientific information.

Machine learning has also been used to support the synthesis and analysis of medical evidence, in tools such as Robot Reviewer. Scholarcy generates summaries that compare and contrast the claims made in papers, helping researchers to perform literature reviews.

All these tools aim to help scientists do their jobs more effectively, not to replace them.

AI Research May Exacerbate Existing Problems

While Sakana AI states it doesn’t see the role of human scientists diminishing, the company’s vision of “a fully AI-driven scientific ecosystem” would have major implications for science.

One concern is that, if AI-generated papers flood the scientific literature, future AI systems may be trained on AI output and undergo model collapse, becoming increasingly ineffectual at innovating.

However, the implications for science go well beyond impacts on AI science systems themselves.

There are already bad actors in science, including “paper mills” churning out fake papers. This problem will only get worse when a scientific paper can be produced with nothing more than US$15 and a vague initial prompt.

The need to check for errors in a mountain of automatically generated research could rapidly overwhelm the capacity of actual scientists. The peer review system is arguably already broken, and dumping more research of questionable quality into the system won’t fix it.

Science is fundamentally based on trust. Scientists emphasize the integrity of the scientific process so we can be confident our understanding of the world (and now, the world’s machines) is valid and improving.

A scientific ecosystem where AI systems are key players raises fundamental questions about the meaning and value of this process, and what level of trust we should have in AI scientists. Is this the kind of scientific ecosystem we want?

Conclusion

The Sakana AI system is a striking development in automated research. While it has the potential to support scientists in their work, it also raises important questions about the role of AI in science and about the wisdom of a fully AI-driven scientific ecosystem. As the technology evolves, we need to weigh these questions carefully, so that the benefits of AI for science are realized without undermining the trust on which science depends.