Can New Training Techniques Save AI Models from Fact-Checking Failures?

The excitement surrounding large language models (LLMs) has been palpable, but as the technology evolves it is becoming increasingly clear that much work remains. One of the most significant challenges facing LLMs is their propensity for factual inaccuracies, bias, and misleading references, problems that pose a risk to science, education, and democracy itself.

A recent paper from the Oxford Internet Institute highlighted the severity of the issue, arguing that LLMs are so prone to sloppy output that they pose a significant risk to many aspects of society. The digital era is already struggling with factual accuracy across multiple spheres, and LLMs are no exception.

So, what’s the solution? A team of researchers from IBM, MIT, Boston University, and Monash University in Indonesia has proposed a new training technique called deductive closure training (DCT). The approach has the LLM assess the accuracy of its own output, building a consistency graph over related statements and using it to estimate the probability that each statement is true.
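
To make the idea concrete, here is a toy sketch of that self-verification step in Python. It assumes hypothetical helpers lm_prob_true and statements_conflict standing in for real model calls, and it illustrates the general mechanism rather than the authors’ implementation: the model’s own probabilities select the most plausible truth labelling that contains no contradictions, and the statements labelled true would then be used as fine-tuning data.

```python
from itertools import product

def lm_prob_true(statement: str) -> float:
    """Hypothetical stand-in: ask the LM how likely a statement is to be true (0..1)."""
    toy_scores = {
        "The Nile is a river in Africa.": 0.95,
        "The Nile is located in South America.": 0.10,
        "Rivers in Africa include the Nile.": 0.90,
    }
    return toy_scores.get(statement, 0.5)

def statements_conflict(a: str, b: str) -> bool:
    """Hypothetical stand-in: ask the LM whether two statements contradict each other."""
    contradictions = {
        ("The Nile is a river in Africa.",
         "The Nile is located in South America."),
    }
    return (a, b) in contradictions or (b, a) in contradictions

def most_plausible_consistent(statements):
    """Pick the truth labelling the model finds most probable among those
    in which no two contradictory statements are both marked true."""
    best_labels, best_score = None, float("-inf")
    for labels in product([True, False], repeat=len(statements)):
        clash = any(
            labels[i] and labels[j] and statements_conflict(statements[i], statements[j])
            for i in range(len(statements))
            for j in range(i + 1, len(statements))
        )
        if clash:
            continue
        # Score the labelling with the model's own confidence in each statement.
        score = sum(
            lm_prob_true(s) if kept else 1.0 - lm_prob_true(s)
            for s, kept in zip(statements, labels)
        )
        if score > best_score:
            best_labels, best_score = labels, score
    # Statements judged true would become fine-tuning targets.
    return [s for s, kept in zip(statements, best_labels) if kept]

seed_and_generated = [
    "The Nile is a river in Africa.",
    "The Nile is located in South America.",
    "Rivers in Africa include the Nile.",
]
print(most_plausible_consistent(seed_and_generated))
```

Exhaustive search over labellings only works for small clusters of related statements; the point is simply that consistency constraints plus the model’s own confidence can be used to filter what it generates.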

“Language models appear knowledgeable, but all they produce are predictions of words and phrases — an appearance of knowledge that doesn’t reflect a coherent grasp on the world. They don’t possess knowledge in the way that a person does.” - Research paper abstract

The team’s research paper (PDF) reported that supervised DCT improves LM fact verification and text generation accuracy by 3-26%, while fully unsupervised DCT improves verification accuracy by 12% on the CREAK dataset.

Another approach, called self-specialization, turns a generalist model into a specialist by having it ingest material from specific areas of knowledge. The technique can be used to refine the DCT approach, helping the model generate more accurate reports and responses.
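
What such a specialization loop might look like is sketched below, under the assumption that the model bootstraps its own domain training examples from a few seed examples and domain documents before being fine-tuned on them. StubModel, fine_tune, the prompts, and the example data are hypothetical placeholders, not the researchers’ actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Example:
    instruction: str
    response: str

class StubModel:
    """Hypothetical stand-in for a generalist LLM client."""
    def generate(self, prompt: str) -> str:
        return f"[model output for: {prompt[:40]}...]"

def fine_tune(model, examples):
    """Hypothetical stand-in for a real fine-tuning run."""
    print(f"Fine-tuning on {len(examples)} domain examples")
    return model

def self_specialize(base_model, seed_examples, domain_documents, rounds=2):
    """Grow a domain-specific instruction set from the model's own generations,
    then fine-tune the generalist model on it."""
    synthetic = list(seed_examples)
    for _ in range(rounds):
        for doc in domain_documents:
            # Ask the generalist model for a new domain question grounded in the document...
            instruction = base_model.generate(
                f"Write one question an expert might ask about:\n{doc}"
            )
            # ...and for its own answer, constrained to that source material.
            response = base_model.generate(
                f"Using only this source:\n{doc}\n\nAnswer the question: {instruction}"
            )
            synthetic.append(Example(instruction, response))
    return fine_tune(base_model, synthetic)

specialist = self_specialize(
    StubModel(),
    seed_examples=[Example("What does BRCA1 do?", "It helps repair damaged DNA.")],
    domain_documents=["BRCA1 is a human tumour-suppressor gene involved in DNA repair."],
)
```

Keeping generation grounded in the supplied documents is one simple way to limit how much of the specialist data inherits the generalist model’s factual errors.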

However, not everyone is convinced that these techniques address the root cause of the problem. Mark Stockley, co-presenter of The AI Fix podcast, argues that LLMs are widely misunderstood and were never intended to be simple fact- or truth-checking engines.

“The IBM research doesn’t seem to address the root cause of why LLMs are bad at facts, but it suggests there is a useful but unspectacular modification that might make them less bad at the things they’re currently bad at.” - Mark Stockley

Despite this skepticism, training techniques like DCT and self-specialization offer hope of improving the accuracy and reliability of LLMs. As the technology matures, addressing these weaknesses will be essential if the models are to be used responsibly and effectively.

The future of LLMs is uncertain, but one thing is clear: better training techniques will be central to any serious attempt to tackle these models’ factual shortcomings. Whether approaches like DCT and self-specialization fix the underlying problem, or simply make LLMs less bad at the things they are currently bad at, remains to be seen.