Beyond the Veil of Uncertainty: A Novel Approach to Quantifying Doubt in LLM Responses

A new approach to quantifying uncertainty in large language models (LLMs) is proposed, distinguishing between epistemic and aleatoric uncertainty. This method involves iterative prompting and mutual information-based metrics, offering a more nuanced understanding of LLM confidence and enhancing hallucination detection.
Deciphering Doubt: Navigating Uncertainty in LLM Responses

The reliability of large language models (LLMs) has become a pressing concern, in large part because of the uncertainty associated with their responses. This uncertainty falls into two categories: epistemic and aleatoric. Epistemic uncertainty arises from a lack of knowledge or data about the ground truth, whereas aleatoric uncertainty stems from irreducible randomness in the prediction problem (for example, a question with several equally valid answers). Separating the two is crucial for improving the reliability and truthfulness of LLM responses, and in particular for detecting and mitigating hallucinations.

Uncertainty in LLM responses

Several methods currently exist for detecting hallucinations in large language models (LLMs), each with its own limitations. One common baseline is the probability of the greedy response (T0), which scores the likelihood of the model's single most probable answer. Another is the semantic-entropy method (S.E.), which measures the entropy over clusters of semantically equivalent responses. A third is self-verification (S.V.), in which the model is prompted to assess the correctness of its own responses as an uncertainty estimate.

Despite their usefulness, these methods have notable drawbacks. The probability of the greedy response is sensitive to the size of the label set, so it degrades when many responses are plausible. More fundamentally, entropy-style scores conflate epistemic and aleatoric uncertainty: a query with many valid answers can look just as uncertain as one the model genuinely cannot answer.
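To make the semantic-entropy baseline concrete, here is a minimal sketch. It assumes a hypothetical `equivalent` predicate (in practice usually an NLI model) that decides whether two sampled responses mean the same thing; responses are greedily clustered and the entropy is taken over cluster frequencies.

```python
from math import log

def semantic_entropy(responses, equivalent):
    """Entropy over semantic clusters of sampled responses.

    responses:  list of sampled model outputs for one query.
    equivalent: hypothetical predicate deciding whether two responses
                mean the same thing (e.g. backed by an NLI model).
    """
    clusters = []  # each cluster is a list of mutually equivalent responses
    for r in responses:
        for c in clusters:
            if equivalent(r, c[0]):
                c.append(r)
                break
        else:
            clusters.append([r])
    n = len(responses)
    probs = [len(c) / n for c in clusters]
    return -sum(p * log(p) for p in probs)
```

With exact string match as the equivalence predicate, `semantic_entropy(["Paris", "Paris", "Paris", "Rome"], lambda a, b: a == b)` yields about 0.56 nats, while four identical answers yield 0. Note that a question with several genuinely valid answers also scores high, which is exactly the aleatoric confound described above.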

To overcome these limitations, the proposed approach constructs a joint distribution over multiple responses to a given query via iterative prompting: the LLM is asked to answer the query, then asked for further answers while its previous ones are included in the prompt. If the responses are independent of one another, the joint distribution approximates the ground truth, indicating low epistemic uncertainty. If later responses are instead pulled toward the earlier ones, epistemic uncertainty is high.

Iterative prompting for uncertainty quantification
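The iterative prompting loop can be sketched as follows. `ask` is a hypothetical stand-in for a single LLM sampling call (prompt string in, one response out), and the prompt wording is illustrative, not the paper's exact template.

```python
def iterative_responses(ask, query, k=3):
    """Sample k responses, feeding earlier answers back into the prompt.

    ask:   hypothetical LLM call, prompt string -> one sampled response.
    query: the user's question.
    """
    responses = []
    for _ in range(k):
        prompt = query
        if responses:  # include all previous answers in the prompt
            prior = "; ".join(responses)
            prompt = (f"{query}\nPreviously proposed answers: {prior}.\n"
                      "Propose another answer to the same question.")
        responses.append(ask(prompt))
    return responses
```

The sequence of responses gathered this way is what the joint distribution, and the mutual-information metric below, is computed over.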

This iterative prompting procedure yields an information-theoretic metric of epistemic uncertainty: the mutual information (MI) of the joint distribution of responses, which is insensitive to aleatoric uncertainty. The authors develop a finite-sample estimator for this MI and show that its error is negligible in practice, despite the potentially infinite support of LLM outputs.
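The intuition can be illustrated with a plain plug-in MI estimate over sampled (first response, follow-up response) pairs. This is a simplified stand-in for the paper's finite-sample estimator, not a reproduction of it: near-zero MI means follow-up answers are sampled independently of earlier ones (low epistemic uncertainty), while high MI means later answers depend on earlier ones.

```python
from collections import Counter
from math import log

def plug_in_mi(pairs):
    """Plug-in estimate of I(X; Y) from sampled (x, y) pairs.

    Here x is the first response to a query and y the follow-up
    response obtained with x included in the prompt.
    """
    n = len(pairs)
    pxy = Counter(pairs)             # empirical joint distribution
    px = Counter(x for x, _ in pairs)  # empirical marginals
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

For pairs where the two coordinates vary independently the estimate is 0; for perfectly coupled pairs such as `[("a", "a"), ("b", "b")]` it is log 2, the signature of answers parroting what is already in the prompt.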

The paper also presents a hallucination-detection algorithm built on this MI metric. With a decision threshold set through a calibration procedure, the method outperforms traditional entropy-based approaches, especially on datasets mixing single-label and multi-label queries, maintaining high recall while keeping error rates low.

Hallucination detection using mutual information
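A simple quantile-based calibration illustrates how such a threshold can be set; this is a generic sketch, not the paper's exact procedure. MI scores are collected on held-out queries the model is known to answer reliably, and a new response is flagged when its score exceeds the (1 − target false-positive rate) quantile of those calibration scores.

```python
def calibrate_threshold(scores, target_fpr=0.05):
    """Pick an MI threshold from scores observed on trusted queries.

    scores:     MI values on calibration queries answered correctly.
    target_fpr: fraction of reliable answers we tolerate mis-flagging.
    """
    s = sorted(scores)
    idx = min(len(s) - 1, int((1 - target_fpr) * len(s)))
    return s[idx]  # (1 - target_fpr) empirical quantile

def flag_hallucination(mi_score, threshold):
    """Flag a response when its MI score exceeds the calibrated threshold."""
    return mi_score > threshold
```

The choice of `target_fpr` trades recall on true hallucinations against false alarms on good answers, which is where the recall/error trade-off mentioned above is controlled.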

This paper presents a significant advancement in quantifying uncertainty in LLMs by distinguishing between epistemic and aleatoric uncertainty. The proposed iterative prompting and mutual information-based metric offer a more nuanced understanding of LLM confidence, enhancing the detection of hallucinations and improving overall response accuracy. This approach addresses a critical limitation of existing methods and provides a practical and effective solution for real-world applications of LLMs.

Quantifying uncertainty in LLMs