Study finds LLMs can identify their own mistakes

A well-known problem of large language models (LLMs) is their tendency to generate incorrect or nonsensical outputs, often called “hallucinations.” While much research has focused on analyzing these errors from a user’s perspective, a new study by researchers at Technion, Google Research and Apple investigates the inner workings of LLMs, revealing that these models possess a much deeper understanding of truthfulness than previously thought.

The term hallucination lacks a universally accepted definition and covers a wide range of LLM errors. For their study, the researchers adopted a broad interpretation, treating hallucinations as encompassing all errors produced by an LLM, including factual inaccuracies, biases, common-sense reasoning failures, and other real-world mistakes.

Most previous research on hallucinations has focused on analyzing the external behavior of LLMs and examining how users perceive these errors. However, these methods offer limited insight into how errors are encoded and processed within the models themselves.

Some researchers have explored the internal representations of LLMs, suggesting they encode signals of truthfulness. However, previous efforts were mostly focused on examining the last token generated by the model or the last token in the prompt. Since LLMs typically generate long-form responses, this practice can miss crucial details.

The new study takes a different approach. Instead of just looking at the final output, the researchers analyze “exact answer tokens,” the response tokens that, if modified, would change the correctness of the answer.
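To make the idea concrete, here is a minimal sketch (in Python, using Hugging Face Transformers) of how one might read out a model's hidden state at the final token of a generated answer. It is an illustration under simplifying assumptions, not the paper's code: the model name is just an example, and the paper's selection of the exact answer token may differ from simply taking the last token of the answer.

```python
# Illustrative sketch, not the paper's code: run a causal LM over a prompt plus
# its generated answer and read out the hidden state at the last answer token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-v0.1"  # any causal LM from the Hub works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

@torch.no_grad()
def answer_token_activation(prompt: str, answer: str, layer: int = -1) -> torch.Tensor:
    """Return the hidden state (hidden_dim,) at the final token of `answer`."""
    inputs = tokenizer(prompt + answer, return_tensors="pt")
    outputs = model(**inputs, output_hidden_states=True)
    # outputs.hidden_states is a tuple with one (1, seq_len, hidden_dim) tensor per layer;
    # the answer sits at the end of the sequence, so its last token is position -1.
    return outputs.hidden_states[layer][0, -1]
```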

The researchers conducted their experiments on four variants of Mistral 7B and Llama 2 models across 10 datasets spanning various tasks, including question answering, natural language inference, math problem-solving, and sentiment analysis. They allowed the models to generate unrestricted responses to simulate real-world usage. Their findings show that truthfulness information is concentrated in the exact answer tokens. 

“These patterns are consistent across nearly all datasets and models, suggesting a general mechanism by which LLMs encode and process truthfulness during text generation,” the researchers write.

To predict hallucinations, they trained classifier models, which they call “probing classifiers,” to predict features related to the truthfulness of generated outputs based on the internal activations of the LLMs. The researchers found that training classifiers on exact answer tokens significantly improves error detection.

“Our demonstration that a trained probing classifier can predict errors suggests that LLMs encode information related to their own truthfulness,” the researchers write.

Generalizability and skill-specific truthfulness

The researchers also investigated whether a probing classifier trained on one dataset could detect errors in others. They found that probing classifiers do not generalize universally across tasks. Instead, truthfulness appears to be “skill-specific”: a classifier transfers between tasks that require similar skills, such as factual retrieval or common-sense reasoning, but not between tasks that require different skills, such as sentiment analysis.
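A hypothetical version of that cross-task check could look like the following: train a probe on activations and correctness labels from one dataset, then score it on another. The variable and dataset names are placeholders, not the paper's exact benchmarks.

```python
# Sketch of a skill-transfer check: fit a probe on one task's activations and
# evaluate it on another's. The activation matrices and labels are placeholders
# you would collect yourself (e.g. with the helpers sketched earlier).
from sklearn.linear_model import LogisticRegression

def transfer_accuracy(train_acts, train_labels, eval_acts, eval_labels) -> float:
    probe = LogisticRegression(max_iter=1000).fit(train_acts, train_labels)
    return probe.score(eval_acts, eval_labels)

# e.g. factual QA -> factual QA (similar skill) vs. factual QA -> sentiment (different skill):
# print(transfer_accuracy(qa_acts_a, qa_labels_a, qa_acts_b, qa_labels_b))
# print(transfer_accuracy(qa_acts_a, qa_labels_a, sentiment_acts, sentiment_labels))
```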

“Overall, our findings indicate that models have a multifaceted representation of truthfulness,” the researchers write. “They do not encode truthfulness through a single unified mechanism but rather through multiple mechanisms, each corresponding to different notions of truth.”

Further experiments showed that these probing classifiers could predict not only the presence of errors but also the types of errors the model is likely to make. This suggests that LLM representations contain information about the specific ways in which they might fail, which can be useful for developing targeted mitigation strategies.

Finally, the researchers investigated how the internal truthfulness signals encoded in LLM activations align with their external behavior. They found a surprising discrepancy in some cases: the model’s internal activations correctly identify the right answer, yet the model consistently generates an incorrect response.
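One hedged way to picture this kind of comparison: sample several candidate answers, score each with a trained probe, and check whether the probe’s top-ranked candidate is correct even when the model’s default answer is not. The helper names below are hypothetical and build on the earlier sketches.

```python
# Hypothetical sketch: compare the model's default answer with the answer the
# probe would pick among sampled candidates. `featurize(question, answer)` is
# assumed to return the activation vector at the answer token (as sketched above).
import numpy as np

def probe_vs_default(question, gold, default_answer, candidates, probe, featurize):
    feats = np.stack([featurize(question, a).detach().numpy() for a in candidates])
    probe_pick = candidates[int(np.argmax(probe.predict_proba(feats)[:, 1]))]
    return {"default_correct": default_answer == gold,
            "probe_pick_correct": probe_pick == gold}
```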

This finding suggests that current evaluation methods, which solely rely on the final output of LLMs, may not accurately reflect their true capabilities. It raises the possibility that by better understanding and leveraging the internal knowledge of LLMs, we might be able to unlock hidden potential and significantly reduce errors.

Future implications

The study’s findings can help design better hallucination mitigation systems. However, the techniques it uses require access to internal LLM representations, which is mainly feasible with open-source models.

The findings, however, have broader implications for the field. The insights gained from analyzing internal activations can help develop more effective error detection and mitigation techniques. This work is part of a broader body of research that aims to better understand what happens inside LLMs across the billions of activations computed at each inference step. Leading AI labs such as OpenAI, Anthropic and Google DeepMind have been working on various techniques to interpret the inner workings of language models. Together, these studies can help build more robust and reliable systems.

“Our findings suggest that LLMs’ internal representations provide useful insights into their errors, highlight the complex link between the internal processes of models and their external outputs, and hopefully pave the way for further improvements in error detection and mitigation,” the researchers write.


