Publications

Rethinking Hallucinations: A Cognitive-Inspired Taxonomy and Comprehensive Survey in Large Language Models, Large Vision-Language Models, and Multimodal Large Language Models

Abstract

Hallucination remains a critical challenge in generative artificial intelligence (AI), manifesting as outputs that are factually incorrect, logically inconsistent, or unfaithful to input constraints. As large-scale models continue to advance, understanding and mitigating hallucinations is essential for improving reliability across diverse applications. However, comprehensive surveys spanning large language models (LLMs), large vision-language models (LVLMs), and multimodal large language models (MLLMs) remain scarce, and the underlying mechanisms of hallucination have not been fully explored or explained at a scientific level. To address this gap, this paper presents the first comprehensive review grounded in interdisciplinary fields such as cognitive science and psychology, offering a deeper understanding through cross-disciplinary insights. This survey systematically reviews recent progress in hallucination research, covering definitions, causes, and mitigation strategies across multiple modalities. We propose a novel taxonomy inspired by cognitive science, drawing parallels between hallucinations in LLMs and errors in human language processing. Furthermore, we highlight the latest advancements in LVLMs and MLLMs, where hallucinations pose unique challenges due to the complexity of cross-modal interactions. Additionally, we discuss real-world implications, emphasizing how hallucinations impact high-stakes applications and outlining key challenges that remain unsolved. By identifying future research directions, this survey aims to provide insights into …

Date
2025
Authors
Shao-Jun Xia, Huixin Zhang, Yifan Jiang, Xiaoyang Chen, Yike Chen, Zitong Li, Zhongwei Wan, Anlan Sun
Source
Large Vision-Language Models, and Multimodal Large Language Models (November 09, 2025)