Abstract
USCH addresses a problem that the notion of output-level hallucination does not fully capture. In sustained AI dialogue, users can gradually develop contextual hallucinations that shape cognition, belief, trust, behavior, and self-understanding, even when the model is operating within its technical specification.
Paper Scope
The paper distinguishes model-side hallucination from user-side contextual hallucination, frames USCH as a non-clinical construct, and proposes a three-layer context model, a six-stage formation process, and fourteen observable phenomena.
This page embeds the paper PDF for online reading. If the built-in viewer is unavailable in your browser, use the Open or Download actions above.