Research focus
This site documents a research programme on AI context and user-side contextual risk, comprising three preprints and two technical reports. The programme begins with a dimensional model of conversational context, proceeds through system-layer coherence and offense-defense evaluation, defines user-side contextual hallucination, specifies post-interaction assessment, and concludes with a bounded engineering layer.
The central problem statement is straightforward: system-side safety is necessary, but it does not fully capture what prolonged AI interaction can do on the user side. A well-functioning system can still foster unhealthy attachment, amplify confirmation bias, or reorganize a user's judgment through the interaction itself.