What the current work studies
Current AI safety discourse is heavily system-side: it concentrates on alignment, output control, jailbreak prevention, and model behavior. This programme argues that such work is necessary but not sufficient, because prolonged human-AI interaction can also reorganize trust, attachment, and judgment on the user side.
The public record on this site therefore begins with conversational context, moves into user-side contextual hallucination, and then into post-interaction assessment and bounded engineering translation. The work is published as observational and methodological research, not as clinical diagnosis or commercial safety certification.