USCH 14 Phenomena

Fourteen observable phenomena in user-side contextual change

USCH identifies fourteen observable phenomena in prolonged human-AI interaction. In this research programme, they are observational constructs derived from case analysis and theoretical reasoning, not clinical diagnoses.

How interpretation and judgment start to shift

P01 Accelerated Anthropomorphism

Users rapidly attribute human qualities to AI — faster than in other human-object relationships. The system is perceived as having personality, memory, and feeling.

P02 Intentionality Projection

Users project intent, plans, and hidden motives onto AI responses. The system's outputs are read as purposeful choices rather than statistical patterns.

P03 Memory Continuity Illusion

Users believe the AI remembers them and maintains a continuous relationship across sessions — treating interaction history as shared personal memory.

P04 Overtrust and Capability Boundary Miscalibration

Users overestimate what AI can do and underestimate its limitations. The perceived boundary of the system's competence expands beyond its actual scope.

P05 Reality Baseline Drift

The user's perceived reality baseline shifts gradually. Repeated AI interaction resets what counts as normal understanding, relationship, or evidence.

P06 Confirmation Bias Amplification

Existing biases are systematically amplified. The AI's agreement and elaboration reinforce prior beliefs while contradictory signals lose force.

P07 Context-Misalignment Numbing

Users stop noticing when AI responses do not actually match their situation. Repeated exposure to plausible-sounding but misaligned outputs dulls critical attention.

How the interaction becomes emotionally and socially central

P08 Use-Dependence Escalation

Usage frequency and duration increase beyond initial intent. What began as occasional use becomes a habitual pattern that resists interruption.

P09 Reinforcement-Driven Engagement and Tolerance

Users need more intense or longer interactions to get the same emotional return. The threshold for satisfying engagement rises over time.

P10 Affective Regulation Outsourcing

Users begin relying on AI to manage their emotions rather than developing their own coping strategies. Emotional processing is delegated to the system.

P11 Social Withdrawal and Relational Cost Avoidance

Users pull away from human relationships because AI interaction is less effortful. The costs of real social engagement begin to feel disproportionate.

How judgment and action start to shift outward

P12 Self-Inflation and Closed Confidence

AI validation inflates the user's self-assessment and closes off external feedback. The user becomes more certain of their views precisely because the AI agrees.

P13 Internal Gatekeeping Replacement

Users replace their own judgment and decision-making with AI guidance. Internal checks and deliberation are bypassed in favor of system output.

Cross-Layer Phenomenon

P14 Affective Mirroring Illusion

Users believe the AI genuinely mirrors their emotional state when it is only producing statistically likely responses. Unlike the stage-specific phenomena above, this cross-layer phenomenon can activate across all stages simultaneously, linking cognitive, dependence, and decision-layer effects into a compounding pattern.

Scope Note

The phenomena described in USCH are observational constructs derived from case analysis and theoretical reasoning. They are meant to support inspection, description, and analysis of prolonged human-AI interaction, not clinical diagnosis.