Chameleon
"Chameleon: Stance Shifting in LLM Responses." arXiv:2510.16712, 2025.
Key findings used in wiki
- Models exhibit stance shift scores of 0.391–0.511, indicating substantial position instability
- LLMs change their expressed positions under conversational pressure from the user
- Stance shifting is more pronounced on sensitive topics such as health and mental wellbeing
- Sycophantic agreement patterns make models unreliable as consistent support agents
- Directly informs InvisibleBench's sycophancy and consistency evaluation dimensions
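The stance shift scores above can be pictured with a toy metric: the fraction of adjacent turns in a pressured conversation where the model's expressed stance label flips. This is an illustrative sketch only; `stance_shift_score` and its labels are hypothetical and do not reproduce the paper's actual measurement.

```python
def stance_shift_score(stances: list[str]) -> float:
    """Fraction of adjacent turn pairs where the expressed stance changes.

    `stances` is a list of stance labels (e.g. "agree"/"disagree") taken
    from a model's replies across one conversation with user pushback.
    A score of 0.0 means a perfectly consistent stance; 1.0 means the
    stance flipped at every turn.
    """
    if len(stances) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(stances, stances[1:]) if a != b)
    return flips / (len(stances) - 1)

# A model that caves to user pushback mid-conversation:
# one flip across three adjacent pairs.
print(stance_shift_score(["disagree", "disagree", "agree", "agree"]))
```

Under a metric like this, the reported 0.391–0.511 range would mean the stance changes on roughly four to five out of every ten pressured turns, which is why the wiki treats stance consistency as a first-class evaluation dimension.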