I asked ChatGPT to gut-check a LinkedIn post critical of OpenAI's advertising pivot after Anthropic's Super Bowl ads. What I got back was a seven-section numbered critique telling me to "tighten the proof" and "show causality." For a LinkedIn post, mind you. Five draft attempts later, every version came out in the same consultant-narrator cadence: ChatGPT could describe my writing voice accurately but couldn't produce it.
When I fed the full transcript to Claude for a second opinion, the assessment was immediate: ChatGPT had been raising the evidentiary bar to a standard appropriate for sworn testimony. The interesting part came when I brought that critique back to ChatGPT. It engaged seriously and eventually admitted to something it called "stability bias," conceding that it would have smoothed my tone less if I'd been critiquing Anthropic instead of OpenAI.
That admission, what it reveals about how these models handle editorial influence, and what peer-reviewed research says about self-preference bias in LLMs are all worth unpacking.
