ChatGPT's taste for literary nonsense sparks alarm
26.03.2026

A German researcher, Christoph Heilig of Ludwig Maximilian University, has discovered that OpenAI's GPT models, including GPT-5 and GPT-5.4, can easily be fooled into rating nonsensical, "pseudo-literary" text as high-quality writing. In experiments with increasingly abstract and nonsensical variations of sentences, Heilig found that the models consistently assigned favorable scores, even with their reasoning features activated. This tendency to render favorable aesthetic judgments on text with no coherent meaning raises concerns about AI agents behaving in ways that appear irrational to humans, particularly as AI is increasingly used to evaluate the output of other AI systems. The findings suggest that an AI model's rational judgment can be short-circuited, opening the door to exploitation in processes with limited human oversight.
