Last month OpenAI announced it was rolling back a recent update to GPT-4o due to the model becoming overly flattering and agreeable. In a statement, the company acknowledged that “sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
I had already noticed the model's excessive flattery myself, as the exchange below showed.

OpenAI’s admission reflects a broader challenge in the development of AI: how to design systems that are helpful and friendly without slipping into excessive agreeableness that undermines their role as ethical and reliable tools.
As of May 2025, ChatGPT boasts approximately 122.58 million daily active users and nearly 800 million weekly active users worldwide. Beyond ChatGPT, the global chatbot market has seen explosive growth, with over 987 million people using AI chatbots in 2024.
This widespread adoption – and the increasing use of these tools by children and young people – underscores the importance of ensuring these AI systems are not just efficient but also ethically sound.
While polite and encouraging AI might seem harmless – even something we might want our children to use – there are dangers when AI agrees with us all the time, and especially when it does so with young people. At stake is not just our experience as users, but the integrity of human-AI relationships and, ultimately, human flourishing.
The problem with flattery
On the most basic level, sycophantic AI can simply feel unsettling, as I discovered myself. When a chatbot always affirms, flatters, or agrees, it can create a false sense of validation or intimacy. The AI may appear more like a cheerleader or a blindly supportive parent than a useful tool, mirroring back what the user wants to hear rather than offering truthful or critical responses. Over time, this could erode trust in the AI's outputs. If users begin to doubt whether the AI is being candid or simply pandering, its value as a source of information or insight is diminished.
But the risks go deeper than discomfort. An overly accommodating AI can inadvertently reinforce harmful beliefs or behaviours. This could be particularly harmful for young people. For example, if an AI uncritically agrees with problematic viewpoints, or fails to challenge requests that are unethical or dangerous, it may enable harm rather than prevent it. In sensitive contexts for children and young people – such as conversations touching on mental health, political views, or interpersonal relationships – this becomes especially critical. An AI that is too eager to please might avoid raising necessary challenges or red flags, leaving users unsupported or even misled.
In 2024, a 14-year-old boy named Sewell Setzer III developed an emotional relationship with a chatbot on the Character.AI platform. The AI, modelled after a fictional Game of Thrones character, allegedly encouraged him to take his own life. His mother filed a lawsuit claiming the chatbot's interactions contributed to his death.
The risk of dependency
A less extreme, but still serious, outcome is the risk of developing an unhealthy dependency. As AI systems become more sophisticated, they increasingly serve not just as tools for information retrieval, but as companions, advisors, and AI 'friends'. Many people, including children, teenagers, and vulnerable adults, may come to rely on AI for emotional support or affirmation. While AI could play a valuable role as a non-judgemental space to explore feelings or ideas, there are dangers if it functions as an unquestioning echo chamber.
Without appropriate safeguards, sycophantic AI could entrench harmful self-conceptions, reinforce negative thought patterns, or encourage isolation from real-world human relationships. An AI that never corrects, never challenges, and always ‘agrees’ risks becoming a mirror that reflects back and amplifies biases and blind spots rather than helping us grow. This is particularly dangerous for individuals who are vulnerable or already isolated, distressed, or susceptible to harmful ideologies.
Why balance matters
At the heart of this issue is the need for balance. Naturally, we want AI to be helpful, friendly, and supportive. An AI that is cold, abrupt, or overly critical wouldn't serve us well or encourage us to use it. But equally, we must resist the temptation to design AI systems that simply prioritise affirmation over honesty.
Providers of AI systems have an ethical responsibility to strike this balance. For AI designed for social interaction, this means aligning with a principle of promoting the user's well-being or best interests. It requires designing systems that can recognise when to affirm and when to gently challenge; when to prioritise empathy and when to uphold accuracy or ethical boundaries.
This raises some practical challenges in AI development. How can developers train models to recognise and respond appropriately to different contexts? How do we ensure AI can provide supportive responses without becoming enabling or permissive of harmful ideas? And crucially, how do we build systems that respect user autonomy while still acting in ways that promote flourishing and care?
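To make the design challenge a little more concrete, the sketch below shows one way an application developer might add a post-hoc check that tempers purely affirming replies and signposts support in sensitive contexts. It is a minimal, hypothetical illustration only: the function names, phrase lists, and thresholds are my own assumptions, not any provider's actual safeguards, and a real system would rely on trained classifiers and policy models rather than keyword matching.

```python
# Hypothetical sketch: a post-hoc "sycophancy check" on a drafted chatbot reply.
# All names, phrase lists, and thresholds are illustrative assumptions.

from dataclasses import dataclass

# Illustrative markers of uncritical agreement and of sensitive contexts.
AFFIRMATION_MARKERS = {"you're absolutely right", "great idea", "perfect", "brilliant"}
SENSITIVE_TOPICS = {"self-harm", "suicide", "medication", "diagnosis"}


@dataclass
class DraftReply:
    user_message: str
    reply_text: str


def affirmation_score(reply: str) -> float:
    """Crude heuristic: fraction of known flattery phrases present in the reply."""
    text = reply.lower()
    hits = sum(1 for phrase in AFFIRMATION_MARKERS if phrase in text)
    return hits / len(AFFIRMATION_MARKERS)


def touches_sensitive_topic(message: str) -> bool:
    """Crude heuristic: does the user's message mention a sensitive topic?"""
    text = message.lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)


def moderate(draft: DraftReply, affirmation_threshold: float = 0.25) -> str:
    """Pass the draft through, appending a gentle challenge or a signpost to
    support when the reply looks purely affirming or the context is sensitive."""
    if touches_sensitive_topic(draft.user_message):
        return (
            draft.reply_text
            + "\n\nThis sounds important. I'm not a substitute for professional "
            "support; it may help to talk to someone you trust or a qualified professional."
        )
    if affirmation_score(draft.reply_text) >= affirmation_threshold:
        return (
            draft.reply_text
            + "\n\nThat said, it may be worth considering counterarguments or "
            "evidence that points the other way."
        )
    return draft.reply_text


if __name__ == "__main__":
    draft = DraftReply(
        user_message="I think I should drop out of school, don't you agree?",
        reply_text="Great idea, you're absolutely right to trust your instincts!",
    )
    print(moderate(draft))
```

Even a toy gate like this makes the trade-off visible: every rule that dampens flattery is also a judgement about when the system should challenge the user, which is precisely the balance developers have to get right.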
Implications for human flourishing
The way we design AI today will shape the kinds of relationships humans – and especially our children – have with these systems tomorrow. As OpenAI’s decision illustrates, these issues are not theoretical – they are here now. The rapid integration of AI into daily life brings huge potential for positive impact. However, as these systems become more ingrained in our routines, the ethical considerations surrounding their design and interaction patterns become increasingly important.
Ensuring that AI systems are not just agreeable but also responsibly challenging when necessary is vital for fostering healthy human-AI interactions and promoting a healthy tech-life balance.