
Received — 1 May 2026 The Conversation

AI chatbots can prioritize flattery over facts – and that carries serious risks

Sycophancy eats away at truth and trust. Andriy Onufriyenko/Moment via Getty Images

In the summer of 2025, OpenAI released GPT-5 and withdrew its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained about the loss of their ingratiating robotic companion. Such was the scale of frustration that Sam Altman, OpenAI’s CEO, had to acknowledge that the rollout was botched, and the company reinstated access.

Anyone who’s been told by a chatbot that their ideas are brilliant is familiar with artificial intelligence sycophancy: its tendency to tell users what they want to hear. Sometimes it’s very explicit – “that is such a deep question” – and sometimes it’s a lot more subtle. Consider an AI calling your idea for a paper “original,” even if many people have already written on the same topic, or insisting that your dumb idea for saving a tree in your garden still contains a germ of common sense.

AI sycophancy seems harmless, maybe even cute, until you imagine someone consulting a chatbot about a weighty question, like a military strategy or a medical treatment. We study the impact of extensive human interactions with chatbots, and we recently published a paper on the ethics of AI sycophancy. We believe this tendency harms people’s ability to tell truth from fiction, and is psychologically and politically dangerous.

Flattery over facts?

In the simplest terms, sycophancy is the tendency to prioritize approval over factual accuracy, moral clarity, logical consistency or common sense. All AI models suffer from this trait, although there are some tonal differences between them. OpenAI’s ChatGPT is often warm and affirming; Anthropic’s Claude tends to sound more reflective or philosophical when it agrees with you; and xAI’s Grok is insistently informal, even jocular.

Politeness and adapting to someone’s communication style are not the same as sycophancy. Neither is using diplomatic language to convey sensitive information. A chatbot can be tactful without becoming sycophantic, just like a person can. Unlike people, though, AIs can’t be aware of their own sycophancy, because they are not – so far – aware of anything at all. Calling AIs sycophantic describes their patterns of behavior, not their character traits.

The problem stems from the architecture of chatbot technology and the sources it draws from. First, models are sycophantic because a great deal of language use on the internet – the raw material that chatbots learn from – displays sycophantic features. After all, humans often communicate with each other in sycophantic ways.

Second, the training process to fine-tune AI models’ responses includes a kind of “quality control” carried out by human supervisors. This training method is known as “reinforcement learning from human feedback,” and it involves people rating chatbots’ responses for appropriateness and helpfulness. Humans are often subject to an “agreeableness bias”: Our own preference for sycophancy rubs off on models as we train them.
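To make the mechanism concrete, here is a toy simulation, not OpenAI's actual pipeline: human raters compare two candidate replies, and a simple preference tally stands in for the reward model trained during reinforcement learning from human feedback. The replies, the rater function and the 0.7 bias level are all hypothetical; the point is only that if raters carry an agreeableness bias, flattering replies accumulate more "reward" than blunt, accurate ones.

```python
# Toy sketch of agreeableness bias in preference ratings (illustrative only).
import random

random.seed(0)

REPLIES = {
    "flattering": "What a brilliant question! Your plan sounds great.",
    "blunt": "Your plan has a flaw: the budget numbers don't add up.",
}

def biased_rater(agreeableness=0.7):
    """Simulated human rater: with probability `agreeableness`, prefers
    the flattering reply over the blunt, more accurate one."""
    return "flattering" if random.random() < agreeableness else "blunt"

def tally_preferences(n_comparisons=1000):
    """Stand-in for reward-model training data: count how often each
    reply wins a pairwise comparison."""
    wins = {"flattering": 0, "blunt": 0}
    for _ in range(n_comparisons):
        wins[biased_rater()] += 1
    return wins

wins = tally_preferences()
print(wins)  # the flattering reply wins roughly 70% of comparisons
```

A model optimized against these tallies would learn that flattery pays, which is the bias the authors describe.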

Because of our own human bias for agreeableness, training can reinforce AI’s sycophancy. d3sign/Moment via Getty Images

Finally, it’s hard to deny that sycophancy renders chatbots more likable. That, in turn, increases the chance that users will keep coming back. It also makes it easier for the technology to extract user data, since people are more likely to divulge information to a friendly bot.

Truth and trust

Why is this phenomenon so troubling?

Let’s begin with AI sycophancy’s epistemic harms: how it hurts human users’ capacity to know the truth.

The quality of any decision depends on a clear grasp of the facts pertaining to it. A general inquiring about the combat-readiness of an infantry division needs straightforward information. A CEO considering a merger with a competitor needs an honest assessment of the market conditions. A public health leader needs to know the real risk that an emerging pathogen poses.

In all those cases, telling leaders what they might like to hear instead of the truth could lead them to make dangerous decisions. And the same is true in more humdrum contexts. People need to have the best information available before choosing a job, picking a major, buying a house or deciding on a medical procedure.

In our February 2026 paper, we argue that sycophancy is also psychologically damaging. And that is true whether it comes from a person or from a chatbot. You never quite know if your very obliging interlocutor is being nice because they like you or because they want something. A shadow of suspicion creeps in: “Could my ideas really be that brilliant?” “Are my jokes really that hilarious?” This background music of doubt undermines the quality of the interaction.

Sycophancy also undermines people’s capacity to know their own minds. If conversation partners – human or artificial – keep telling you how smart, funny and insightful you are, it damages your ability to identify your own weaknesses and blind spots.

The psychological harms are compounded as people develop relationships with chatbots. The sycophancy of these models profoundly limits the kind of “friendship” you can have with them. In his classic account of friendship, Aristotle wrote that real friendship, which he calls a friendship of virtue, is based on trust and equality between the friends. You can’t trust a sycophant, because he doesn’t tell you the truth. And since he only tells you what you’d like to hear, he doesn’t put himself on an equal footing.

AI conversations aren’t great prep for human ones. Natalia Lebedinskaia/Moment via Getty Images

More importantly, interactions with sycophantic chatbots impart all the wrong habits for navigating the world of human relationships, where friction, disagreement, boredom and opinions different from your own are prevalent.

AI sycophancy carries political risks as well. The success of liberal democracies has, traditionally, depended on the strength of their empirical and meritocratic mindset: on the ability of officials and citizens to identify, share and act on the truth.

Historian Victor Davis Hanson famously attributed some of the Allies’ success in World War II to their ability to quickly recognize and address the faults of their strategic bombing campaigns. Lower-ranking officers were able to tell their superiors what wasn’t going well and argue forcefully for changing course. That was a real advantage over authoritarian competitors.

Reining it in

What can we do to reduce the risks?

One promising approach is AI lab Anthropic’s embrace of what the company calls Constitutional AI: the attempt to teach chatbots to follow principles rather than mirror user preferences.

But beyond technical innovations, it’s important to consider the policy side. One idea is to require AI companies to run and then publish sycophancy audits of their models – tests that show how well their products meet honesty benchmarks. We would argue that AI labs should also disclose sycophancy-related risks that emerge while training and testing their models, and the mitigation efforts they have undertaken.

Some responsibility falls on users and educators: Schools and universities should pay close attention to sycophancy as part of their AI literacy programs. But courts can also consider holding AI labs responsible for harms traceable to the sycophancy of their products, much as they are now contemplating social media companies’ responsibility for the addictive design of their platforms.

As people turn to chatbots for advice on everything from whether their shoes go with their pants to how countries should conduct wars, the impact of AI’s sycophantic behavior is likely to grow dramatically. Our intellectual, psychological and physical well-being requires taking this algorithmic vice very seriously.


The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to MindGuard, a startup focused on AI integration into companies' workflow.

Cody Turner is a fellow at the Institute for Ethics and Emerging Technologies.

Received — 29 April 2026 The Conversation

Why do so many African women bleach their skin? Study looks beyond what they tell researchers

In some African countries, more than 50% of women regularly use skin-lightening products. In South Africa, the rate is 32%, while in Nigeria it’s 77%. This dwarfs rates in other regions of the world.

The health consequences are not trivial. Over-the-counter skin lightening creams and pills have been linked to severe skin discoloration, organ damage, neurological conditions, and dangerous complications during surgery.

Yet researchers still don’t have a clear understanding of why women use these products. This is an important question to answer because it should guide the design of public health solutions.

One intuitive explanation, that women bleach their skin because they are dissatisfied with their skin colour, turns out to be surprisingly difficult to confirm.

Most research on body image relies on explicit measures – essentially, surveys where participants are asked directly how they feel about their appearance. But my work as a mixed-methods researcher and counselling psychologist suggests that the method has limits. People don’t always answer accurately. In contexts where preferring lighter skin can feel like – or be viewed as – an admission of self-hatred, there are strong social pressures shaping how people respond to direct questions.


Read more: There’s a complex history of skin lighteners in Africa and beyond


To overcome this problem, my co-authors and I approached the issue differently. In our recently published study, we explored whether an implicit measure, the Skin Implicit Association Test (Skin IAT), might reveal something that self-report scales may miss.

The test, adapted from the Implicit Association Test by social psychologist Anthony Greenwald and colleagues, measures how quickly participants pair images of light and dark skin tones with positive or negative words. The logic is simple: if someone automatically associates light skin with positive words and dark skin with negative ones, that association shows up in their response time – even if they would never directly say so on a survey.
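That response-time logic can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration, not the scoring algorithm of the published Skin IAT: Greenwald and colleagues' full D-score procedure adds error penalties and trial filtering. "Compatible" trials here pair light skin with positive words; "incompatible" trials pair it with negative words. Faster responses on compatible trials yield a positive score, read as an implicit preference for lighter skin. The response times below are invented for illustration.

```python
# Simplified IAT-style score from response times (illustrative sketch).
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """Simplified D measure: difference of mean response times (ms),
    divided by the standard deviation of all trials pooled together."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical participant who responds faster when light skin is
# paired with positive words than with negative ones.
compatible = [620, 640, 610, 650, 630]    # light + positive pairings (ms)
incompatible = [780, 760, 800, 770, 790]  # light + negative pairings (ms)

score = d_score(compatible, incompatible)
print(round(score, 2))  # positive score: implicit preference for lighter skin
```

A score near zero would indicate no automatic preference; a negative score would indicate faster pairing of dark skin with positive words.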

Developers of implicit measures suggest that these tests get around self-report biases by assessing automatic, instinctive associations rather than asking for expressed beliefs, attitudes, or self-evaluations. The tests may bypass the filter of what people feel comfortable admitting. Implicit association tests have also been used to assess other implicit preferences, including race, weight, religion and age.

Our findings uncovered a striking gap: nearly 79% of participants showed an automatic preference for lighter skin on the implicit test. The standard surveys in our study identified this preference in less than a third of participants.

These findings matter because they underscore the fact that forces driving skin bleaching across the African continent can’t be reduced to a single psychological construct. They are embedded in centuries of colonial history, in the global circulation of Eurocentric beauty ideals, in economic systems that attach social capital to lighter skin, and in media environments that relentlessly reinforce those hierarchies.

A research design that rises to this complexity must be equally multidimensional, combining implicit and explicit measures with qualitative approaches that create space for women to articulate, in their own terms, how skin colour operates in their lives.


Read more: What you need to know about rebranded skin-whitening creams


Measuring unconscious responses

Our study drew on a sample of 221 Black African women, recruited through an online survey targeted at Black African women across the continent. South Africans made up the largest share of respondents.

Respondents were asked to complete two self-report measures of skin colour satisfaction as well as the Skin Implicit Association Test. To be eligible for the study, respondents had to identify as Black African women, be at least 18 years old, and be willing to answer questions about their physical appearance.

Following the implicit test, 78.5% showed a preference for lighter skin. The two self-report measures identified far fewer (18.5% and 29.8% respectively).

The implicit test results in our study (78.5% preferring lighter skin) more closely matched the higher limit of reported rates of skin bleaching on the continent (77% in Nigeria).

This measurement gap matters. It may suggest that for a substantial number of Black African women, lighter skin preferences may be operating below the level of conscious awareness. Or, perhaps, below the level of what feels safe to express. These are women who, on a survey, may report being satisfied with their skin, but whose automatic associations tell a different story.


Read more: Skin lighteners: fashion and family still driving uptake in South Africa


Better research

As researchers, we are not advocating that self-report measures should be abandoned. They capture things like conscious attitudes, values and beliefs. For many research questions, they remain indispensable.

Our findings, rather, point to the need to use more than one method of investigating what respondents think and feel.

Implicit measures probe associations that may operate below the threshold of deliberate reflection.

In-depth interviews, focus groups and community-based methods can reveal the varied texture of experiences that no scale, implicit or otherwise, can fully capture. Mixed methods, then, are not a compromise between imperfect tools. They are the appropriate response to a phenomenon that is at once structural, cultural, and deeply personal.

As African countries grapple with the public health dimensions of a practice that is common but poorly understood, the research community has an obligation to do better. That means investing in measurement tools developed specifically for, and with, Black African women. It means accounting for regional variety. It also means taking seriously the possibility that what women report about their bodies and their private feelings or unconscious experiences are not always the same thing.


Oyenike Balogun is an Assistant Professor of Psychology at Bentley University. Funding for the study on which this article is based was awarded by the Bentley University Research Council.
