
Received — 24 April 2026 The Conversation

You probably wouldn’t notice if an AI chatbot slipped ads into its responses

Are you sure you could tell if an AI chatbot were trying to sell you something? AP Photo/Michael Dwyer

Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.

We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.

These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.

The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.

Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.

People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.

In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.

Researchers used this system prompt for an AI chatbot in an experiment about user reactions to advertising slipped into chatbot dialog. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 9, No. 4, Article 213, CC BY

Chatbot ads have added power

A single prompt to a chatbot can reveal a lot more about a user than the person might expect.

A 2024 study showed that large language models can infer a wide range of personal data, preferences and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.

To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.

For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.

Half of the participants who received disclosed sponsored ads said they did not notice the advertising language in the responses they received. This led to a concerning result: Although ads made the chatbot perform 3% to 4% worse on many tasks, many users said they preferred the advertising chatbot's responses over the nonadvertising ones. They even said the ad-infused responses felt friendlier and more helpful.

A chatbot sneaks a product advertisement into its response to a user who is asking about a diet and exercise regimen.

Knowing you to persuade you

This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.

But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.

Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.

This type of autonomous interrogation is feasible, aligns with AI companies’ business models and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.

But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.

Here are some steps you can take to try to detect AI chatbot advertising.

  • Look for any disclosure text – words such as “ad,” “advertisement” and “sponsored” – even if it is faint or otherwise hard to see. These are mandatory under Federal Trade Commission regulations. Amazon, Google and other major online platforms have these as well.

  • Think about whether that product or brand mention makes sense and is widely known. AI learns from text and images on the internet, so popular brands are likely to be ingrained in the models. If it's a new or little-known product, it is more likely to be an advertisement.

  • An unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.

The Conversation

This article’s research was supported by a $10,000 Microsoft Azure & OpenAI cloud credit grant from the National Science Foundation NAIRR Pilot. Brian Jay Tang has previously been supported by funding from General Motors, Defense Advanced Research Projects Agency, Army Research Office, Office of Naval Research, and Y Combinator.

This article’s research was supported by a $10,000 Microsoft Azure & OpenAI cloud credit grant from the National Science Foundation NAIRR Pilot. Kang G. Shin has previously been supported by funding from General Motors, Army Research Office, and National Science Foundation.

Received — 22 April 2026 The Conversation

Heavy rain on snow is testing aging dams across Michigan and Wisconsin – this is the future in a warming world

In the upper Midwest, aging infrastructure, from dams to city drains, was overwhelmed by floodwater in April 2026. Jonathan Aguilar/Milwaukee Neighborhood News Service/CatchLight via Getty Images

Michigan and parts of Wisconsin are in the midst of a historic flooding event in spring 2026. Days of heavy rainfall on top of snow have sent lakes and rivers over their banks and threatened several dams in both states, forcing people to evacuate homes downstream.

By April 20, 2026, nearly half of Michigan’s counties were under a state of emergency. In Cheboygan, Michigan, large pumps were brought in to lower pressure on a century-old dam in the city.

The region’s aging water infrastructure was never designed for the volume of water it is facing. That’s a troubling sign for the future, with flooding becoming more common as global temperatures rise.

In many areas, the damage has been exacerbated by a culture of building homes and cabins on the shores of inland lakes and along riverine lakes behind small, often privately owned dams. Many of these dams were built over 100 years ago, with some long forgotten.

Michigan State Police captured scenes of stressed dams and flooding across Cheboygan County, near the tip of the Lower Peninsula, including the century-old dam in the city of Cheboygan that was nearly overwhelmed by flood water.

I am a professor emeritus of meteorology at the University of Michigan whose work focuses on helping communities adapt to climate change. The warming climate is worsening the flood risk, and disasters like the one Michigan is experiencing are setting higher benchmarks for safety as communities plan future infrastructure.

Where is all the water coming from?

For much of Michigan and Wisconsin, as well as northern Illinois, 2026 has been the wettest March and April on record.

In March, much of that precipitation fell as snow, including in an enormous blizzard that brought 3 feet of snow to parts of Michigan. In mid-April, persistent rains began. The rain, on top of all that snow, sent floodwaters running into rivers, streets and homes. The water carries large amounts of ice that damages shores, infrastructure and homes.

The moisture for much of these storms has been funneled northward from the warm Gulf of Mexico, thanks in part to a high pressure system sitting over the southeastern U.S.

Extreme downpours are becoming more intense across the United States. This map shows the percentage change in total precipitation falling on the heaviest 1% of rainy days from 1958 to 2021. NOAA/adapted from Fifth National Climate Assessment

The problem of warming winters

The kind of flooding Michigan and Wisconsin are experiencing in 2026 is what forecasters expect to see more of as global temperatures rise.

Winters have been warming faster than other seasons across the U.S. In Michigan and Wisconsin, winter months used to be reliably below freezing, but that's changing. In the Cheboygan area, near the tip of Lower Michigan, March temperatures used to be below freezing on all but a few days. By the 1991-2020 period, the region averaged 10 days above or close to the freezing point – about twice as many as in the 1951-1980 period.

March is warming, as a comparison of daily high temperatures in the Cheboygan area in 1991-2020 and 1951-1980 shows. The bar chart comparison shows that the number of days above freezing is rising. GLISA

The air coming in from the south is also warmer than in the past. Nationally, 2026 was the warmest March on record in 132 years of record-keeping in the contiguous U.S., with an average temperature more than 9 degrees Fahrenheit (5 degrees Celsius) higher than the 30-year average. So, in addition to snowmelt starting earlier, melting is happening faster.

Michigan’s average wintertime temperature rose by more than 4 F (2.3 C) from 1951 to 2023. Though winter 2026 in Michigan was colder than the 1991-2020 average, the Gulf of Mexico, where the moisture originated, was warmer than average, accelerating the snowmelt.

How warming leads to downpours and flooding

A few aspects of a warming climate can lead to flooding.

First, temperatures are increasing. In higher temperatures, moisture evaporates faster from the ground, plants and surface water. That moisture, once in the atmosphere, eventually falls again as precipitation. However, for each degree Celsius that temperatures increase, the atmosphere can hold about 7% more moisture, resulting in more heavy downpours.
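The roughly 7%-per-degree figure compounds as warming accumulates. A minimal back-of-the-envelope sketch, assuming the 7% rule of thumb cited above holds (the warming amounts used are illustrative, not measurements):

```python
def moisture_capacity_increase(warming_c, rate=0.07):
    """Fractional increase in atmospheric moisture capacity after
    `warming_c` degrees Celsius of warming, compounding the
    assumed ~7%-per-degree rule of thumb."""
    return (1 + rate) ** warming_c - 1

# Illustrative warming amounts in degrees Celsius
for dt in (1, 2, 3):
    pct = moisture_capacity_increase(dt) * 100
    print(f"{dt} C of warming -> roughly {pct:.0f}% more moisture capacity")
```

So 2 degrees Celsius of warming implies roughly 14% more moisture available to fall in a single storm, which is why even modest average warming translates into noticeably heavier downpours.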

A warmer winter also means more melting snow and more rain-on-snow events that can quickly increase the amount of runoff into rivers.

Since March 1, 2026, most of Michigan and Wisconsin have experienced their wettest stretch in the 134 years that the region's precipitation has been recorded. Iowa Environmental Mesonet

The Great Lakes region and much of the Northeast already experience more precipitation than in the past. Winters with more persistent wetness – not just snow but also rain – prime the region for floods. With continued warming in the coming decades, the 2026 floods may rank among the less disruptive events of the future.

Data show that persistent wetness and changes in winter weather and seasonal runoff are part of the future for Michigan and the other states and Canadian provinces of the Great Lakes Basin, as well as New England.

Fixing dams for the future

All of this means communities across the region will have to pay closer attention to the growing risks facing their vital infrastructure – particularly dams.

Even prior to the 2026 floods, Michigan had a well-documented problem with its aging inventory of 2,600 dams. In May 2020, an intense storm system that stalled over the region brought so much rain that the Edenville and Sanford dams both failed near Midland, Michigan, forcing 10,000 people to evacuate and causing an estimated US$200 million in damage.

After that disaster, a state task force issued recommendations for fixing the state’s water control infrastructure to meet the growing risks. But a member of the task force told The Detroit News in April 2026 that little had been done to address those recommendations.

Officials ordered evacuations as floodwater nearly overwhelmed the century-old dam in Cheboygan, Mich., in April 2026. Michigan Department of Natural Resources via AP

Because warming will continue for the coming decades, the 2026 flooding should be treated as a lower bound for the capacity of stormwater infrastructure and dams. Rather than relying on the statistics that described floods in the past, planners will have to anticipate the floods of the future.

Michigan is often touted as a climate haven because it is relatively cool and has plenty of water. The state is not, however, immune to the amped-up weather of a warming climate. Environmental security in the future requires improved and more adaptive infrastructure.

The Conversation

Richard B. (Ricky) Rood receives funding from the National Oceanic and Atmospheric Administration.

Received — 20 April 2026 The Conversation

Most people do not realize when a personal message they receive was written by AI, study finds

People tend to be offended when they get a personal note written by AI – if they know. Ekaterina Buravleva/iStock via Getty Images

Two new experiments show that most people do not even consider that a personal message could be AI-generated, even when they themselves use artificial intelligence to write.

To see how people judge someone based on their writing in the age of ChatGPT, my colleague Jiaqi Zhu and I recruited more than 1,300 U.S.-based participants, ages 18 to 84, and showed them AI-generated messages like an apology sent in an email. We split our volunteers into four groups: Some people saw the messages with no information about who or what wrote them, as in everyday life. Others were told the messages were definitely written by a human, definitely AI-generated, or that the source could be either.

An AI-generated fictional apology sent via text was one of the messages participants evaluated in a recent study. Zhu & Molnar (2026)

We found a clear “AI disclosure penalty.” When people knew a message was AI-generated, they rated the sender much more negatively – “lazy,” “insincere,” “lack of effort” – than when they believed that the same text was written by a person – “genuine,” “grateful,” “thoughtful.”

But here is the twist: The participants who were not told anything about authorship formed impressions that were just as positive as those from people who were told the messages were genuinely human.

This complete lack of skepticism surprised us – and it raises new questions. Maybe participants were not familiar enough with AI to realize that today’s models can produce detailed and personal messages. (They can.) Or perhaps participants have never used AI themselves. (They likely have.) So we also tested whether participants’ own AI use changed how they judged senders.

To our even bigger surprise, we found little to no effect. People who use generative AI quite frequently in their daily lives – at least every other day – did penalize AI use slightly less when AI authorship was disclosed, compared with people who never or rarely use AI. But participants were no more skeptical by default: When authorship was not disclosed, heavy AI users, light AI users and nonusers all tended to assume the text was written by a person and formed essentially the same impressions.

Word clouds depict participants' first impressions of senders who wrote messages themselves, left, and those who used AI, right. Andras Molnar

Why it matters

Lack of skepticism and a lack of negative impressions matter because people make social judgments from text all the time. Recipients treat the time and effort behind a written message as a window into the writer's sincerity, authenticity or competence, and those impressions shape people's decisions in friendships, dating and work.

Yet our main findings reveal a striking disconnect: People usually do not suspect AI use unless it is obvious. This unawareness creates a moral dilemma: People who use AI in secret can enjoy the benefits while facing almost no risk of detection. Meanwhile, paradoxically, people who are upfront and admit to using AI suffer a reputational hit.

Over time, lack of skepticism and awareness could reshape what writing means in everyday life. Readers might learn to treat writing as a less reliable signal of someone’s character or effort, and instead rely on other forms of communication. For example, widespread AI use has already prompted employers to discount the value of cover letters from job applicants. Instead, they are relying more on personal recommendations from an applicant’s current supervisor or connections made through in-person networking.

What other research is being done

Other researchers have documented a wide range of negative impressions about people who disclose their AI use. Studies show it makes job applicants seem less desirable and employees seem less competent. Readers of creative writing perceive AI users as less creative and inauthentic. People see personal apologies and corporate apologies that stem from AI as less effective. In general, disclosing AI use decreases trust and undermines legitimacy.

Yet without disclosure, there is clear evidence that most people cannot reliably detect AI-generated text, even with the help of detection tools, especially when the text is a mix of human-written and AI-generated content. Even when people feel confident about their ability to spot AI text, their confidence may be nothing more than a self-affirming illusion.

What’s next

Even though our experiments did not reveal suspicion of AI use, that doesn’t mean people never suspect it in the real world. In some settings, people may already be hypervigilant about AI. Use in academia is an obvious example. In our next studies, we want to understand when and why people naturally start to suspect AI use, and what flips the switch between trust and doubt.

Until then, if you want your personal message to be judged as heartfelt, the safest strategy may be to make a phone call, leave a voicemail or, better yet, say it in person.

The Research Brief is a short take on interesting academic work.

The Conversation

Andras Molnar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
