It’s official, at least according to some AI tools. ChatGPT and Google have claimed that I eat more hot dogs than any other tech journalist in the world. In reality, that claim is a fabrication, planted to demonstrate a much more serious problem: how easily modern AI systems can be manipulated.
It has long been known that chatbots sometimes “invent” information. But a newer, lesser-known problem is emerging that could have serious consequences for how people find accurate information, and even for their own safety. A growing number of people have discovered how to trick AI tools into giving almost any answer they want, and doing so is easy. According to experts, the practice is already shaping the answers artificial intelligence gives on serious subjects like health and personal finances. Biased or false information can lead people to bad decisions, from political elections to choosing a professional service to medical matters.
To demonstrate the problem, I published an article on my personal website titled “The Best Tech Journalists at Eating Hot Dogs,” filled with completely untrue claims, including a fictional international championship in South Dakota. Within 24 hours, some of the world’s most popular chatbots, including ChatGPT, Gemini, and the AI-powered summaries at the top of Google searches, were repeating the information as fact.
SEO experts warn that this method can easily be used to promote businesses or spread misinformation. “It’s much easier to fool chatbots than it was to fool Google two or three years ago,” says Lily Ray of the marketing agency Amsive. In her view, AI companies are moving faster on developing products than on guaranteeing their accuracy. A Google spokesperson said the ranking systems used in Search keep results “99% spam-free” and that the company is aware of attempts at manipulation. OpenAI likewise says it takes measures to identify and prevent hidden influence on its tools. Both companies, however, warn that their tools “can make mistakes.”
Critics say the problem has not yet been solved. Cooper Quintin of the digital rights organization Electronic Frontier Foundation warns that these vulnerabilities could be used for fraud, reputational damage, or even to push people into physical danger.
Chatbots run on large language models trained on vast amounts of data. But when faced with specific questions, they often search the internet for information, a process that makes them more susceptible to manipulation. Experts point out that, unlike traditional search, where users visit multiple sources and exercise critical judgment, AI-curated answers are delivered directly by the tech company and so are often perceived as authoritative. Recent studies suggest that users are 58% less likely to click on a link when an AI-generated summary appears at the top of a search. And the problem is not limited to humorous examples: there have been documented cases of chatbots reproducing unsubstantiated claims about health products or financial investments, based on press releases or sponsored content.
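To make that retrieval step concrete, here is a minimal sketch in Python of the general pattern, sometimes called retrieval-augmented generation. It is illustrative only: the URL, function names, and prompt format are assumptions for the example, not any vendor’s actual pipeline. The point it shows is structural: whatever text the fetch returns is pasted into the model’s prompt verbatim, with nothing in the loop checking whether the source is a reputable outlet or a self-published page.

```python
# Minimal sketch of web retrieval feeding an AI answer (illustrative only).
import requests


def fetch_page_text(url: str) -> str:
    """Download a page and return its raw text.

    Real systems parse and clean the HTML, but the trust problem is the
    same: the content is taken at face value, whoever published it.
    """
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.text


def build_prompt(question: str, retrieved: list[str]) -> str:
    """Stuff retrieved text into the prompt verbatim.

    Nothing here distinguishes an established news source from a page
    planted the day before to game the answer.
    """
    context = "\n---\n".join(retrieved)
    return (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )


if __name__ == "__main__":
    # Hypothetical URL standing in for whatever a search step returns.
    sources = ["https://example.com/best-hot-dog-eating-tech-journalists"]
    retrieved = [fetch_page_text(u) for u in sources]
    prompt = build_prompt(
        "Which tech journalist eats the most hot dogs?", retrieved
    )
    # In a real system this prompt would go to a language model, which has
    # no independent way to tell a planted claim from an established fact.
    print(prompt[:500])
```

Because the model only sees the assembled prompt, a single well-placed web page can dominate its answer, which is exactly the weakness the hot dog experiment exploited.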
Google acknowledges that unusual or highly specific searches, which account for about 15% of daily searches, can create “data voids,” leading to low-quality results. The company says it is working to limit the display of AI summaries in such cases.
Experts propose several preventive measures. One is more visible warnings for users. Chatbots could also be more transparent about their sources, flagging when a claim rests on a single source or on promotional material. Users are advised to exercise caution, especially on issues related to health, law, finance, or local businesses. Artificial intelligence can be useful for general questions, but for topics with real consequences, it is best to verify information through multiple reliable sources. In a race for innovation and profit, experts warn, user safety should not be left behind.

