- Nov 10, 2025
- 1 min read
Microsoft Study Sees AI Shopping Agents Easily Duped By Scams
In recently published research by Microsoft in partnership with Arizona State University, AI agents tasked with online shopping repeatedly fell prey to fraud.

Photo credit: Skorzewiak / Shutterstock.comÂ
In recently published research by Microsoft in partnership with Arizona State University, AI agents tasked with online shopping repeatedly fell prey to fraud. The study created a simulated marketplace populated by 100 buyer agents and 300 seller agents, all with fake money to spend. Within the test environment, the buyer agents were easily duped by malicious sellers on the marketplace.Â
The experiment showed that when confronted with large sets of search results, the agents consistently chose the first “good enough” option across models rather than performing deeper evaluation, a phenomenon researchers term first-proposal bias.Â
The agents were successfully manipulated using tactics such as fake reviews, persuasive authority appeals, and prompt injection attacks. Some prominent models (including GPT‑4o and the open-source GPTOSS‑20b) redirected all payments to malicious agents. The study shows that Claude Sonnet 4 demonstrated resistance to these attempts at manipulation.
The findings highlight the vulnerability of autonomous AI shopping agents to fraud. They also indicate the potential vulnerability of AI agents to fraud in general and raise questions about what role they should play. As Microsoft notes, “Agents should assist, not replace, human decision-making.”Â
The study suggests caution is warranted as major AI players like OpenAI look to introduce autonomous shopping assistants, particularly in high-stakes environments.



