Amazon has taken legal action against Perplexity AI, alleging that the startup’s agentic shopping tool, built into its Comet browser, accessed Amazon customer accounts and disguised automated activity as ordinary human browsing.
The dispute centers on whether modern AI agents that browse and shop on behalf of users cross the line into unauthorized automated access and whether the platforms they interact with can restrict that activity.
What Amazon says
According to Amazon, the company has repeatedly asked Perplexity to disable or remove the features that let Comet act as an automated shopper on Amazon’s site.
Amazon’s public statement and legal filing say the Comet agentic functionality masked automated activity as human browsing, accessed customer accounts, and degraded the shopping and customer service experience Amazon provides to its users.
Amazon has framed the move as enforcing its platform terms and protecting user accounts and the integrity of its personalization and recommendations.
What Perplexity says
Perplexity has denied the allegations and pushed back in a public blog post titled “Bullying is Not Innovation.” The startup says it stores user login credentials locally on users’ devices rather than on Perplexity servers, and it denies that its agent accessed accounts without authorization.

Perplexity also describes Amazon’s action as an attempt to limit competition and block consumer choice for AI assistants that can automate shopping tasks.
The technical and legal issues at stake
At the center of the fight are two overlapping questions. The first is technical: when an AI agent uses a browser to act on a user’s behalf, how should that agent identify itself, and what security controls are needed to prevent abuse or data exposure?
The second is legal and commercial: do platform terms of service permit or prohibit agentic automation, and can a major marketplace enforce those terms, or seek an injunction, when it believes its rules are being broken?
Amazon contends that disguising automated requests as ordinary browsing can circumvent security checks and create a worse experience for shoppers.
Perplexity and other agent makers argue that agents are the modern equivalent of browser extensions or automated fill tools that act with user consent.
Those lines are not yet settled in law, and the Amazon-Perplexity case will likely test them.
Why this matters for shoppers and AI developers
First, the outcome could determine how freely consumers can use agentic assistants to shop online.
If courts side with Amazon, platforms could require agents to identify themselves as automated actors, obey rate limits, or obtain explicit permission before making purchases.

If courts favor Perplexity, agent makers may enjoy broader freedom to automate purchasing flows for users, provided credentials remain under user control.
Second, this case touches on privacy and security. Even if credentials remain local, automated interactions can reveal behavioral patterns or enable purchase activity that platforms consider risky.
Developers building agentic features will need to prioritize transparency and compliance to reduce the risk of legal challenges and user harm.
Business incentives and marketplace power
This clash is also about business models. Amazon makes significant revenue from ads, sponsored listings, and its own recommendation engine.
An agent that bypasses or rewires those monetization paths could threaten existing revenue streams.
Perplexity positions itself as enabling consumer choice, arguing that users should pick the assistant that best serves their interests rather than being forced into a marketplace’s native experience.
The legal battle will therefore be watched closely as a test of how much control major platforms can exert over agentic tools.
Practical tips for users and developers
For users: if you plan to use AI assistants to shop on major platforms, keep credentials secure and understand how the tool stores or transmits login data.
Prefer tools that clearly document where credentials are held and how actions are authorized.
For developers: build transparency into your agent. Make automated behavior obvious to the end user and to sites you interact with whenever possible.
Rate limit actions, honor robots.txt and API terms of service, and be prepared to show how user consent and data protection are implemented. That approach reduces legal risk and builds trust.
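The guidance above can be sketched in code. The following is a minimal illustration, not anything from the filings: it uses Python’s standard library to check a site’s robots.txt rules and to fetch pages with a declared, descriptive User-Agent and a minimum delay between requests. The agent name and delay are hypothetical placeholders.

```python
import time
import urllib.robotparser
import urllib.request

AGENT_NAME = "ExampleShoppingAgent/1.0"  # hypothetical descriptive User-Agent
MIN_DELAY = 2.0  # minimum seconds between requests: a simple politeness rate limit

def allowed_by_robots(robots_txt: str, url: str, agent: str = AGENT_NAME) -> bool:
    """Check an already-fetched robots.txt body against a target URL for this agent."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

class PoliteFetcher:
    """Fetch pages with a declared User-Agent and a minimum delay between calls."""

    def __init__(self, min_delay: float = MIN_DELAY):
        self.min_delay = min_delay
        self._last = 0.0

    def fetch(self, url: str) -> bytes:
        # Sleep if the previous request was too recent, enforcing the rate limit.
        wait = self.min_delay - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()
        # Identify the agent honestly instead of masquerading as a human browser.
        req = urllib.request.Request(url, headers={"User-Agent": AGENT_NAME})
        with urllib.request.urlopen(req) as resp:
            return resp.read()
```

A caller would first test `allowed_by_robots(...)` for each target URL, and only then use `PoliteFetcher` to make the request; the transparent User-Agent lets the site apply its own policy to the agent’s traffic.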

The bigger picture for AI regulation and consumer choice
This lawsuit is an early sign of how courts and regulators will handle agentic AI that acts on behalf of users.
As agents move from simple query assistants into full task completion, regulators and platforms will wrestle with how to balance innovation with safety, privacy, fair competition, and user experience.
The legal principles established here could influence other sectors where agents act for users, including travel, banking, and content aggregation.
Conclusion
Amazon’s action against Perplexity marks a high-profile test of where agentic AI can operate and how much control major platforms can demand.
The complaint frames the issue as one of unauthorized automated access and customer protection. Perplexity frames the fight as a matter of user choice and innovation.
The case could set important precedents on identification, consent, and the limits of automation in consumer services.
For now, users and developers should watch the filings closely, favor transparency, and assume this debate over agentic assistants is only beginning.