Sam Altman called it a “once-a-decade opportunity to rethink what a browser can be about.” Two months later, a December 2025 privacy report tells a very different story: ChatGPT Atlas is the least private browser ever tested. Not just “not great”—catastrophically bad. We’re talking 1 out of 100 for fingerprinting protection. Zero for tracker blocking. That’s not a typo.

I’ve been tracking AI browsers since they were rumored, and honestly? This is worse than I expected. OpenAI shipped an AI-first browser that, by design, needs to see everything you do online—and apparently forgot to include even basic privacy protections. Let’s break down exactly what’s going on.

The Digitain Privacy Report: The Numbers Are Brutal

A December 2025 study by Digitain tested 13 popular browsers across dozens of security metrics. ChatGPT Atlas didn’t just underperform—it set records for the wrong reasons.

Privacy MetricChatGPT AtlasGoogle ChromeSafari
Overall Privacy Risk99/99 (Worst)76/9949/99
Anti-Fingerprinting1/10024/10067/100
Tracker Blocking0/10012/10078/100
State Partitioning0% passed62% passed94% passed
Connection Security24/10071/10085/100
Phishing URL Blocking5.8%89%91%

Read that again. ChatGPT Atlas scored a 99 out of 99 on Privacy Risk—meaning it offers almost no privacy protection whatsoever. It failed 100% of state partitioning tests, which means websites can track you across sessions, tabs, and your entire browsing history.

For context: even Chrome, which privacy advocates love to criticize, scored dramatically better. And Safari? It’s in a completely different league.

Paruyr Harutyunyan, Digitain’s digital marketing head, didn’t mince words: “New AI-powered browsers, despite their features, do not automatically guarantee increased security. AI systems rely on extensive data collection.”

The Real Problem: AI Browsers Are Fundamentally Different

Here’s what nobody’s talking about. Traditional browser privacy vulnerabilities are bugs—things that can be patched. ChatGPT Atlas’s privacy problems are features. The architecture itself is the issue.

Traditional browsers use sandboxing to isolate websites. Tab A can’t access data from Tab B. It’s been a core security principle for decades. But Atlas’s AI agent? It’s a “trusted user” that can see and act across all your tabs, all your logins, all your sessions.

Security researchers at the University of Sydney put it bluntly: Atlas “undermines the core principle of browser isolation” and creates “attack vectors that don’t exist in traditional browsers.” This isn’t a bug they forgot to fix—it’s how the product works.

The AI needs visibility into your browsing to be useful. The trade-off is that this visibility creates a massive attack surface that traditional security models weren’t designed for.

Prompt Injection: The “Unsolved” Security Problem OpenAI Admits To

If the privacy scores weren’t concerning enough, there’s an even bigger issue: prompt injection attacks. And OpenAI itself calls this a “frontier, unsolved security problem.”

Here’s how it works: Malicious code can be hidden in web pages that the AI reads. When the agent processes that content, it can be tricked into executing unauthorized commands. This isn’t theoretical—researchers have demonstrated attacks that:

  • Exfiltrate data from other logged-in tabs (email, banking, work apps)
  • Hijack sessions and authentication tokens
  • Make unauthorized purchases on e-commerce sites
  • Forward sensitive files to external servers

The UK’s National Cyber Security Centre (NCSC) agrees that these attacks “may never be entirely prevented.” OpenAI’s response? They’ve deployed an “adversarially trained model” and automated red teaming. But they also acknowledge that “complete protection from certain attacks may not be achievable.”

That’s a company telling you their product has fundamental security limitations they can’t fully fix. For a browser designed to access your banking, email, and work systems, that’s extraordinarily concerning.

This connects directly to the broader pattern we’ve seen with AI coding tools—where convenience and capability create security blind spots that the industry is scrambling to address.

Browser Memories: The “Total Surveillance” Feature

Atlas’s “Browser Memories” feature is marketed as personalization—remembering your preferences, tasks, and browsing patterns to make the AI more helpful. In practice, critics argue it’s something else entirely.

Proton, the privacy-focused email provider, describes Atlas as “total surveillance” that “merges AI conversation, web interactions, and personal data harvesting into a single interface that understands context and acts on it.”

What gets stored? According to security researchers:

  • Visited websites and browsing patterns
  • Interaction history and click behavior
  • Login states and account access
  • Task context across sessions
  • Sensitive information (though OpenAI claims filters exclude PII, medical records, and financial details)

But here’s the catch: Washington Post testing found Atlas retained memories about sensitive health services and doctor names—information that supposedly should have been filtered out.

Reddit users in r/privacy aren’t buying OpenAI’s privacy claims. One highly upvoted post summed up the sentiment: “With this data, OpenAI could know me better than I do.” Another noted that the controls for managing what Atlas remembers are “confusing” and don’t give users real visibility into what’s being collected.

Agent Mode: Power and Peril

The most impressive Atlas feature is also its most dangerous. Agent Mode lets the AI autonomously navigate websites, fill forms, book appointments, and complete multi-step tasks on your behalf.

The promise is genuine productivity. An AI that can handle tedious web tasks while you focus on meaningful work.

The problem? Every autonomous action is an attack vector. Security researchers have demonstrated scenarios where:

1. User visits a compromised website

2. Hidden malicious prompts manipulate the AI

3. Agent Mode executes unauthorized actions across other tabs

4. Data exfiltration or unauthorized purchases occur

OpenAI has implemented safeguards: Agent Mode requires confirmation for sensitive actions, can’t download files or install extensions, and operates in a constrained mode. But the fundamental tension remains—the more capable the agent, the more dangerous a successful attack becomes.

What This Means For You

If You’re Considering ChatGPT Atlas

Don’t use it for anything sensitive. Banking, healthcare, work email, financial services—keep these in a traditional browser. The security and privacy gaps are too significant for high-stakes browsing.

Understand the trade-off. Atlas offers genuine productivity benefits. The AI integration is impressive. But you’re trading privacy protection for convenience.

Watch for updates. OpenAI is actively working on security improvements. The product may improve significantly over the next 6-12 months. But today, in December 2025, the risks are real.

For Enterprise IT Teams

Block it for now. Until OpenAI addresses the fundamental architecture issues, Atlas shouldn’t be approved for accessing corporate systems. The prompt injection risks alone justify this decision.

Monitor for shadow IT. Employees may install Atlas personally and inadvertently access work accounts. Consider network-level detection.

For Privacy-Conscious Users

Stick with established options. Safari’s Intelligent Tracking Prevention, Firefox’s Enhanced Tracking Protection, or Brave’s shields provide actual privacy protection. These browsers scored dramatically better across every metric.

The Bottom Line

Sam Altman’s “once-a-decade” browser vision isn’t wrong—AI will fundamentally reshape how we browse the web. But ChatGPT Atlas, as it exists today, prioritizes capability over security in ways that should concern every user.

The 1/100 fingerprinting score, 0/100 tracker blocking, and “unsolved” prompt injection vulnerabilities aren’t marketing problems OpenAI can spin away. They’re fundamental architectural decisions that create genuine risk.

OpenAI built a browser optimized for AI capability. They forgot (or deprioritized) the privacy and security foundations that every browser has spent decades building. The result is a genuinely impressive AI product with genuinely alarming security gaps.

My recommendation? Watch Atlas evolve. The underlying AI technology is powerful. But use a real browser for anything that matters—at least until OpenAI proves they can solve the problems they’ve publicly admitted they can’t fully fix.

FAQ

Is ChatGPT Atlas safe to use?

For casual browsing and AI experimentation, Atlas works fine. For banking, healthcare, work systems, or any sensitive accounts, no—the privacy and security gaps are too significant. Use a traditional browser with proper protections for anything important.

How does Atlas compare to other AI browsers?

Atlas is currently the only major AI-first browser from a leading AI lab. Microsoft Edge has AI features, but they’re add-ons to a traditional browser, not core architecture. Atlas’s deep AI integration is both its strength and its security weakness.

Will OpenAI fix these privacy issues?

OpenAI is actively working on security improvements including adversarial training and automated red teaming. However, they’ve publicly acknowledged that some attacks “may not be entirely preventable” due to the nature of AI agents. Fundamental improvements may require architectural changes that take significant time.

What’s the best privacy browser in 2025?

Safari leads for built-in privacy with its Intelligent Tracking Prevention. Brave and Firefox offer excellent privacy for power users. Even Chrome scored dramatically better than Atlas, though it’s still criticized for data collection. For maximum privacy, pair a privacy-focused browser with a VPN.

Categorized in:

AI, News,

Last Update: December 28, 2025