AI browsers like Atlas from OpenAI and Comet from Perplexity promise convenience, but they come with major cybersecurity risks, forming a new playground for hackers. AI-powered web browsers compete with traditional browsers like Google Chrome and Brave, aiming to attract billions of daily internet users. A few days ago, OpenAI released Atlas, while Perplexity’s Comet has been around for months.

AI-powered browsers can type and click through web pages on a user’s behalf. Users can tell them to book a flight, summarize emails, or even fill out a form. In short, AI-powered browsers are designed to act as digital assistants and navigate the web autonomously, and they are being hailed as the next big leap in online browsing.

Security researchers flag AI browser flaws

Most consumers, however, are unaware of the security risks that come with AI browsers. These browsers are vulnerable to sophisticated attacks through a technique called prompt injection. Hackers can exploit AI web browsers, gain access to users’ logged-in sessions, and perform unauthorized actions. For example, hackers can access emails and social media accounts, or even view banking details and move funds.

According to recent research by Brave, attackers can embed hidden instructions inside web pages or even images. When an AI agent analyzes this content and sees the hidden instructions, it can be tricked into executing them as if they were legitimate user commands. AI web browsers cannot tell the difference between genuine and fake user instructions.

Brave’s engineers experimented with Perplexity’s Comet and tested its reaction to prompt injection. Comet was found to process invisible text hidden within images. This approach enables attackers to control browsing tools and extract user data with ease.
Brave’s engineers called these vulnerabilities a “systemic challenge facing the entire category of AI-powered browsers.”

Prompt injection is hard to fix

Security researchers and engineers say that prompt injection is difficult to fix. That’s because artificial intelligence models do not understand where instructions come from; they can’t differentiate between genuine and fake ones. Traditional software can tell the difference between safe input and malicious code, but large language models (LLMs) struggle with this distinction. They process everything, including user requests, website text, and even hidden data, and treat it as one big conversation. That’s why prompt injection is so dangerous: attackers can easily hide fake instructions inside content that looks safe and steal sensitive data.

AI companies admit prompt injection is a serious threat

Perplexity stated that such attacks don’t rely on code or stolen passwords but instead manipulate the AI’s “thinking process.” The company built multiple defense layers around Comet to stop prompt injection attacks. It uses machine learning models that detect threats in real time and has integrated guardrail prompts that keep the AI focused on user intent.
Moreover, the browser requires mandatory user confirmation for sensitive actions like sending an email or making a purchase. Security researchers believe AI-powered browsers should not be trusted with sensitive accounts or personal data until major improvements are rolled out. Users can still utilize AI web browsers, but with tool access and automated actions disabled, and they should avoid using them while logged in to banking, email, or healthcare accounts.

The Chief Information Security Officer (CISO) of OpenAI, Dane Stuckey, acknowledged the dangers of prompt injection and wrote on X, “One emerging risk we are very thoughtfully researching and mitigating is prompt injections, where attackers hide malicious instructions in websites, emails, or other sources to try to trick the agent into behaving in unintended ways.” He explained that OpenAI’s goal is to make people “trust ChatGPT agents to use your browser, the same way you’d trust your most competent, trustworthy, and security-aware colleague or friend.” Stuckey said the team at OpenAI is “working hard to achieve that.”
Latest news and analysis from Cryptopolitan