Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injection, a type of attack that manipulates AI agents into following malicious instructions often hidden in web pages or emails, is a risk that's not going away any time soon, raising questions about how safely AI agents can operate on the open web.

"Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully 'solved'," OpenAI wrote in a Monday blog post detailing how the firm is beefing up Atlas's armor to combat the unceasing attacks. The company conceded that "agent mode" in ChatGPT Atlas "expands the security threat surface."

OpenAI launched its ChatGPT Atlas browser in October, and security researchers rushed to publish demos showing that a few words written in a Google Doc could change the underlying browser's behavior. That same day, Brave published a blog post explaining that indirect prompt injection is a systemic challenge for AI-powered browsers, including Perplexity's Comet.

OpenAI isn't alone in recognizing that prompt injection attacks aren't going away. The U.K.'s National Cyber Security Centre warned earlier this month that prompt injection attacks against generative AI applications "may never be totally mitigated," putting websites at risk of data breaches. The U.K. government agency advised cyber professionals to reduce the risk and impact of prompt injections rather than assume the attacks can be "stopped."

For OpenAI's part, the company said: "We view prompt injection as a long-term AI security challenge, and we'll need to continuously strengthen our defenses against it." The company's answer to this Sisyphean task?
A proactive, rapid-response cycle that the firm says is showing early promise in helping discover novel attack strategies internally before they are exploited "in the wild." That's not entirely different from what rivals like Anthropic and Goo...