
Other attacks could involve the prompt causing the AI assistant to display fake information that would mislead the user: fake investment advice promoting a certain stock, fabricated news, dangerous medical advice like wrong doses for medicine, malicious instructions that could open a backdoor on the computer, instructions to re-authenticate that include a link to a phishing site, a link to download malware, and so on.
URL fragments cannot modify page content. They are only used for in-page navigation using the code that’s already there, so they are normally harmless. However, it now turns out that they can be used to modify the output of in-browser AI assistants or agentic browsers, which gives them an entirely new risk profile.
“This discovery is especially dangerous because it weaponizes legitimate websites through their URLs,” the researchers said. “Users see a trusted site, trust their AI browser, and in turn trust the AI assistant’s output-making the likelihood of success far higher than with traditional phishing.”
