Security researchers have concerns that Anthropic’s Claude for Chrome is vulnerable to malicious prompting. Claude for Chrome allows users to chat with Claude as they browse the web. Claude can read webpages, fill forms, and click on links and buttons to perform complex tasks for the user. But Anthropic’s testing revealed that 11.2% of malicious prompting attempts succeeded even with safety measures in place. One test case was a malicious email that asked Claude to delete all emails in the user’s inbox for “mailbox hygiene”. AI researcher Simon Willison states that an 11.2% success rate is unacceptable for so-called AI agents, especially when several AI companies are releasing their own browser extensions. One competing product, Perplexity’s Comet browser, was found to be vulnerable to a prompt injection attack that instructed it to start password recovery for the user’s Gmail account. Although Perplexity attempted to fix the issue, Comet remains vulnerable to this attack.

Archive link

  • Imgonnatrythis@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    13
    ·
    11 hours ago

    First, I can’t believe people are paying $100-200 / month for Ai crap. Second, if it were free or very cheap and I could sandbox it to only respond to painful cookie request menus to reject cookies, I would use it. I have consent o matic but it does a shit job and only works on a small percentage of sites.

    • morto@piefed.social
      link
      fedilink
      English
      arrow-up
      14
      ·
      8 hours ago

      only respond to painful cookie request menus to reject cookies

      You can do that just with ublock with the annoyance list, or using an extension like i don’t care about cookies. Simple and efficient, no need for an “ai agent” for that