• @redcalcium
    link
    2519 days ago

    How do you sanitize ai prompts? With more prompts?

    • @CanadaPlus@lemmy.sdf.org
      link
      fedilink
      45
      edit-2
      19 days ago

      Easy, you just have a human worker strip out anything that could be problematic, and try not to bring it up around your investors.

    • @xmunk@sh.itjust.works
      link
      fedilink
      3619 days ago

      It’s really easy, just throw an error if you detect a program will cause a halt. I don’t know why these engineers refuse to just patch it.

    • @kromem@lemmy.world
      link
      fedilink
      English
      2
      edit-2
      19 days ago

      Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

      For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.

      • @redcalcium
        link
        519 days ago

        Won’t this cause subtle but serious issue? Kinda like how pomegranate translates to “granada” in Spanish, but when you translate “granada” back to English it translates to grenade?

        • @kromem@lemmy.world
          link
          fedilink
          English
          119 days ago

          It will, but it will also cause less subtle issues to fragile prompt injection techniques.

          (And one of the advantages of LLM translation is it’s more context aware so you aren’t necessarily going to end up with an Instacart order for a bunch of bananas and four grenades.)