return2ozma@lemmy.world to Technology@lemmy.worldEnglish · 3 months agoAI agents now have their own Reddit-style social network, and it's getting weird fastarstechnica.comexternal-linkmessage-square167linkfedilinkarrow-up1475arrow-down112
arrow-up1463arrow-down1external-linkAI agents now have their own Reddit-style social network, and it's getting weird fastarstechnica.comreturn2ozma@lemmy.world to Technology@lemmy.worldEnglish · 3 months agomessage-square167linkfedilink
minus-square𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zonelinkfedilinkEnglisharrow-up46arrow-down1·3 months agodoesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought) any money says they’re vulnerable to prompt injection in the comments and posts of the site
minus-squareBradleyUffner@lemmy.worldlinkfedilinkEnglisharrow-up37arrow-down1·3 months agoThere is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.
minus-squareKeenFlame@feddit.nulinkfedilinkEnglisharrow-up1·3 months agoI don’t understand what you mean. Why is there no way?
minus-squareBradleyUffner@lemmy.worldlinkfedilinkEnglisharrow-up1·3 months agoWatch this video. https://youtu.be/_3okhTwa7w4
minus-squareCTDummy@piefed.sociallinkfedilinkEnglisharrow-up30arrow-down1·3 months agoLmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.
minus-squareT156@lemmy.worldlinkfedilinkEnglisharrow-up5·3 months agoI am a little curious about how effective a traditional chain mail would be on it.
minus-squareToTheGraveMyLove@sh.itjust.workslinkfedilinkEnglisharrow-up6·3 months agoGood god, I didn’t even think about that, but yeah, that makes total sense. Good god, people are beyond stupid.
doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)
any money says they’re vulnerable to prompt injection in the comments and posts of the site
There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.
I don’t understand what you mean. Why is there no way?
Watch this video.
https://youtu.be/_3okhTwa7w4
Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.
I am a little curious about how effective a traditional chain mail would be on it.
Good god, I didn’t even think about that, but yeah, that makes total sense. Good god, people are beyond stupid.