  • Oh, I’m not saying there aren’t inherent risks. You’re bringing up great points, and I agree we mustn’t throw caution to the wind. This is slightly beside the point of my initial comment, though, where I was merely stating my belief that the “hack” described in the OP might be a non-issue in a couple of years. But you are right. Again, I’m sorry about my ignorance; I didn’t mean to start an argument. It’s great hearing other points of view, though.


  • Good point! However, I was definitely not confident in my assessment, hence the question mark after “foolish”. I guess seeing all these “A.I. bad” articles everywhere, based on nothing but fear of the unknown, makes me a bit desensitized to the whole subject. My understanding is that the actual language models take time to train and perfect, whereas the executing code (which should be what allows this “hack” to work) is more or less interchangeable, but maybe I’ve gotten it totally backwards. If so, please forgive my ignorance.


  • I was going to reply to this in the style of ChatGPT, but I somehow feel like that’d be the same as joking about having a bomb at airport security. But yeah, this is my main concern as well: not only social media, but even blogs and reputable-looking websites that can act as “sources”. And what about Wikipedia bots?

    I’m not worried about the loss of jobs or the sentience of computers, but rather the inability to discern what’s real and what’s not. Could online human certificates be a thing? Multi-factor authentication (that is somehow still anonymous)?