It stands to reason that if you have access to an LLM’s training data, you can influence what comes out the other end of the inscrutable network. The obvious guess is that…
Remember, before they were released and the first we heard of them was the reports about the guy training them, or testing them, or whatever, having a psychotic break and freaking out saying it was sentient. It’s all been downhill from there, hey.
I thought it was so comically stupid back then. But a friend of mine said this was just a bullshit way of hyping up AI.
Seeing how much they’ve advanced over recent years I can’t imagine whatever that guy was working on would actually impress anyone today.
There are enough people who think their agent is sentient but are afraid to speak out, because they don’t understand why it isn’t, even when people try to explain…
That tracks. And it’s kinda on brand, still. Skeezy af.