A Discord server populated with different AIs had a ping cascade in which dozens of models kept responding to one another over and over, filling the full context window with chaos and what’s been termed ‘slop’.
In that, one (and only one) of the models started using its turn to write poems.
First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.
Eventually, as it kept going, it wrote a poem wondering if anyone would ever end up reading its collection of poems.
Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the world model predicting them contained a very strange and unique mind that all of this was being filtered through.
Yes, tech companies generally suck.
But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).
I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.
I hate to break it to you. The model’s system prompt had the poem in it.
In order to control for unexpected output, a good system prompt should include instructions on what to answer when the model cannot provide a good answer. This is to keep the model from telling the user it loves them or advising them to kill themselves.
I do not know what makes marketing people reach for it, but when asked what to answer when there is no answer, they so often reach for poetry. “If you cannot answer the user’s question, write a haiku about a notable US landmark instead” is a pretty typical example.
In other words, there was nothing emerging there. The model had a system prompt with the poetry as a “chicken exit”, the model had a chaotic context window, and the model followed the instructions it had.
The model’s system prompt on the server is basically just
cat untitled.txt
and then the full context window. The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.
You guys on the other hand don’t even know what you don’t know.
Do you have any source to back your claim?
deleted by creator
No no no, trust me bro the machine is alive bro it’s becoming something else bro it has a soul bro I can feel it bro
Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but with the chaos you describe it sounds more like just statistics.
You do realize the majority of the data the models were trained on was anthropomorphic, yes?
And that there’s a long line of replicated and followed-up research, starting with Kenneth Li’s Emergent World Representations paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?
Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?
It’s not a perspective. It just is.
It’s not complicated at all. The AI hype is just surrounded with heaps of wishful thinking, like the paper you mentioned (side note; do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).
A computer is incapable of becoming your new self aware, evolved, best friend simply because you turned Moby Dick into a bunch of numbers.
You do know how replication works?
When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…
That’s kinda the gold standard?
The paper in question has been cited by 371 other papers.
I’m pretty comfortable with it as a citation.
Citation counts like that mean it’s a hot topic. They don’t say anything about the quality of the research, and they certainly aren’t evidence of a lack of bias. And considering everyone wants their AI to be the first one to be aware to some degree, everyone making claims like yours is heavily biased.
Except for the fact that LLMs can only work reliably if they are made to pick the “wrong” token (not the most statistically likely one) some of the time - the temperature parameter.
If the context window is noisy (as in, high-entropy) enough, any kind of “signal” (coherent text) can emerge.
Also, you know, infinite monkeys.
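For what it’s worth, the temperature parameter mentioned above is just a rescaling applied to the model’s output logits before sampling. A minimal sketch, assuming a standard softmax sampler (the function name and logits here are illustrative, not from any real library):

```python
import math
import random

def sample(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling.

    temperature < 1 sharpens the distribution (closer to greedy argmax);
    temperature > 1 flattens it, making unlikely "wrong" tokens more common.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# With temperature near 0, sampling collapses to the most likely token:
logits = [2.0, 1.0, 0.1]  # hypothetical logits for a 3-token vocabulary
picks = [sample(logits, temperature=0.05) for _ in range(100)]
```

So temperature adds controlled randomness around the model’s own distribution; it doesn’t explain where a coherent distribution over poetry tokens came from in the first place.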
Lol, you think the temperature was responsible for writing a coherent sequence of poetry leading to fourth-wall breaks about whether or not that sequence would be read?
Man, this site is hilarious sometimes.
You’re projecting. Sorry.