Where do you think the errors are coming from? From data bleed-over: the word “coding” shows up in books, so yes, the context would incorrectly pull book data too.
Or do you not realize coding books exist as well…? They would be in the dataset.
Because that’s how they work…? It’s not an actual physical book… you don’t seriously think that, do you…? It’s the text data inside, like any other text file it would use for context.
But that is not inside the context; that comes from training. So do you know how an LLM works?
Why would you put whole books into the context?!? Do you even know what an LLM is?
Where do you think it gets its data from…?
From the training.
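To make that concrete, here’s a toy sketch (all names hypothetical, not any real library): the context is only the tokens the caller passes in per request; anything the model picked up from books is already frozen into the weights at training time.

```python
def predict_next(weights, tokens):
    # Stand-in for a real forward pass. A real model computes the next
    # token from its trained weights; this stub just returns a fixed
    # token so the sketch runs end to end.
    return weights["fallback_token"]

def generate(weights, context_tokens, max_new_tokens=3):
    # weights: fixed at training time (where any book text influence lives).
    # context_tokens: the ONLY per-request input; nothing is fetched
    # from books or files at inference time.
    output = []
    for _ in range(max_new_tokens):
        output.append(predict_next(weights, context_tokens + output))
    return output

weights = {"fallback_token": "<tok>"}      # toy stand-in for trained parameters
print(generate(weights, ["def", "main"]))  # -> ['<tok>', '<tok>', '<tok>']
```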
I will stop replying now, because you clearly need to learn more about LLMs.
Here, have a fish 🐟
So you use a programming LLM instead of a generic one…
Or do you think all LLMs and AI are the same?