From what we see today with LLMs that are given larger context (e.g. internal documentation or knowledge bases), they can be about as good as a decent developer who reads that documentation and is able to apply it to a specific use case.
But Stack Overflow answers often cover things that don't come up in the docs, are outdated in the docs, or are case-dependent and/or opinionated. Some answers even lead to changes in the documentation. Without a way of continuously sharing that knowledge, this kind of insight will dry up over time.
If we use embeddings together with the language documentation, I wonder how well that can work going forward?
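To make the idea concrete, "embeddings plus the documentation" usually means a retrieval setup: chunk the docs, embed each chunk, and pull the closest chunks into the model's context for a given question. A rough sketch below, assuming the sentence-transformers library; the model name and doc chunks are just placeholders, not anyone's actual setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical documentation chunks (in practice: split real docs into sections).
doc_chunks = [
    "The sorted() builtin returns a new list; list.sort() sorts in place.",
    "A dict preserves insertion order as of Python 3.7.",
    "Use pathlib.Path for filesystem paths instead of raw strings.",
]
doc_vectors = model.encode(doc_chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k doc chunks most similar to the question (cosine similarity)."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # normalized vectors -> dot product == cosine similarity
    return [doc_chunks[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved chunks would then be placed in the LLM's context window.
print(retrieve("How do I sort a list without modifying it?"))
```

Whether that helps with the questions Stack Overflow actually answers is another matter, since the retrieval can only surface what the docs already contain.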
Nothing, because language models don't understand the text they read.