The number of questions on Stack Overflow fell by 78 percent in December 2025 compared to a year earlier. Developers are switching en masse to AI tools instead.
I’ve posted questions, but I don’t usually need to because someone else has almost always asked the same thing before. This is probably the reason that AI is so good at answering these types of questions.
The trouble now is that there’s less of a business incentive to have a platform like Stack Overflow where humans share knowledge directly with one another, because the AI is just copying all the data and delivering it to users somewhere else.
Works well for now. Wait until there’s something new that it hasn’t been trained on. It needs that Stack Exchange data to train on.
Yes, I think this will create a new problem. New things won’t be created very often, at least not by small houses or independent developers, because there will be this barrier to adoption: corporate-controlled AI will need to learn them somehow.
I don’t think so. All AI needs now is formal specs of some technical subject, not even human-readable docs, let alone translations to other languages. In some ways, this is really beautiful.
Lol no, AI can’t do a single thing without humans who have already done it hundreds of thousands of times feeding it their data.
I used to push back but now I just ignore it when people think that these models have cognition because companies have pushed so hard to call it AI.
Technical specs don’t capture the bugs, edge cases and workarounds needed for technical subjects like software.
I can only speak for myself, obviously, and my context here is some very recent and very extensive experience applying AI to new software developed internally in the org I’m part of. So far, AI has eliminated any need for any kind of assistance with understanding it, and it was definitely not trained on this particular software. Hard to imagine why I’d ever go to SO to ask questions about this software, even if I could. And if it works so well on such a tiny edge case, I can’t imagine it will do a bad job on something used at scale.
If we go by personal experience: we recently had several people’s time wasted troubleshooting an issue with a very well-known commercial Java app server. The AI overview hallucinated a fake system property for addressing the issue we had.
The person who proposed the change neglected to mention they got it from AI until someone noticed the setting did not appear anywhere in the official system properties documented by the vendor. Now their personal reputation is that they should not be trusted, and they seem lazy on top of it because they could not use their eyes to read a one-page document.
That’s a very interesting insight. Maybe the amount of hallucination depends on whether the “knowledge” was loaded in the form of a prompt vs. training data? In the experience I’m talking about there’s no hallucination at all, but there are wrong conclusions and hypotheses sometimes, especially with really tricky bugs. But that’s normal; the really tricky edge cases are probably not something I’d expect to find on SO anyway…
It can’t handle things it’s not trained on very well, or at least not anything substantially different from what it was trained on.
It can usually apply rules it’s trained on to a small corpus of data in its training set: “give me a list of female YA authors,” for example. But when you ask it for something more general (how many R’s there are in certain words), it often fails.
Actually, the Rs issue is funny because it WAS trained on that exact information, which is why it says strawberry has two Rs, so it’s actually more proof that it only knows what it has been given data on. The thing is, when people misspell strawberry as “strawbery”, others naturally respond, “Strawberry has two Rs.” The problem is that LLM learning has no concept of context because it isn’t learning anything; the reinforcement mechanism is whatever the majority of its data tells it. It regurgitates that strawberry has two Rs because that answer has been reinforced by its dataset.
Interesting story, but I’ve seen the same thing with how many “ass”es there are in “assassin”.
You can probe the stuff it’s bad at, and a lot of it doesn’t line up well with the story that it’s just echoing how people were corrected.
But that’s exactly how an LLM is trained. It doesn’t know how words are spelled because words are turned into numbers and processed. But it does know when its dataset has multiple correlations for something. Specifically, people spell out words, so it will regurgitate to you how to spell strawberry, but it can’t count letters because that’s not a thing that language models do.
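For what it’s worth, here’s a minimal sketch of that point (assuming Python and OpenAI’s tiktoken tokenizer; the encoding name is just an example): the model never sees the letters of “strawberry”, only whole chunks turned into numbers, which is why counting characters is a weird fit for it.

```python
# Minimal sketch (assumes the tiktoken library: pip install tiktoken).
# Shows how text is turned into numeric token IDs before an LLM ever sees it.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # example encoding

tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer token IDs
print([enc.decode([t]) for t in tokens])  # the multi-letter chunks the model actually processes

# Counting letters is trivial for ordinary code...
print("strawberry".count("r"))            # 3

# ...but the model only ever operates on the token IDs above,
# so it has no direct view of individual characters.
```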
Generative AI and LLMs are just giant reconstruction bots that take all the data they have and reconstruct something. That’s literally what they do.
Like, without knowing what answer it gave you for assassin, I’ll assume your issue is that the question was probably “How many ‘ass’es are in assassin?” But that’s a joke; an assassin only has one ass, just like the rest of us. And nobody would ever spell assassin as “assin”, so why would it learn that there are two asses in assassin?
I’m confused where you are getting your information from, but this is not particularly special behavior.
The whole point of Stack Exchange is that it contains everything that isn’t in the docs.
The hot concept around the late 2000s and early 2010s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame, where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.
Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.
Probably explains why Quora started sending me multiple daily emails about shit I didn’t care about and removed unsubscribe buttons from the emails.
I don’t delete many accounts… but that was one of them
What we’re all afraid of is that cheap slop is going to make Stack go broke / close / get bought / go private, and then it will be removed from the public domain… then they’ll jack up the price of islop when the alternative is gone…
I do wonder, then, as new languages and tools are developed, how quickly AI models will be able to parrot information on their use if sources like Stack Overflow cease to exist.
I think this is a classic case of privatization of the commons, so that nobody can compete with them later without free public datasets…
It’ll certainly be of lower quality, even if they take steps to make the models able to address it.
Good documentation and ported open projects might be enough to give you working code, but it’s not going to be able to optimize it without being trained on tons of optimization data.
But can anyone train on them? What happens to the original dataset?