what makes it creepy?
It just makes it ever more obvious to them how many people in their life are sheep that believe anything they read online, I assume? A false sense of confidence where one might have just said "I don't know."
What an absolutely arrogant attitude 🤣 You actually believe there is some gap here 🤣 just amazing.
Not using AI doesn’t mean you’re performing whatever task you’re doing better. It has nothing to do with being able to parse results for bullshit or not.
I think the attitude of being virtuous or preachy can seep in at times, especially when you’re part of a cause, but IMO diplomacy, having conversations, and opening people’s minds to objectivity has to be better than telling them they’re wrong.
I know this is easy to say, especially when so many people are just so addicted to social media and the internet.
I have had conversations with friends and family where they can have a clear conversation about how much propaganda is pushed onto them, and then they turn straight to their phone and hoover up an hour of FB. It does make you think, wow, sheep. But I have to remind myself we don’t get change by telling people ‘you clearly don’t know your own mind’.
So many people were already using TikTok or YouTube instead of Google search. I think AI is arguably better than those.
edit: New business idea: take your ChatGPT question and turn it into a TikTok video. The Slop must go on
The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, as it’s not actually capable of picking its own sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.
Genuinely, do you think the average person TikTok’ing their question is getting highly reputable sources? The average American has, what, a 7th grade reading level? I think the LLM might have a better idea at this point.
First, its results are often simply wrong, so that’s no good. Second, the more people use the AI summaries, the easier it’ll be for the AI companies to subtly influence the results to their advantage. Think of advertising or propaganda.
This is already happening, btw, and it’s the reason Musk created Grokipedia. Grok (and even other LLMs!) already uses it as a “trusted source”, which it is anything but.
So literally the same shit as before with search but wrapped up in a nice paragraph with citations you can follow up on?
Okay, but it’s a search engine; they can literally just pick websites that align with a certain viewpoint and hide ones that don’t. It’s not really a new problem. If they just make Grokipedia the first result, then it’s not like not having the AI give you a summary changed anything.