• Dave.@aussie.zone

    There was a point, about 10-12 years ago now, where The Algorithm™ took over social media entirely.

    If you were around before that, you would have noticed the shift. Your friends’ comments and posts started to get intermixed with “other stuff”, and eventually you could scroll endlessly and not see anything from your direct friends, or friends of friends. Forever.

    What decided what you could see? Why, The Algorithm™, of course. So, at that point right there, that’s when a direct and consistently biased feed of someone else’s opinion about what you wanted to see got pumped into people’s brains. And you can bet it’s designed to hand out the most engaging things it can find for you, to keep you scrolling away on their platform. But it doesn’t matter a fuck whether what it’s handing out is mentally harmful to you personally, as long as you’re engaged.
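
    For anyone who has never seen it spelled out, the whole shift is basically one sort key. Here’s a minimal sketch of the two feed models (everything here is hypothetical illustration, not any platform’s actual code):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        timestamp: float             # seconds since epoch
        predicted_engagement: float  # the platform's guess at how long you'll stare at it

    def chronological_feed(posts: list[Post], friends: set[str]) -> list[Post]:
        """The old model: your friends' posts, newest first."""
        mine = [p for p in posts if p.author in friends]
        return sorted(mine, key=lambda p: p.timestamp, reverse=True)

    def algorithmic_feed(posts: list[Post], friends: set[str]) -> list[Post]:
        """The Algorithm(tm): ranked purely by predicted engagement.
        Note that `friends` is ignored entirely -- that's the point."""
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    ```

    The second function never runs out of things to show you, and who you follow doesn’t enter into it at all.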

    And just like schoolkids in the USA reciting the Pledge of Allegiance every morning, reinforcement of whatever The Algorithm™ wants (simply: more engagement) becomes pretty trivial when it’s crammed into your head consistently from a young age. Lacking any other reference points, children have the fewest defenses against all of that shite.

    These kinds of laws worldwide are trying to stop that kind of thing from happening, because they can’t stop the source directly. Social media companies hold too much sway over the population and the economy now; it would be political suicide to go toe to toe with them.

    In my opinion, The Algorithm™ as it stands now is a cancer that needs to be cut out of social media by any means possible. Whether there’s anything left after that is debatable.

    • schnurrito@discuss.tchncs.de

      I’ve said this recently in another thread, and I’ll repeat it here: this problem would easily be solved by changing content liability laws (e.g. Section 230 in the US) so that anything recommended by an algorithm counts as speech by the platform, and the platform is liable for it if it turns out to be illegal (e.g. libellous).

      That would mean you could operate a forum, wiki, Lemmy, or Mastodon instance without worrying about liability, but Facebook, YouTube, and TikTok would have to get rid of the feature where they put things “that might interest you” into your feed that you didn’t actually choose to follow.

      None of that has anything to do with anyone’s age.