• 0 Posts
  • 581 Comments
Joined 3 years ago
Cake day: June 21st, 2023

  • Ironically, it felt to me like the post itself deified algorithms, but this is the main takeaway:

    We should neither mystify, nor deify these systems, because it makes us forget that we have built them ourselves and infused them with meaning.

    An “algorithm” is nothing more than a set of instructions to follow to complete some kind of task. For example (and closely related), a sorting algorithm might attempt to sort a list by randomizing the list, then checking if it’s sorted and repeating if not (bogosort).

    Lemmy uses an algorithm to sort posts by “most recent”, for example, and I think that having a “most recent” sorting option is noncontroversial.

    Where algorithmic feeds become problematic, in my opinion, is when they start becoming invasive or manipulative. This is also usually when they become personalized. Lemmy, Reddit (within a subreddit), and other kinds of forums usually do not have personalized feeds, and the sorting algorithms for “hot” are usually noncontroversial (maybe there’s debate about effectiveness, but none usually about harm). Platforms like FB, Twitter, TikTok, Instagram, YT, etc all have personalized feeds that they use personal data to generate. They also are the most controversial, and usually what is referred to as “algorithmic” feeds.

    These personalized feeds are not magic. They often include ML black boxes, but training a model isn’t sorcery, nor are any of the other components of these algorithms. Like the article mentioned, they are written by people, and can be understood (for the most part), updated, and removed by people. There is no reason a personalized feed is required to invade your privacy or manipulate you. The only reason they do is that these companies are incentivized to maximize how much ad revenue they make off you by keeping you engaged for longer.
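    To make the "nothing more than a set of instructions" point concrete, here is the bogosort example from above as a minimal Python sketch (the function names are mine, not from any library):

    ```python
    import random

    def is_sorted(xs):
        # A list is sorted if every element is <= its successor.
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bogosort(xs):
        # Shuffle until the list happens to be sorted. Correct, but
        # absurdly inefficient: expected work grows like n * n!.
        while not is_sorted(xs):
            random.shuffle(xs)
        return xs
    ```

    Keep the input tiny if you actually run it; the whole point of the example is that "algorithm" says nothing about being clever, only about being a defined procedure.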





  • If you already know some programming languages, look for a GUI or game library for one of them and see if you can use it. If not, something like Blender might be easiest to build in C++, Rust, C (if you’re a masochist), or maybe Zig. This may also influence the shading language you choose. Start with this.

    You will need to know some shader language. You have a few options there; the most popular are GLSL and HLSL (though I’d prefer GLSL). There’s also WGSL and some others, but they aren’t as popular. Prefer whichever one the graphics library you’re using expects.

    The math is very heavy on linear algebra. Look up PBR if you want to render realistic 3D shapes. Google’s Filament is well documented and walks through implementing it yourself if you want, but it’s pretty advanced, so you might want to start simpler (a fragment color can just be base color * light color * light attenuation * (N·L), for example).








  • Surely you have an example where it’s appropriate for a service to generate nonconsensual deepfakes of people then? Because last I checked, that’s what the post’s topic is.

    And yes, children are people. And yes, it’s been used that way.

    Edit: as for guardrails, yes, any such service should have them. We all know what Grok’s are though, coming from Elon “anti-censorship” Musk. I mentioned that ChatGPT also generates images, and it has very strict guardrails. It still makes mistakes though, and that’s still unacceptable. Also, any amount of local fine-tuning of these models can accidentally remove their guardrails, so yeah.





  • Modern* “protect children” bills would be more accurate.

    The playbook these days is to invoke children, terrorism, etc. to justify something that fails to address the stated problem and pushes some other agenda.

    This has been true in the US and UK, at least. I have no clue how true it is in France and won’t pretend to know, but judging from some other comments, the eventual implementation could be better than what we’ve seen so far. I hope so, anyway.