Ask me about:

  • Science (biology, computation, statistics)
  • Gaming (rhythm, rogue-like/lite, other generic 1-player games)
  • Autism & related (I have a diagnosis)
  • Bad takes on philosophy
  • Bad takes on US political systems & more US stuff

I’m not knowledgeable about most other things.

  • 31 Posts
  • 27 Comments
Joined 1 year ago
Cake day: September 15th, 2024

  • So the funny thing is… the lead researcher added “finding diamonds” since it’s a niche and highly difficult task involving multi-step processing (you have to cut wood, make a pickaxe, mine iron, …) that the AI was not trained on. DeepMind has a good track record with real-life applications of its AI, so I think their ultimate goal is to take the AI from “Minecraft kiddie” to something that can think on the spot to help with treating rare diseases or something like that.

    Y’know they could have used something like Slay the Spire or Balatro… but I digress

  • So it was the physics Nobel… I see why the Nature News coverage said it was “scooped” by machine-learning pioneers.

    Since the news coverage tried to be sensational about it… I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development from proceeding without proper regulation. Here is a policy paper he was involved in (https://managing-ai-risks.com/); it does mention some genuine concerns. Quoting it:

    “AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance”

    like bruh people already lost jobs because of ChatGPT, which can’t even do math properly on its own…

    It’s also quite ironic that the preprint includes the quote “Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long,” considering that one serious risk of AI development is its climate impact.