Ask me about:

  • Science (biology, computation, statistics)
  • Gaming (rhythm, rogue-like/lite, other generic 1-player games)
  • Autism & related (I have a diagnosis)
  • Bad takes on philosophy
  • Bad takes on US political systems & more US stuff

I’m not knowledgeable about most other things

  • So the funny thing is… the lead researcher added “finding diamonds” since it’s a niche, highly difficult task that involves multi-step processing (you have to cut wood, make a pickaxe, mine iron, …) that the AI was not trained on; there’s a rough sketch of that dependency chain after this comment. DeepMind has a good track record with real-life applications of their AI… so I think their ultimate goal is to take the AI from “Minecraft kiddies” to something that can think on the spot to help with treating rare diseases or something like that

    Y’know they could have used something like Slay the Spire or Balatro… but I digress
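
    A minimal, hypothetical sketch of why the diamond task is so multi-step (the dependency list is my own simplification, not DeepMind’s code or the exact in-game recipe tree); each sub-goal only becomes reachable after the earlier ones are done:

    ```python
    # Toy model of the Minecraft "get a diamond" prerequisite chain.
    # Assumption: simplified recipes; the real game has more items and quantities.
    from graphlib import TopologicalSorter

    # item -> set of items you need before you can obtain it
    DEPENDENCIES = {
        "log": set(),
        "planks": {"log"},
        "stick": {"planks"},
        "crafting_table": {"planks"},
        "wooden_pickaxe": {"planks", "stick", "crafting_table"},
        "cobblestone": {"wooden_pickaxe"},
        "stone_pickaxe": {"cobblestone", "stick", "crafting_table"},
        "furnace": {"cobblestone"},
        "iron_ore": {"stone_pickaxe"},
        "iron_ingot": {"iron_ore", "furnace"},
        "iron_pickaxe": {"iron_ingot", "stick", "crafting_table"},
        "diamond": {"iron_pickaxe"},
    }

    # One valid ordering of sub-goals the agent has to chain together on its own.
    print(list(TopologicalSorter(DEPENDENCIES).static_order()))
    ```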

  • So it was the physics Nobel… I see why the Nature News coverage said physics had been “scooped” by machine learning pioneers

    Since the news coverage tried to be sensational about it… I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development from proceeding without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/), and it does mention some genuine concerns. Quoting it:

    “AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance”

    like bruh, people have already lost jobs because of ChatGPT, which can’t even do math properly on its own…

    There’s also quite some irony in the preprint including the quote “Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.”, considering that one serious risk of AI development is its climate impact



  • Based on my understanding of how these things work: yes, probably no, and probably no… I think the map is just a “catalogue” of what things are, not yet at the point where we can build fancy models on it

    This is their GitHub account; anyone knowledgeable enough about research software engineering is welcome to give it a try

    There are a few neuroscientists trying to decipher biological neural connections using principles from deep learning (a.k.a. AI/ML), though I don’t think this is a popular subfield. Andreas Tolias is the first one that comes to mind; he and a bunch of folks from Columbia/Baylor were in a consortium when I started my PhD… not sure if that consortium is still going. His lab website (SSL cert expired bruh). They might eventually answer the latter two questions you raised… no idea when, though.


  • I have a suspicion it’s not just an Alzheimer’s issue but something systemic across lots of competitive fields in academia… There definitely need to be guardrails. I think the sad thing with funding is… these days you have to be exceptionally good at grant writing to even have a chance in the lottery, and it mostly feels like a lottery with success rates in the teens… and apparently no grant = no lab, no career for most people (seriously, why are most PI roles soft-money-funded anyway). It’s hard not to cut corners when there’s so much on the line

    Not to mention, apparently even if you are a super-ethical PI who wants to do nothing wrong, once the lab gets big enough there might eventually be some unethical postdoc trying to make it big who falsifies data (that you don’t have time to check) under your name, so… how the hell do people guard against that?

    I’m honestly impressed that science is still making progress with all of this random nonsense in the field