Ask me about:
I’m not knowledgeable about most other things
So the funny thing is… the lead researcher added “finding diamonds” since it’s a niche and highly difficult task involving a multi-step process (you have to cut wood, make a pickaxe, mine iron, …) that the AI was not trained on. DeepMind has a good track record with real-life usage of their AI… so I think their ultimate goal is to take the AI from “Minecraft kiddies” to something that can think on the spot to help with treating rare diseases or something like that
Y’know they could have used something like Slay the Spire or Balatro… but I digress
Frankly I agree. From my personal experience, every single native Chicagoan has been calling that particular building “Sears Tower”, even though the name was officially changed more than 15 years ago at this point…
And I think OSM actually handled this quite well! The original Sears Tower name is still available as an “alt_name” tag on OSM; I just double-checked and yep, it’s still searchable on the map
Interesting… A coworker of mine previously worked on a fintech project that needed to use open models. Apparently their team found the Llama models to be much better than anything Mistral had at the time… I’m hoping Mistral’s new model (the one featured in the news article) is better. Not sure if Le Chat is open weights like the Mistral/Mixtral lines though…
Unfortunately… Isn’t there a saying like “the amount of effort needed to refute bullshit is much larger than the amount needed to produce it” or something? So sadly the HCQ thing is just going to stay there for now; the journal taking 4.5 years to retract it didn’t help either
Well neuroscience isn’t a very old field… More seriously though, I think biomedical scientists know surprisingly little about anything NIH doesn’t fund… aaand that’s how we ended up understanding so little about our own household companions (and a bit too much about cancer. Seriously, why do we know so many weird things about cancer, many of which don’t even translate into therapeutics)
I clearly didn’t drink enough coffee for this before posting
My bad, the original news article did a good job at explaining the missing link… I misunderstood what you were asking
I think that’s pretty much it
This is the study they were referring to: https://doi.org/10.1016/j.jaci.2015.07.040
C-section babies have slightly higher risks of several diseases related to immune system function, and the hypothesis is that it is because these babies have slightly less developed immune systems
I happen to know a few folks who work in this field (detecting fraudulent scientific papers). This is a bit of insider knowledge, but there are science sleuths who are fearing for their lives… there might be some seriously shady stuff going on behind research paper mills, but I don’t know who will be the one digging it up.
If it is just on an individual level though, methinks Retraction Watch does a decently good job of flagging what might or might not be trustworthy
In a recent report on Retraction Watch, a PhD student tried to figure out who’s behind a paper mill: https://retractionwatch.com/2024/10/01/hidden-hydras-uncovering-the-massive-footprint-of-one-paper-mills-operations/
This is from Nature News today: https://www.nature.com/articles/d41586-024-03427-w. Heard a bit about this startup even before so…
This again??
This time, once archive.org is back online again… is it possible to get torrents of some of their popular collections? For example, I wouldn’t imagine their catalog of public-domain (expired-copyright) books to be very big. Would love a community way to keep the data alive if something even worse happens in the future (and their track record isn’t looking good right now)
Pretty sure the “intimate detail” is just the editor being horny… I didn’t make the title, don’t blame me
Hehe
Blame the Nature News editor for this, the paper title wasn’t horny at all
I got curious and wanted to see what method they are using: I believe they are using data from this portal? https://implicit.harvard.edu/implicit/selectatest.html
Looks like anyone can take this! But I guess that also means… did the dyslexics/dyscalculics self-select?
Edit: took one. There is a demographics questionnaire where you can list whether you have disabilities, and dyslexia is in there (but not autism??)… So it is self-selected. And on an unrelated note, I am apparently in the 1% that has a strong automatic preference for physically disabled over non-disabled people (facepalm)
This is a good point… I’m more used to biomedical papers, where this author list would be considered typical or even short, but yeah, the affiliations seem to indicate that there are four PIs on this paper, which is wild… don’t know what to make of it. If someone knows archaeology better, plz inform
I genuinely don’t know… there doesn’t seem to be any ongoing discussion of who these people are or why they are targeting IA. There are other people trying to rescue data stored on IA
Hope this would be over soon…
So it was the physics Nobel… I see why the Nature News coverage called it “scooped” by machine learning pioneers
Since the news tried to be sensational about it… I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/). It did mention some genuine concerns. Quoting them:
“AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance”
like bruh people already lost jobs because of ChatGPT, which can’t even do math properly on its own…
There’s also quite some irony in the preprint containing the quote “Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.”, considering that a serious risk of AI development is its climate impact
A bit off topic… But from my understanding, the US currently doesn’t have a single federal agency responsible for AI regulation… However, there is an agency for child abuse protection: the National Center on Child Abuse and Neglect within the Department of HHS
If AI girlfriends generating CSAM is how we get AI regulation in the US, I’d be equally surprised and appalled
Based on my understanding of how these things work: Yes, probably no, and probably no… I think the map is just a “catalogue” of what things are, not at the point where we can run fancy models on it
This is their GitHub account, anyone knowledgeable enough about research software engineering is welcome to give it a try
There are a few neuroscientists trying to decipher biological neural connections using principles from deep learning (a.k.a. AI/ML); I don’t think this is a popular subfield though. Andreas Tolias is the first one that comes to mind; he and a bunch of folks from Columbia/Baylor were in a consortium when I started my PhD… not sure if that consortium is still going. His lab website (SSL cert expired bruh). They might solve the latter two points you raised… no idea when though.
I have a suspicion it’s not just an Alzheimer’s issue but rather quite systemic across lots of competitive fields in academia… There definitely need to be guardrails. I think the sad thing with funding is… these days you have to be exceptionally good at grant writing to even have a chance of getting into the lottery, and it mostly feels like a lottery with success rates in the teens… and apparently no grant = no lab, no career for most ppl (seriously, why are most PI roles soft money-funded anyway). Hard not to try and cut corners when so much is on the line
Not to mention, apparently even if you are a super-ethical PI who wants to do nothing wrong, if the lab gets big enough there might eventually be some unethical postdoc trying to make it big and falsifying data (that you don’t have time to check) under your name, so… how the hell do people guard against that?
I’m honestly impressed that science is still making progress with all of this random nonsense in the field
It’s definitely way more prevalent. There actually is this post from Retraction Watch from just a few days ago too. This is kind of a systemic issue induced by how scientific funding & the system work…
My current PI is actually co-mentoring a student who was studying scientific fraud, but the problem is… being a fraud researcher is apparently a really good way to alienate a lot of people, which ensures you never make it in academia (which is heavily dependent on networking/knowing people)… so I don’t know how many ppl would seriously study this.
Welcome to the Google DeepMind Minecraft SMP server : ) (/s)