• 0 Posts
  • 20 Comments
Joined 6 months ago
Cake day: March 3rd, 2024




  • It’s the moon Ariel, plus maybe a few others. Figured I’d put in the comments what the article was about to balance out the name jokes. I used to try to pronounce it with a different accent, but I don’t bother now. It’s the name.

    On the actual topic, it’s fascinating that there’s enough gravitational force for Uranus to do to its moons what Jupiter does to its own. Granted, Ariel is a lot closer, and tidal effects fall off sharply with distance (rough scaling below). We really need missions to each of these kinds of moons to get under the ice and see what’s there.
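    For a rough sense of why “a lot closer” matters so much, this is just the standard textbook tidal relation, not anything taken from the article:

    ```latex
    % Differential (tidal) acceleration across a moon of radius r,
    % orbiting a planet of mass M at orbital distance d:
    a_{\mathrm{tidal}} \approx \frac{2 G M r}{d^{3}}
    % The d^3 in the denominator means halving the distance gives roughly
    % 8x the tidal stretch, so a close-in moon of Uranus can still be
    % tidally worked even though Uranus is far less massive than Jupiter.
    ```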


  • Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI): it determines that no, silly humans, the math says you’re too far gone. Or, yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly, it says it found the problem and the solution, and the problem is that the Earth is contaminated with humans who consume and pollute too much, and it is deploying the solution now.

    I forgot the fourth, which I’ve seen in a few places (satirically, but it could be true): the ASI analyses what we’ve done, tries to figure out what could be done to help, and then shuts itself down out of frustration, anger, sadness, etc.





  • LLMs alone won’t. Experts in the field seem to have different opinions on whether they will help get us there. What concerns me is that the issues and dangers of AGI also exist with advanced LLM models, and that research into them is being shelved because it gets in the way of profit. Maybe we’ll never be able to get to AGI, but we had better hope that if we do, we get it right the first time. How has that been going with the more primitive LLMs?

    Do we even know what the “right” AGI would be? We’re treading in dangerous waters.




  • Good questions.

    What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?

    Honestly, we might be too late for avoidance anyway, but it’s specifically research into the alignment problem that I think regulation could help with, and since these companies are still self-regulating, they’re free to do what OpenAI did with its alignment department…it’s akin to someone manufacturing a new chemical and not bothering with any research on side effects, only on what they can gain from it. Oh shit, never mind, that’s standard operating procedure, isn’t it, at least as long as the government isn’t around to stop it.

    And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?

    Another topic that I personally think we’re doomed to ignore until things get so bad they affect more than poor people and poor countries. How does it compare? Climate change and the probable directions it takes the planet are much more of a certainty than the unknowns of whether AGI is possible and what effects it could have. Interesting that we’re taking the same approach though, even though it’s the more obvious problem. Plus profiting via greenwashing rather than making a concerted effort to do effective things to mitigate what we could.


  • No surprise, since there’s not a lot of pressure to do any other regulation of the closed-source versions. Self-monitoring by a for-profit company always works out well…

    And for any of the “AGI won’t happen, there’s no danger” crowd…what if, on the slightest chance, you’re wrong? Is the maddening rush to get the next product out, without any research on what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons there too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.

    Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, and we’re not looking into that either.


  • I don’t think it’s that uncommon an opinion. An even simpler version is the constant repetition, for years now, of data breaches, often because of inferior protection. As an amateur website creator decades ago I learned that plaintext passwords were a big no-no (the basic alternative is sketched below), so how are corporate IT departments still doing it? Even the non-tech person on the street rolls their eyes at such news, and yet it continues. CrowdStrike is just a more complicated version of the same thing.
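    For reference, a minimal sketch of what “not plaintext” looks like, using only the Python standard library; the function names are illustrative, not from any particular site or framework:

    ```python
    # Store a salted hash of the password, never the password itself.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) for storage; the plaintext is never kept."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        """Recompute the hash from the login attempt and compare in constant time."""
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(digest, stored)
    ```

    With something like that, a leaked database exposes only salts and hashes, which have to be brute-forced per user instead of read off directly.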






  • Guess no one at Microsoft realized that people use computers differently and that more options are better than just one. Or they intended to have the option and either forgot to include it or it was buggy. Either way, it was #2 on my “how do you disable this” list, and I had to deal with it for a while. I get how grouping can be good for some things, but when you want to bounce between various windows and some of them happen to use the same app, it was a pain.