Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 224 Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • I’ve only found success with LLM code (local models) on smaller, more direct sections. Probably because it’s pulling the most-repeated solutions to such queries from its training data. For that, it’s like a much better Google lookup filter that usually gets to the point faster. But with longer code (and it always wants to give you the full code) it starts to drift and pull things out of the void, much like hallucination in creative text, except in code it’s obvious.

    Because it doesn’t understand what it’s telling you. Again, it’s a great way to mass-filter Stack Overflow and Reddit answers, but remember that searching through those in the past could work well or be a nightmare. Just like then, don’t take an answer and plug it straight in; understand why it might or might not be a working solution.

    It’s funny, I’ve picked up a lot of my programming knowledge over the decades by piecing things together and figuring out what works while debugging my own or others’ code. Not the greatest way to do it, but I learn better through necessity than without a purpose. With LLM code that goes wild, though, debugging has its limits, and there have been minor things I’ve just thrown out and started over because the garbage I was handed was total BS wrapped up in colorful paper.


  • That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI is equal to a person, and a cluster of them working on a single issue is better than one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways, but lack the I to spark anything from it; they just simulate it by doing exactly what you describe: finding the best matches for a response in their training data much faster, and sometimes appearing to have reasoned it out.

    ASI is a definition only of scale. We as humans can’t have any idea what an ASI would be like, other than far superior to a human for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster, though, and that combined with speed… naysayers had better hope they are right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?


  • I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success they have. The funny thing is, we won’t know if we ever hit even the AGI point until we’re past it, and in theory AGI will quickly go to ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, as is anyone who says it’s not near or won’t ever happen.

    The only thing possibly worse than reaching the AGI/ASI point unprepared might be not reaching it, but instead creating tools that simulate a lot of its features and all of its dangers and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job of being cautious, as we usually are with new tech.


  • Windows is bloated, especially if updates are involved. However, how old is the hard drive it’s on? It’s not just the drive’s age; there may be read errors forcing rereads that you never notice because the read eventually succeeds (the sketch below shows one way to check). Also, if it is a spinning hard drive, upgrading to an SSD is huge as well.
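
    One quick way to rule that out is the drive’s SMART data. A minimal sketch in Python, assuming smartmontools is installed and on the PATH; the device name /dev/sda is just a placeholder to adjust for your system:

    ```python
    import subprocess

    # Minimal sketch: ask smartmontools (assumed installed) for a drive's
    # SMART health verdict and attribute table. Device name is a placeholder.
    def smart_report(device: str = "/dev/sda") -> str:
        result = subprocess.run(
            ["smartctl", "-H", "-A", device],  # -H: overall health, -A: attributes
            capture_output=True,
            text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(smart_report())
        # On a spinning disk, watch Raw_Read_Error_Rate, Reallocated_Sector_Ct,
        # and Current_Pending_Sector: rising values suggest silent retries.
    ```

    Rising reallocated or pending sector counts are the classic sign of a drive quietly retrying reads, which feels exactly like general slowness.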