• 0 Posts
  • 104 Comments
Joined 3 years ago
Cake day: July 11th, 2023




  • People at the heads of nonprofits are often highly compensated, and it’s rare that any of them solve the underlying problem or even make meaningful headway. That’s why there’s so much “awareness”-raising and so many short-term band-aids involved. A nonprofit that actually solves the problem it’s supposedly trying to solve has no reason to exist, and would cost people well-paying jobs managing it.



  • The whole premise of Deep Think, and of similar features in other models, is to come up with an answer, then ask itself whether that answer is right and how it could be wrong, repeating until the result is stable.

    The seahorse emoji question is one that trips up a lot of models (it’s a Mandela effect thing: the emoji doesn’t exist, but lots of people remember it and are consequently firm that it’s real). I asked GLM 4.7 about it with Deep Think on, and it wrote about two dozen paragraphs trying to think of everywhere a seahorse emoji could be hiding: whether it was in a previous or upcoming standard, whether there was another emoji that might be mistaken for a seahorse, and so on. It eventually decided that the emoji didn’t exist, double-checked that it wasn’t missing anything, and gave an answer.

    It was startlingly like the stream of consciousness of someone experiencing the Mandela effect, desperately trying to find evidence they were right, except that it eventually gave up and accepted the truth.

    EDIT: Spelling. Really need to proofread when I do this kind of thing on my phone.
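The answer-then-critique loop described above can be sketched in a few lines. This is purely a hypothetical illustration, not GLM’s actual implementation; `generate` and `critique` stand in for calls to the model.

```python
# Sketch of a "deep think" self-refinement loop: draft an answer, have a
# critic look for problems, and revise until the critic finds none.
# `generate` and `critique` are hypothetical stand-ins for model calls.

def deep_think(question, generate, critique, max_rounds=5):
    """Draft an answer, then revise it until the critique step is satisfied."""
    answer = generate(question, feedback=None)          # initial draft
    for _ in range(max_rounds):
        feedback = critique(question, answer)
        if feedback is None:                            # no objections: stable
            return answer
        answer = generate(question, feedback=feedback)  # revise using feedback
    return answer                                       # give up after max_rounds
```

The key design point is the stopping condition: the loop ends only when a pass of self-criticism produces no new objections, which is what makes the final result “stable.”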



  • They couldn’t do that from one photo, though; they’d need several examples, all believed to be the same guy. A swirl like that preserves some of the information and you can reverse it, but whatever data was destroyed is gone for good. Do that for several photos and you can accumulate enough preserved bits to piece something together.

    Same idea for some other kinds of blurs and mosaics. Black boxes, not so much: you’ve got no data to work with, so anything you tried to reconstruct would be more or less entirely fantasy.
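The “swirl preserves information” point can be demonstrated with a minimal NumPy sketch. This is a toy, not a real deobfuscation tool: it swirls a smooth test image by rotating each pixel around the centre by an angle that decays with radius, then approximately inverts it by swirling with the opposite strength. The residual error comes only from pixel rounding, whereas a black box would leave nothing to invert.

```python
import numpy as np

def swirl(img, strength, radius):
    """Rotate each pixel about the image centre by an angle that decays with
    distance from the centre. Because radius is unchanged by the rotation,
    swirling again with -strength approximately undoes the transform
    (up to nearest-neighbour rounding error)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    theta = strength * np.exp(-np.hypot(dx, dy) / radius)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # Inverse-map: for each output pixel, sample the rotated source position.
    sx = np.clip(np.round(cx + dx * cos_t - dy * sin_t), 0, w - 1).astype(int)
    sy = np.clip(np.round(cy + dx * sin_t + dy * cos_t), 0, h - 1).astype(int)
    return img[sy, sx]
```

Running `swirl(swirl(img, s, r), -s, r)` recovers the image almost exactly on smooth content, which is exactly why swirl obfuscation is unsafe; zeroing a region (`img[a:b, c:d] = 0`) destroys the data outright, and no inverse exists.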





  • So AI is a nice new tool in a big toolbox, not the technological and business revolution that would justify the stock market valuations built around it, the investment money sunk into it, or the huge amount of resources (such as electricity) it consumes.

    Specifically for Microsoft, there doesn’t really seem to be any area where MS’s core business value for customers gains from adding AI, in which case this “AI everywhere” strategy is an incredibly shit business choice that just burns money and damages brand value.

    It’s a shiny new tool that is really powerful and flexible, and everyone is trying to cram it in everywhere. Eventually, most of those attempts will collapse in failure, probably causing a recession; afterward, the useful applications will become part of how we all do things. AI is now roughly where the internet was in the late 80s: just past the point of being something only academics fiddle with in research labs, but not in any way a mature technology.

    Then again, most gaming PCs from the 2020s can already run a model locally, though it might need to be a pruned one, so maybe it’s a little farther along than that.









  • that he just wants a propaganda bot that regurgitates all of the right-wing talking points.

    Then he has utterly failed with Grok. One of my new favorite pastimes is watching right-wingers get angry that Grok won’t support their most obviously counterfactual bullshit, and then try to argue it into saying something they can declare a win from.