• a_non_monotonic_function@lemmy.world
    6 hours ago

    That’s the thing about machine learning models. You can’t always control what they’re optimizing. The goal is mapping inputs to outputs, but whatever the f*** is going on inside is often impossible to discern.

    This is dressing it up under some sort of expectation of competence. The word “scheming” is a lot easier to deal with than just s*****. The former means it’s smart and needs to be reined in. The latter means it’s not doing its job particularly well, and the purveyors don’t want you to think that.

    • Snot Flickerman@lemmy.blahaj.zone
      5 hours ago

      To be fair, you can’t control what humans optimize when you’re trying to teach them either. A lot of the time they learn the opposite of what you’re trying to teach. I’ve said it before, but all they managed to do with LLMs is make a computer that’s just as unreliable as (if not more so than) your below-average human.