• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language transformation tasks.

    Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset, and have so many parameters (175 billion!?), that they perform passably in that role… which is, at its core, filling in the call-and-response pattern of a conversation.

    At a fundamental level, it will never generate factually correct answers 100% of the time. That it gets the answer right more than 50% of the time is actually quite a marvel.
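
    To make the "transformation task" framing concrete, here is a minimal sketch assuming the OpenAI Python client (openai>=1.0); the model name and prompt are illustrative, not from the original comment.

    ```python
    # Hypothetical example: an LLM used for a transformation task
    # ("create x in the style of y") rather than for fact retrieval.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Rewrite 'the deploy failed again' in the style of a haiku."},
        ],
    )
    print(response.choices[0].message.content)
    ```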




  • ApexHunter@lemmy.ml to Programmer Humor@lemmy.ml · coding chess · edited 9 months ago

    The reply would have been `return x % 2 == 0` or, if you wanted it to be less readable, `return !(x & 1)`.
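
    For instance, a minimal Python sketch of both variants (function names are illustrative):

    ```python
    def is_even(x: int) -> bool:
        # The readable version.
        return x % 2 == 0

    def is_even_bitwise(x: int) -> bool:
        # The less readable version: an even number's lowest bit is 0.
        # (Python spells the C-style !(x & 1) as not (x & 1).)
        return not (x & 1)
    ```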

    But if you were going for a way that is subtly awful or expensive, just do a regex match on “[02468]$” against the number’s decimal string. You don’t get a stack overflow with larger numbers, but I struggle to think of a plausible bit of code that burns more unnecessary cycles than that…
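
    A Python sketch of that regex approach, just to make the waste concrete (the function name is illustrative):

    ```python
    import re

    def is_even_regex(x: int) -> bool:
        # Deliberately wasteful: render the integer as a decimal string,
        # then ask the regex engine whether the last digit is even.
        # Every call pays for string conversion plus a regex scan.
        return re.search(r"[02468]$", str(x)) is not None
    ```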