This isn’t going to hurt Google’s antitrust cases at all… Noooo sir.
The joke is that, regardless of how the type is declared in JSON, you are parsing a string (your JSON blob is just a series of characters, not raw binary data).
The reply would have been “return x % 2 == 0”, or, if you wanted it to be less readable, “return !(x & 1)”.
But if you were going for a way that is subtly awful or expensive, just do a regex match on “[02468]$”. You don’t get a stack overflow with larger numbers, but I struggle to think of a plausible bit of code that consumes more unnecessary cycles than that…
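For concreteness, here’s a rough sketch of the three approaches being joked about (the function names are mine, not anything from the thread): the readable modulo check, the less readable bitwise check, and the needlessly expensive regex match on the decimal string.

```typescript
// Readable: even means no remainder after dividing by 2.
function isEvenMod(x: number): boolean {
  return x % 2 === 0;
}

// Less readable: even means the lowest bit is 0.
function isEvenBit(x: number): boolean {
  return !(x & 1);
}

// Subtly awful: stringify the number and check whether the last digit is even.
// Correct for integers, but it burns cycles on string conversion and pattern matching.
function isEvenRegex(x: number): boolean {
  return /[02468]$/.test(String(x));
}

console.log(isEvenMod(42), isEvenBit(42), isEvenRegex(42)); // true true true
console.log(isEvenMod(7), isEvenBit(7), isEvenRegex(7));    // false false false
```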
Is this meant to be a joke or is it intended to be a serious solution?
Asking for someone who lacks a sense of humor.
Ok, fine, I’m asking for me. That person is me.
Saw one the other day with Jennifer Aniston. Good enough that it took a second to realize it was deepfake audio and video.
Omg, I have SOOoo many questions about what is going on in this picture.
My guy wanted to use drones to cut hedges.
They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language transformation tasks.
Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which is, at its core, to fill in a call+response pattern in a conversation.
At a fundamental level, an LLM will never ever generate factually correct answers 100% of the time. That it generates correct answers > 50% of the time is actually quite a marvel.
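To make that distinction concrete, here’s a rough sketch (the complete function below is a hypothetical stand-in, not any real API) contrasting a transformation-style prompt, where the answer is largely determined by the input text, with a fact-lookup prompt, where the model can only pattern-match against whatever it happened to memorize during training.

```typescript
// Hypothetical stand-in for a chat-completion call; in real use this would
// call an actual LLM API. Here it just echoes a placeholder so the sketch runs.
async function complete(prompt: string): Promise<string> {
  return `[model response to: ${prompt}]`;
}

async function demo(): Promise<void> {
  // Transformation task: the content is supplied in the prompt, so the model
  // only has to restate it in a new form. This plays to its strengths.
  const translated = await complete(
    "Translate to French: 'The meeting has moved to Thursday.'"
  );

  // Fact lookup: nothing in the prompt constrains the answer, so the model
  // fills in the call+response pattern with whatever looks plausible,
  // which may or may not be factually correct.
  const fact = await complete("What year was the first transatlantic cable laid?");

  console.log(translated);
  console.log(fact);
}

demo();
```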