

I’ve heard the younger generation tends to prefer functions 21 or 67 lines long instead.




AGI wouldn’t be about a model that knows everything about language and every advancement in every field, but rather a model that is better than humans at finding solutions to problems
An LLM (or any other kind of model) that cannot adapt to changes in a field cannot perform better than humans in that field once the field changes significantly. Any such model’s output quality would eventually degrade over time.
Also, AGI (artificial general intelligence) usually refers to an AI capable of performing all cognitive tasks at least as well as a human. It’s as much of a buzzword as “AI” is, of course, so there’s an endless number of definitions for it. Such an AI should be capable of, at minimum, adapting over time.
The point of a “smarter” model is not that it knows all the facts, that would be wasteful as it is trivial to look up facts at inference time.
An omniscient model would be impossible, but that’s not what I was referring to at all. LLMs these days fill their context windows with relevant information through careful prompting, tool calls, and so on. This is generally how a model is supposed to adapt. Context windows are bounded in size, though, while the amount of new information the model would need to pull in only grows over time; the data it must fit into that window is unbounded.
Unless someone creates an LLM with infinite context (which would require infinite VRAM), such an LLM can never exist. Therefore, an LLM trained today will never be equivalent to (or better than) humans at all cognitive tasks for the entire future of humanity. There will always come a point where such an LLM’s output quality degrades, and it can do nothing to resolve that.
Edit: Here’s a simple example: a new written language emerges with all the complexities of a language like English. Humans can learn that language and communicate in it. An LLM cannot.
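A toy sketch of that bounded-window problem (the window size and fact counts are made-up numbers, purely illustrative):

```python
# A fixed context window holds a constant number of items
# while the world's accumulated facts keep growing.
CONTEXT_WINDOW = 8

def build_prompt(facts: list[str]) -> list[str]:
    # Keep only the most recent facts that fit; everything older is dropped.
    return facts[-CONTEXT_WINDOW:]

facts = [f"fact-{i}" for i in range(20)]
prompt = build_prompt(facts)
print(prompt[0])                    # fact-12 (facts 0-11 silently dropped)
print(len(prompt) / len(facts))     # 0.4
```

However large you make the window, the fraction of accumulated facts it can hold only shrinks as the facts keep growing.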


While I’m interested to see the proof, it’s more of a formality. It doesn’t take a PhD to ask what happens when the “AGI” LLM is trained on out-of-date information. LLMs don’t learn over time, and they have a limited context window. At the very minimum, one would run out of context just keeping up with changes to spoken language over 30 years, let alone advancements in existing fields, new fields, and so on.


open blah.json | get foo.bar.2 just works. It also just works with YAML and any other format I want to support (you can define custom commands to support any extension you want).


z myproject saves enough time and effort to justify using it over cd most of the time.


I use git directly a lot, but Gitui’s interface is more convenient for staging changes.
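For anyone curious what a command like open blah.json | get foo.bar.2 does, the dotted-path lookup can be sketched in Python (my own toy version, not how Nushell actually implements it):

```python
import json

def get_path(data, path: str):
    # Walk a dotted path like "foo.bar.2"; numeric segments index into lists,
    # everything else is treated as a dictionary key.
    for segment in path.split("."):
        if isinstance(data, list):
            data = data[int(segment)]
        else:
            data = data[segment]
    return data

doc = json.loads('{"foo": {"bar": [10, 20, 30]}}')
print(get_path(doc, "foo.bar.2"))  # 30
```

Supporting another format is then just a matter of swapping the parser in front of the same lookup.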

What you’re referencing is distillation. Anthropic even has an article on distillation “attacks” (as if they have some divine right to the data behind their models) that goes over it a bit.


recovery email which they did not hash
How do you recover an account on the other providers? Do you have to provide the same recovery email you set before during account recovery? If you hash the email, you have no way of reading it anymore, so someone has to provide it to you again.
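That flow (the user re-supplies the address, and the provider hashes it and compares) can be sketched like this; the salted PBKDF2 setup is an assumption for illustration, not any provider’s actual design:

```python
import hashlib
import hmac
import os

def hash_email(email: str, salt: bytes) -> bytes:
    # Normalize so "User@Example.com " and "user@example.com" hash the same.
    normalized = email.strip().lower().encode()
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 100_000)

# At signup: store only the salt and the hash, never the address itself.
salt = os.urandom(16)
stored_hash = hash_email("user@example.com", salt)

def verify_recovery(supplied_email: str) -> bool:
    # At recovery: the user re-supplies the address; we hash and compare.
    return hmac.compare_digest(stored_hash, hash_email(supplied_email, salt))

print(verify_recovery("User@Example.com "))   # True
print(verify_recovery("other@example.com"))   # False
```

The trade-off is exactly the one described above: the provider can verify the address but can never read it back, so it can’t email you proactively.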


I’ve had laptops where this is a BIOS setting. A simple switch on the side of the keyboard to flip which keys are on the fn layer solves the problem too. Same with key remapping, for keyboards that support it (and really, any nice keyboard should, or it’s not worth the cost).


Also, I’m sure you know this, but security through obscurity is a poor systems design choice in almost all scenarios.
The only time I can think of off the top of my head where obscurity aids security is when secret keys are kept obscure. That isn’t even what people mean by “security through obscurity”, though, so I’d actually beg someone to give an example where obscurity genuinely benefits security rather than just giving a false sense of it.
That’s not to say everything can or should be open source, of course, just that relying on your application being closed source for its security is a good way to open yourself up to attacks.


What does any of this have to do with the GPL or open source licenses? Military applications all have strict validation requirements that rule out the majority of open source anyway, and your first example doesn’t even explain how the software being open source would be dangerous at all. For that matter, neither does the military example. Encryption doesn’t work because the other party doesn’t know your algorithm lol, it works because the other party doesn’t know your secret keys.
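To illustrate that last point (Kerckhoffs’s principle), here’s a toy cipher, absolutely not real cryptography, where the algorithm is fully public and only the key is secret:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256 of key || counter. The whole
    # construction is public; all secrecy lives in `key`.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key round-trips.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

msg = b"attack at dawn"
ct = xor_cipher(b"correct horse", msg)
print(xor_cipher(b"correct horse", ct) == msg)  # True: right key recovers it
print(xor_cipher(b"wrong key", ct) == msg)      # False: knowing the algorithm isn't enough
```

Open sourcing this code reveals nothing an attacker can use; only leaking the key would.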


I would expect the jury to be nothing less than world-class experts on statistics, linear algebra, and calculus once the case is decided.


If you’re referring to GPL variants, that depends. You can absolutely use GPL software and libraries alongside closed source software. You just need to separate the GPL portions from the closed source portions with some sort of boundary, like running them as a separate service or turning them into a CLI tool. You’re just not allowed to create derivative works of GPL software that aren’t also GPL.
Also, there should be nothing dangerous about open sourcing code (unless you’re referring to financial risk to the business, I guess). Secrets should never live in code, and obscurity is never secure.


In good news, we got a summary of his 9950x3d2 review, which was basically that it’s a ripoff at $900. Unironically, if you’re somehow in the market for a CPU like that, consider either the 9950x3d for productivity and core count, or the 9800x3d for gaming. The 9950x3d2 brings nothing to the table for anyone outside of maybe some niche applications which need both core count and cache size and can afford the latency for data transfers between CCDs.
Or, I guess, don’t buy anything because all the companies suck and everything is unbelievably expensive. Who needs a computer anyway?


Silicon Valley owes a moral debt to the country that made its rise possible. The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation.
I agree with this. They have a duty to permanently rid the world of these sick fucks, especially for the sake of defending their country from them and people like them.


Here’s hoping John Apple does what it takes to make me actually consider an Apple device! Like, I don’t know, making user-friendly decisions without the EU getting on their ass about it first.


My phone’s keyboard lets me compose multiple dashes into an em dash. I believe I can also bind a compose key on my desktop, though I haven’t needed one there yet.


This wouldn’t be “illegal”, but if that’s the case Anna’s Archive should be “fine”… (I know that they are distributing, and this is the fight)
I don’t know much about European law, but redistribution changes things a lot here in the US. At least here, it then gets into copyright law, and you’d be reproducing copyrighted works without authorization (the Internet Archive attempted to get around this with books by getting legitimate copies of the books, digitizing them, then “lending” the digital copies of those books).
So if I prefer to download Anna’s dataset instead of scraping myself, would this be illegal?
No idea in Europe. In the US, it might be, depending on what the contents of the work are. I believe Anna’s Archive would count as piracy in this case, though scraping directly from Spotify might not be because they are redistributing the music with authorization from the copyright holder. It gets pretty confusing, honestly.
Regardless, if you aren’t doing things at large scale, even if you are breaking a law by downloading pirated content, it’s unlikely anyone will care. People usually only really start caring if you start redistributing stuff, so as long as you aren’t hosting what you’re scraping, you’re unlikely to run into any trouble.


There’s no obvious answer to your question without more information (for example, where are you located?), but I’m not aware of scraping itself being illegal anywhere, with some exceptions. For example, in the US (where I am), as long as you’re not doing “illegal hacking” to scrape your data, you’re probably fine.
There are also the terms of service that websites like to impose. If you have to agree to one to access any data, you should follow it. Breaking a TOS isn’t really “illegal” in a criminal sense (in the US), but you may expose yourself to anything from being blocked from the site to a lawsuit. Bypassing blocks might also be illegal, though you’d have to speak to a lawyer to know more about that.


It’s illegal
Sauce? Also, where?


They’re able to hit the ground running.
Photoshop used AI long before generative AI took off. Specialized models have existed for various domains for decades. This is unrelated to the current bubble.
Science used AI long before generative AI took off. Specialized models have existed for various domains for decades. This is unrelated to the current bubble.
News media is also dying. It’s saturated with low-quality clickbait, and most major news sites are barely worth a mention anymore. Not only are the writers losing their jobs, but the businesses themselves are being bought out by larger investment companies and turned into tabloid clickbait, propaganda tools, and listicles. I wouldn’t expect most of them to survive past the bubble, and even that has little to do with generative AI; it’s more of a cultural shift in how people receive and consume news.