What don’t you like about Cargo? Is there another package manager you like more?
The cool kids are forcing people to read this at gunpoint nowadays
Here’s an announcement of the Matrix 2.0 spec, which includes a switch to the new OIDC system:
https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/
That links to this Matrix spec proposal:
IMO the tl;dr is that they’re switching to a widespread standard that has a lot of development and security auditing behind it. It will take a lot less of their time to use that than to keep maintaining their own auth stack. OIDC lets you use Google/GitHub-style “Login with…” approaches, but it works with any service that supports OIDC, not just a few blessed ones.
There’s also this site with a “Why?” section: https://areweoidcyet.com/#why
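To make the “works with any provider” point concrete, here’s a minimal sketch of OIDC discovery (my own illustration, not from the Matrix docs; it assumes the reqwest blocking client with the json feature plus serde_json, and the issuer URL is hypothetical): every compliant provider publishes its endpoints at a well-known path, which is why a client can talk to any OIDC service rather than a few blessed ones.

// Hypothetical issuer URL; a real Matrix homeserver would advertise its own.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let issuer = "https://auth.example.org";

    // Every compliant provider serves its configuration at this path.
    let url = format!("{issuer}/.well-known/openid-configuration");
    let discovery: serde_json::Value = reqwest::blocking::get(url)?.json()?;

    // These endpoints drive the standard authorization-code flow
    // behind a "Login with..." button.
    println!("authorize at: {}", discovery["authorization_endpoint"]);
    println!("get tokens at: {}", discovery["token_endpoint"]);
    Ok(())
}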
Never, because it’s not. This is the future:
https://knowyourmeme.com/memes/cultures/fully-automated-luxury-gay-space-communism
Let’s get there as quickly as possible
You’re in good company. Steam even managed to do it for a whole bunch of people:
https://github.com/ValveSoftware/steam-for-linux/issues/3671
I was also curious, here’s a good answer:
https://unix.stackexchange.com/questions/670199/how-is-dev-null-implemented
The implementation is:
static ssize_t write_null(struct file *file, const char __user *buf,
                          size_t count, loff_t *ppos)
{
        /* Discard the data: just report that all `count` bytes were "written". */
        return count;
}
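As a quick illustration of what that means in practice, here’s a small Rust sketch (my own, not part of the thread): a write to /dev/null succeeds and reports every byte as written, even though nothing is stored anywhere.

use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Open the null device for writing.
    let mut devnull = OpenOptions::new().write(true).open("/dev/null")?;

    // Because write_null() above just returns `count`, this reports all
    // 13 bytes as "written" even though the data goes nowhere.
    let written = devnull.write(b"hello, void!\n")?;
    println!("wrote {written} bytes");
    Ok(())
}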
The whole “it’s just autocomplete” line is a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe through the medium of human-generated text, which is far more limited than human sensory input, but it has already allowed for some very surprising behavior.
We’re not seeing diminishing returns yet, and in fact we’re going to see some interesting stuff happen as we start hooking up sensors and cameras as direct input, instead of having these models build their world model indirectly through text alone. Let’s see what happens in 5 years or so before saying there are any diminishing returns.
Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research about “here’s a limitation that we found” and turn that into “LLMS BTFO IT’S SO OVER”.
The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found, for a number of reasons: the paper tested o1-mini and llama3-8B, which are much smaller models with much more limited capabilities (GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything).
Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into the “huh, interesting” territory for anybody that isn’t a researcher.
Gary Marcus is an AI crank and should be disregarded
What parent is likely referencing
TBH I wonder if the current Microsoft is capable of executing that here. I don’t believe in a “changed” MS, but Linux is eating the world, and MS doesn’t really care about Windows much anymore. Azure happily runs Linux VMs.
Get pissed at NVIDIA. They’re the problem.
C is definitely still king, but I wonder if crABI will eventually be able to dethrone it:
https://github.com/rust-lang/rust/issues/111423
If they can define a useful ABI that manages to include lifetimes, that might just be enough of an improvement to get people to switch over from assuming the C ABI everywhere.
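For context, here’s a rough sketch of the status quo crABI wants to improve on (my own illustration, not from the issue): crossing a language boundary today means flattening everything down to the C ABI, which erases Rust-level information such as lifetimes, slices, and Result types.

// Exporting a Rust function over the plain C ABI. The requirement that
// `data` stay valid for the duration of the call (a lifetime, in Rust
// terms) lives only in documentation; the ABI itself carries nothing but
// a raw pointer and a length.
#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    // Caller-promised invariants have to be trusted via `unsafe`.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
}

An ABI that could carry lifetime and richer type information directly, the way crABI aims to, would let callers keep those guarantees instead of dropping to this lowest common denominator.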