

This article from last year compares LLMs to techniques used by “psychics” (cold reading, etc).
https://softwarecrisis.dev/letters/llmentalist/
I think it’s a great analogy (and an interesting article).
“In October 2021, Governor Greg Abbott hosted the lobbying group Texas Blockchain Council at the governor’s mansion. The group insisted that their industry would help the state’s overtaxed energy grid; that during energy crises, miners would be one of the few energy customers able to shut off upon request, provided that they were paid in exchange.”
Incredible. Driving up energy demand to mint their fake currency will help the state’s energy grid, because we can then hold the grid hostage until we’re paid.
I see two possible reasons for your situation. One is that the company is turning to contractors to fill in gaps in their knowledge/experience, which is why everyone else has no clue how to tackle these tasks and why they get assigned the easy ones.
The other possibility is that the senior devs are gaming the metrics, letting the employees knock out easy tasks while the contractor is stuck with untangling the knots of the more intractable tasks.
Look into installing AppArmor instead of SELinux. AppArmor is easier to configure, and SELinux is not officially supported on Arch.
There’s no downside to writing the guards afaik, but I’m more of a C programmer. It’s been a while since I did much C++, so I’m not up on modern conventions. But dealing with legacy code that follows older conventions often comes with the territory in C and C++, so it’s something to keep in mind.
You can generally rely on a header file doing its own check to prevent being included twice (there’s a sketch of the pattern below). If a header doesn’t do that, it’s either wrong or doing something fucky. It is merely a convention, but it’s so widespread that you really don’t need to worry about it.
You are mixing up some terms, so I want to help clarify. When you #include a header file, you aren’t importing a library. You are telling the preprocessor to paste the contents of that header file into your source where the #include line is, before compilation proper begins. A library is something different. It is an already-compiled binary file. A library should also come with a header file to tell you what functions and classes are present in the library, but that header isn’t itself the library.
It may seem annoying to have to repeat yourself between headers and source, but it’s honestly something you get used to.
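To make that concrete, here’s a minimal sketch of the guard pattern and the header/source split. All the names here (mathutils.h, add, etc.) are made up for illustration:

/* mathutils.h -- hypothetical header; the guard stops double inclusion */
#ifndef MATHUTILS_H
#define MATHUTILS_H

int add(int a, int b);  /* declaration only: the header describes the
                           function, it doesn't contain its code */

#endif /* MATHUTILS_H */

/* mathutils.c -- the actual definition, compiled separately
   (this is the part that could instead ship pre-compiled as a library) */
#include "mathutils.h"

int add(int a, int b) { return a + b; }

/* main.c -- including the header twice is harmless thanks to the guard */
#include "mathutils.h"
#include "mathutils.h"
#include <stdio.h>

int main(void) {
    printf("%d\n", add(2, 3));  /* prints 5 */
    return 0;
}

Compile with cc main.c mathutils.c and it builds fine: the second #include expands to nothing, because MATHUTILS_H is already defined by the time the preprocessor sees it again.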
If you think he did something illegal, report him to the police or sue him. If not, then this is freedom of speech.
…and? People also have freedom of association, and people can choose not to associate with an organization that employs someone with morally awful beliefs - especially when they make those beliefs very public.
Not really, and no. This shouldn’t affect your already-running system. This change means that the ISO will offer Plasma by default and will run Plasma in the live environment.
And I wouldn’t say it’s particularly hard to switch from any desktop environment to another. It takes some relearning of where stuff is, keyboard shortcuts, etc, but any desktop environment can run any Linux program, provided the necessary libraries are installed (which your package manager takes care of). You can install KDE programs on your Xfce desktop, and they will run fine (and vice versa). They’ll just pull in a bunch of KDE libraries when you install.
What? Linux does use git for version control.
Reminds me of
Plus, jokingly using fash shit tends to attract people who aren’t really joking but want plausible deniability.
To be clear, dmesg -w should be run before you do anything to cause the crash. It will continuously print kernel output until you press Ctrl+C or the kernel crashes.
In my experience, a crashing kernel will usually print something before going unresponsive, but it dies before it can flush the log to disk.
If you have another PC, ssh from it to the problem machine and run sudo dmesg -w. That should show kernel messages as they are generated and won’t rely on them being written to disk.
I’ve got a 6600XT and have had zero issues with Ubuntu and Fedora.
Well, it depends on your perspective. Copyleft licenses restrict downstream developers in order to protect the rights of downstream users.
Account passwords have never had the purpose of protecting data from physical access - on Linux or any other operating system that I’m aware of. Physical access means an attacker can pull your drive and plug it into their computer, and no operating system can do anything to block access in that scenario, because the OS on the disk is not running.
You need disk encryption to protect your data. The trade-off is that if you forget the encryption password, your data is unrecoverable by you. But that’s what password managers are for (or just writing it down and putting it in a safe).
Depending on what games you played, Mac was a decent alternative for gaming. Blizzard treated Mac as a first-class platform for many years, indie games using multi-platform engines often targeted it, and porting studios like Aspyr would bring over a few big titles here and there.
Linux was in a similar boat before Proton really opened things up, but with even less support than Mac from game devs.
Yeah, dropping 32-bit support made me start considering leaving the platform, despite being a happy Mac gamer for over a decade. The switch to ARM finally made me move back to PC. I expect Apple will drop their x86 compatibility layer after a few years, like they did after the PPC-to-x86 transition.
Steam and Lutris have made Linux a great gaming platform for me.
I’ve seen the comparison to pair programming with a junior programmer before, and it’s wild to me that such a comparison would be a point in favor of using AI for improving productivity.
I have never experienced a productivity boost by pairing with a junior. Which isn’t to say it’s not worth doing, but the productivity gains go entirely to the junior. The benefits I receive are mainly improving my communication and mentoring skills in the short term, and improving the team’s productivity in the long term by boosting the junior’s knowledge.
And it’s not like the AI works on the mundane stuff in parallel while I work on the more interesting, higher level stuff. I have to hold its hand through the process.
I feel like the efficiency gains of AI programming are almost entirely in improving your speed at wrestling a chatbot into producing something useful. That may not be entirely useless going forward - knowing how to search well is an important skill, and this may become something similar - but it just doesn’t seem worth the hassle to me.