That’s certainly one possibility. But another possibility is that the people who praise LLMs are not very good at judging whether the code they generate is of good quality or not…


Agreed. To make it a bit more general, whenever I see people claiming to be able to predict the future with absolute certainty and confidence, that to me is just a sign they are idiots and shouldn’t be listened to. I’ve definitely seen a lot of those at past companies I’ve worked for. A lot of the time, they’re trying to gaslight people into believing in their version of the future so they can sell us garbage (products, stock price, etc.). They’ll always get some fools to believe them of course.


The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code.
Not surprising at all. When you write code, you’re actually thinking about it. And that’s valuable context when you’re debugging. When you just blindly follow snippets you got from some random other place, you’re not thinking about it and you don’t have that context.
So it’s easy to see how this could lead to a net productivity loss. Spend more time writing it yourself and less time debugging, or let something else write it for you quickly but spend a lot of time debugging. And on top of it all, edge cases can go unconsidered and valuable design-requirement context can get lost too.


I’m a slow adopter of new technologies like AI LLMs. My reasoning is that if it turns out to actually be a good product, then it will eventually prove itself, and the early adopters can be the “beta testers” so to speak. But if it turns out to be a bad product, then I won’t have wasted my time on something that isn’t worthwhile.
Maybe a day will come when I start using these tools, but they clearly just aren’t all that useful in their current form. In all honesty, I’m pretty sure they will never be useful enough for me to consider them worth learning, and they certainly aren’t today.


Cupertino has complied anyway, and said it introduced “Notarization for iOS apps, an authorization process for app marketplaces, and requirements that help protect children from inappropriate content and scams.”
Notarization requirements mean that they still maintain total control over the operating system and what software it can run. These kinds of onerous requirements keep the bar artificially high for competitors and are only possible because they are still enforcing their monopolistic control over the platform.
So no, they’re not complying at all actually. They’re just doing the same thing in a different way.


The last Windows OS I used was XP, around 2004-ish. Even back then, it was obvious to me that, because it was closed source, they could one day start acting against my interests, and there was nothing I could do to stop it. I saw open source as an insurance policy - it prevents vendors from acting maliciously against their users. In that very quaint, old time, nobody believed that MS would ever do something like that, but it didn’t matter - the fact was that they could, so inevitably, they would.
I’m quite proud of how prescient I was when I look at what they’re doing today. No evil is too great for a greedy businessman.
Anyway, I decided to just be brave and create a partition on my main drive and install Ubuntu on it. All I needed to get my work done was OpenOffice, LaTeX, a browser, a compiler, Python… Everything worked better in Linux than Windows so even though I was dual-booting, I practically never used Windows again after a couple weeks. Later on, I switched to Debian, and the next laptop that I bought, I just wiped the hard disk and used Linux for the whole thing. I kept the recovery partition because I was paranoid but obviously never needed it.
Today, there’s no doubt in my mind that Linux is the best OS. Sure, Macs have better batteries, but if I’m doing productive work, then I don’t really need more than an hour away from my charger. I could maybe agree that the BSDs are better, but I’ve never tried them.


It’s hard to beat the last one, but he somehow managed to pull it off.
Then again, Mitchell Baker is still on the board of directors if I’m not mistaken, so it sounds like the rot is too pervasive for just one CEO to change.


I was having issues with Librewolf on a work computer a few weeks ago, so I decided to try Firefox to see if it was LW’s security settings.
Holy shit, what a fucking trainwreck Firefox has become! It’s so bad that I can’t honestly recommend anyone use it anymore. The first time I started it, I saw all kinds of ads and trashy “news” articles that had no relevance to me whatsoever. Plus I had to reinstall all my extensions because they weren’t signed and there’s no way to disable that requirement. I was so horrified and offended, I just dumped it immediately and tried Chrome instead. What difference is there at this point?
It’s just insulting at this point. I understand that they’re trying to find new revenue sources, and things are still better today than they were with Mitchell Baker as CEO, but it’s still horrific how poorly Mozilla is being run. I’m so grateful we still have usable forks from the amazing people running projects like Librewolf. Without them, the web would just be flat out unusable.


I bet he takes a bath in a swimsuit


This has been very obvious to a lot of people since mobile devices were originally invented. The notion that you are sold a product that you “own” but that is still 100% controlled by the vendor - anyone who thought about it for more than a second knew that it would eventually come to this. Of course, nobody gave it even that tiny amount of thought. Or they were too naïve to think that a corporation could ever be evil.
I miss the times when spyware was considered uncool. Mobile devices are undoubtedly the worst invention of the information age. (And social media is probably the second worst.)


I wrote a program that scanned object files (compiled from a large C++ project) to see how they were interdependent. It was pretty useful for detecting cycles in the shared libraries that we were compiling from them, but the biggest benefit was it enabled me to very easily rewrite the build system from scratch.
It was surprisingly simple - most ELF parsers can read a file and dump the symbol tables in it. (In this context, a symbol means a defined function, so if a C/C++ source file has int main() in it, the corresponding .o file will have a main symbol in it.) They also report which symbols are defined in the .o file, as well as which symbols it references but leaves undefined. This lets you build a dependency graph, which you can easily visualize using graphviz or use to autogenerate build files for CMake or any other build system you may wish to use.
In my case, I wrote this kind of program twice in two separate jobs. Both of them had a very janky build system using custom Makefiles. I used this program to rewrite the build systems in CMake. The graphviz dependency graphs are also just generally helpful to have as project documentation. CMake can do this natively, by the way - here’s the documentation for it: https://cmake.org/cmake/help/latest/manual/cmake.1.html#cmdoption-cmake-graphviz
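For anyone curious what that looks like in practice, here’s a minimal sketch of the idea (my own rough reconstruction, not the original tool): it shells out to nm instead of using a proper ELF parsing library like pyelftools, and the OBJ_DIR path is just a placeholder.

```python
#!/usr/bin/env python3
"""Sketch: list defined/undefined symbols per object file and emit a
graphviz DOT dependency graph."""
import subprocess
from pathlib import Path

OBJ_DIR = Path("build")  # hypothetical directory containing the .o files

def symbols(obj: Path) -> tuple[set, set]:
    """Return (defined, undefined) global symbol names for one object file."""
    defined, undefined = set(), set()
    out = subprocess.run(["nm", str(obj)], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        sym_type, name = parts[-2], parts[-1]
        if sym_type == "U":
            undefined.add(name)
        elif sym_type.isupper():  # T, D, B, W, ... = globally defined
            defined.add(name)
    return defined, undefined

def main() -> None:
    defined_in = {}  # symbol name -> object file that defines it
    needs = {}       # object file -> undefined symbols it references
    for obj in OBJ_DIR.rglob("*.o"):
        d, u = symbols(obj)
        for sym in d:
            defined_in[sym] = obj
        needs[obj] = u

    # An edge A -> B means A references a symbol that B defines.
    print("digraph deps {")
    for obj, undef in needs.items():
        deps = {defined_in[s] for s in undef if s in defined_in}
        for dep in deps - {obj}:
            print(f'  "{obj.name}" -> "{dep.name}";')
    print("}")

if __name__ == "__main__":
    main()
```

Pipe the output through dot -Tsvg to get the picture, or walk the same graph to emit targets for whatever build system you prefer.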
I don’t even know where to start to make vim or neovim do all that. If it can’t do that seamlessly and just as well, vimlike editors will never be a replacement for a proper IDE. For me, it’s a fast, capable editor for single files and small-scope work.
If you’re interested in learning how to do it, I found this guide extremely helpful for getting started. It’s in both blog and video format, and it shows how to install Lazy (a plugin manager for Neovim) and which plugins to install to get LSP working (which is what would provide all the hotkeys that you were mentioning above).
It’s definitely not a task for the faint of heart, but I found it very rewarding once I figured out how to work with the plugin systems because it’s so powerful and easy to customize. I found it helpful to just watch the video a few times to see everything working, then slowly started building up my own configuration (which was a bit more minimal than the linked guide I provided - I only installed about 30-40% of the plugins he listed on that page).
Another alternative is LazyVim, which gives you a preconfigured setup out of the box. It installs a lot of plugins, and most things should work with very little configuration. It’s a massive beast, but still pretty good for a first start.


The funny thing is, before Google existed, people had no idea whether their marketing attempts were working. Maybe they had some ways of knowing or guessing, but there was no way to know how accurate their metrics were. Internet-based advertising, and tracking-based advertising in particular, was supposed to change that.
And now that we sit here with a duopoly of advertising giants, we’re back to the stage where marketers just have to trust that their provider is giving them good, helpful information. And how are they supposed to know whether they can really believe it or not? They can’t, of course! So we’ve come right back to where we started.
But considering they still spent tons of money before Google and Facebook gave them these “analytics”, it looks like they probably don’t even care that much.


Another thing: he confirms something I was worried about in his comments on parallelism / Python without the Global Interpreter Lock (aka GIL): some developments in the language serve the big companies rather than the community and open-source projects. For example, lock-less multi-threading in Python mostly serves the largest companies while having little value for small projects.
Absolutely agree. The significance of the GIL is heavily overstated in my opinion. There’s a narrow set of use-cases where it matters, i.e. if you must use threads and something like multiprocessing or a message queue (e.g. Celery) doesn’t do what you need. These are pretty rare circumstances, in my experience at least.
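To illustrate the point (my own toy example, not from either comment above): for CPU-bound work, multiprocessing already side-steps the GIL today, because each worker is a separate process with its own interpreter.

```python
from multiprocessing import Pool

def cpu_bound(n: int) -> int:
    # Deliberately CPU-heavy dummy work, purely for illustration.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per CPU core
        results = pool.map(cpu_bound, [10**6] * 8)
    print(results[:2])
```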


One principle I try to apply (when possible) comes from when I learned Haskell. Try to keep the low-level logical computations of your program as pure, stateless functions. If their inputs are the same, they should always yield the same result. Then pass the results up to the higher level and perform your stateful transformations there.
An example would be: do I/O at the high level (file, network, database I/O), and only do very simple data transformations at that level (or avoid them altogether if possible). Then do the majority of the computational logic in lower-level, modular components that have no external side effects. Also, pass all the data around using read-only records (for example, Python dataclasses with frozen=True) so you know that nothing is being mutated between these modules.
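A tiny sketch of what that shape looks like (the names and the orders.json file are made up for illustration):

```python
from dataclasses import dataclass
import json

@dataclass(frozen=True)
class Order:
    price: float
    quantity: int

def total_revenue(orders: tuple[Order, ...]) -> float:
    """Pure core: same inputs always give the same result, no side effects."""
    return sum(o.price * o.quantity for o in orders)

def main(path: str) -> None:
    # Stateful shell: all I/O happens here; the data is frozen before it
    # ever reaches the pure core.
    with open(path) as f:
        raw = json.load(f)
    orders = tuple(Order(price=r["price"], quantity=r["quantity"]) for r in raw)
    print(total_revenue(orders))

if __name__ == "__main__":
    main("orders.json")  # hypothetical input file
```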
This boundary generally makes it easier to test the computational logic separately from the stateful logic. It doesn’t work all the time, but programs become much easier to understand when you can structure them this way.


Interesting, I had never heard of ccache before, though yes, all good build systems (CMake, Ninja, etc.) should cache intermediate object files.
But the projects I was working on were so large that even the binaries and unit-test executables would take ~20 seconds to link. Unfortunately, you can’t use caching to alleviate that build-time cost.


I think it’s just because it is always recommended as an “easy” language that’s good for beginners.
The only other thing it has going for it is that it has a REPL (and even that was shit until very recently), which I think is why it became popular for research.
If that’s the case, then why didn’t Javascript take its place instead? It’s arguably even better than Python in both of those areas…


Agreed. I have seen a lot of Python code that was really painful to massage back into a more structured object hierarchy. Java certainly does a bit better in that respect, and as a language it does a much better job of encouraging good practices, but I think it’s also largely down to the kinds of people who use those languages.


Sure, but as with all things, it can depend on a lot of factors. All code needs some degree of testing, though one could certainly argue that Python needs more than Java and Java needs more than Rust/Haskell/etc. So you could argue that the productivity gain of using Python is offset by the productivity loss of extra testing. It’s still hard to say which one wins out in the end.


It’s called “tivoization”, and it started with a device called the TiVo, which was the first of its kind to attempt this.
There are probably lots of hardware devices in your house that use GPL software but prevent you from actually modifying it because the hardware will refuse to run modified copies. If a piece of software is licensed GPLv3, it would violate the license terms to do something like this.