In recent git versions (>2.23), git restore and git restore --staged are the preferred ways to discard changes in the working tree (previously git checkout -- .) and to unstage changes (previously git reset --), respectively.
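For example (the file path below is just a placeholder):

    git restore src/main.c           # discard unstaged edits to that file
    git restore --staged src/main.c  # unstage it, keeping the edits in the working tree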
My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
——On the cruelty of really teaching computing science - E. W. Dijkstra
If you are looking to learn CS in a more holistic manner, there's "Path to a free self-taught education in Computer Science!". It's a list of courses, categorized by topic, covering exactly what a CS undergraduate would learn. It might feel daunting at first, but you can pick any topic that interests you and dive in.
I especially recommend CS50P for beginners.
One problem with exceptions is composability.
You have to rely on good and up-to-date documentation or you have to dig into the source code to figure out what exceptions are possible. For a lot of third party dependencies (which constitute a huge part of modern software), both can be missing.
Error types (e.g. Rust's Result) are a mitigation, but you are still free to panic in Rust if you think the error is unrecoverable.
A third option is to have effect types, as in Koka, so that all possible exceptions (or effects) are checked at the type level. A similar approach can be observed in practical (read: non-academic) languages like Zig. It remains to be seen whether this style will be adopted by the mainstream.
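A minimal Rust sketch of the error-type approach (the ConfigError type, read_port function, and port.txt file are all made up for illustration): the possible failures are visible in the function signature instead of hiding in docs or source, and the caller decides whether to recover or to treat them as unrecoverable.

    use std::fs;
    use std::num::ParseIntError;

    // Every failure mode the function can produce is spelled out here,
    // instead of being discoverable only through docs or source diving.
    #[derive(Debug)]
    enum ConfigError {
        Io(std::io::Error),
        Parse(ParseIntError),
    }

    fn read_port(path: &str) -> Result<u16, ConfigError> {
        let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
        text.trim().parse::<u16>().map_err(ConfigError::Parse)
    }

    fn main() {
        match read_port("port.txt") {
            // The caller recovers here; calling .expect(...) instead would be
            // the "this is unrecoverable, just panic" choice.
            Ok(port) => println!("listening on {port}"),
            Err(e) => eprintln!("bad config: {e:?}"),
        }
    }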
Bingo!
The paper only says it’s a collaboration. It’s pretty large scale, so the opportunity might be rare. There’s a chance that (the same or other) researchers will follow up and experiment in more schools.
The interviews revealed that data scientists sometimes get distracted by the latest developments in AI and implement them in their projects without looking at the value they will deliver.
At least part of this is due to resume-oriented development.
Sorry, I wasn't being clear. AC is used for transmission within and around densely populated areas, e.g. the British National Grid. If we are talking about really long distances (hundreds of kilometres or more), HVDC is indeed preferred.
I was talking about a trend of some factories replacing AC from the power grid (possibly generated in nearby cities) with DC from solar panels on their own rooftops. Grid power travels a long distance compared to that.
Power grids imply long-distance power transmission, where AC has an advantage. If the point of consumption is near the point of PV generation, DC can be used, and already is.
I know of factories that put solar panels on their rooftops to cut power bills; instead of converting the output to high-voltage AC, they use a custom-built DC power system.
For true immutability, burn something like Tails onto a read-only CD.
From my understanding, MV3 kills vital features of ad-blockers in that
The Wikimedia Foundation (the org behind Wikipedia and similar projects) does get more in donations than its operational costs, but that's expected. The idea is that they'll invest the extra funds[1] and some day the returns alone will be able to sustain Wikipedia forever.
That said, some have criticized that the actual situation is not clearly conveyed in their donation appeals, which give people the impression that Wikipedia is going under if you don't donate.
Others have criticized that feature development is slow compared to the funding, or that too small a portion of it is allocated to feature development. See how many years it took to get dark mode! I don't know how the budget is decided or what their target is, so I can't really comment on this.
They publish their annual financial audits[2], and you can have a read if you're interested. There are some interesting things in there. For example, in 2022-2023, processing donations actually cost twice as much as internet hosting, which one would expect to be the major expense.
The link to the study is just a “Paid Search Ad” page. Ouch for the professionalism of Forbes.
They tried to build this abomination in London and it got shot down.
Maybe I'm missing something, but shouldn't the benchmark be a good approximation of the real workload? I don't see how the measurements reflect the performance difference in real-life usage.
Why would I need 100MiB/s processing as opposed to 20MiB/s processing, when I can only read maybe several lines per second?
Let me simplify it: proceeds to print the same expression
Deprecation warnings should contain suggestions for alternatives.
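As a sketch in Rust (the Client type and its methods are hypothetical), the #[deprecated] attribute lets the warning itself carry the suggested replacement:

    // Hypothetical library type, only to show the shape of a helpful warning.
    pub struct Client;

    impl Client {
        #[deprecated(since = "2.0.0", note = "use `Client::send` instead")]
        pub fn post(&self) {
            self.send()
        }

        pub fn send(&self) {}
    }

    fn main() {
        let c = Client;
        // rustc warns: use of deprecated method `Client::post`: use `Client::send` instead
        c.post();
    }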
// TODO: Leave the code cleaner than you found it