• 0 Posts
  • 363 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Nobody should feel a strong need to upgrade after only two generations. Same deal with most tech like GPUs and CPUs.

    I use my phone a lot and my Pixel 7 is fine. The primary factors driving my last couple upgrades were battery degradation and software support. Neither should be a big problem with a Fairphone.

    I’m also trying to decide whether to stick with the Pixel/GrapheneOS ecosystem or go for Fairphone.

    How hard/expensive was it to replace your battery? I looked on iFixit and it seemed a lot harder than on my previous phones.





  • There are two potential show-stoppers.

    1. Field-specific apps that only run on Windows. If you really need Adobe Creative Cloud or SolidWorks or something like that, you might be out of luck. This is especially true for apps that require GPU acceleration, which is difficult to rig up in a VM; you wouldn’t want to do that if it were a big part of your workload.

    2. Mandatory spyware and rootkit DRM to prevent cheating on remote tests. Hopefully, if they require such a thing, they provide loaner hardware too. I’ve seen a lot of bullshit in my time, but my experience is outdated, so I don’t know what’s common nowadays.





  • It ranges from “automatic” to “infuriating”.

    If you have Secure Boot enabled, there are some hoops to jump through. Read your distro’s docs and follow the steps for signing DKMS-built modules.
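    As a rough illustration of those hoops, here’s an Ubuntu-flavored sketch; the key path and the enrollment flow vary by distro, so treat the specifics as assumptions and check your own distro’s Secure Boot documentation:

    ```shell
    # Check whether Secure Boot is actually enforcing:
    mokutil --sb-state

    # Enroll the Machine Owner Key used to sign DKMS-built modules.
    # This path is where Ubuntu generates it; other distros differ.
    sudo mokutil --import /var/lib/shim-signed/mok/MOK.der

    # Reboot; the MOK manager screen asks you to confirm the enrollment,
    # after which signed nvidia kernel modules will load.
    ```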

    Depending on your distro and your requirements, you might want to install the drivers manually from Nvidia rather than using older drivers from your distro.

    If you need CUDA, god help you. Choose a distro that makes this easy and use containers to avoid dependency hell. Note that this is not any easier on Windows (at least not last I checked, which was a few years ago).
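    For the container route, a minimal sketch, assuming Docker and an already-working proprietary driver on the host (package names here are the Ubuntu-style ones):

    ```shell
    # Install NVIDIA's container runtime hook and point Docker at it:
    sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker

    # Smoke test: if nvidia-smi sees your GPU from inside the container,
    # CUDA images (e.g. nvidia/cuda:*-devel) will work without installing
    # the CUDA toolkit on the host at all.
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
    ```

    The point of the container is that the CUDA toolkit version is pinned inside the image, so it can’t drift out of sync with your distro’s packages.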





  • SEO (search engine optimization) has dominated search results for almost as long as search engines have existed. The entire field of SEO is about gaming the system at the expense of users, and often also at the expense of search platforms.

    The audience for an author’s gripping life story in every goddamn recipe was never humans, either. That was just for Google’s algorithm.

    Slop is not new. It’s just more automated now. There are two new problems for users, though:

    1. Google no longer gives a shit. They used to play the cat-and-mouse game, and while their victories were never long-lasting, at least their defeats were not permanent. (Remember ExpertsExchange? It took years before Google brought down the hammer on that. More recently, think of how many results you’ve seen from Pinterest, Forbes, or Medium, and think of how few of those deserved even a second of your time.)
    2. Companies that still do give a shit face a much more rapid exploitation cycle. The cats are still plain ol’ cats, but the mice are now Borg.

  • “Well I’m sorry, but most PDF distillers since the 90s have come with OCR software that can extract text from the images and store it in a way that preserves the layout AND the meaning.”

    The accuracy rate of even the best OCR software is far, far too low for a wide array of potential use cases.

    Let’s say I have an archive of a few thousand scientific papers. These are neatly formatted digital documents, not even scanned images (though scanned images would be within the scope of this task and should not be ignored). Even for those, there’s nothing out there that can produce reliably accurate results. Everything requires painstaking validation and correction if you really care about accuracy.
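    To make “validation” concrete, here’s a hypothetical sketch of the kind of spot-check you end up writing: compare OCR output against a hand-corrected ground-truth sample using character error rate (edit distance over reference length). The function names are my own, not from any particular tool:

    ```python
    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character edits turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def char_error_rate(ocr_text: str, ground_truth: str) -> float:
        """CER: edit distance normalized by the reference length."""
        return levenshtein(ocr_text, ground_truth) / max(len(ground_truth), 1)

    # Two wrong characters in a 19-character line is already a ~10% CER:
    print(round(char_error_rate("Tne quick brown f0x", "The quick brown fox"), 3))  # prints 0.105
    ```

    A CER that looks low in aggregate can still mean every equation, DOI, and table in the archive is garbled, which is why the manual correction pass never goes away.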

    Even arXiv can’t do a perfect job of this. They launched their “beta” HTML converter a couple of years ago, and improving its accuracy and reliability is an ongoing challenge. And that’s with the help of LaTeX source material! It would naturally be much, much harder if they had to rely solely on the PDFs generated from that LaTeX. See: https://info.arxiv.org/about/accessible_HTML.html

    As for solving this problem with “AI”…uh…well, it’s not like “OCR” and “AI” are mutually exclusive terms. OCR tools have been using neural networks for a very long time already; it just wasn’t a buzzword back then, so nobody called it “AI”. In the current landscape of “AI” in 2025, though, “accuracy” is usually just a happy accident. It doesn’t need to be that way, and I’m sure the folks behind commercial and open-source OCR tools are hard at work implementing the new technology in a way that Doesn’t Suck.

    I’ve played around with various vision-language (VL) models, and they still seem to be in the “proof of concept” phase.