  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
    
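    For reference, output like the above can come from docker stats; the exact format string below is my guess at how it was produced, not necessarily the command that was used:

    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"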

    Note that any CPU usage introduced by FlareSolverr is unavoidable because that’s how CloudFlare protection works. CloudFlare creates a workload in the client browser that should be trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDOSing or scraping. You need to execute that browser-based work somewhere to get past those CloudFlare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag. Any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container the address for the proxy is localhost, e.g. “http://localhost:8191”.

    If you run Flaresolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this, if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed when the tunnel is up) then this setup would allow requests to these indexers to escape the VPN tunnel.
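
    To sketch that remote setup (the image name and port come from FlareSolverr’s own docs; everything else is illustrative), you’d run something like this on the other system:

    # Host FlareSolverr on the other system, listening on port 8191
    docker run -d \
      --name=flaresolverr \
      -p 8191:8191 \
      --restart unless-stopped \
      ghcr.io/flaresolverr/flaresolverr:latest

    You can then sanity-check it from the rpi4b with FlareSolverr’s HTTP API before pointing Prowlarr at it:

    curl -X POST http://<other_system_IP>:8191/v1 \
      -H 'Content-Type: application/json' \
      -d '{"cmd": "request.get", "url": "https://example.com", "maxTimeout": 60000}'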

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server, but drop any other traffic that wasn’t using the VPN tunnel. All the firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch to Docker all in one go; you can run a hybrid setup if need be.
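
    As a rough sketch of the gluetun pattern (the provider, credentials, and qBittorrent container are placeholder choices for illustration, not details from my actual setup):

    # gluetun owns the VPN tunnel; NET_ADMIN is needed to create it
    docker run -d --name=gluetun --cap-add=NET_ADMIN \
      -e VPN_SERVICE_PROVIDER=<your_provider> \
      -e VPN_TYPE=wireguard \
      -e WIREGUARD_PRIVATE_KEY=<your_key> \
      qmcgaw/gluetun

    # A container started in gluetun's network namespace can only reach
    # the outside world through the tunnel (or not at all, if it's down)
    docker run -d --name=qbittorrent \
      --network=container:gluetun \
      lscr.io/linuxserver/qbittorrent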

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going offroad if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. Exact steps are documented in this GitHub comment. I haven’t tested these steps, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the CloudFlare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.


  • It’s likely CentOS 7.9, which was released in Nov. 2020 and shipped with kernel version 3.10.0-1160. It’s not completely ridiculous for a one-year-old POS system to have a four-year-old OS. Design for those systems probably started a few years ago, when CentOS 7.9 was relatively recent. For an embedded system the bias would have been toward an established and mature OS, and CentOS 8.x was likely considered “too new” at the time they were speccing these systems. Remotely upgrading between major releases would not be advisable for an embedded system. The RHEL/CentOS in-place upgrade story is… not great. There was zero support for in-place upgrades until RHEL/CentOS 7, and it’s still considered “at your own risk” (source).


  • Anything that pushes the CPUs significantly can cause instability in affected parts. I think there are at least two separate issues Intel is facing:

    • Voltage irregularities causing instability. These could potentially be fixed by the microcode update Intel will be shipping in mid-August.
    • Oxidation of CPU vias. This issue cannot be fixed by any update, any affected part has corrosion inside the CPU die and only replacement would resolve the issue.

    Intel’s messaging around this problem has been very slanted towards saying as little as possible about the oxidation issue. Their initial Intel community post was very carefully worded to make it sound like voltage irregularity was the root cause, but a careful reading shows it can be interpreted as saying only that the voltage issue is a root cause, not the root cause. They buried the admission that there is an oxidation issue in a Reddit comment, of all things. All they’ve said about oxidation is that the issue was resolved at the chip fab some time in 2023, and they’ve claimed it only affected 13th gen parts. There’s no word on which part numbers, date ranges, processor code ranges, etc. are affected. It seems pretty clear that they wanted the press talking about the microcode update and not the chips that will have to be RMA’d.


  • CountVon@sh.itjust.works to Programmer Humor@lemmy.ml: “Punch cards ftw”
    One of my grandfathers worked for a telephone company before he passed. That man was an absolute pack rat; he wouldn’t throw anything away. So naturally he had boxes and boxes of punch cards in his basement. I guess they were being thrown out when his employer upgraded to machines that didn’t need punch cards, so he snagged them to use as note paper. I will say, they were great for taking notes. Nice sturdy card stock, and the perfect dimensions for making a shopping list or the like.


  • I’m sure there would be a way to do this with Debian, but I have to confess I don’t know it. I have successfully done this in the past with Clover Bootloader. You have to enable an NVMe driver, but once that’s done you should see an option to boot from your NVMe device. After you’ve booted from it once, Clover should remember and boot from that device automatically going forward. I used this method for years in a home theatre PC with an old motherboard and an NVMe drive on a PCIe adapter.
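
    If memory serves, “enabling” the driver just means copying Clover’s optional NVMe driver into the active drivers directory on the EFI partition. A sketch from memory; the exact paths vary by Clover version, so treat them as assumptions:

    # Run from the mounted EFI system partition; "off" holds Clover's
    # optional (disabled) drivers, "UEFI" holds the active ones
    cp EFI/CLOVER/drivers/off/NvmExpressDxe.efi EFI/CLOVER/drivers/UEFI/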


  • People here seem partial to Jellyfin

    I recently switched to Jellyfin and I’ve been pretty impressed with it. Previously I was using some DLNA server software (not Plex) with my TV’s built-in DLNA client. That worked well for several years but I started having problems with new media items not appearing on the TV, so I decided to try some alternatives. Jellyfin was the first one I tried, and it’s working so well that I haven’t felt compelled to search any further.

    the internet seems to feel it doesn’t work smoothly with xbox (buggy app/integration).

    Why not try it and see how it works for you? Jellyfin is free and open source, so all it would cost you is a little time.

    I have a TCL tv with (with google smart TV software)

    Can you install apps from Google Play on this TV? If so, there’s a Jellyfin app for Google TVs. I can’t say how well the Google TV Jellyfin app works as I have an LG TV myself, so currently I’m using the Jellyfin LG TV app.

    If you can’t install apps on that TV, does it have a DLNA client built in? Many TVs do, and that’s how I streamed media to my TV for years. On my LG TV the DLNA server shows up as another source when I press the button to bring up the list of inputs. The custom app is definitely a lot more feature-rich, but a DLNA client can be quite functional and Jellyfin can be configured to work as a DLNA server.


  • I briefly experimented with it ages ago. And I mean ages ago, like 20+ years ago. Maybe it’s changed somewhat since then, but my understanding is that Gentoo doesn’t provide binary packages. Everything gets compiled from source using exactly the options you want and compiled exactly for your hardware. That’s great and all but it has two big downsides:

    • Most users don’t need or even want to specify every compile option. The list of compile options to wade through for some packages (e.g. the kernel) is incredibly long, and many won’t be applicable to your particular setup (see the make.conf sketch after this list).
    • The benefits of compiling specifically for your system are likely questionable, and the amount of time it takes to compile can be long depending on your hardware. Bear in mind I was compiling on a Pentium 2 at the time, so this may be a lot less relevant to modern systems. I think it took me something like 12 hours to do the first-time compile when I installed Gentoo, and then some mistake I made in the configuration made me want to reinstall and I just wasn’t willing to sit through that again.
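
    For a taste of what that configuration looks like, Gentoo’s global compile options live in /etc/portage/make.conf; a minimal sketch (the flag choices here are illustrative assumptions, not recommendations):

    # /etc/portage/make.conf
    COMMON_FLAGS="-march=native -O2 -pipe"  # optimize for this exact CPU
    USE="alsa -gnome -kde"                  # enable/disable optional features globally
    MAKEOPTS="-j4"                          # parallel build jobs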


  • Remember there are actual people who are making these decisions.

    Sure, but what I want to know is why they feel comfortable making immoral decisions. Are they all psychopaths? Psychopathy is known to be more common in the C-suite; by some estimates 3.5% of executives are psychopaths. Businesses reward those who deliver good business outcomes, and psychopaths might tend to do better at that with no pesky moral compass to get in the way. But the rest are just average people, probably no different from the general populace when it comes to measures of morality. So if 95%+ of oil company executives are not inherently less moral than the rest of us, why the hell would they be willing to make decisions that literally destroy the fucking planet?? I mean, the oil companies knew climate change was a big fucking problem decades ago, and they still did what they did. How the fuck does that even happen??

    My thesis here is that the corporate structure itself is sufficient to compel otherwise moral people to make choices that are absolutely heinous when viewed objectively. When you’re faced with an option that meets your corporate targets and nets you a bonus but irreparably harms some distant other, the average person will tend to make the immoral choice. They’ll rationalize it, they’ll minimize it, but ultimately they will happily fuck over someone in another country, another generation, or hell, just in another office, so they can make a buck.


  • Corporations are always happy to pander to morality when it’s to their benefit, but I believe corporations are inherently amoral. They might make decisions that are moral, but that’s just a happy coincidence that occurs when the decision that’s in their interest also happens to be the moral choice. Corporations are equally happy to make choices that most would consider immoral, if it meets their goals.

    I have no source for this, but my theory is that when the workforce of a corporation grows past Dunbar’s number, it will inherently bend toward amorality. Making moral choices requires knowing the people affected by your choices, and having empathy for them. Once it becomes impossible for one worker at a company to have a personal relationship with every other member of the staff, it’s all too easy for groups to form within the company that will make choices that drive the company’s goals (growth, revenue, profit) at the expense of anything and everything else (the environment, the community, their customers, even their own workers).


  • Key resellers are really, truly awful. In many cases the keys are purchased from legitimate sites using stolen credit card numbers. The key resellers plead ignorance as to where the keys come from, but it’s an open secret at this point. If you don’t want to pay the Steam/Gog price, piracy is less awful because you won’t be fueling a criminal enterprise and there’s no chance your Steam/Gog account will get a stolen key revoked.

    Credit card fraud involving software keys actually ends up being paid for by the rest of us. Fraudulent transactions and chargebacks lead to higher merchant fees, and those costs end up getting passed on to legitimate purchasers.


  • Oracle is shit because they use Red Hat works, providing contract on top of it… and only add UEK as … “better option” …

    That’s something they were allowed to do. It’s something everyone was allowed to do. FOSS means free and open source for everyone, even people and organizations you don’t like. Otherwise it’s not really free (as in freedom), now is it?

    Also, the “contract on top of it” is this license, which is a pretty short read. In my view it’s a very inoffensive license compared to Red Hat’s coercive license.

    Also also, they’re forking Oracle Linux from RHEL as of 9.3, so they won’t be “taking” from Red Hat in future anyhow.

    They (oracle) do contribute some on mainline kernel, but by making RHEL copy paste and only add UEK and their product… ugh… I don’t know.

    It drives me nuts when I see people imply that Oracle was somehow “stealing” from Red Hat by creating a downstream distro. It’s not theft when the thing being taken was free and open source! So Oracle copy-pasted RHEL, made some changes and redistributed it. So what? That’s something everyone was allowed to do, as long as they didn’t violate the open source license while doing it. Oracle isn’t violating the open source licenses, the sources are freely available, so why should I fault them for doing what they did?

    I think you’re also overlooking how much Oracle Linux actually benefited Red Hat themselves. By making Oracle Linux a downstream distro and testing all the Oracle software on it, I’d argue that Oracle actually made RHEL more valuable by increasing the number of enterprise workloads RHEL could support. Yes, a customer could theoretically get support from Oracle instead of Red Hat, but hardly anyone actually did that. I see real-world Oracle Database installs every day and the majority of them are on Red Hat Enterprise Linux proper. Very few are on a downstream. Every one of those RHEL installs is a paying Red Hat customer.

    Oracle didn’t do all that out of the goodness of their hearts of course, they did it because their customers wanted to standardize on one OS and Oracle wanted to sell them database (and other) software. They did it for profit, but there’s nothing inherently wrong with that. Both Oracle and Red Hat profited from that arrangement. Every enterprise Linux user indirectly benefited from the arrangement too, because it meant there was a less fragmented OS ecosystem to build on! But now Red Hat wants to alter the deal, Vader-style, Oracle is forking Oracle Linux, and you know who loses the most in all of this? All of those users who previously enjoyed the benefit of a less fragmented enterprise OS landscape, myself among them. As far as I’m concerned, the blame for that lies squarely at Red Hat’s feet.


  • I was actually kind of hoping for the second option, if only so that it would be Oracle footing the legal bill to establish a precedent. That Oracle didn’t choose this option may indicate that Red Hat’s coercive license wrapper (“if you exercise your open source rights to redistribute, we’ll close your account”) is actually an effective and legal end-run around open source licenses. I don’t want that to be the case.