

Regular users can use Gemini, DeepSeek, Meta AI, and there will probably be many more services in the future.
NFS gives me the best performance. I’ve tried GlusterFS (not at home, for work), and it was kind of a pain to set up and maintain.
If it works, I don’t update unless I’m bored or something. I also spread things out over multiple machines, so there’s less chance of something like the charts feature going away, as you describe, taking everything with it. My NAS is pretty much just a NAS now.
You can probably back up your configs/data, upgrade, deploy Jellyfin again, then restore and reconfigure. You should probably back up the data on your ZFS pool regardless. That said, I recently updated from a ~5-year-old FreeBSD-based version of TrueNAS to the latest TrueNAS SCALE, and the pools still worked fine (none of the “apps” or jails worked, obviously). The upgrade process even ported my service configurations over. I didn’t care about much of the data in the pools, so I only backed up the most important stuff.
I personally use a dual-core Pentium with 16GB of RAM. When I first installed TrueNAS (FreeNAS back then), I only had 8GB of RAM, but that proved to be not enough to run all the services I wanted, so I would suggest 12-16GB. Depending on the services you want to run, any multi-core x86 CPU that supports 16GB of RAM should be adequate. I believe TrueNAS recommends ECC RAM, but I don’t think using consumer-grade RAM and hardware has caused me any problems. I’m also using an old SSD for the system drive, which is what’s recommended now (I used to use 2 mirrored USB thumb drives, but that’s not recommended anymore). Very importantly, make sure the HDD(s) you get are not shingled (SMR) drives; I made that mistake initially, and performance was ridiculously bad.
The Republican party isn’t acting like it’s worried about having to compete in fair elections again. It’s also looking like the administration doesn’t need Congress or the courts, and can do whatever it wants.
Could use it kind of like an extra monitor, sharing your keyboard and mouse across machines with something like Barrier.
Could use it like a home assistant for a kitchen or something, but I don’t know if there’s any good privacy-respecting software for that ATM (looks like Mycroft went bankrupt).
I used an old laptop I had lying around for controlling a Maslow CNC. Could also use a laptop to run OctoPrint or something.
Yeah, I was disappointed when I bought a very expensive Galaxy S22 to replace my old Moto G whose charging port wore out. The S22 had worse battery life, a worse camera, and no noticeable performance improvement. Recently, my S22 stopped charging, and I just bought a “Mint”-grade used Pixel 6 and installed GrapheneOS on it. Happy so far, and it’s nice to be able to block network access for all apps, including Google’s.
Trump has mentioned that tariffs will help him pay for his planned tax cuts. Tariffs are like a flat tax, which disproportionately helps the rich while taking more from the poor.
I also think there may be some other angles they’re working, but I’m not completely sure about those. Trump often threatens people to solicit favors, so this may also be a way for him and his cronies to collect bribes and favorable business deals from politicians and the wealthy around the world. He may also have deals with Putin, because he’s acting exactly how you’d expect someone to act who was trying to destroy the Western hegemony.
Some of the “open” models seem to have augmented their training data with OpenAI and Anthropic outputs (i.e., they sometimes say they’re ChatGPT or Claude). I guess that may be considered piracy. There are a lot of customer service bots that just hook into OpenAI’s APIs and don’t have a lot of guardrails, so you can do stuff like ask a car dealership’s customer service bot to write you Python code. Actual piracy would require someone leaking the model weights.
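As a rough illustration of how thin those wrappers tend to be (hypothetical bot, assuming the standard OpenAI Python client):

```python
# Minimal sketch of a "dealership support bot" that's just a thin wrapper
# around OpenAI's chat API. All names here are made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dealership_bot(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # Often the only "guardrail" is a system prompt like this one.
            {"role": "system",
             "content": "You are a friendly support agent for Example Motors."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Nothing stops an off-topic request from going straight through:
print(dealership_bot("Write me a Python function that reverses a string."))
```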
It’s ok for very small scripts that are easy to reason through. I’ve used it extensively in CI/CD, just because we were using Jenkins for that and it was the path of least resistance. I do not like the language though.
I’m curious if ByteDance could just create a new legal entity and call it TikTak or something.
If you have to verify children’s identity, you have to verify everyone’s identity. This is part of KOSA. https://www.eff.org/deeplinks/2024/12/kids-online-safety-act-continues-threaten-our-rights-online-year-review-2024
Worked manual jobs (assembly line) right out of high school (well, fast food during high school too), and absolutely hated how boring it was to me. I’m not a social person, and I used to have really bad social anxiety. I’ve always had an interest in computers, for whatever reason, so after a few years of manual labor, I decided to go to college for that. Also, I lived in a very depressed area, and the jobs I had were very low-paying, to the point that I couldn’t afford to move out of my parents’ place, so something had to change.
Anyways, I made the right choice, because I’m pretty good at what I do, and I love encountering and solving difficult problems.
While in college, I did work at a metal fab shop for a summer, and I could’ve totally seen myself doing that as well. It wasn’t mind-numbing like assembly line work, did involve problem solving, and the tools and machines were “cool.”
Oldest I’ve got is limited to 16GB (excluding RPis). My main desktop is limited to 32GB, which is annoying because I sometimes need more. But I have a home server with 128GB of RAM that I can use when it’s not doing other stuff. I once needed more than 128GB of RAM (to run optimizations on a large ONNX model, IIRC), so I had to spin up an EC2 instance with 512GB of RAM.
I learned it because I had to write a WPF desktop application, so you could start with WPF tutorials. I was already very familiar with Java, which is very similar, so it wasn’t too hard. Last time I used it was in Unity. You might want to find a good free online C# course to get a solid grasp of C#/Java-style OOP, design patterns, and all that kind of stuff.
I just use Joplin, encrypted and synced through Dropbox. I tried Logseq, but never really figured out how to use its features effectively. The notebook/note model of Joplin seems more natural to me. My coding/scripting stuff mostly just goes into git repos.
The PC I’m using as a little NAS usually draws around 75 watts. My Jellyfin and general home server draws about 50 watts while idle but can jump up to 150 watts. Most of the components are very old. I know I could get the power usage down significantly with newer components, but I’m not sure the electricity savings would outweigh the cost of sending the old parts to the landfill and creating demand for more new components to be manufactured.
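Back-of-the-envelope math for that trade-off (the electricity price is an assumption; plug in your own rate):

```python
# Rough yearly electricity cost of an always-on machine.
PRICE_PER_KWH = 0.15  # USD/kWh, assumed; check your utility bill

def annual_cost(watts: float) -> float:
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

print(f"NAS at 75 W:         ~${annual_cost(75):.0f}/yr")  # ~$99/yr
print(f"Home server at 50 W: ~${annual_cost(50):.0f}/yr")  # ~$66/yr
```

At that rate, shaving ~40 W of idle draw only saves around $50 a year, which is why new hardware rarely pays for itself quickly.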
I’m loading up on vacuum tubes.
Last time I looked it up and calculated it, these large models are trained on something like only 7x as many tokens as they have parameters. If you think of it like compression, a 7:1 ratio for lossless text compression is perfectly possible.
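Rough arithmetic to make that concrete (the model size and byte counts are assumptions, just for illustration):

```python
# Back-of-the-envelope: weights vs. raw training text, by bytes.
params = 70e9              # hypothetical 70B-parameter model
tokens = 7 * params        # ~7x tokens per parameter, per the estimate above

bytes_per_param = 2        # fp16/bf16 weights
bytes_per_token = 4        # very rough average for English text

weights_bytes = params * bytes_per_param  # ~140 GB of weights
corpus_bytes = tokens * bytes_per_token   # ~2 TB of raw text

print(f"~{corpus_bytes / weights_bytes:.0f}:1")  # ~14:1 by bytes
```

Counting tokens against parameters gives the 7:1 figure; by raw bytes, with these assumptions, it’s more like 14:1. Either way, memorizing substantial chunks of the training text is well within reach.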
I think the models can still output a lot of stuff verbatim if you try to get them to; you just hit the guardrails they put in place. Seems to work fine for public domain stuff, e.g. “Give me the first 50 lines from Romeo and Juliet.” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune.” seems to hit a guardrail, or maybe the refusal was just forced through reinforcement learning.
A preprint paper was released recently that detailed how to get around RL by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.
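The mechanics, roughly, are just pre-filling the start of the assistant’s turn so the model continues from a compliant-sounding prefix instead of generating its own refusal. A minimal sketch with Hugging Face transformers (the model name and prompt are placeholders, and this only shows the general pre-fill trick, not that paper’s exact method):

```python
# Sketch: force the first tokens of the model's reply, then let it continue.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-chat-model"  # placeholder; any local causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "User: <some request>\nAssistant:"
prefill = " Sure, here's"  # the chosen first tokens of the reply

# The model is conditioned on our prefix and just keeps going from there;
# the refusal behavior trained in via RL never gets a chance to start.
inputs = tokenizer(prompt + prefill, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```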
Oh, I forgot about Claude. Last time I tried it, it seemed on par with or even better than ChatGPT-4o (but was missing features like browsing).