

so… netflix with extra storage?
reliably maintaining services
it’s funny that you use that as a selling point.
In my experience almost no outage happens because of hardware failures. Most outages happen because of bad configurations and/or expired certs, which in turn are a symptom of too much complexity.
I use the UK layout, with some remappings for my precious umlauts:
q+altgr ->ü
a+altgr -> ä
s+altgr -> ß
z+altgr -> ö
bonus: in contrast to the peasantry I have an uppercase ẞ (altgr+shift+s)
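a minimal sketch of how that can look with xmodmap on X11, assuming AltGr acts as ISO_Level3_Shift (so the level-3 symbols go in the 5th and 6th keysym slots):

! ~/.Xmodmap: key, shift, mode_switch, mode_switch+shift, altgr, altgr+shift
keysym q = q Q q Q udiaeresis Udiaeresis
keysym a = a A a A adiaeresis Adiaeresis
keysym s = s S s S ssharp U1E9E
keysym z = z Z z Z odiaeresis Odiaeresis

load it with xmodmap ~/.Xmodmap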
raspi with vlc, mullvad and qbittorrent installed.
I used http://vanilla-js.com/
my last super simple project with js:
and the initial stuff was done with chatgpt. From start to finish the project took me about an hour (including deploying it to my server)
do banking apps work on it again?
[something] is sometimes a relaxing process
Yeah, no.
user shouting
user: “YOU MUST IMPLEMENT XYZ!!! IT’S ESSENTIAL FOR MY USECASE”
answer: "Thanks for your feed back. We accept pull requests. "
and the user was never heard from again.
how they fool the AI while keeping it invisible to the human eye
My guess is that AI companies will try to scrape as much as possible without a human ever looking at the data.
When poisoned data becomes enough of a problem that humans have to look over every sample, that would increase training costs to the point where it’s no longer worth bothering with it in the first place.
what is the bare minimum of security measures you can do?
I guess just the normal things with p2p stuff: make sure no ports are exposed except for the essentials, update software, use SSL wherever possible.
When you don’t use a VPN, people will see your actual IP address and will launch the same kinds of attacks they also launch on servers [1], trying to hijack your system and add it to their botnet.
[1] port scans, login attempts, applying known exploits. If this doesn’t sound scary, you should try operating a server that is exposed to the internet and then look at the number of login attempts.
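a minimal sketch of the firewall part, assuming a linux box with ufw (default-deny incoming, only open what you actually need):

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh    # keep your own way in
sudo ufw enable
ss -tlnp              # double-check what is actually listening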
yt-dl has a speed limit. yt-dlp doesn’t.
I recommend using relative paths in the compose files, e.g.
- '/home/${USER}/server/configs/heimdall:/config'
becomes
- './configs/heimdall:/config'
you may want to add “:ro” to configs while you are at it.
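e.g. (note: some images, the linuxserver ones included, write to /config at runtime, so check that the service actually tolerates a read-only mount):

    volumes:
      - './configs/heimdall:/config:ro'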
also I like to put my services in /srv/ instead of home.
also I don’t see anything about https/ssl. I recommend adding a section for letsencrypt.
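a minimal sketch of one way to do that, assuming caddy as the reverse proxy (it fetches and renews letsencrypt certificates automatically; names and paths here are made up):

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - './configs/caddy/Caddyfile:/etc/caddy/Caddyfile:ro'
      - './data/caddy:/data'

with a Caddyfile along the lines of: heimdall.example.com { reverse_proxy heimdall:80 }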
when services rely on each other, it’s a good idea to put them into the same compose file. (on 2nd thought: I am not sure if you already do that? It’s not clear to me whether you use 1 big compose file for everything or many small ones. I would prefer to have 1 big one.)
you can use “depends_on” to link services together.
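e.g. (service names are made up):

services:
  app:
    image: example/app
    depends_on:
      - db
  db:
    image: postgres:16

note that plain depends_on only orders startup; it doesn’t wait until the dependency is actually ready. for that you need a healthcheck on db plus depends_on with condition: service_healthy.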
you should be consistent with conventions between configurations. And you should remove config properties that serve no purpose.
while you are at it, you may want to consider using an .env file where you could move everything that differs between deployments, e.g.
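a sketch of what that could look like (values are made up; docker compose picks up .env from the project directory automatically):

# .env
CONFIG_ROOT=./configs
TZ=Europe/Berlin

and in the compose file:

    volumes:
      - '${CONFIG_ROOT}/heimdall:/config'
    environment:
      - TZ=${TZ}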
consider using podman instead of docker. The configuration is pretty much identical to docker syntax. The main difference is that it doesn’t require a daemon with root privileges.
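compose files can usually be reused as-is, e.g.:

podman-compose up -d

(newer podman versions also ship a podman compose wrapper.)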
you may want to consider pinning versions for the containers.
pro version pinning: deployments are reproducible and an image update can’t silently break things.
con version pinning: you have to bump versions yourself, so it’s easy to fall behind on security fixes.
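if you pin, it looks like this instead of :latest (the tag here is made up, check the image’s releases):

    image: lscr.io/linuxserver/heimdall:2.6.1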
the option to have two instances is nice for maintenance stuff, e.g. bringing up the new version next to the old one, checking that it works, and only then switching over.
another benefit of containers:
such as?
Because it implies that synchronous code […] [is] still quite popular.
it isn’t?
why use a resource folder for it when you can embed it as base64 directly in the source file?
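a minimal sketch of the idea in js (the payload is just the base64 of "Hello, world!"):

// decode an inlined base64 resource instead of fetching a file
const data = atob('SGVsbG8sIHdvcmxkIQ==');
console.log(data); // "Hello, world!"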
didn’t they switch to epub last year?