

I had a website that was set up for only my personal use. According to the logs, the only activity I ever saw was my own. However, it involves a compromise: obscurity at the cost of accessibility and convenience.
First, when I set up my SSL cert, I chose to get a wildcard subdomain cert. That way I could use a random subdomain name and the actual subdomain would never show up on https://crt.sh/ (only the wildcard entry does).
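As a rough sketch of that first step (assuming Let's Encrypt with certbot and a DNS-01 challenge, which is what wildcards require; the domain here is a placeholder, not my real one), getting the cert looks something like:

    # Request a wildcard cert: crt.sh will only ever show "*.domainname.com",
    # not whichever random subdomain the site actually lives on.
    # The manual DNS-01 flow prompts you to add a TXT record at your DNS provider.
    certbot certonly --manual --preferred-challenges dns \
        -d "*.domainname.com"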
Second, I use an uncommon port. My needs are very low so I don’t need to access my site all the time. The site is just a fun little hobby for myself. That means I’m not worried about accessing my site through places/businesses that block uncommon ports.
Accessing my site through a browser looks like: https://randomsubdomain.domainname.com:4444/
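On the server side there isn't much to it; a minimal sketch, assuming nginx (the paths and names are placeholders, not my actual config), is just a server block bound to the odd port:

    server {
        # Only listen on the uncommon port; nothing answers on 80 or 443.
        listen 4444 ssl;
        server_name randomsubdomain.domainname.com;

        ssl_certificate     /etc/letsencrypt/live/domainname.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;

        root /var/www/mysite;
    }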
I’m going on the assumption that scrapers and crawlers will stick to the common ports to maximize the number of sites they can reach, rather than wasting their time probing uncommon ones.
If you are hosting on common ports (80, 443), this isn’t going to be helpful at all, and you would likely need some sort of third-party service to manage scrapers and crawlers. For me, I get to enjoy my tiny corner of the internet with minimal effort and worry. Except my hard drive died recently, so I’ll pick things up again in January when I’m not focused on other projects.
I’m sure given time, something will find my site. The game I’m playing is seeing how long it would take to find me.




I created a file tree that mirrors my system’s file tree, except it only contains the files I modified or added, in their respective directories. From there I just use rsync to sync that file tree onto the system’s /. It’s convenient to see what changes I currently have, but it requires a bit of manual maintenance. I really only started doing it that way because I was learning how to use rsync, and I kept going with it because it was working for me.
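The sync step itself is roughly the following (the overlay path is a made-up name for that file tree, and the dry-run flag is there so you can sanity-check before anything is written to /):

    # -a  keep permissions/ownership/timestamps
    # -v  list what gets copied
    # -i  itemize each change so differences are obvious
    # -n  dry run; drop it once the output looks right
    sudo rsync -avin ~/system-overlay/ /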
I’m only working with my laptop, an Android phone, and two Raspberry Pis, so I can get by with my little rsync-based setup.