• 2 Posts
  • 21 Comments
Joined 5 months ago
Cake day: April 20th, 2024




  • I have an understanding of the underlying concepts. I’m mostly interested in the war driving part. War driving, at least in my understanding, implies that someone, a state agency in this case, physically went to the very specific location of the suspect, penetrated their (wireless) network and thereby executed a successful traffic correlation attack.

    I’m interested in how they got their suspects narrowed down that drastically in the first place. Traffic correlation attacks, at least in my experience, usually happen in a WAN context, not LAN, for example with the help of ISPs.



  • Windows, like any operating system, is best run in the context that is most useful to the user and appropriate for their technical level.

    • Need to run Windows apps/games and aren’t afraid to tinker around if and when something doesn’t work as expected or your software simply isn’t supported? WINE/Proton.
    • Need to run mostly light Windows apps and don’t want to tinker around? VM.
    • Need to run Windows apps/games that don’t rely on kernel-level anti-cheat, want direct hardware access and aren’t afraid to tinker around if and when something doesn’t work as expected (especially if you only have one GPU)? KVM with GPU passthrough, as sketched below.
    • Need to run any Windows app/game without things constantly breaking, without tinkering and without having to stay on top of things? Dual-boot from different disks, utilize LUKS/FDE and be done with it.
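
    For the KVM route, the core of it is binding the GPU to vfio-pci and handing it to the guest. A minimal sketch only (the PCI address 01:00.0, the resource sizes and the windows.qcow2 disk image are purely illustrative; IOMMU has to be enabled in firmware and on the kernel command line first):

    # GPU already unbound from the host driver and bound to vfio-pci
    # (in practice you’d also want OVMF/UEFI firmware, CPU pinning, hugepages, etc.)
    qemu-system-x86_64 \
      -enable-kvm -machine q35 -cpu host -smp 8 -m 16G \
      -device vfio-pci,host=01:00.0 \
      -drive file=windows.qcow2,format=qcow2,if=virtio \
      -vga none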

  • Emotet@slrpnk.net to Linux Gaming@lemmy.world · Gaming on Linux is great! · 1 month ago

    Ehhh.

    Yeah, compared to a few years ago it’s very much improved, and a lot of games, especially those on Steam, run pretty well and in rare cases even better than on their native platform, Windows.

    But the pretty much broken state of VR support, some annoying bugs that are very hard to troubleshoot even for advanced users, the decision by most AAA and even some smaller studios to actively block Linux clients in multiplayer games via anti-cheat measures, and the usual Linux fuckery around HDR and VRR (which will hopefully get better now that Wayland is getting there) plus some NVIDIA fuckery (which is also improving) lead me to the following conclusions:

    1. Linux Gaming is improving.
    2. If all you play are some indie titles and/or single-player titles, you may be good.
    3. If you want to play in VR, play most popular multiplayer titles or rely on features such as HDR and VRR, you’ll still need to dual-boot into Windows.

    I’m very much looking forward to the day when I can fully banish Windows, at least from my private machines. I’m very tolerant of debugging and living on the bleeding edge if that is needed. But I don’t see the need for Windows for PC gaming going away anytime soon for most users, and, frankly, writing love letters to Linux gaming without mentioning even some of the hurdles can, does and will take new Linux users by surprise and turn them off. Communicating transparently, so users can make their own informed decisions, is the better strategy.






  • Emotet@slrpnk.net to Selfhosted@lemmy.world · Server build for Family · 2 months ago (edited)

    While this is a great approach for any business hosting mission-critical or user-facing resources, it is WAY overkill for a basic selfhosted setup involving family and friends.

    For this to make sense, you need access to 3 different physical locations with their own ISPs, or you have to rent 3 different VPS.

    Assuming each node gets only 1 data drive plus an equal parity drive, we’re now talking about 6 drives with the total usable capacity of one. If you instead use fewer drives and have your nodes share one or two (remote) data drives, I/O and latency become an issue and you’ve effectively introduced more points of failure than before.

    Not even talking about the massive increase in initial and running costs as well as administrative headaches, this isn’t worth it for basically anyone.



  • I’ve been tempted by Tailscale a few times before, but I don’t want to depend on their proprietary clients and control server. The latter could be solved by selfhosting Headscale, but at this point I figure that going for a basic Wireguard setup is probably easier to maintain.

    I’d like to have a look at your rules setup. I’m especially curious if/how you approached the event of the commercial VPN Wireguard tunnel(s) on your exit node(s) going down, which, depending on the setup, may cause requests meant to go through the commercial VPN to be sent through your VPS exit node instead.

    Personally, I ended up with two Wireguard containers in the target LAN: a **wireguard-server** and a **wireguard-client** container.

    They both share a docker network with a specific subnet {DOCKER_SUBNET} and wireguard-client has a static IP {WG_CLIENT_IP} in that subnet.
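
    For reference, the two containers and their shared network could be created roughly like this (a sketch assuming the linuxserver/wireguard image; the container and network names are illustrative, {DOCKER_SUBNET} and {WG_CLIENT_IP} are the same placeholders as below):

    docker network create --subnet {DOCKER_SUBNET} wgnet

    docker run -d --name wireguard-client \
      --cap-add NET_ADMIN \
      --sysctl net.ipv4.conf.all.src_valid_mark=1 \
      --network wgnet --ip {WG_CLIENT_IP} \
      -v ./wireguard-client:/config \
      lscr.io/linuxserver/wireguard

    docker run -d --name wireguard-server \
      --cap-add NET_ADMIN \
      --sysctl net.ipv4.conf.all.src_valid_mark=1 \
      -p 51820:51820/udp \
      --network wgnet \
      -v ./wireguard-server:/config \
      lscr.io/linuxserver/wireguard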


    The wireguard-client has a slightly altered standard config to establish a tunnel to an external endpoint, a commercial VPN in this case:

    [Interface]
    PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Address = XXXXXXXXXXXXXXXXXXX
    
    PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
    PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
    
    PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
    
    PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
    
    [Peer]
    PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    AllowedIPs = 0.0.0.0/0,::0/0
    Endpoint = XXXXXXXXXXXXXXXXXXXX
    

    where

    PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
    PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
    

    are responsible for properly routing traffic coming in from outside the container and

    PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
    
    PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
    

    is your standard kill-switch meant to block traffic going out of any network interface except the tunnel interface in the event of the tunnel going down.
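
    A quick way to sanity-check this (assuming curl is available in the container; ifconfig.me is just an example IP-echo service): with the tunnel up, the reported address should be the commercial VPN’s exit IP, and the kill-switch REJECT rule should show up in the OUTPUT chain:

    docker exec wireguard-client curl -s https://ifconfig.me
    docker exec wireguard-client iptables -S OUTPUT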


    The wireguard-server container has these PostUps and -Downs:

    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    the default rules that come with the template, allowing packets to be routed through the server tunnel

    PostUp = wg set wg0 fwmark 51820

    the encrypted traffic sent out by the tunnel interface itself gets marked with 51820 (so it doesn’t get routed back into the tunnel by the rules below)

    PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820

    adds a default route to routing table 51820 that sends all packets through the wireguard-client container

    PostUp = ip -4 rule add not fwmark 51820 table 51820

    packets without that mark should use routing table 51820 (i.e. get sent through the wireguard-client)

    PostUp = ip -4 rule add table main suppress_prefixlength 0

    consult the main routing table first, but ignore its default route, so more specific routes added manually to the main table (like the LAN route below) are still respected

    PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0

    route packets with a destination in {LAN_SUBNET} to the actual {LAN_SUBNET} of the host

    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0

    delete those rules after the tunnel goes down

    PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
    PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
    

    Basically the same kill-switch as in wireguard-client, but with the mark manually substituted (0xca6c is just 51820 in hex), since the command it relied on didn’t work in my server container for some reason; AFAIK the mark doesn’t change anyway.
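
    Put together, the hook section of the wireguard-server config looks roughly like this (a sketch; PrivateKey, Address and the [Peer] blocks stay as generated by the template, placeholders as above):

    [Interface]
    # PrivateKey / Address as generated by the template
    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostUp = wg set wg0 fwmark 51820
    PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820
    PostUp = ip -4 rule add not fwmark 51820 table 51820
    PostUp = ip -4 rule add table main suppress_prefixlength 0
    PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
    PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
    PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0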


    Now do I actually need the kill-switch in wireguard-server? Is the kill-switch in wireguard-client sufficient? I’m not even sure anymore.



  • Oh, neat! Never noticed that option in the Wireguard app before. That’s very helpful already. Regarding your opnsense setup:

    I’ve dabbled in some (simple) routing before, but I’m far from anything one could call competent in that regard, and even if I read up properly before writing my own routes/rules, I still probably wouldn’t trust that I hadn’t forgotten something, e.g. to prevent IP/DNS leaks.

    I’m mainly relying on Docker and was hoping for pointers on how to configure a Wireguard host container to route only internet traffic through another Wireguard client container.

    I found this example, which is pretty close to my ideal setup. I’ll read up on that.




  • Great synopsis!

    The cool thing about GrapheneOS: it provides basically all the comfort and usability of a stock Android ROM, minus some compatibility issues with a portion of Google apps and services (Google Pay doesn’t work and probably never will, for example), while providing state-of-the-art security and privacy if you choose to utilize those features. A modern Pixel with up-to-date GrapheneOS, configured the right way, is literally the most secure and private smartphone you can get today.


  • > The problem isn’t necessarily “stuff not sent over vpn isn’t encrypted”. Everyone uses TLS.

    Never said it was. It’s a noteworthy detail, though, since some (rare) unencrypted HTTP traffic, as well as LAN traffic in general, is a bit more concerning content-wise than your standard SSL traffic, apart from the IP being exposed either way.

    > For this to be practical you first need a botnet of compromised home routers

    This is more of a café/hotel Wi-Fi thing IMO. While it may take some effort to get control over some shitty IoT device in your typical home environment, pretty much every script kiddie can at least spoof the DHCP server in an open network.


  • Interesting read.

    So, in short:

    • The attacker needs access to your LAN and has to become the DHCP server, e.g. via a starvation or timing attack
    • The attacked host system needs to support DHCP option 121 (atm basically every OS except Android)
    • By abusing DHCP option 121, the attacker can push routes to the attacked host system that supersede other rules in most network stacks by having a more specific prefix, e.g. 192.168.1.1/32 will supersede 0.0.0.0/0
    • The attacker can now force the attacked host system to route traffic intended for the VPN’s virtual network interface (to be encrypted and forwarded to the VPN server) out of the (physical) interface used for DHCP instead
    • This leads to traffic intended to be sent over the VPN not being encrypted and being sent outside the tunnel
    • This attack can be used before or after a VPN connection is established
    • Since the VPN tunnel is still established, any implemented kill switch doesn’t get triggered

    DHCP option 121 is still used for a reason, especially in business networks. At least on Linux, using network namespaces will fix this; see the sketch below. Firewall mitigations can also work, but create other (very theoretical) attack surfaces.
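
    For the namespace approach, a minimal sketch (the namespace/interface names, {TUNNEL_IP} and the example application are placeholders; this follows the usual ip-netns/wg workflow):

    # create a dedicated namespace for VPN-only traffic
    ip netns add vpn

    # create wg0 in the init namespace so its encrypted UDP packets keep using the
    # physical interfaces, then move the interface into the namespace
    ip link add wg0 type wireguard
    ip link set wg0 netns vpn

    # configure and bring it up inside the namespace
    # (wg0.conf here contains only wg settings: keys, [Peer], Endpoint — no Address/DNS)
    ip netns exec vpn wg setconf wg0 /etc/wireguard/wg0.conf
    ip -n vpn addr add {TUNNEL_IP} dev wg0
    ip -n vpn link set lo up
    ip -n vpn link set wg0 up
    ip -n vpn route add default dev wg0

    # applications started in the namespace only see wg0; routes pushed via DHCP
    # option 121 live in the host’s main namespace and can’t redirect this traffic
    ip netns exec vpn firefox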


  • Emotet@slrpnk.net to Linux@lemmy.ml · Lix - a new fork of Nix · 4 months ago

    The problem with Nix and its forks, imho, is that it takes a lot of work, patience, time and the willingness to learn yet another complex workflow with all of its shortcomings, bits and quirks to transition from something tried, tested and stable to something very volatile with no guaranteed widespread adoption.

    The whole leadership drama and the resulting forks, which may or may not want to achieve feature parity or spin off into their own thing, certainly doesn’t make the investment seem more attractive, either.

    I, too, like the concept of Nix very, very much. But apart from some experimental VMs, I’m not touching it on anything resembling a production environment until it looks like it’s here to stay (predictable).