You’re probably already aware of this, but if you run Docker on Linux with ufw or firewalld, Docker bypasses your firewall rules. It doesn’t matter what your defaults are or how strict you are about opening ports; Docker manages its own iptables rules, so it has free rein to send and receive from the host as it pleases.

If you’re good at manipulating iptables there is a way around this, but it also affects outgoing traffic and can interfere with the bridge network. Unless you’re a pointy-head with a fetish for iptables it’s a world of pain, so it isn’t really a solution.

There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with this as a solution and it used to work well on my rig, but for some unknown reason it’s no longer working and Docker is back to doing its own thing.

Am I missing an obvious solution here?

It seems odd for a tool as popular as Docker, one that is also used in the enterprise, not to have a pain-free way around this.

    • dan@upvote.au · 14 hours ago

      You can override this by setting an IP on the exposed port, so that a local-only service is only accessible on 127.0.0.1.

      Also, if a Docker container only needs to be reached by another Docker container, you don’t need to publish a port at all. Containers in the same Compose stack can reach each other by service name over the stack’s network; a minimal sketch is below.
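
      For instance, a minimal sketch along these lines (the service names and images are just placeholders, not taken from the thread) lets app reach db by its service name without publishing anything on the host:

      services:
        app:
          image: alpine:latest          # placeholder client container
          command: ["ping", "db"]       # resolves "db" via the stack's internal DNS
        db:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: example
          # no "ports:" section: "db" is reachable from "app" on the Compose network,
          # but nothing is published on the host, so the firewall never comes into it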

      • tux7350@lemmy.world · 14 hours ago

        Something like this. This is a compose.yml where only connections from the local host, via port 8080, can reach the container’s port 80.

        services:
          webapp:
            image: nginx:latest
            container_name: local_nginx
            ports:
              - "127.0.0.1:8080:80"
        
          • tux7350@lemmy.world · 12 hours ago

            Well, if your reverse proxy is also inside a container, you don’t need to expose the port at all. As long as the containers are on the same Docker network, they can communicate.

            If your reverse proxy is not inside a Docker container, then yes, this method would work to prevent other clients from connecting to the Docker container directly.

              • tux7350@lemmy.world · 12 hours ago

                Course, feel free to DM if you have questions.

                This is a common setup: have the firewall block all traffic, let Docker punch a single hole through it by publishing only 443 for the reverse proxy, and route everything else through that proxy. Any container can be served this way as long as it is on the same Docker network as the proxy (there’s a sketch of that 443-only layout after the example below).

                If you don’t define a network, the containers are put on a default bridge network; use docker inspect to see the container IPs.

                Here is an example of how to define a custom Docker network called “proxy_net” and statically set each container’s IP.

                networks:
                  proxy_net:
                    driver: bridge
                    ipam:
                      config:
                        - subnet: 172.28.0.0/16
                
                services:
                  app1:
                    image: nginx:latest
                    container_name: app1
                    networks:
                      proxy_net:
                        ipv4_address: 172.28.0.10
                    ports:
                      - "8080:80"
                
                  whoami:
                    image: containous/whoami:latest
                    container_name: whoami
                    networks:
                      proxy_net:
                        ipv4_address: 172.28.0.11
                

                Notice how whoami is not exposed at all. The nginx container can now serve the whoami container with the proper config, pointing at 172.28.0.11.
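
                As a rough sketch of the 443-only layout mentioned above (the proxy service, images, and hostnames are placeholders, the backends are addressed by service name rather than static IPs, and the proxy’s own config and TLS certificates are assumed to be handled separately):

                networks:
                  proxy_net:
                    driver: bridge

                services:
                  proxy:
                    image: nginx:latest              # any reverse proxy works (nginx, Traefik, Caddy, ...)
                    networks:
                      - proxy_net
                    ports:
                      - "443:443"                    # the only published port: the single hole in the firewall
                    # mount TLS certs and the proxy config here

                  app1:
                    image: nginx:latest
                    networks:
                      - proxy_net                    # reachable from the proxy as http://app1, nothing on the host

                  whoami:
                    image: containous/whoami:latest
                    networks:
                      - proxy_net                    # reachable from the proxy as http://whoami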

      • Matt The Horwood@lemmy.horwood.cloud · 5 hours ago

        Sure, you can see below that port 53 is only bound to a secondary IP I have on my Docker host.

        ---
        services:
          pihole01:
            image: pihole/pihole:latest
            container_name: pihole01
            ports:
              - "8180:80/tcp"
              - "9443:443/tcp"
              - "192.168.1.156:53:53/tcp" # this will only bind to that IP
              - "192.168.1.156:53:53/udp" # this will only bind to that IP
              - "192.168.1.156:67:67/udp" # this will only bind to that IP
            environment:
              TZ: 'Europe/London'
              FTLCONF_webserver_api_password: 'mysecurepassword'
              FTLCONF_dns_listeningMode: 'all'
            dns:
              - '127.0.0.1'
              - '192.168.1.1'
            restart: unless-stopped
            labels:
              - "traefik.http.routers.pihole_primary.rule=Host(`dns01.example.com`)"
              - "traefik.http.routers.pihole_primary.service=pihole_primary"
              - "traefik.http.services.pihole_primary.loadbalancer.server.port=80"
        
    • Björn@swg-empire.de · 14 hours ago

      Yeah, leaving unwanted ports open is a configuration problem. A firewall just gives you the opportunity to fuck up twice.