I’m trying to get back into self-hosting. I had previously used Unraid, and it worked well to run VMs where needed and Docker containers whenever possible. The biggest benefit is that there is an easy way to give each container its own IP, so you don’t have to worry about port conflicts. Nobody else does this for Docker as far as I can tell, and after trying multiple “guides”, none of them work unless you’re on some ancient and very specific hardware and software combination. I give up. I’m going back to Unraid, which just works. No more Docker Compose errors because the Ubuntu host is already using some port, requiring me to disable key features.
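For context, the per-container-IP setup I’m talking about is what Docker’s macvlan driver is supposed to give you. A minimal compose sketch of the idea, where the eth0 parent interface and the 192.168.1.x addresses are placeholders you’d have to swap for your own network:

    services:
      whoami:
        image: traefik/whoami
        networks:
          lan:
            ipv4_address: 192.168.1.240   # this container answers on its own LAN IP

    networks:
      lan:
        driver: macvlan
        driver_opts:
          parent: eth0                    # host NIC attached to the LAN (placeholder)
        ipam:
          config:
            - subnet: 192.168.1.0/24
              gateway: 192.168.1.1

In theory each service then gets its own address and there’s no port juggling; the usual catch is that the host itself can’t reach macvlan containers without an extra shim interface, which is exactly the kind of extra fiddling I’m tired of.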

  • midnight@infosec.pub · 1 year ago

    Not saying I don’t believe you, but do you have any examples where changing the external port causes an issue? I change the host-side port from its default on almost every single Docker container I run. To be clear, I’m referring to the left side of the colon in the port declaration:

    
    ports:
      - 12080:80
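
    To spell that out with a second service, here’s a minimal sketch (names and ports are just illustrative) of two containers that both listen on 80 internally without colliding on the host:

    services:
      wiki:
        image: nginx            # stand-in app that listens on 80 inside the container
        ports:
          - 12080:80            # reachable at host:12080
      blog:
        image: nginx            # second app, also port 80 internally
        ports:
          - 12081:80            # different host port, so no conflict

    Only the host-side number has to be unique; the container side can stay at whatever the app defaults to.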
    

    I should also clarify that I don’t use LXC containers. My background made me more familiar with VMs, so I went that route. I’ve never felt like I’m performing surgery when deploying containers, but I have seen other complaints about Docker networking that I’ve apparently been lucky enough to avoid.

    Like I said though, do what works best for you. I don’t mind tinkering to get things tuned just right, which causes some friction with unRAID. I’ve invested enough time and energy in this that now I just spin up a Proxmox VM, pass its IP to a few Ansible playbooks I wrote to get to a healthy base state, and then start deploying my Docker containers. I recognize not everyone wants to do this though.
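
    To give a rough idea of that base-state step, a minimal sketch of that kind of playbook might look like this (the proxmox_vms host group and the docker.io package are assumptions, not the actual playbooks):

    - hosts: proxmox_vms
      become: true
      tasks:
        - name: Install Docker from the distro repos
          apt:
            name: docker.io
            state: present
            update_cache: true

        - name: Make sure the Docker service is running and enabled
          service:
            name: docker
            state: started
            enabled: true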