I’ve only ever used desktop Linux and don’t have server admin experience (unless you count hosting Minecraft servers on my personal machine lol). Currently using Artix and Void for my desktop computers as I’ve grown fond of runit.

I’m going to get a VPS for some personal projects and am at the point of deciding which distro to use. I imagine systemd-based distros are generally best for servers due to their far more widespread support (and therefore better for a server’s stability needs), but I have a somewhat higher threat model than most people, so I was wondering whether I should instead use something like runit, which is much smaller and presents less attack surface. Security is also why I’m leaning away from something like Debian: my impression is that its outdated packages would leave me open to vulnerabilities. Correct me if I’m misunderstanding any of that, though.

Other than that I’m not sure what considerations there are to make for my server distro. Maybe a more mainstream distro would be more likely to have the software in its repos that I need to host my various projects. On the other hand, I don’t have any experience with, say, Fedora, and it’d probably be a lot easier for me to stick to something I know.

In terms of what I want to do with the VPS, it’ll be more general-purpose and hosting a few different projects. Currently thinking of hosting a Matrix instance, a Mastodon instance, a NextCloud instance, an SMTP server, and a light website, but I’m sure I’ll want to stick more miscellaneous stuff on there too.

So what distro do you use for your server hosting? What things should I consider when picking a distro?

  • Björn Tantau@swg-empire.de · 56 points · 4 months ago

    I love Debian for servers. Super stable. No surprises. It just works. And millions of other people use it as well in case I need to look something up.

And even when I’m lazy and don’t update to the latest release, oldstable will be supported for years and years.

    • Marcos Dione@en.osm.town · 11 points · 4 months ago

      @bjoern_tantau @communism That ‘support for years and years’ means security support. So even if the nominal versions stay stable, security fixes are backported. Security scans that only check versions usually give false positives: they think fixes in newer versions are not present when in fact they are.

Many other distros do exactly the same. I only chose Debian because the amount of software already packaged in the distro itself is bigger than for any other, barring 3rd-party repos.
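One way to check a specific fix yourself (a sketch; the package name and CVE pattern are placeholders, not from the thread) is to read the package changelog, where Debian documents its backported security fixes:

```shell
# Fetch the Debian changelog for an installed package and search it
# for CVE references (openssl and the year are example values)
apt-get changelog openssl | grep -i cve-2023-

# The installed version string will still look "old", even when
# the fixes found above are present
dpkg -s openssl | grep '^Version'
```

This is why version-only scanners mislead: the fix shows up in the changelog, not in the version number.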

    • ouch@lemmy.world · 11 points · 4 months ago

      This is the way.

      Add unattended-upgrades, and never worry about security updates.
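For reference, a minimal setup sketch using the standard Debian package and config paths:

```shell
# Install unattended-upgrades and turn on the periodic apt jobs
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Confirm it's enabled: both values should be "1"
cat /etc/apt/apt.conf.d/20auto-upgrades
```

By default this installs security updates only; /etc/apt/apt.conf.d/50unattended-upgrades controls which origins are allowed.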

      • TheBigBrother@lemmy.world · 8 points · 4 months ago

        I’m using cron to run daily “sudo apt update && sudo apt upgrade -y” LMAO, what’s the way to use unattended-upgrades?

          • TheBigBrother@lemmy.world · 4 points · edited · 4 months ago

            Thx

Edit: I will stay with cron, I believe it’s easier to configure.

            sudo apt install cron
            sudo crontab -e
            # add this line in the editor (no sudo needed in root's crontab):
            @daily apt update && apt upgrade -y

            Easy peasy…

            • corsicanguppy@lemmy.ca · 2 points · 4 months ago

              sudo apt install cron
              sudo crontab -e
              @daily sudo apt update && sudo apt upgrade -y

              I have 20 years of history with the RPM version of this workflow and up to EL6 it was solid like bedrock. Now it’s merely solid like a rock, but that’s nothing to do with the tools or formats but the payload. And as long as it stays acceptably good, this should do us for another 20 years.

              Controlling the supply chain is important, though, but is far more scalable where effort is concerned.

  • ikidd@lemmy.world · 25 points · 4 months ago

Always, always, always: Debian. It’s not even a debate. Ubuntu is a mess to use as a server with their snaps bullshit. Leave that trash on the desktop; it’s a mess on a server.

      • ikidd@lemmy.world · 5 points · 4 months ago

I tried them by standing up a snap-based Docker server and it was a nightmare. Never again.

      • corsicanguppy@lemmy.ca · 4 points · edited · 4 months ago

        Snaps are meant for server applications

That’s a frightening statement. I don’t work in secret-squirrel shit these days, but I do private-squirrel stuff, and snaps are exactly the kind of thing our security guys wake up at night screaming about. Back when I ran security for a company, the entire idea would have been an insta-fuckno. Please, carefully reconsider the choices that put you in a position where snaps are the best answer.

  • 2xsaiko@discuss.tchncs.de · 25 points · 4 months ago

    I run NixOS. It (or something like it, with a central declarative configuration for basically everything on the system) is imo the ideal server distro.

    • gomp@lemmy.ml · 9 points · 4 months ago

      I think I can sense your love/hate relationship with nixos from here :) you are not alone

      • 2xsaiko@discuss.tchncs.de · 7 points · 4 months ago

        Very true haha. NixOS is great and the best I’ve got right now but I would lie if I said it has never been painful.

Especially for desktop use, I want to build my own distro that takes a lot from NixOS, mostly the central configuration and not much else (I definitely want a saner package installation situation, where you don’t need stuff like wrapper scripts, which are incredibly awful imo). It would borrow from other distros too, with some unconventional choices on top (such as building it around GNUstep). But who knows if that ever gets off the ground; I have way too many projects of enormous scale…

  • Revan343@lemmy.ca · 22 points · 4 months ago

    Always Debian. I’m most comfortable in an environment with apt, and that’s even more important on a server

  • ginza@lemmy.ml · 21 points · 4 months ago

    My server is running headless Debian. I run what I can in a Docker container. My experience has been rock solid.

From what I understand, Debian isn’t less secure due to the late updates. If anything, it’s the opposite.

  • Daniel Quinn@lemmy.ca · 21 points · 4 months ago

    Debian, with a Kubernetes cluster on top running a bunch of Debian & Alpine containers. Never ever Ubuntu.

      • Daniel Quinn@lemmy.ca · 9 points · 4 months ago

Because Ubuntu is the worst of both worlds: its packages are both old and unstable, offering zero benefit over an always-up-to-date distro like Arch or a genuinely stable one like standard Debian.

        Especially when you’re running a containerised environment, there’s just no reason to opt for anything other than a stable, boring base OS while your containers can be as bleeding edge, crazy, or even Ubuntu-based as you like.

    • h0bbl3s@lemmy.world · 5 points · 4 months ago

      I second this. I run fedora on my desktop and debian on the server. Docker works great on debian as well.

  • SavvyWolf@pawb.social · 20 points · 4 months ago

    I switched mine to NixOS a while ago. It’s got a steep learning curve, but it’s really nice having the entire server config exist in a handful of files.
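    As a rough illustration of what “a handful of files” means, a minimal /etc/nixos/configuration.nix for a small server might look like this (a sketch; the hostname and enabled services are placeholders, not this commenter’s actual config):

    ```nix
    { config, pkgs, ... }:
    {
      networking.hostName = "vps";                      # placeholder hostname
      services.openssh.enable = true;                   # SSH daemon
      services.nginx.enable = true;                     # web server
      networking.firewall.allowedTCPPorts = [ 22 80 443 ];
      system.stateVersion = "24.05";                    # pin to the installed release
    }
    ```

    One `nixos-rebuild switch` later, the running system matches the file.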

  • asap@lemmy.world · 18 points · edited · 4 months ago

    uCore spin of Fedora CoreOS:

    https://github.com/ublue-os/ucore

    • SELinux
    • Supports secure boot
    • Immutable root partition (can’t be tampered with)
    • Rootless Podman (significantly more secure than Docker)
    • Everything runs in containers
    • Smart and secure opinionated defaults
    • Fedora base is very up-to-date, compared to something like Debian

    • Fliegenpilzgünni@slrpnk.net · 2 points · 4 months ago

      How did you set up the initial system?
      From what I’ve seen, FCOS needs an Ignition file and has no Anaconda installer. I would like to set it up soon too, but it looked like a huge hassle…

      • asap@lemmy.world · 2 points · edited · 4 months ago

Yes, you need an Ignition file, but you just need to put it on any web-accessible (even local) host.

        I used a docker one-liner on my laptop to host the server:

        docker run -p 5080:80 --name quick-webserver -v "$PWD":/var/www/html php:7.2-apache
        

        And put this Ignition file in the directory I ran the above command from: https://github.com/ublue-os/ucore/blob/main/examples/ucore-autorebase.butane

        You could equally put the Ignition file on some other web host you have, or even Github.

        That’s it, that’s the only steps.
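        For anyone curious what a Butane file contains before it becomes Ignition JSON, here is a minimal sketch (the user name and SSH key are placeholders, not the linked example’s contents):

        ```yaml
        # example.bu — compile to Ignition with: butane --pretty --strict example.bu > example.ign
        variant: fcos
        version: 1.5.0
        passwd:
          users:
            - name: core
              ssh_authorized_keys:
                - ssh-ed25519 AAAA...placeholder user@example
        ```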

      • barsquid@lemmy.world · 1 point · 4 months ago

        If you want atomic Fedora but don’t want to deal with the ignition file stuff, check out Fedora IoT.

        • Fliegenpilzgünni@slrpnk.net · 1 point · 4 months ago

          Thing is, uCore has some very neat things I want, and FIOT doesn’t provide as great an OOTB experience as the uBlue variant.

          I’m also not sure if I should even pick Fedora Atomic as a server host OS.

          I really love Atomic as desktop distro, because it is pretty close to upstream, while still being stable (as in how often things change).

          For a desktop workstation, that’s great, because DEs for example get only better with each update, and I want to be as close to upstream as possible, without sacrificing reliability.
          The two major releases each year cycle is great for that.

          But for a server, even with the more stable kernel, I think that’s maybe too unstable? I think Debian is less maintenance, because it doesn’t change as often, and also doesn’t require rebooting as often.

          What’s your experience with it?

          • asap@lemmy.world · 2 points · edited · 4 months ago

            doesn’t require rebooting as often.

            You have to reboot to upgrade to the latest image, so you’ll have to get rid of the ideal of uptime with years showing on the clock.

            Rebooting is optional, and so far it’s been rock solid. Since your workload is all containerised everything just comes up perfectly after a reboot without any intervention.

            I think Debian is less maintenance

            Arguably that’s the best feature of an atomic server. I don’t need to perform any maintenance, and I don’t need to worry that I’ve configured it in some way that has reduced my security. That’s all handled for me upstream.
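            For reference, the day-to-day update mechanics on an atomic host are roughly the standard `rpm-ostree` workflow (a general sketch, not uCore-specific configuration):

            ```shell
            # Show the deployments (images) currently on disk
            rpm-ostree status

            # Pull and stage the newest image; it takes effect on the next boot
            rpm-ostree upgrade

            # If the new image misbehaves, fall back to the previous deployment
            rpm-ostree rollback && systemctl reboot
            ```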

  • traches@sh.itjust.works · 16 points · edited · 4 months ago

    It’s not conventional wisdom, but I’m happiest with arch.

    • I’m familiar with it
    • I can install basically any package without difficulty
    • I also love that I never have a gigantic version upgrade to deal with. Sure, there might be some breaking change out of nowhere, but it’ll show up in my RSS feeds, and it hits all my computers at the same time, so it’s not hard to deal with.
    • Arch never really surprises me, because there’s nothing installed that I didn’t choose to put there.
    • the Arch wiki

    Tempted by nixos but I CBA to learn it.

    • k4j8@lemmy.world · 6 points · 4 months ago

      I agree and use Arch as well, but of course I wouldn’t recommend it for everyone. For me, having the same distribution on both server and desktop makes it easier to maintain. I run almost everything using containers on the server and install minimal packages, minimizing my upgrade risk. I haven’t had an issue yet, but if I did I have btrfs snapshots and backups to resolve.
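      The snapshot-before-upgrade safety net described above can be sketched like this (assumes / is a btrfs subvolume and /.snapshots exists; the paths are examples, not this commenter’s layout):

      ```shell
      # Take a read-only snapshot of the root subvolume, named by date
      sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)

      # Proceed with the upgrade
      sudo pacman -Syu

      # List the snapshots available for rollback
      sudo btrfs subvolume list -o /.snapshots
      ```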

      • noolu@lemmy.world · 2 points · 4 months ago

Same exact setup; I’ve been running Arch for years on both server and desktop, with btrfs and containers. It’s beautiful, and I click perfectly with its maintenance workflow.

    • Pup Biru@aussie.zone · 3 points · edited · 4 months ago

arch is great if you don’t really care about your server being reliable (e.g. a home lab), but their ethos isn’t really great for a server that has to be reliable… the constant update churn causes issues a lot more often than i’d personally like for a server environment

      • traches@sh.itjust.works · 2 points · 4 months ago

I could not disagree more. Arch is unstable in the sense that it pushes breaking changes all the time (as opposed to something like Ubuntu, where you get hit with them all at once), but that’s a very different thing from reliability.

        There are no backported patches, no major version upgrades for the whole system, and you get package updates as soon as they are released. Arch packages are minimally modified from upstream, which also generally minimizes problems.

        The result has been in my experience outstandingly reliable over many years. The few problems I do encounter are almost always my own fault, and always easily recovered from by rolling back a snapshot.

        • Pup Biru@aussie.zone · 1 point · 4 months ago

          disagreement is fine, but there was literally a thread about “linux disinformation” where the OP asked for examples of things people say about linux that are untrue

          the top answers, by FAR, were that arch is stable

          saying that arch is stable, or easy for newcomers, is doing the linux ecosystem a disservice

          you should never use arch for a server - arbitrary, rather than controlled and well-tested updates to the bleeding edge is literally everything you want to avoid in a server OS

          • traches@sh.itjust.works · 1 point · edited · 4 months ago

            I didn’t say it was stable, I specifically said it was unstable. Because it is. I said arch is reliable, which is a completely different thing.

            Debian is stable because breaking changes are rare. Arch is unstable because breaking changes are common. In my personal experience, arch has been very reliable, because said breaking changes are manageable and unnecessary complexity is low.

            • Pup Biru@aussie.zone · 1 point · edited · 4 months ago

              that’s fair, and i think that in the context that we were both talking about, what we both wrote was reasonably correct

              arch is a reliable OS that is sometimes unstable

              but a server needs a stable OS to be reliable, which means that whilst arch can be a reliable OS, it does not make a particularly reliable server

          • @pupbiru @traches I certainly second this. People don’t need to become experts in Linux distros, but they do need to know what they want and need from their OS.

If it’s browsing and writing Word documents, maybe you don’t need a constant stream of updates, and a stable LTS would suffice; even a regular six-month release like Fedora would probably do. Even Debian would be great, if upgrading is annoying and the newest software isn’t really important.

            Gaming? There are distros for that.

              • traches@sh.itjust.works · 1 point · 4 months ago

I’m also not new to the Linux scene; I also run a variety of distros on a variety of machines, including servers, and I write software professionally. Arch is fucking great.

  • daniskarma@lemmy.dbzer0.com · 16 points · 4 months ago

    Debian has been rock solid for me.

It’s not insecure. Quite the contrary: Debian repositories only include packages that have been through extensive testing and have been found secure and stable. And of course it regularly ships security updates.

    • corsicanguppy@lemmy.ca · 5 points · 4 months ago

      It’s not insecure.

There’s the inconvenient truth: an OS is easiest to secure, say for enterprise life, the farther you are from the bleeding edge. Churn is lower, the targets move dramatically slower, and testing an install set (as a set) is markedly easier. It’s why enterprise Linux distros are ALL version-branched at a given version and only port security fixes in: if you need to change a package and restart the extensive testing, keep it to security fixes and similarly drastic reasons.

      So most ent-like distros aren’t insecure; not at all. Security is the goal and the reason they endure wave after yearly wave of people not understanding why they don’t surf that bleeding edge. They don’t get it.

Enterprise distros also offer a really stable platform to release software on; that was a mantra the sales team used for Open that we’d stress in ISV Engineering too, as we dealt with companies and people porting onto Open. But ISVs had their own inexperienced types, for whom the idea of a stable platform that guaranteed a long life to their product, with guaranteed compatibility, wasn’t as valuable as “ooh, shiny”. That was the indirect benefit, though: market your Sybase or ProgressDb on the brand-new release, and once it’s working you don’t have to care about library rug-pulls or similar surprises for a fucking decade (or half that, as you start the next wave onto the next distro release). And 5 years is a much better cadence than ‘every week’.

      So while it’s easy to secure and support something that never moves, that’s also not feasible: you have to march forward. So ent distros stay a little back from the bleeding edge, market ‘RHL7’ or ‘OL31’ as a stable LTS distro, and try to get people onto it so they have a better time of it.

      Just, now devs have to cope with libs and tools that are, on average, 5 years stale. For some, that’s not acceptable. And that’s always the challenge.

  • secret300@lemmy.sdf.org · 14 points · 4 months ago

I just use Debian ’cause it’s rock solid, and most of what I set up is in containers or VMs anyway.