(This is a repost of this reddit post https://www.reddit.com/r/selfhosted/comments/1fbv41n/what_are_the_things_that_makes_a_selfhostable/, I wanna ask this here just in case folks in this community also have some thoughts about it)
What are the things that make a self-hostable app/project good? Maybe another way to phrase this question is: what makes a project easier to self-host?
I have been developing an application that focuses on being easy to self-host. I have been looking around at existing projects that already do this well, such as paperless-ngx, Immich, etc.
From what I gather, the most important things are:
- Good docs. This is probably the most important: the developer must document how to self-host the app
- Fewer runtime dependencies. I’m not sure about this one, but the fewer other services it depends on, the better
- Optional OIDC. I’m even less sure about this one, and I’m also not sure about implementing it in my own app, since it’s difficult to develop. After reading this subreddit/community, I get the impression that lots of people here prefer to keep the identity/user pool separate from the app itself, which means running a separate service for authentication and authorization.
What do you think? Another question: are there any other projects that can be used as good examples of self-hostable apps?
Thank you
Some redditors responded to the post:
- easy to install, try, and configure with sane defaults
- availability of an image on Docker Hub
- screenshots
- good GUI
I also came across this comment on Hacker News recently, and I’ve been thinking about it a lot:
https://news.ycombinator.com/item?id=40523806
This is what self-hosted software should be. An app, self-contained, (essentially) a single file with minimal dependencies.
Not something so complex that it requires docker. Not something that requires you to install a separate database. Not something that depends on redis and other external services.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
Not something so complex that it requires docker.
I disagree. Docker makes things a lot easier and I’m going to use it regardless.
My rule is pretty simple: not PHP. PHP requires configuring a web server, so either that’s embedded in the docker image (which violates the “do one thing” rule of docker) or it’s pushed onto the user. This falls under the dependencies part, but I uniquely hate dealing with standalone web servers and I don’t mind configuring databases, so I called it out.
I actually tried switching to OCIS from Nextcloud specifically to avoid PHP, but OCIS is even more complex so I bailed.
Give me an example configuration that works out of the box and detailed documentation about options and I’ll be happy. Don’t make me configure a web server any particular way, and do let me handle TLS myself. If you do that, I’ll probably check it out.
My list of items I look for:
- A docker image is available. Not some sort of make or build script which makes god knows what changes to my system, even if the end result is a docker image. Just have a docker image out on Docker Hub or a Dockerfile as part of the project. A docker-compose.yaml file is a nice bonus (a minimal sketch of one follows this list).
- Two factor auth. I understand this is hard, but if you are actually building something you want people to seriously use, it needs to be seriously secured. Bonus points for working with my YubiKey.
- Good authentication logging. I may be an outlier on this one, but I actually look at the audit logs for my services. Having a log of authentication activity (successes and failures) is important to me. I use fail2ban to block off IPs which get up to any fuckery, and I manually blackhole entire ASNs when it seems they are sourcing a lot of attacks. Give me timestamps (in ISO8601 format, all other formats are wrong), IP address, username, success or failure (as an independent field, not buried in a message or other string) and any client information you can (e.g. User-Agent strings).
- Good error logging. Look, I kinda suck, I’m gonna break stuff. When I do, it’s nice to have solid logging giving me an idea of what I broke and a standardized error code to search on. It also means that, when I give up and post it as an issue to your github page, I can provide you with some useful context.
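To make the docker point concrete, here is a rough sketch of what “just an image on Docker Hub plus a compose file” looks like from my side. The image name, port, and environment variables are made up; your app’s will differ.

```yaml
# Hypothetical docker-compose.yaml; image name, port, and variables are placeholders.
services:
  myapp:
    image: example/myapp:1.2.3    # a published, pinned image on Docker Hub
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - MYAPP_LOG_FORMAT=json     # structured auth/error logs, per the points above
    volumes:
      - ./data:/var/lib/myapp     # one obvious place for persistent data
```

If I can paste something like that into a compose file and be running in a minute, you’ve cleared the first bar.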
As for that hackernews response, I’d categorically disagree with most of it.
An app, self-contained, (essentially) a single file with minimal dependencies.
Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main self-hosted app is Nextcloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard, secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.
Not something so complex that it requires docker.
“Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.
Not something that requires you to install a separate database.
Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
Not something that depends on redis and other external services.
Again, sometimes you just need to have certain functionality and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes, they never do.
The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.
@hono4kami To me, good documentation is the number one thing that makes a selfhostable application good.
Second would be “is it dockerized?”
Yep, documentation and a good base-level default installation configuration/guide with minimal friction.
I’m perfectly willing to play around once I know at the basic level that the core flow is going to work for me. If it takes me digging through a stack of documentation (especially if it’s bad) to even get something to experiment with on my own system? I won’t bother.
To me, good documentation is the number one thing that makes a selfhostable application good.
I agree. If you don’t mind: what are your criteria for good documentation? Do you have some examples of good docs?
What helps a lot for apps with multiple config files:
- if you tell the user to “add code xy to the config file”: tell me which file. Is it the main config file? The one for the reverse proxy, etc.?
- provide a sensible example of the config structure. For example: during the implementation of an importer for beancount I was struggling with what goes where. The example structure was really, really helpful.
- also, if you have configuration options which allow different values: TELL ME THE OPTIONS! If I get an error during startup that the value “bar” is not allowed for config.foo, I need a list of valid options somewhere. So many hours lost finding out what I can actually write to config.foo. (A made-up example of what I mean follows this list.)
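Something like this is what I mean: a made-up config.yaml where every option spells out its allowed values and its default right next to it.

```yaml
# Hypothetical config.yaml; keys and values are invented. The point is that
# every option documents its allowed values and its default inline.
server:
  listen: "0.0.0.0:8080"      # address:port to bind
log:
  level: info                 # one of: debug, info, warn, error (default: info)
  format: text                # one of: text, json (default: text)
storage:
  backend: sqlite             # one of: sqlite, postgres (default: sqlite)
  path: /var/lib/app/app.db   # only read when backend is sqlite
```

If the reference docs mirror that structure, I never have to guess which file a snippet belongs in or what I’m allowed to write into config.foo.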
including examples for everything in the docs is the best way to explain imo
@hono4kami
One of the best documentation I’ve encountered so far:
https://borgbackup.readthedocs.io/en/stable/
Before even getting to documentation, I see so many projects that don’t have a short summary of what they do (and maybe what not to expect them to do).
As an example, Home Assistant. I can tell that it involves home automation, so can I replace Google Home with it? It seems like it doesn’t do voice recognition without add-ons and it can work with Google Assistant. Do I still need accounts with the providers of smart appliances, or can it control my bulbs directly?
None of that is very clear from the website.
I’ve seen plenty of other projects where it’s assumed there’s no need to explain the project’s overall purpose.
For me, it’s screenshots.
I can’t even count how many self-hosted or open source projects I’ve wanted to check out, and the project page is just text.
If I don’t know exactly what I’m getting into in the first 10 seconds, I’m onto something else, especially when it’s something heavily based on UI/UX with frequent interaction.
EDIT: Also, I’m a fan of docker apps to run off my Synology NAS, but it better come with step-by-step instructions, or I won’t bother. There are some good resources for detailed instructions for various self-hosted/NAS/docker related content, but it’s nice when a project actually has this in their documentation.
10000% this.
Tell me what it does, and SHOW me what it does.
Because guessing what the hell your thing looks like and behaves like is going to get me to bounce pretty much immediately because you’ve now made it where I have to figure out how to deploy your shit if I want to know. And, uh, generally, if you have no screenshots, you have no good documentation and thus it’s going to suuuuck.
To me the number one thing is that it is easy to set up via Docker. One container, one network (ideally no extra network, just using the default one), one storage volume, no additional manual configuration when composing the container.
No, I don’t want a second container for a database. No I don’t want to set up multiple networks. Yes, I already have a reverse proxy doing the routing and certificates. No, I don’t need 3 volumes for just one application.
Please just don’t clutter my environment.
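For illustration, this is the whole compose file I’d like to end up with. The image name and port are placeholders, and compose just uses its default network:

```yaml
# Sketch of the "one container, one volume" setup described above.
# Image name and port are placeholders.
services:
  app:
    image: example/selfcontained-app:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - appdata:/data   # the single place the app persists anything

volumes:
  appdata:
```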
I disagree with pretty much all of this; you are trading maintainability and security for easy setup. Providing a docker-compose file accomplishes the same thing without the sacrifice (see the sketch after this list):
- separate volumes for configuration, data, and cache because I might want to put them in different places and use different backup strategies. Config and db on SSD, large data on spinning rust, for example.
- separate container for the database because the official database images are guaranteed to be better maintained than whatever every random project includes in their image
- separate networks because putting your reverse proxy on a different network from your database is just prudent
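As a rough sketch (names, images, paths and credentials are placeholders), that layout looks something like this:

```yaml
# Sketch of the split layout: official database image, separate volumes for
# config/data/cache, and the database kept off the reverse-proxy network.
services:
  app:
    image: example/myapp:latest
    volumes:
      - ./config:/etc/myapp            # small, lives on the SSD, backed up often
      - /mnt/bigdisk/myapp:/data       # large data on spinning rust
      - cache:/var/cache/myapp         # disposable, excluded from backups
    networks: [frontend, backend]
  db:
    image: postgres:16                 # the official image, maintained upstream
    environment:
      POSTGRES_PASSWORD: change-me     # placeholder; use a proper secret in practice
    volumes:
      - ./db:/var/lib/postgresql/data  # db files next to config, on the SSD
    networks: [backend]                # never on the same network as the proxy

networks:
  frontend:                            # the reverse proxy attaches here
  backend:

volumes:
  cache:
```

It’s still one `docker compose up -d` either way; the difference shows up in how you back it up, patch it, and isolate it.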
No, I don’t want a second container for a database.
Unless you’re talking about using SQLite:
Isn’t the point of a Docker container to only have one piece of software/process running? I’m sure you can use something like s6 or another lightweight supervisor, but that seems counterintuitive?
To me, the point of Docker is having one container for one specific application. And I see the database as part of the application. As well as all other things needed to run that application.
Since we’re here, let’s take Lemmy for example. It wants 6 different containers with a total of 7 different volumes (and I need to manually download and edit multiple files before even touching anything Docker-related).
In the end I have lemmy, lemmy-ui, pictrs, postgres, postfix-relay, and an additional reverse proxy for one single application (Lemmy). I do not want or need or use any of the containers for anything else except Lemmy.
There are a lot of other applications that want me to install a database container, a reverse proxy, and the actual application container, where I will never ever need, or want, or use any of the additional containers for anything else except this one application.
So in the end I have a dozen containers and the same number of volumes just to run 2-3 applications, causing a metric shit-ton of maintenance effort and update time.
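Just to show how much one application can drag in, the shape of that stack is roughly this. It is not the project’s actual compose file, and the image names are placeholders:

```yaml
# Rough skeleton only; not Lemmy's real compose file, image names are placeholders.
services:
  lemmy:           # the backend
    image: example/lemmy
  lemmy-ui:        # the web frontend
    image: example/lemmy-ui
  pictrs:          # image hosting
    image: example/pictrs
  postgres:        # the database
    image: example/postgres
  postfix:         # outbound mail relay
    image: example/postfix-relay
  proxy:           # yet another reverse proxy in front of it all
    image: example/nginx
```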
I agree with this. If you are going to be using multiple containers for a single app anyways, what is the point of it being in multiple containers? Stick all of it in one container and save everyone the hassle.
It’s because of updates and who owns the support.
The postgres project makes the postgres container, the pict-rs project makes the pict-rs container, and so on.
When you make a monolithic container you’re now responsible for keeping your shit and everyone else’s updated, patched, and secured.
I don’t blame any dev for not wanting to own all that mess, and thus, you end up with separate containers for each service.
I can see why editing config files is annoying, but why exactly are two services and volumes in a docker-compose file any more difficult to manage than one?
See it in a broader scope. If I only hosted Lemmy with its multiple mandatory things, I couldn’t care less, but I already have some other applications that I run via Docker. Fortunately I was able to keep the footprint small, with no multiple containers or volumes for one application, but as said: those exist. And they would clutter the setup and make it harder to maintain and manage.
I also stand by my point that it is counter-intuitive to have multiple containers and volumes for just one single application.
Ok but is there room for the idea that your intuitions are incorrect? Plenty of things in the world are counter-intuitive. ‘docker-compose up -d’ works the same whether it’s one container or fifty.
Computer resources are measured in bits and clock cycles, not the number of containers and volumes. It’s entirely possible (even likely) that an all-in-one container will be more resource-heavy than the same services split across multiple containers. Logging from an all-in-one will be a jumbled mess, troubleshooting issues or making changes will be annoying, it’s worse in every way except the length of output from ‘docker ps’
docker ps
or Portainer as a nice web-UI wrapper around the Docker commands are the two main ways I interact with Docker on a regular basis.
No, thank you. I am not going to maintain fifty containers and fifty+X volumes for just a handful of applications, and will always prefer self-contained applications over applications that spread over multiple containers for no real reason.
But there is a reason; I just explained it to you.
I came here to basically say this. It’s especially bad when you aren’t even sure if you want to keep the service and are just testing it out. If I already have to go through a huge setup/troubleshooting process just to test the app, then I’m not feeling very good about it.
I prefer this, but if the options are available it shows me that someone actually thought about it while creating the software/container.
Not something so complex that it requires docker
Docker is the thing that sandboxes your services from the host OS. I’d rather use Podman because of the true non-root mode, but Docker is still based. Plus, you can use Docker Swarm if you don’t want to switch to Kubernetes (though you don’t have easy storage integration for persistence).
The problem is when Docker is used to gift wrap a mess. Then there are rotting dependencies in the containers. The nice thing about Debian packaged things is the maintainer is forced to do things properly. Even more so if they get it into the repos.
My preference is Debian Stable in LXC or even KVM for services. I only go for Docker if that is the recommended option. There is stuff out there where the recommended way is their VM image, which is full of their soup of Docker containers.
Docker is in my pile of technologies I don’t really like or approve of, but don’t have the energy to really fight.
Docker is also the thing that allows the distribution of the app as “single file with minimal dependencies”.
Not a single app, not minimal dependencies. It’s a file that gets processed and creates many gigabytes of leftovers, with an enormous runtime and piles of abstractions.
IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff ends up either trying to be too easy, so you can’t scale it up, or so unbelievably complicated you can’t scale it down. Don’t make me install an email server and set up API keys for services needed by features I won’t even use.
I don’t particularly mind needing a database and Redis and the likes, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s erlang phase, … no, just no, screw that.
What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well written software is flexible and sane.
My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.
A lot of stuff ends up either trying to be too easy, so you can’t scale it up, or so unbelievably complicated you can’t scale it down.
I see, it’s probably good to have some balance between those. Noted
A docker compose example with as few images as possible.
Ease of installation/use, I think, is the big one, and one of the biggest obstacles.
People who want to give self-hosting a try aren’t going to be particularly fond of having to jump through a whole bunch of different configs, and manually set everything up.
They want something that they can just set up and go, without having to deal with server hosting, services, and all of that. Something you can just run on your computer, leave it be, and use it with relatively little fuss.
Second to that would definitely be better documentation/screenshots. A lot of self-hosted things, like Lemmy, don’t provide much documentation of what the actual user side does, only what you need to do to set it up. That isn’t going to make me want to use the software if I have no idea what it’s supposed to do and how it compares to other things that do the same.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
No.
I like my services and stack.
From the *arrs and Jellyfin to HortusFox, the UniFi Network Application (though that should really run with just a SQLite DB), and many other things.
Yes, databases are annoying, but if the service that wants one is sane I have no problem doing it.
What does grind my gears is when services have many breaking changes (e.g. Immich). If it wasn’t for that I would be more open to finally getting started and setting up a good, working Immich instance.
The things redditors mentioned are very good already. Primarily screenshots.
Please, please always add screenshots to let me have a general idea of the UI.
At the very least a demo instance if you can’t be bothered to add screenshots (yes, I have seen many services that would rather share a demo instance than screenshots…)
The things redditors mentioned are very good already. Primarily screenshots. Please, please always add screenshots to let me have a general idea of the UI.
I’ve read this mentioned many times. Is it really that bad XD
It’s annoying to figure out if the program is any good if it’s reliant on a UI.
Documentation, screenshots, a forum, one click installer or simple line to paste into the terminal.
My points go in a different direction:
- stable, this is critical. If the app is not able to perform its duties with 2 weeks of uptime, then it is bad. This also applies to random failures; I don’t want to spend endless days fixing it
- docker, with an all-in-one image, and as a nice-to-have the possibility to connect external docker composes for a VPN or databases
- moderate use of resources. Not super critical, but nobody likes to have RAM problems
And then, as a second tier that tips the balance:
- integration with LDAP or any central user repo
- relatively easy to backup and restore
- relatively few breaking changes from version to version
- the GUI / ease of use (in line with the complexity of the problem I want to address)
- sane use of defaults and logging capabilities
That’s all from my side
One thing that makes a project good is knowing what it does. I’ve seen quite a few projects that talk about all the features and technology and how to configure it, but not a word about what it actually does, what problems it solves, and so on.
I won’t self-host your program if you don’t even tell me what it does; don’t make me search and piece together large parts of the documentation just to find out if I want it. A simple explanation is enough, but somehow I’ve seen quite a few programs that don’t have it.
Please be mindful of HDD spindown.
If your app frequently looks up stuff in a database and also has a bunch of files that are accessed on-demand, then please have an option to separate the data-directory from the appdata-directory.
A lot of stuff is self-hosted in homes and not everyone has the luxury of a dedicated server room.
separate the data-directory from the appdata-directory
Would you mind explaining more about this?
Take my setup for jellyfin as an example: There’s a database located on the SSD and there’s my media library located on an HDD array. The HDD is only spun up when jellyfin wants to access a media file.
In my previous setup, the nextcloud database was located on a HDD, which resulted in the HDD never spinning down, even if the actual files are never really accessed.
In immich, I wasn’t able to find out if they have this separation, which is very annoying.
All this is moot, of course, if you simply offer a tiny service that doesn’t access big files living outside the SSD.
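Concretely, the jellyfin setup above boils down to a volume split like this; the host paths are placeholders for wherever your SSD and HDD array are mounted:

```yaml
# Sketch of separating appdata (SSD) from bulk data (HDD); host paths are placeholders.
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /ssd/appdata/jellyfin:/config    # database and metadata stay on the SSD
      - /mnt/hdd-array/media:/media:ro   # media library on the HDD, only spun up on access
```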
Exactly. Separate configuration and metadata from data. If the metadata DB is relatively small, I’ll stick it on my SSD and backup to my HDD on a schedule.