

Care to share your quadlet? I’m just getting into quadlets with trixie out - and I haven’t gotten this working yet…
The permissions do seem intense; if you’re getting by without them, maybe they aren’t quite needed!
Great to hear! It’s seriously slick and “just works”. With those security features up you can tout them on the cloud offering too :)
No, what I said isn’t about user registration; it’s about adding these to the docker-compose.yml:
read_only: true
user: 6969:6969
to prevent running as root and to make the file system read-only. The API needs to be exposed without a VPN or other proxy login, since my parents can’t handle that, so if I could implement these recommended security steps I’d feel like I could open the container up to the internet at large without too much risk.
Per this issue https://github.com/linkwarden/linkwarden/issues/799 it seems there are a lot of steps to take to get these settings working.
It would also be ideal (though not a deal-breaker) if I didn’t have to give the container:
cap_add:
- CAP_SYS_ADMIN
- CAP_SYS_CHROOT
which the issue says are required for the headless Chrome scraper browser.
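For reference, here’s roughly what that combination looks like assembled into one service block - an untested sketch pieced together from the issue, where the image tag, port, and volume path are placeholders for whatever your setup already uses:

services:
  linkwarden:
    image: ghcr.io/linkwarden/linkwarden:latest   # placeholder tag
    user: "6969:6969"            # run as a non-root uid:gid
    read_only: true              # root filesystem becomes read-only
    tmpfs:
      - /tmp                     # writable scratch space the app still needs
    cap_add:                     # per issue #799, for the headless Chrome scraper
      - CAP_SYS_ADMIN
      - CAP_SYS_CHROOT
    ports:
      - "3000:3000"              # assuming the default Linkwarden port
    volumes:
      - ./data:/data/data        # persistent data stays writable via a bind mount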
I am using it internally now and it’s really good, but to open it up for my parents (who I think would dig it) I’d definitely want these security settings on without major issues. Linkwarden is an internet-facing application, so these recommended security practices are in its wheelhouse, feature-wise, as well.
Hope that helps clear up my comment!
This is a fantastic tool, but I’d love to confidently expose the API to the internet for the shortcut. To do that you need a read-only filesystem and a non-root user, and from the issues it looks like that doesn’t work yet.
Any thoughts on getting those security features working? Because the app itself is so smooth I’d let my parents use it and be confident they wouldn’t need to be herded constantly.
I’ve been thinking about using client-side certificates, validated by Caddy, to bypass the Authentik wall (proxy provider) I use. I’ll give it a shot some time; it’s a good idea.
The other user summarized it very well.
No, I’ve accrued knowledge of those things over time; there’s no one-stop shop that I know of. But knowing these things exist and what they’re generally for is half the battle!
I was lazy with the “Authentik wall” because I couldn’t remember what they called it. It’s the “proxy” option in their “provider” section: https://docs.goauthentik.io/add-secure-apps/providers/proxy/ . There are many guides for Authentik at least; it’s complicated, but you only need to do specific things for it to work - most guides tell you those, and the rest carries over by matching similar-looking settings.
OIDC is an open login protocol many things support. I think jellyfin can use it with a plugin, but keep in mind that regular user creation still exists, so it’s not a security-and-convenience feature like it is for most things; it’s just a convenience feature.
DMZ is “demilitarized zone”. I used the acronym to mean a gap between your system and a system that deals directly with the outside internet. That gap is the VM separation. LXC containers and docker containers do not have that separation; I deploy internet-facing stuff in a VM as extra insurance in case it gets zero-day-hacked - it means the rest of my server will hopefully not get ransomwared.
Incus is an alternative to proxmox, but less needy since it doesn’t require its own Linux kernel. Zabbly is a package source (vs the built-in Debian sources) that has the web ui in it. See their documentation for installation; it tells you how to add the Zabbly package source. Use the “stable” version if you do use incus.
“In the compose” means in the docker-compose.yml file.
“cap_drop: all” is an entry you can make in the docker-compose file; it increases security. All of the ones I listed are entries you can add to the docker-compose file. If you use read-only, you’ll likely also need a
tmpfs: /tmp
entry in the compose file.
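Putting those entries together, a minimal sketch of a hardened service block (the service name, image, and id numbers are placeholders):

services:
  some-service:
    image: example/some-service  # placeholder image
    user: "1000:1000"            # run as a non-root uid:gid
    read_only: true              # read-only root filesystem
    tmpfs:
      - /tmp                     # writable /tmp to pair with read_only
    cap_drop:
      - ALL                      # drop every Linux capability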
Podman is the superior alternative to docker, and Podman quadlets are a way to deploy containers (they have a couple of ways, like docker does - you don’t need a docker-compose.yml file to run docker containers). But it’s new and doesn’t have the searchable community knowledge that docker does.
Hope that helps!
Thanks for the links! I had no idea there were special settings needed
I am not familiar with deploying client-side certificates unfortunately. I hope it works; if the certificate is at the OS level and the application will use it, I feel it will work… not sure. In-browser feels straightforward at least.
Reading jellyfin’s issues, it’s clear its web ui and API cannot be allowed to talk to the general internet.
I’d push for a VPN solution first: Tailscale or wireguard. If you’re happy with cloudflare sniffing all your traffic, and with the chance they may take the service away suddenly someday, use their tunnel with authentication.
The only other novel solution I’d suggest is putting jellyfin behind an Authentik wall (not OIDC, though you can use OIDC for users after the wall). That puts the security burden on Authentik, and that’s their only job, so hopefully it holds up. I’d use that if a VPN (tailscale or wireguard) is problematic for access. The downside is that jellyfin apps will not be able to connect; only web browsers that can log into the Authentik web ui wall.
Flow would go caddy/other reverse proxy -> Authentik wall for jellyfin -> jellyfin
I’d put everything in docker, I’d put caddy and Authentik in a VM for a DMZ (incus + the Zabbly repo web ui to manage the VM), and I’d set all 3 in the compose to read-only, user: ####:####, cap-drop all, no new privileges, and limited named networks.
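As a rough, untested sketch of that compose layout (images and ids are placeholders, and Authentik’s database/redis plus all persistent volumes are omitted) - the YAML anchor just avoids repeating the hardening keys:

x-hardening: &hardening
  user: "1000:1000"              # placeholder non-root uid:gid
  read_only: true
  cap_drop: [ALL]
  security_opt: ["no-new-privileges:true"]
  tmpfs: [/tmp]

services:
  caddy:
    <<: *hardening
    image: caddy:2
    ports: ["443:443"]
    networks: [frontend]         # the only container exposed to the host
  authentik:
    <<: *hardening
    image: ghcr.io/goauthentik/server:latest   # double-check Authentik’s docs for the image
    networks: [frontend, backend]               # reachable from caddy, can reach jellyfin
  jellyfin:
    <<: *hardening
    image: jellyfin/jellyfin
    networks: [backend]          # only reachable through the Authentik wall

networks:
  frontend:
  backend:
    internal: true               # jellyfin gets no direct internet access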
Podman quadlets would be even better security than docker, but there’s less help for that (for now). Do docker, get something working to start, then grow from there.
I’m looking at Opnsense on an Incus VM soon, what was your fight there? Good to know what I’ll hit ;)
Agreed on that path - some networking (like mimicking proxmox’s bridge connections, which give VMs their own MAC/IP) takes effort to find the solution for. But the basic LXC/VM-shares-your-IP setup works super easily and the script ability is great. Plus it doesn’t feel like a heavy yoke on your system that drives it, just another application! I feel it’s close enough, and when you get it where you want it, it’s perf. I assume they’ll get “one click” solutions for the harder stuff baked in as they get more attention and traction.
If you’ve got Debian already installed, I cannot resist advocating for Incus (stable branch from Zabbly repo with web ui https://blog.simos.info/how-to-install-and-setup-the-incus-web-ui/) in lieu of proxmox. Does the same thing but you don’t have to rip out the kernel Debian uses.
With Debian 13 you have access to podman quadlets; use those for any non-VM needs. The ease of docker compose files removes most of the reason to run programs in LXC containers, and podman removes the reason to run docker in an LXC. LXC is left only for programs that aren’t containerized. VMs for the security DMZ. Podman for the bulk of the stuff you want.
Good luck!
Right, right - things don’t just have one… from searching I’ve found “SLAAC assisted mode” lets the router let SLAAC SLAAC while still being able to declare addresses for a server. Thanks for that tiny note!
I wanted Jellyfin on its own IP so I could think about implementing VLANs. I haven’t yet, and I’m not sure what I did is even needed. But I did do it! You very likely don’t need to.
There are likely guides on enabling Jellyfin hardware acceleration on your Asustor NAS - so just follow them!
I do try to set up separate networks for each service.
On one server I have a monolithic docker compose file with a ton of networks defined to keep services from talking to the internet or to each other when it’s not useful (the pdf converter is prevented from talking to the internet or the Authentik database, for example). Makes the most sense there, has the most power.
On this server each service is split out with its own docker compose file. The network bit makes more sense for services that have an external database and other bits: it lets me set things up so only the service can talk to its database, and the database cannot reach the internet at large (by adding “internal: true” to the networks: section). In this case, yes, the pdf converter can talk to other services, and I’d need to block its internet access at the router somehow.
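Concretely, that pattern looks something like this in one service’s compose file (names, image tags, and the password are placeholders):

services:
  app:
    image: example/app           # placeholder service
    ports: ["8080:8080"]         # other services reach it via the published port
    networks: [web, db-net]      # web for internet access, db-net for the database
  app-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder
    networks: [db-net]           # only `app` shares this network

networks:
  web:                           # ordinary bridge network with internet access
  db-net:
    internal: true               # the database cannot reach the internet at large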
The monolithic method gets more annoying to deal with as services multiply, by virtue of a gigantic docker compose file and the up/down time (esp. for services that don’t acknowledge shutdown commands). But it lets me use fine-grained networking within the docker compose file.
With each service on its own, they expose a port and things talk to them from there. So instead of an internal docker network letting Authentik talk to a service, Authentik just looks up the address of the service. I don’t notice any perceptible difference in lag.
Good to know, I didn’t know IPv6 could come with efficiency gains. Makes sense, since the designers had a beat to think about why IPv4 sucks. I’ll avoid NAT on IPv6.
I got it: ULA for everything that doesn’t care, 1 GUA for the server. When everything else starts to care about the lack of IPv6 or has routing issues, convert the ULA to GUA and rock n roll.
Thanks for providing a sane way to approach it slowly and methodically!
I do appreciate you taking the time to write that up! Is the 50.50.0.0/22 crossing US and EU IPv4 allocations? From searching it looks like it’s around the boundary between US and German allocations. Interesting; I had no idea IP anonymization existed or was applied in such a haphazard way.
Thanks for writing this up, really highlights the effective differences.
So for the internal delegation I’d SLAAC it and let things “just work”, or use DHCPv6 if I cared to specify addresses (which I will need to, to give a server a static IPv6 address it can be reached at). Thanks again!
Thanks for taking the time to go into detail on this, it helps because I just haven’t been able to put acronyms to actionable meaning from just reading blogs and posts.
How do things outside the LAN talk to things inside the LAN that have ULA addresses (which I’m assuming are the equivalent of the 10.0.0.0/8 idea)? Will devices given ULA addresses be NAT’d just like IPv4, or will they not be able to talk to the outside world on IPv6?
Edit: I’m getting more of what you said; you answered this: the ULA addresses will not be able to talk to the outside world on IPv6, so those devices will be IPv4-only even to websites that support IPv6. The follow-on Q would then be: is kludging NAT onto IPv6 not a better solution than ULA addresses? Or is the clear answer to just use IPv6 as intended and let the devices handle their privacy with IPv6 privacy extensions?
I see now that a limitation I had just internalized for IPv4 (only one device can be exposed on a given port at the router) isn’t a thing for IPv6 without NAT: every device on a LAN can be given a world-routable address and expose the same port. Interesting; in my home I don’t think I’d ever run into that, but I can see issues like that piling up quick in big deployments.
Thanks for taking the time to explain all of this in detail!
Thanks! This’ll def help me get tooled up for podman :)