• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • You’re seeing that toast about versions because backend version 0.18.0 switched from a websocket-based API to a REST API, and the Jerboa client app is (in a not-so-descriptive way) warning you that the backend you’re connected to isn’t aligned with what your app version expects of the backend. This should go away pretty soon, as more servers update their backend version and the Jerboa app update hits more devices.
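    If you’re curious which backend version a given instance is on, the REST API makes that easy to check; a quick sketch (the instance URL is just an example, and the grep is a crude way to pull one field out of the JSON):

```shell
# Ask a Lemmy instance for its site info over the REST API; the JSON
# response includes the backend version the server is running.
curl -s "https://lemmy.ml/api/v3/site" | grep -o '"version":"[^"]*"'
```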


  • Oh yeah, for sure, everyone should work on whatever they want, without restriction or obligation to focus on what someone else wants. And more often than not a pet project is a way to learn a new language or framework with the goal of self-development. That’s a great thing.

    It’s just a thought I selfishly have sometimes: when I see many apps in development for the same platform, I can’t help but wonder, “if all of this effort were focused on fewer apps, could each of those be better than any of the current ones are individually today?” Of course, the number of devs contributing to a project has no direct correlation with the quality or maturity of the product. That’s down to the management, the skill set of the devs, etc. I’m well aware of all of that, and of the pros and cons of the different scenarios.

    Just thought I’d share the thought out there. In any case, Lemmy getting all of this attention will no doubt lead to the rise of at least a few solid mobile apps that will stick around and not fizzle out into development neglect within a couple of months.


  • It’s awesome to see Lemmy getting lots of love, and choice in the mobile app space is great for everyone. But some part of me also kind of wishes that rather than spreading so much development effort out over so many mobile apps, that more developers would jump in and contribute to polishing up the official open source Lemmy mobile app, Jerboa. I can’t help but feel that it would be nice to see a focused effort somewhere in bringing that one in particular up to snuff, as a sort of “reference” app. And have a few others floating around out there just for some diversity and testing innovative ideas.

    Maybe it’s already that way, I don’t know. It kind of feels like there’s a new Lemmy mobile app announced every couple of days.



  • However, that’s come with other tradeoffs in usability, speed, and federation experience.

    Like what? If properly configured, none of the things listed should negatively impact hosting a Lemmy instance.

    sure I’ll be adding an exception/rule for that, but it’s not a straightforward task.

    It honestly should be straightforward for anyone hosting a public web application behind Cloudflare. Cloudflare makes all of this quite easy, even for those with less experience.

    Heck, the removal of websockets will require quite a few changes in my Cloudflare config.

    What config are you referring to? In the Cloudflare console? For the change from websockets to a REST API implementation, there should be nothing at all you need to do.

    Sure, someone truly concerned with security knows to do this, but that’s definitely not going to be everyone

    And it shouldn’t have to be everyone, only those who take on the responsibility of hosting a public web application such as a Lemmy instance.

    No matter the capabilities inherent in what you choose to host, the onus rests on the owner of the infrastructure to secure it.

    Everyone should be free to host anything they want at whatever level of security (even none) if that’s what they want to do. But it’s not reasonable nor appropriate to expect it to be done for you by way of application code. It’s great if security is baked in, that’s wonderful. But it doesn’t replace other mitigations that according to best practices should rightfully be in place and configured in the surrounding infrastructure.

    In the case of the captcha issue we’re discussing here, there’s more than enough appropriate, free solutions that you can use to cover yourself.
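    For example, Cloudflare’s free Turnstile captcha can be validated server-side (or at the proxy layer) with a single request; a sketch, with a hypothetical secret key and token:

```shell
# Verify a Turnstile response token against Cloudflare's siteverify
# endpoint; the JSON response contains a "success" field.
curl -s "https://challenges.cloudflare.com/turnstile/v0/siteverify" \
  -d "secret=YOUR_SECRET_KEY" \
  -d "response=TOKEN_FROM_CLIENT_WIDGET"
```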


  • There’s nothing stopping instance owners from incorporating their own security measures into their infrastructure as they see fit, such as a reverse proxy with a modern web application firewall, solutions such as Cloudflare and the free captcha capabilities they offer, or a combination of those and/or various other protective measures. If you’re hosting your own Lemmy instance and exposing it to the public, and you don’t understand what would be involved in the above examples or have no idea where to start, then you probably shouldn’t be hosting a public Lemmy instance in the first place.
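    As a sketch of what that can look like in practice (the zone name, rate, endpoint path, and backend port here are illustrative assumptions, not a drop-in config), a reverse proxy like nginx can throttle registration attempts on its own, with no application-side captcha involved:

```nginx
# Hypothetical nginx config: rate-limit signups per client IP.
limit_req_zone $binary_remote_addr zone=signup:10m rate=3r/m;

server {
    listen 443 ssl;
    server_name lemmy.example.com;

    # Apply the strict limit only to the registration endpoint
    location /api/v3/user/register {
        limit_req zone=signup burst=2 nodelay;
        proxy_pass http://127.0.0.1:8536;
    }

    location / {
        proxy_pass http://127.0.0.1:8536;
    }
}
```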

    It’s generally not a good idea to rely primarily on security baked into application code and call it a day. I’m not up to date on this news and all of the nuances yet, and I’ll look into it after I’ve posted this, but what I said above holds true regardless.

    The responsibility of security of any publicly hosted web application or service rests squarely on the owner of the instance. It’s up to you to secure your infrastructure, and there are very good and accepted best practice ways of doing that outside of application code. Something like losing baked in captcha in a web application should come as no big deal to those who have the appropriate level of knowledge to responsibly host their instance.

    From what I can tell, this seems like a non-issue, unless you’re someone who is relying on baked-in security to cover for a lack of expertise in properly securing your instance and mitigating exploitation by bots yourself.

    I’m not trying to demean anyone or sound holier-than-thou, but honestly, please don’t rely on the devs for all of your security needs. There are ways to keep your instance secure that don’t require their involvement, and that are best practice anyway. Please seek to educate yourself if this applies to you, and shore up the security of your own instances by way of the surrounding infrastructure.



  • I just stood up a selfhosted Invidious instance the other day, and I replaced YouTube ReVanced with Clipious (an Invidious client for Android) on my phone. No ads, SponsorBlock built-in, no need for a YouTube/Google account to create subscriptions, playlists, etc. And it’s highly performant since I run it behind a reverse proxy with some custom caching configuration for things like thumbnail images, static assets, etc.
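    As a rough sketch of the kind of caching I mean (the paths, zone name, durations, and backend port here are illustrative assumptions for an Invidious-style setup, not my exact config):

```nginx
# Cache thumbnails and static assets at the proxy so repeat requests
# never hit the Invidious backend.
proxy_cache_path /var/cache/nginx/invidious levels=1:2
                 keys_zone=invidious:10m max_size=1g inactive=7d;

server {
    listen 443 ssl;
    server_name invidious.example.com;

    # Thumbnail/static paths get long-lived cache entries
    location ~ ^/(vi/|ggpht/|static/) {
        proxy_cache invidious;
        proxy_cache_valid 200 7d;
        proxy_ignore_headers Cache-Control Expires;
        proxy_pass http://127.0.0.1:3000;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```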

    Clipious can also be installed on an Android TV (has an actual Android TV interface). I’m going to end up installing it on mine, but I’m also using SmartTubeNext at the moment, which does require a YouTube/Google account for subscriptions, playlists, etc, but does have no ads, built-in SponsorBlock, and a slew of other great features. I’ll be keeping both around, since I do sometimes like to cast to my TV, and SmartTubeNext allows for that (Clipious does not, at least at this time).

    Unless YouTube somehow starts dynamically splicing ads into the actual video stream, there’s always going to be a way to block them. Even something that elaborate is probably not worth the effort on their end, since the vast, vast majority of people won’t know what to do to get around it, nor will they probably care enough to try. But I think it’s clear that DNS-based blocking using services such as AdGuard Home, Pi-hole, etc, is going to become less effective over time.


  • I’ve struggled to find an alternative to Google Photos that works well enough, and reliably enough, for me to feel comfortable fully replacing it. I’ve tried everything on the Awesome-Selfhosted list that would be a potential competitor, but nothing comes close to Google Photos. It’s honestly such a solid product that it’s really hard to find an open source/selfhosted replacement that works at least as well. And Google Photos is just so convenient when it comes to shared albums; it’s just slick.

    My ideal solution would be to have Google Photos remain the source of truth, with something else as a secondary backup. I looked into the idea of using Rclone to mount Google Photos and another backend (e.g. Wasabi), and just replicating periodically from Google Photos to the other location. But unfortunately, at this time (and maybe forever), the Google Photos API doesn’t let you access photos/videos in their original form, only compressed. I want the originals, of course, so this doesn’t fly. The next thing I’ll look into when I have more time is periodically automating Google Takeout to fetch the original-quality photos/videos, then uploading those to a backup location. It’s such a janky idea and it rubs me the wrong way… but it might be the only way. Rclone would have been perfect if only it could get the original-quality content, but that’s on Google for not enabling that capability.
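    For reference, the Rclone route would have been roughly a one-liner (the remote names here are hypothetical); it runs fine, but only ever yields the compressed copies, which is exactly the dealbreaker:

```shell
# Sync the Rclone Google Photos backend to an S3-compatible remote.
# The Google Photos API only returns compressed media, never originals.
rclone sync "gphotos:media/by-month" "wasabi:photos-backup/by-month" --progress
```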


  • I have a single ASUS Chromebox M075U (i3-4010U) which I use as a Docker host. It’s neatly and inconspicuously tucked away under my TV, and it’s quiet even when the fan’s on full while a heavy workload is running.

    Main Specs:

    • Processor: Intel Core i3-4010U 1.7 GHz
    • Memory: 4 GB DDR3 1600 (I upgraded this to 16 GB)
    • Storage: 16 GB SSD (I upgraded this to 64 GB)
    • Graphics: Intel HD Graphics 4400
    • OS: Google Chrome OS (Currently running Ubuntu 22.04)

    Full Specs: https://www.newegg.ca/asus-chromebox-m075u-nettop-computer/p/N82E16883220591R

    I started off with a single-node Kubernetes cluster (k3s) a few years ago for learning purposes, and ran with it for quite a long time, but have since gone back to Docker Compose for a few reasons:

    • Less overhead and more light-weight
    • Quicker and easier to maintain now that I have a young family and less time
    • Easier to share examples of how to run certain stacks with people that don’t have Kubernetes experience

    For logs, I’m only concerned with container logs, so I use Dozzle for a quick view of what’s going on. I’m not concerned with keeping historical logs; I only care about real-time logs, since if there’s an ongoing issue I can troubleshoot it then and there, and that’s all I need. This also means I don’t need to store anything in terms of logs, or run a heavier log-ingestion stack such as ELK, Graylog, or anything like that. Dozzle is nice and light and gives me everything I need.

    When it comes to container updates, I just do it whenever I feel like it, manually. It’s generally frowned upon to reference the latest tag on a container image to get updates automatically, because of the risk of random breaking changes. And I personally feel this holds true for other methods such as Watchtower for automated container updates. I like to keep my containers running a specific version of an image until I feel it’s time to see what’s new and try an update. I can then safely back up the persistent data, see if all goes well, and if not, do a quick rollback with minimal effort.
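    As a small illustration of that pinning approach (the image and tag here are hypothetical examples, not my actual stack):

```yaml
services:
  jellyfin:
    # Pin an explicit version instead of :latest so updates happen
    # deliberately, and rollback is a one-line change.
    image: jellyfin/jellyfin:10.8.10
    restart: unless-stopped
    volumes:
      - ./config:/config
```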

    I used to think monitoring tools were cool, fun, neat to show off (fancy graphs, right?), but I’ve since let go of that idea. I don’t have any monitoring setup besides Dozzle for logs (and now it shows you some extra info such as memory and CPU usage, which is nice). In the past I’ve had Grafana, Prometheus, and some other tooling for monitoring but I never ended up looking at any of it once it was up and “done” (this stuff is never really “done”, you know?). So I just felt it was all a waste of resources that could be better spent actually serving a better purpose. At the end of the day, if I’m using my services and not having any trouble with anything, then it’s fine, I don’t care about seeing some fancy graphs or metrics showing me what’s going on behind the curtain, because my needs are being served, which is the point right?

    I do use Gotify for notifications, if you want to count that as monitoring, but that’s pretty much it.

    I’m pretty proud of the fact that I’ve got such a cheap, low-powered little server compared to what most people who selfhost likely have to work with, and that I’m able to run so many services on it without any performance issues that I myself can notice. Everything just works, and works very well. I can probably even add a bunch more services before I start seeing performance issues.

    At the moment I run about 50 containers across my stacks, supporting:

    • AdGuard Home
    • AriaNG
    • Bazarr
    • Certbot
    • Cloudflared
    • Cloudflare DDNS
    • Dataloader (custom service I wrote for ingesting data from a bunch of sources)
    • Dozzle
    • FileFlows
    • FileRun
    • Gitea
    • go-socks5-proxy
    • Gotify
    • Homepage
    • Invidious
    • Jackett
    • Jellyfin
    • Lemmy
    • Lidarr
    • Navidrome
    • Nginx
    • Planka
    • qBittorrent
    • Radarr
    • Rclone
    • Reactive-Resume
    • Readarr
    • Shadowsocks Server (Rust)
    • slskd
    • Snippet-Box
    • Sonarr
    • Teedy
    • Vaultwarden
    • Zola

    If you know what you’re doing and have enough knowledge in a variety of areas, you can make a lot of use of even the most basic/barebones hardware and squeeze a lot out of it. Keeping things simple, tidy, and making effective use of DNS, caching, etc, can go a long way. Experience in Full Stack Development, Infrastructure, and DevOps practices over the years really helped me in terms of knowing how to squeeze every last bit of performance out of this thing lol. I’ve definitely taken my multi-layer caching strategies to the next level, which is working really well. I want to do a write-up on it someday.


  • Sure, I won’t post exactly what I have, but something like this could be used as a starting point:

    #!/bin/bash
    now="$(date +'%Y-%m-%d')"

    echo "Starting backup script"

    echo "Backing up directories to Wasabi"
    for dir in /home/USERNAME/Docker/*/
    do
        dir="${dir%/}"                       # strip the trailing slash
        backup_dir_local="/home/USERNAME/Docker/${dir##*/}"
        backup_dir_remote="$now/${dir##*/}"

        echo "Spinning down stack"
        cd "$backup_dir_local" || exit 1     # bail out rather than act on the wrong directory
        docker compose down --remove-orphans

        echo "Going to backup $backup_dir_local to s3://BUCKET_NAME/$backup_dir_remote"
        aws s3 cp "$backup_dir_local" "s3://BUCKET_NAME/$backup_dir_remote" --recursive --profile wasabi

        echo "Spinning up stack"
        docker compose up --detach
    done

    aws s3 cp /home/USERNAME/Docker/backup.sh "s3://USERNAME/$now/backup.sh" --profile wasabi

    echo "Sending notification that backup tasks are complete"
    curl "https://GOTIFY_HOSTNAME/message?token=GOTIFY_TOKEN" -F "title=Backup Complete" -F "message=All container data backed up to Wasabi." -F "priority=5"

    echo "Completed backup script"


    I have all of my stacks (defined using Docker Compose) in separate subdirectories within the parent directory /home/USERNAME/Docker/; this is the only subdirectory that matters on the host. I keep the backup script in the parent directory (in reality I have a few scripts in use, since my real setup is a bit more elaborate than the above). For each stack (i.e. subdirectory) I spin the stack down, make the backup and copy it up to Wasabi, then spin the stack back up, and progress through each stack until done. Lastly, I copy up the backup script itself (in reality I copy up all of the scripts I use for various things). Not included in the script, and outside the scope of the example, is the fact that I have the AWS CLI configured on the host with profiles to be able to interact with Wasabi, AWS, and Backblaze B2.
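    To illustrate the two parameter expansions doing the heavy lifting in the loop above (the path is a hypothetical example):

```shell
#!/bin/bash
# "${dir%*/}" strips the trailing slash; "${dir##*/}" then strips
# everything up to and including the last slash, leaving just the
# stack's directory name.
dir="/home/USERNAME/Docker/jellyfin/"   # hypothetical stack directory
dir="${dir%*/}"                         # -> /home/USERNAME/Docker/jellyfin
echo "${dir##*/}"                       # -> jellyfin
```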

    That should give you the general idea of how simple it is. In the above example I’m not doing some things I actually do, such as creating a compressed archive, validating it to ensure there’s no corruption, pruning files within the stacks that aren’t needed for the backup, etc. So don’t take this to be a “good” solution, but one that does the minimum necessary to have something.
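    As one sketch of that archive-and-validate step (the paths here are hypothetical), gzip’s built-in integrity test can catch a corrupt archive before it gets uploaded anywhere:

```shell
# Create a compressed archive of a stack directory, then verify its
# integrity with gzip -t before it would be uploaded.
mkdir -p /tmp/Docker/jellyfin
echo "config" > /tmp/Docker/jellyfin/settings.yml
tar -czf /tmp/jellyfin-backup.tar.gz -C /tmp/Docker jellyfin
gzip -t /tmp/jellyfin-backup.tar.gz && echo "archive OK"
```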


  • I keep all secrets and passwords in a selfhosted Bitwarden instance. I don’t maintain any kind of “documentation” since my deployment files and scripts are clean and tidy, so I can tell what’s going on at a glance by looking at them directly. They’re also constantly changing as I continuously harden services according to ever-changing standards, so it’s more efficient for me to just continue keeping my codebase clean than it is to maintain separate documentation that I myself will likely never read again once I’ve published it.

    I’m the only one that needs to know how my own services are deployed and what the infrastructure looks like, and it’s way faster for me to just look at the actual content of whatever deployment files or scripts I have.

    It’s a different story for things I work on professionally, because you can’t assume that someone else maintaining the same things as you has the same level of knowledge to “just know where to go and what to do”. But that doesn’t apply to personal projects where I just want to get things done.