My bet would be that JD would already be on the couch, face down. Donny is gonna have to find his own cushion.
“ok, now add a metric shit ton of swearing and further belittle parsers who can’t deal with tabs.”
It kinda does feel that way, doesn’t it?
Funny that predictive text seems to be more advanced in this instance but I suppose this is one of those scenarios that you want to make sure you get right.
When you came to space dock here, did you notice a sign out in front of my station that said “Dead Romulan Storage”?
I use Gitea myself, and when the big dust-up about the backing company came up, I didn’t feel like there was a big enough reason to migrate away from Gitea. Just because they could do something wasn’t enough of a reason for me. Sure, it’s great that there’s a fork being run that I could switch to, but as of today I don’t see a reason to switch.
I would also second Hugo, which I use for my personal site and blog (which I haven’t updated in a long time). The nice thing is that it has a minimal footprint when it comes to watching out for updates, unlike something like WordPress, which is known for becoming vulnerable if left unmaintained. It’s mostly a matter of looking out for old themes with vulnerable JavaScript.
Another popular option is Jekyll, and I honestly can’t remember why I picked Hugo over it. But if you don’t need dynamic content, why make things more complex?
I would start by checking for any sort of errors in your system logs, such as /var/log/syslog, or by using dmesg -w. In my experience, Linux is almost universally faster than Windows.
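If it helps, something along these lines is roughly where I’d look first (log locations vary by distro; systemd-based systems keep this in the journal rather than a plain /var/log/syslog):

```sh
# follow kernel messages as they arrive
dmesg -w

# pull recent errors/warnings out of the syslog
grep -iE 'error|warn|fail' /var/log/syslog | tail -n 50

# on systemd distros, the journal covers the same ground (errors from this boot)
journalctl -p err -b
```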
Maybe I don’t understand the problem, but the only time pinentry pops up for me is when I am signing something. In what sort of situations does it just randomly pop up, or with what specific apps/configuration does that happen at random?
I use apt-cacher-ng. Most of my use case, though, is caching packages for Docker image builds, as I build up to 200+ images daily. In reality, I have aggressive image caching so I don’t actually build anywhere close to that many each day, but the stats are impressive: 8.1 GB of data fetched from the internet versus 108 GB served from the acng instance, according to the recent-history stats page.
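For what it’s worth, wiring apt up to the cache is just a one-line proxy setting. A rough sketch, where the host is a placeholder for wherever your acng instance lives (3142 is its default port); the same line works in a Dockerfile RUN step before apt-get, and I drop the file again so the final image doesn’t depend on the proxy:

```sh
# point apt at the apt-cacher-ng instance (substitute your own host)
echo 'Acquire::http::Proxy "http://10.0.0.5:3142";' > /etc/apt/apt.conf.d/01proxy
apt-get update
```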
I have two internet connections - one is fiber and the other is cable. My cable is the backup connection and is a lower tier offering with a 1.2 TB/month cap while my primary fiber is 1gig symmetrical with no data cap. I use pfsense to handle failover in case of an outage.
I also use acme.sh. It has worked great for me and was dead simple to use. It’s super flexible in what it can do, from just renewing the certs to web server integration. Love the simple-to-use hooks available too.
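As a rough illustration of the flow I mean (the domain, paths, and reload command are placeholders, not my actual setup):

```sh
# issue a cert using webroot validation
acme.sh --issue -d example.com -w /var/www/example.com

# install it where the web server expects it, with a hook to reload on renewal
acme.sh --install-cert -d example.com \
  --key-file       /etc/nginx/ssl/example.com.key \
  --fullchain-file /etc/nginx/ssl/example.com.pem \
  --reloadcmd      "systemctl reload nginx"
```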
I uninstall apps from my phone that have ads that prevent the experience from being decent. I understand the need for ads, but if you force me to regularly watch 30-second ads? You’re gone.
To me, zfs is like the Gentoo of file systems. If you actually use the zfs features and do a lot of digging and experimentation before you go all in on it, it’s not bad; it really can be quite good. If someone wants a filesystem that they format and forget, ext4 and xfs are still solid options. I used to use ext4 for most of my filesystem needs and xfs for my long term storage on top of mdadm. I just really wanted zfs snapshots.
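The snapshot workflow is basically the whole sell for me. A quick sketch (pool and dataset names are just examples):

```sh
zfs snapshot tank/data@before-upgrade   # instant, near-zero-cost snapshot
zfs list -t snapshot                    # see what snapshots exist
zfs rollback tank/data@before-upgrade   # undo everything since the snapshot
zfs destroy tank/data@before-upgrade    # clean up once it's no longer needed
```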
I use Homer. Really simple, basic config and it looks nice. The stats are pretty cool for certain integrations and are easy to add - I’ve added a few myself for services that didn’t have them. Only issue is slow PR review.
Sonarr is working great here :)
I’m in a similar boat, except I just run everything in standard Docker containers, but I also use Telegraf, Influx, and Grafana for everything. I’ve gone mostly to Discord notifications for any alerts. If I run into a problem scenario, I figure out how to monitor it, add it via Telegraf, and add an alert. I’m still just using Grafana alerts, but it works fine for my home lab.
Even better if I can automate fixes to those problems. One of the best things I did was monitoring all of my network devices and all major hops. If I have internet or network issues, I know exactly where the problem is without having to troubleshoot. Lots of dpinger and shell scripts to input data to Telegraf.
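The scripts themselves are nothing fancy. A simplified sketch of the idea, using plain ping instead of dpinger to keep it self-contained (the hosts are examples), which Telegraf picks up via its exec input with data_format = "influx":

```sh
#!/bin/sh
# ping each hop and emit InfluxDB line protocol for Telegraf's exec input
for host in 192.168.1.1 10.0.0.1 1.1.1.1; do
    # field 5 of the summary line split on '/' is the average rtt
    rtt=$(ping -c 3 -q "$host" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
    if [ -n "$rtt" ]; then
        echo "hop_latency,host=$host rtt_ms=$rtt"
    else
        echo "hop_latency,host=$host unreachable=1i"
    fi
done
```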
You can do TCP proxying with nginx, but many of the same features available in haproxy are behind the paywall. In nginx, layer 4 connections are handled through streams, and you can do both TCP and UDP. I stick with haproxy for TCP streams with very few exceptions. HAProxy is most definitely more robust for situations where you have a pool of upstream servers; for a single upstream instance, nginx isn’t terrible. Most of the features I would use for better control of how the failover and balancing work aren’t available in open source nginx.
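For reference, a minimal stream setup looks something like this (addresses and ports are made up; the stream block sits at the top level of nginx.conf, not inside http):

```nginx
stream {
    upstream db_backend {
        server 10.0.0.10:5432;
        server 10.0.0.11:5432 backup;   # only used if the primary is down
    }

    server {
        listen 5432;                    # TCP by default; append "udp" for UDP
        proxy_pass db_backend;
        proxy_connect_timeout 2s;
    }
}
```

That covers the basic single-pool case; the finer-grained health checks and balancing knobs are where the paid version or haproxy come in.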
Well I didn’t have that on my bingo card.
Gordon Ramsay was spot on