It started with a popular Mastodon post on how to block OpenAI crawlers, I think, and I’d like to know whether people are actually implementing it.
Private project, not really security related: crawling robots.txt files to gather some statistics on which bots people most often exclude. Weirdly, I couldn’t find any recent or regularly updated stats on this.
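A minimal sketch of how such a tally could work (the domain list is a placeholder, and simply counting named user agents is my own rough heuristic, not necessarily how the project does it):

```python
# Sketch: tally which user agents show up in robots.txt files across a set of domains.
from collections import Counter
import requests

domains = ["example.com", "example.org"]  # placeholder list; a real crawl would use many more

agent_counts = Counter()
for domain in domains:
    try:
        resp = requests.get(f"https://{domain}/robots.txt", timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        continue  # skip unreachable sites or missing robots.txt
    for line in resp.text.splitlines():
        # Drop inline comments, then look for "User-agent:" directives
        line = line.split("#", 1)[0].strip()
        if line.lower().startswith("user-agent:"):
            agent = line.split(":", 1)[1].strip()
            if agent and agent != "*":
                agent_counts[agent] += 1

for agent, count in agent_counts.most_common(10):
    print(f"{agent}: {count}")
```

Note this only counts which bots are named at all; distinguishing actual Disallow rules from Allow rules would need a bit more parsing of the group that follows each directive.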
I really like the app for my personal reading tracking. I’ve been using it for a couple of years, and this year (?) there was a huge update that improved it a lot (better UX/UI and statistics, if I’m not mistaken).
It’s mostly an app that does what it should, but not more, and gets out of the way, which is awesome.
Awesome, I didn’t know that either! TIL
Thanks for recommending it, it does look really nice. I’ll definitely check it out when a fitting project comes along.
I mean, this is a light-hearted meme, no offense to the people actually fixing things.
But at a company like GitHub, the first status update should be (and probably is) created semi-automatically, just approved by a human. Afterwards, they should follow a process to assign an incident communication lead, who takes over all communication so that the rest of the team can focus on fixing the incident.
@GitHub: Hire me for more incident response tips from the backseat! :P
I’m actually not that into self-hosting itself (it feels too close to my day job). But I love the idea of it, and I do host my own RSS reader: it’s selfoss (PHP + SQLite, so very simple), and I have been using it ever since Google Reader shut down. It runs on my uberspace.de instance.
This sounds pretty awesome tbh. Will check out the two books mentioned.