• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • Discord has a nice UI and lots of neat features, and it's especially popular among gamers, but it can hardly be recommended. See https://www.messenger-matrix.de/messenger-matrix-en.html for a comparison with other communication programs. Discord has just about the most red flags there can be. It's essentially spyware: it supports the fewest encryption, security and privacy techniques of all the messengers compared there, and everything you type, write, say and show on it is processed and analyzed by the Discord servers and probably sold on to 3rd parties. Discord can't make a living from selling paid features alone; they have to sell tons of user data, and since all that data is basically unencrypted, everything's free for the taking. Discord doesn't even try to hide it in the terms of service. They plainly state that they're collecting everything. Well, at least they're honest about it, that's a minor plus. If I had to use Discord, I'd only ever use the web browser version, and I'd at least block its API endpoints for telemetry and typing data (it doesn't only collect what you send, it also collects what you start typing).

    Matrix, on the other hand, is a protocol; Element is a well-known Matrix client implementing it. On Matrix, everything is encrypted with quite state-of-the-art encryption, so technologically it's much more advanced than Discord. It's similar in scope, but it won't reach feature parity with Discord. Discord is a much faster moving target, and the Discord devs have it much easier because they need to take care of, oh, exactly nothing while developing it further, while adding a new feature to Matrix is much more complicated, because almost everything has to be encrypted and still work for the users inside the chat channels.

    This is just broadly written for context. The two are similar, and you should prefer Matrix whenever possible, but I do get that Discord is popular, and as with any popular social media or communication tool, at some point you have to bite the bullet if you don't want to be left out. I'm just urging everyone to keep their communication and usage on Discord to an absolute minimum, never install any locally running software from them (or only inside a sandbox), and when you're chatting or talking on Discord, to restrict yourself to the topics at hand (probably gaming) and not discuss anything else there. Discord is, by all measurements I know, the worst privacy offender I can think of. Even worse than Facebook Messenger, WhatsApp and such, because those at least have some form of data protection implemented, even if they also collect a lot, especially metadata.


  • Choice of distro isn't as important as it used to be. There's containerization and distro-independent packaging like Flatpak or AppImage, and most somewhat popular distros can be made to run anything, even things packaged for other distros. Sure, you can make things easier for yourself by choosing the right distro for the right use case, but that's unfortunately a process you need to go through yourself.

    Generally, there are 3 main "lines" of popular Linux distros: RedHat/SuSE (counted together because they use the same packaging format, RPM), Debian/Ubuntu, and Arch. Fedora and OpenSuSE are derived from RedHat and SuSE respectively, Ubuntu is derived from Debian but also stands on its own feet nowadays (although the two will always be very similar), Mint and Pop!OS are both derived from Ubuntu (so they will always be similar to Ubuntu and Debian as well), and Endeavour is derived from Arch.

    I'd recommend using Fedora if you don't like to tinker much; otherwise use Arch or Debian. You can't go wrong with any of those three - they've been around forever and they are rock solid, with strong community backing (plus company backing in the case of Fedora). Debian is, depending on the edition, less up to date than the other two, but still a rock solid distro that can be made more current by using the testing or unstable edition and/or by installing backports and community-made, up-to-date packages (see the sketch below for backports). It's more work to keep it updated, of course. Don't be misled by Debian's labels - Debian testing at least is as stable as any other distro.
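    For the backports route, here's a minimal sketch (the codename "bookworm" and the package name are just placeholders for whatever stable release and package you actually use):

    ```
    # enable the backports repository for your Debian release (example codename: bookworm)
    echo "deb http://deb.debian.org/debian bookworm-backports main" | \
      sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update

    # explicitly pull a newer version of a package from backports
    sudo apt install -t bookworm-backports some-package
    ```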

    Ubuntu is decent, it just suffers from some questionable Canonical decisions which make it less popular among veterans. Still a great alternative to Debian if you're hesitant about Debian because of its software version issues but still want something very much like Debian. It's more current than Debian, but not as current as a rolling or semi-rolling release distro such as Arch or Fedora.

    OpenSuSE is probably similar in spirit and background to Fedora, but less popular overall, and that's a minus because you'll find less distro-specific help for it. Still, it's maybe a "hidden gem" - whenever I read about it, it's always positive.

    Endeavour is an alternative to Arch if pure Arch is too "hard" or too much work. It's probably the best "easy Arch-based" distro out of all of them, not counting some niche stuff like Arco etc.

    Mint is generally also very solid and very easy, like Ubuntu, but probably better. If you want to go the Ubuntu route but don’t like Ubuntu that much, check out Mint. It’s one of the best newbie-friendly distros because it’s very easy to use and has GUI programs for everything.

    Pop!OS is another Ubuntu/Mint-like alternative, very current as well.

    For gaming and new-ish hardware support, I’d say Arch, Fedora or Pop!OS (and more generally, rolling / semi-rolling release distros) are best suited.

    Well that’s about it for the most popular distros.


  • Check out Syncthing for a peer-to-peer (device to device) solution which doesn't necessarily need a server, although having an always-on device like a server is still great for Syncthing as well. It's easy to use, only slightly more involved than setting up Nextcloud or Dropbox or whatever, and all done via a web-based GUI. It works surprisingly well - stable and conflict-free, considering the complex syncing it has to do all the time. Basically, you install Syncthing on all devices you want to keep in sync; they find each other via their device IDs when they're online and automatically sync the directories you've shared between them. Of course it's open source and cross-platform too.
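    A minimal sketch of what that setup might look like on a systemd-based distro (package and service names can differ per distro, so treat this as an assumption rather than exact instructions):

    ```
    # install Syncthing from your distro's repositories (Arch shown; use apt/dnf/zypper elsewhere)
    sudo pacman -S syncthing

    # run it as your own user and start it automatically on login
    systemctl --user enable --now syncthing.service

    # all configuration happens in the local web GUI
    xdg-open http://127.0.0.1:8384
    ```

    From the GUI you then add the other devices by their IDs and choose which folders to share with them.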


  • kyub@discuss.tchncs.de to Linux@lemmy.ml · What is the /opt directory? (43 points, edited, 4 months ago)

    Let's say you want to compile and install a program for yourself from source. There are generally a lot of choices here:

    You could (theoretically) use / as its installation prefix, meaning its binaries would then probably go underneath /bin, its libraries underneath /lib, its asset files underneath /share, and so on. But that would be terrible because it would go against all conventions. Conventions (FHS etc.) state that the more “important” a program is, the closer it should be to the root of the filesystem (“/”). Meaning, /bin would be reserved for core system utilities, not any graphical end user applications.

    You could also use /usr as the installation prefix, in which case it would go into /usr/bin, /usr/lib, /usr/share, etc… but that's also a terrible idea, because your package manager (or rather, the package maintainers of the packages you install from your distribution) uses that as its installation prefix. Everything underneath /usr (except /usr/local) is under the "administration" of your distro's packages and package manager, so you should never put other stuff there.

    /usr/local is the exception. It's where it's safe to put any other stuff. Then there's also /opt. Both are similar. Underneath /usr/local, a program would traditionally be split up by file type - binaries would go into /usr/local/bin, libraries into /usr/local/lib, etc. - everything's split up. But as long as you made a package out of the installation, your package manager knows which files belong to this program, so it's not a big deal. It would be a big deal if you installed it without a package manager though - then you'd probably be unable to find all of the installed files when you want to remove them. /opt is different in that regard - here, everything is underneath /opt/<programname>/, so all files belonging to a program can easily be found. As a downside, you'd always have to add /opt/<programname>/ to your $PATH if you want to run the program's executable directly from the command line. So /opt behaves similarly to C:\Program Files\ on Windows, while the other locations are meant to be more Unix-style and split up each program's files. But everything in the filesystem layout is a convention, not a hard and fast rule - you could always change everything, it's just not recommended.
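    As a rough sketch of the /opt route (assuming a hypothetical autotools-based program called "foo"; CMake, Meson etc. have equivalent prefix options):

    ```
    # build and install the hypothetical program "foo" under /opt
    ./configure --prefix=/opt/foo
    make
    sudo make install

    # make its executables reachable from the command line
    # (append this line to ~/.profile or ~/.bashrc to make it permanent)
    export PATH="$PATH:/opt/foo/bin"
    ```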

    Another option altogether is to just install it on a per-user basis into your $HOME somewhere, probably with ~/.local/ as the installation prefix. Then you'd have binaries in ~/.local/bin/ (which is also where I place my self-written scripts and small single-file executables), and so on. Using a hidden directory like .local also means you won't clutter your home directory visually so much. Also, ~/.local/share, ~/.local/state and so on are already defined by the XDG FreeDesktop standards anyway, so using ~/.local is a great idea for installing stuff for your user only.
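    Again just a sketch with the same hypothetical program, this time installed only for your own user (no root needed):

    ```
    # install the hypothetical "foo" into your home directory only
    ./configure --prefix="$HOME/.local"
    make
    make install   # no sudo required, nothing leaves your home directory

    # many distros already put ~/.local/bin on the PATH; if yours doesn't:
    export PATH="$HOME/.local/bin:$PATH"
    ```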

    Hope that helps clear up some confusion. But it's still confusing overall, because the FHS is a historically grown standard and the Unix filesystem tree isn't really 100% rational or well thought out. Modern Linux applications and packaging strategies do mitigate some of its problems and try to make things more consistent (e.g. by symlinking /bin to /usr/bin and so on), but there are still several issues left over. And then you have 3rd party applications installed via standalone scripts doing what they want anyway. It's a bit messy, but if you follow some basic conventions and sane advice then it's only slightly messy. Always try to find and prefer packages built for your distribution when installing new software, or distro-independent packages like Flatpaks. Only as a last resort should you run "installer scripts" which do random things without your package manager knowing about anything they install. Such installer scripts are the usual reason why things become messy or even break. And if you build software yourself, always try to create a package out of it for your distribution, and then install that package using your package manager, so that your package manager knows about it and you can easily remove or update it later (a minimal example follows below).
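    On Arch-based distros, for example, a minimal package recipe could look roughly like this (a simplified, assumed PKGBUILD for the same hypothetical "foo"; Debian and Fedora have their own equivalents such as debhelper or rpmbuild):

    ```
    # PKGBUILD (minimal sketch, not a complete real-world example)
    pkgname=foo
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    source=("foo-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "foo-$pkgver"
      ./configure --prefix=/usr   # /usr is fine here, because pacman will track every file
      make
    }

    package() {
      cd "foo-$pkgver"
      make DESTDIR="$pkgdir" install
    }
    ```

    Build and install it with "makepkg -si", and from then on pacman knows about every installed file and can cleanly remove or upgrade the program.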


    1. False promises early on

    We desktop Linux users are partly to blame for this. Around 1998 there was massive hype and media attention around Linux being a viable alternative to Windows on the desktop; a lot of magazines and websites claimed that. Well, in 1998 Linux could be seen as an alternative, but not a mainstream-compatible one. 25 years later it's much easier to argue that it is, because it truly is easy to use nowadays, but back then it certainly wasn't yet. The sad thing is that we Linux users caused a lot of people to think negatively about desktop Linux, just because we tried pushing them towards it too early. A common problem in tech, I think: tech which isn't quite ready yet gets hyped as ready. Which leads to the second point:

    2. FUD / lack of information / lack of access to good, up to date information

    People see low adoption rates, hear about "problems", think it's a "toy for nerds", or still have an outdated view of desktop Linux. These things stick, and probably also cause people to think "oh yeah, I've heard about that, it's probably not for me".

    3. Preinstallations / OEM partnerships

    MS has a huge advantage here, and a lot of the really casual, ordinary users out there will just use whatever comes preinstalled on their devices, which in almost 100% of cases is Windows.

    4. Schools / education

    They still sometimes or even often(?) teach MS product usage, to “better prepare the students for their later work life where they almost certainly use ‘industry standard’ software like MS Office”. This gets them used to the combo MS Windows+Office at an early age. A massive problem, and a huge failure of the education system to not be neutral in that regard.

    5. Hardware and software devs ALWAYS ensure that their stuff is compatible with Windows due to its market share, but often don't ensure this for Linux, and whether 3rd-party drivers are 100% feature-complete or even working at all is never guaranteed

    So you still need to be a bit careful about what hardware and software you use on Linux, while for Windows it's pretty much "turn your brain off, pick anything, it'll work". This is just a problem of adoption rate though: as Linux grew, its compatibility grew as well, so this problem has already decreased by a lot, but until everything also automatically works on Linux, and until most devs port their stuff to Linux as well as Windows and OS X, desktop Linux will still need even more market share. Since this is a known chicken-and-egg effect (Linux has low adoption because software isn't available, but for software to become available, Linux market share needs to grow), we need to do it anyway, just to get out of that dilemma. Just like Valve did when they said one day "ok f*ck this, we might have problems for our main business model when Microsoft becomes a direct competitor to Steam, so we must push towards neutral technologies, which means Linux". And then they did, and it worked out well for them, and the Linux community as a whole benefited from it by having more choice in which platforms their stuff can run on. Even if we're talking about a proprietary application here, it's still a big milestone when you can suddenly run so many more applications/games on Linux than before, and it drives adoption rates higher as well. So there you have a company that just did it, despite market share dictating that they shouldn't have. More companies need to follow, because that will also automatically increase desktop Linux market share, and this is all inter-connected: more market share, more devs, more compatibility, more apps available, and so on. Just start doing it, goddamnit. Staying on Windows means supporting the status quo and not helping to make any positive progress.

    6. Either the general public needs to become more familiar with CLI usage (I'd prefer that), or Linux desktop applications need to become more feature-complete so that almost everything a regular user needs can also be done via GUI

    This is still not the case yet, but it's gotten better. Generally speaking: if you're afraid of the CLI, Linux is probably not for you. But you shouldn't be afraid of it - you aren't afraid of chat prompts either, and most commands are easy to understand.

    7. The amount of choice the user is confronted with (multiple distros, desktop environments, and so on) can lead to option paralysis

    So people think they either have to research each option (extra effort required), or are likely to “choose wrong”, and then don’t choose at all. This is just an education issue though. People need to realize that this choice isn’t bad, but actually good, and a consequence of an open environment where multiple projects “compete” for the same spot. Often, there are only a few viable options anyway. So it’s not like you have to check out a lot. But we have to make sure that potential new users know which options are a great starting point for them, and not have them get lost in researching some niche distros/projects which they shouldn’t start out with generally.

    8. “Convenience is a drug”

    Which means a lot of people, even smart ones, will not care about any negatives as long as the stuff they're using works without any perceived user-relevant issues. Which means: they'll continue to use Windows even after it comes bundled with spyware, because they value the stuff "working" more than things like user control/agency, privacy, security and other more abstract things. This is problematic, because they put themselves into an absolute dependency that they can't get out of anymore, and where all sorts of data about their work, private life, behavior and so on is leaked to external 3rd parties. It also raises a high barrier to convincing them to become more technically independent: why should they make an effort to switch away from something that, in their eyes, works? This is a huge problem. It's the same with Twitter/X or Reddit: not enough people switch away from those, even though it's easy to do nowadays, and even after so much negative press lately most still stick around. It's very hard to get the general population moving to something better once they've settled on one thing. But thankfully, at least on Windows, the process of "enshittification" (forced spyware, bloatware, adware, cloud integrations, MS accounts) continues at a fast pace, which means many users won't need to be convinced to use Linux - they will at some point be annoyed by Windows/Microsoft itself. Linux becoming easier to use and Windows becoming more annoying and user-hostile at the same time will accelerate the "organic" Linux growth process, but it'll still take a couple of years.

    9. “Peer pressure” / feeling of being left alone

    As a desktop Linux user, chances are high that you're an "outsider" among your peers, who probably use Windows. Not everyone can feel comfortable in such a role over a longer period of time. Again just a matter of market share, but it can still pose a psychological issue in some cases. Or it can lead to actual peer pressure, for example when some Windows game isn't fully working for the one Linux guy and he gets pushed to move to Windows just to get that one game working.

    10. Following the hype of new software releases, and thinking that you always need the most features or the "industry standard" when you don't really need it

    A lot of users probably prefer something like MS Office, with its massive feature set and "industry standard" label, over the libre/free office suites, because something with fewer features could be interpreted as being worse. But here it's important to educate such users that it really only matters whether all the features they NEED are present, and if so, it doesn't matter for them which suite they use. MS Office, for example, has a multi-year lead in development (it was already dominating the office suite market world-wide when Linux was still being born, so to speak), so of course it has accumulated more features over that long time, but most users actually don't need them. Sure, everyone uses a different subset of features, but it's at least likely that the libre office suites contain everything most users need. So it's just about getting used to them. Which is also hard - making a switch, changing your workflows, etc. - so it would be better if MS Office also worked on Linux, so that people could at least continue to use it, even though that's not recommended (proprietary, spyware, MS cloud integrations). But since I'm all for having more options, it would at least be better in general for it to be available as well. Until that happens, we need to tell potential new users that they can probably also live with the alternatives just fine.



  • Well, ever since Win8 or Win10 I stopped having much sympathy for Windows users. They deserve things like that if they still remain on that ship. Since these things are introduced in small portions (salami tactics), the users slowly become familiar with them and just accept them because they can't change anything anyway, thus slowly developing a defeatist attitude towards all the bloat, ads and spying - AKA learned helplessness. In a couple of years, Windows will be absolutely horrible, but people will be used to it. I'll just say this: Windows used to NOT have this kind of crap integrated.


  • Yes. Even though not using all this crap may sometimes feel like you’re missing out on certain stuff, it is still the right thing to do. I don’t support abusive behavior, bloatware and spyware, so companies doing that will not receive any money from me if I can help it.

    We're basically just one step ahead of the general population, which basically (still) eats up anything that's being served by big tech corporations, without any second thoughts or hesitation. The general population IMHO is currently at the stage that nerds were at like 25 years ago, in that they tend to be naively enthusiastic about every new piece of tech. But nowadays, tech can be abusive towards its users, and so it's important to choose the right tech. The general population hasn't made that realization yet (or they don't care, which also must change).

    The media is also partly to blame for this. For example, almost every new review of a Samsung or Apple phone is very positive, usually just reporting on the advancements in hardware and UI without even mentioning any of the downsides these devices have on the software side. And when reviews don't even mention downsides anymore, there's a lack of information available.

    And it's not even that regular users don't like the alternatives. For example, I convinced a friend to move from a regular spyware-infested Samsung Galaxy phone (which he was using all the time, and he even wanted to buy a new one) to a Pixel with GrapheneOS. He's not missing anything; even though his transition wasn't super smooth, overall he's happier now, and he mentioned that he likes how clean and unencumbered the OS is. He doesn't particularly care about the privacy and security improvements he now also enjoys, which is a bit sad, but at least he's happy with the lean and unmodified Android (open source) experience.

    So, as usual, information/knowledge is power. People need to know that alternatives exist and that some alternatives are actually really, really good. And they need to know what the problems are with the “default stuff everyone uses”, so that they can make better informed decisions in the future. They also need to become less dependent on big tech companies. The alternatives have little to no PR and thus little public visibility in comparison, except via word of mouth, so we need to make the most out of that.


  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Windows 11 vs Linux supported HW (2 points, edited, 11 months ago)

    thorough hardware certification process

    Probably marketing speak for "an intern tested it once with the default setup and reported there were no errors".

    Broken standby on Linux

    Standby is sometimes broken because of broken UEFI/ACPI implementations, which the Windows drivers were made to respect and work around. The Linux drivers, which are often developed not by the hardware manufacturer itself but by 3rd parties implementing them according to the available docs/specifications, then end up semi-broken, because implementing something according to the specification unfortunately doesn't mean much if there are quirks or bugs you have to work around as well. This improves over time though, with more adoption of Linux. When you compare the hardware support of Linux today vs. 20 years ago, it's become much, much better already, due to more developers and users working on it and reporting issues, and also more and more hardware vendors becoming actively involved in Linux driver development.
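    If standby misbehaves, a quick way to see what the firmware actually offers (just a sketch; the paths are standard on current kernels, but the output differs per machine):

    ```
    # which suspend variants the firmware/kernel expose; the one in [brackets] is active
    cat /sys/power/mem_sleep

    # look for ACPI/suspend-related messages and errors from the current boot
    sudo dmesg | grep -iE 'acpi|suspend'
    ```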

    GPU bugs and screen flickering on Linux

    Various hangs and crashed

    Definitely not normal, but it's likely just a small configuration or driver issue. Since you didn't provide any details, I'll just leave it at "easy to configure properly". I get that it would be cooler if it worked OOTB, but sometimes that isn't the case, and it goes both ways. It's hard to generalize from a few occurrences, but I also had problems long ago with a mainboard whose Realtek audio drivers didn't work on Windows. I don't remember the details because it was long ago, but I had to hunt for a very specific driver version from Realtek (which wasn't easy to find), and couldn't use the one the mainboard vendor provided, nor the one Windows shipped by default. Anyway, of course Windows is generally better supported on most notebooks, I won't deny that, but that's simply due to market share, not because it's somehow made better. That's important to realize. If Linux had 80% market share, it would be the other way around: every manufacturer would absolutely ensure that their drivers work on all their distro targets and all their hardware models. In the Linux world, drivers are sometimes made by 3rd-party developers because otherwise there would be no driver at all, and a mostly functional driver is better than none. And that's also just because vendors CAN ignore Linux based on market share. They shouldn't, but they can, and it makes short-term financial sense to do so, so it happens. Of course, if they market some of their models as explicitly Linux-friendly, they should absolutely ensure that such things work OOTB. But even if they don't, it's usually not hard to make it work.

    new laptops and Windows 11, basically anything works

    Only because the manufacturer HAS to ensure that it works, while they DON'T HAVE to ensure that Linux plays nice with that hardware as well. I recommend either using notebooks from Linux-specific manufacturers (I had good experiences with Tuxedo, for example) or continuing to use the "Linux-centric" notebook models from Dell/HP/… and then simply troubleshooting any shortcomings these might have. I don't know your model, but it's very likely a simple configuration issue. And I wouldn't recommend using the manufacturer's default OS, especially not on Windows notebooks. Always reinstall a fresh, unmodified OS and work from there. I'd even assume that if you leave out any vendor-specific software or kernel modules, your problems will probably vanish already.

    I have effectively added €500 to my budget

    That's an unfortunate reality in other areas as well. Smaller vendors can't produce in mass quantities, so they have to sell their stuff for more money, even though that seems counter-intuitive at first. This is also the case with e.g. the Librem 5 mobile phone, which is also very expensive (but a great option if you want a mainline Linux phone) [in this case it's very expensive because you not only pay for the hardware, but also for the software development time], or really anything which isn't cheaply produced at a mass scale with volume discounts. So in a sense, if you want to change the status quo, you have to pay extra. So yes, buying a brand new Linux notebook isn't cheap, unless you specifically want to use an older notebook that Linux also happens to run on. But on the other hand, buying a pure Linux notebook should generally ensure that it will work well - similar to how, when you buy hardware from Apple, they will ensure that OS X runs well on it.

    I don't think you can generalize anything from your or your friend's experience, so it seems likely that your friend misconfigured something or installed something the wrong way, leading to such stability problems. General tip: stability issues are almost always driver-related, same as on Windows. So first try to remove all non-essential drivers (kernel modules on Linux) and see whether that improves stability. And, of course, check the logs - in most cases they will point out the issue.

    I've also installed Linux on several "Windows-only" (not marketed as Linux-compatible) notebooks and it ran just fine without ANY stability or graphics issues. I have a Lenovo ThinkPad for work and it runs Arch Linux; it's probably more stable than the Win11 it's supposed to run with. At least among my colleagues who run Win11 on it, I'm the only one who hasn't had a driver or update issue within its lifespan, and one of those colleagues even had to reinstall Win11 after a borked update. I also personally use Tuxedo notebooks (Linux-compatible by default) and they're great as well. But of course I never use vendor-supplied software, so I'm not affected if such software behaves badly. I always configure my systems the way I want them, starting from a vanilla base.
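    A minimal sketch of that debugging flow (standard tools on a systemd-based distro; the module name below is just a placeholder, since the actual suspects are machine-specific):

    ```
    # show only errors and worse from the current boot
    journalctl -b -p err

    # list loaded kernel modules, then temporarily unload a suspect one
    lsmod
    sudo modprobe -r some_vendor_module   # placeholder name

    # keep it from loading at the next boot
    echo "blacklist some_vendor_module" | sudo tee /etc/modprobe.d/blacklist-vendor.conf
    ```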


  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Windows 11 vs Linux supported HW (5 up, 1 down, edited, 11 months ago)

    It depends. It could also be a better idea to introduce a sort of "IT driver's license", so that everyone has the basic understanding/skills needed to use their devices. Sure, modern software stacks are ridiculously complex and no one understands every detail down to each machine code/assembly instruction, so a big amount of abstraction and simplification is always needed, but I don't think it's a good idea to demand that someone with literally zero knowledge whatsoever should be able to perfectly use an OS or device. That's not even possible.

    I see it with my mother: she started from zero knowledge, but she had to learn some basics to be able to do the few things she needs to do. Of course she uses Linux. No prior Windows knowledge means a much easier start with Linux, of course; she wouldn't have been able to use Windows with zero knowledge either. This is a point that some forget: even Windows users need knowledge to be able to use Windows, and they probably acquired that knowledge in much earlier years. That Windows knowledge also works against you when building up Linux (or even OS X) knowledge, because Windows works quite differently from a Unix-like OS. This is not irrelevant: a Windows user who spent like 30 years in Windows has a much harder time learning Linux than someone who didn't. But, again, it's not really the fault of Linux that you indoctrinated yourself with Windows-only, MS-product-specific knowledge over the last decades. This is probably the biggest problem there is, because almost everyone on the planet has already acquired some amount of Windows knowledge in the past, and it works against you when trying to switch. Windows knowledge is mostly Windows-specific.

    When learning about IT, you should make sure that you learn things in a preferably OS-agnostic way. Which is also the reason why schools etc. should never teach "using MS products". They should always teach fundamentals, regardless of what you use afterwards. And those fundamentals should of course not be taught using commercial products, but rather open source software.

    Then there are some myths which MS and Apple have managed to establish in the broader population which aren't true, for example that CLI/terminal usage is archaic and has no place on modern desktops anymore. CLI usage will always remain a fast alternative for a lot of tasks which are hard or even impossible to do via GUI. Even MS has realized this and introduced PowerShell, a new terminal, and winget, for example, as well as WSL (which was originally, and still mostly is, used to get access to powerful Linux-based CLI utilities). Yet a lot of people still seem to think that the CLI is obsolete or "hard". Sure, if you do some scripting or complex one-liners, it can be too hard for someone without strong IT knowledge. But most commands are really basic and easy to understand. Even my mother is able to use basic command-line utilities, and she sometimes even prefers them over clicking around in the GUI. To claim that this is impossible or too hard to learn for a Windows user is, I don't know, at least untrue, and probably even an insult to your own intelligence. And the main reason why most Linux users suggest doing things via the command line is that this is an almost distro- and desktop-independent way of doing things.
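    To illustrate how basic most day-to-day commands are (just a few generic examples; only the package manager call is distro-specific):

    ```
    ls ~/Documents                   # list the files in a directory
    cp report.odt ~/Backup/          # copy a file
    grep -r "invoice" ~/Documents    # search for text inside files
    sudo pacman -Syu                 # update the whole system (Arch; apt/dnf elsewhere)
    ```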

    Also, I'm not a big fan of the "fan" label here. Regardless of whether or not you like Linux (I like Linux as an OS more than Windows, because I think the Unix way is better, but it's also about so much more), I see a neutral, free/libre open source (FLOSS) operating system as a necessary base for our digital lives, and so I see Windows or OS X as intrinsically worse. I don't see this as a kind of war between different products on equal footing. One product denies you any rights and control (and in more recent times also extracts even more value and data from you than just the price you paid for the license), and the other gives you full rights and control (and pretty much never extracts anything more from you). It's not OK that we use our devices for so many things in life nowadays, that all aspects of our lives are handled via digital means, and yet the most popular operating systems are still 100% proprietary black boxes fully controlled by big US companies. This needs to change, and it should have changed a long time ago. And Linux is simply the most mature and best supported FLOSS operating system of them all. I actually wouldn't care if it were FreeBSD or OpenBSD or whatever instead, but I see Linux as the most mature, well-supported and mainstream-viable option here. I only care that it's not a damn black box I don't have any real control over.

    We need (almost) everyone on open technologies like Linux, because the future (or even present) for Windows users looks like this: no control, no privacy (plus AI being trained on your work/data as well), big vulnerability when (not if) MS gets hacked (they're a huge, juicy target, and we already saw them being compromised twice in the last couple of years), and a pricey subscription to MS' services which keeps getting pricier once you're successfully vendor-locked-in (once all your servers, desktops and data are in MS' cloud, you won't be able to easily leave their services anymore, so they are free to increase prices until it hurts). Even if you happen to like what MS offers, does that really seem like "the future" of computing to you? To me, that's backwards - or mainframe history repeating itself. Moving into proprietary clouds with vendor lock-in only really benefits the cloud provider, which is why they want all users to join the "party".

    I'm not a big fan of Stallman in general, but his fundamental propositions, e.g. that FLOSS software is intrinsically better than proprietary black boxes, are true. I wonder how long we still need as a society to arrive at that realization. I assumed that the Snowden revelations, as well as the disaster that Windows 10 was for privacy, would have already started a change in thinking about such things. But that apparently wasn't enough (strangely). I'm not sure what else would need to happen, but I guess something like MS first shoving all their users into their cloud, and then MS being hacked (again), but this time with malicious auto-updates being pushed to all MS software users as well, impacting tons of businesses. Then, maybe, people will start to wonder whether it was such a great idea to play along with what MS envisioned as the "grand future". Unfortunately I see parallels with human behavior concerning climate change here as well. It's like we first have to destroy our climate and suffer the consequences before we realize it's a bad idea and we should do things differently RIGHT NOW. We are incredibly short-sighted and we only learn AFTER disasters, even ones that were announced long before. It's tragic.

    And to those people who know or think they could start using Linux but still use Windows because it's more "aesthetically pleasing" or whatever other irrelevant aspect they make up to "justify" staying on that sinking MS ship in 2023: please reconsider your priorities.


  • Wasn't ignoring it. What matters is whether the software supports the features you NEED. That there will always be more features added doesn't mean that you need all of them; what matters is whether the software is "feature-complete" for your specific needs. Look at MS Office. It's the "industry standard" office suite (that term sucks btw, it just means "most popular"), yet it has features that the majority of people do not need at all (and probably don't even know exist). So LibreOffice or OnlyOffice, for example, can be viable replacements in such cases. You get what you need out of your office suite AND you have it in FOSS form with 100% user control, without a company stealing sensitive info from your documents in the background.


  • Best option: Use Linux and alternatives to the Adobe stuff, if possible. These programs continue to evolve; at some point you might not need the Adobe stuff anymore.

    Second best option: Use Linux and run the Adobe stuff inside a Windows VM (a minimal sketch follows after this list). GPU passthrough is not that difficult to configure if you need it. You can run your Windows games on Linux in many cases anyway, so you most likely won't need a Windows VM with GPU passthrough just for gaming.

    Third best option: Use OS X instead of Windows or Linux, and run the Adobe stuff on OS X (it's also natively supported there).

    Worst option: Continue to use Windows
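    For the VM route, here's a rough sketch of creating a basic Windows guest with libvirt (this assumes virt-manager/libvirt are installed and you have a Windows ISO; GPU passthrough is a separate, more involved step on top of this):

    ```
    # create a basic Windows 11 guest (names, paths and sizes are just example values)
    virt-install \
      --name win11-adobe \
      --memory 8192 \
      --vcpus 4 \
      --disk size=100 \
      --cdrom ~/isos/Win11.iso \
      --os-variant win11    # if "win11" is unknown to your osinfo database, try win10
    ```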


  • But Windows is broke. I recommend using it only if you truly have to (e.g. because of software dependencies for your work). If you think or know you don't need it, then don't use it, don't recommend it, and also please don't claim it's not harmful or "just a tool like everything else". Tools don't spy on their users. The monopoly situation caused by too many users still using it is in itself harmful for competition and alternatives, and on top of that its users suffer massive privacy invasions.

    If you don't want to continue using Windows (which is an important realization to make), but feel like you can't use Linux yet for whatever reason, use OS X. It's sort of middle of the road: also not great for various reasons, and also not recommended, but it will at least ease your transition to Linux later on because OS X is Unix-like as well, and it's at least slightly less bad than Windows. Re-evaluate from time to time whether you still need Windows or OS X, and if not, switch to Linux.


  • Some issues aren’t Linux problems but more like anti Linux solutions

    These exist, but often you can avoid them by using alternatives. I recommend not supporting Linux-hostile companies/services at all - problem solved. This problem will continue to exist as long as Linux has low market share. So the answer is not moving away from Linux, but rather towards it, so that companies can't ignore Linux users anymore. Also, using Linux has many advantages in terms of user control/agency, privacy and security.

    He hates Ubuntu because he feels like Ubuntu diminishes the reasoning to get Linux in the first place

    That's nonsense; there is no "true way" to use Linux. It's an operating system, and there are distros which abstract a lot of lower-level stuff away just like Windows or OS X do (e.g. Ubuntu, Mint, Fedora, OpenSuSE, …) and there are distros which don't (or which simply don't care about including such things) and are thus considered more "for advanced users", where more stuff needs to be maintained/configured by hand and fewer GUI-based tools are available by default. Some people actually like that sort of minimalism and the increased control, but of course it's not for everyone.

    Also, if he has trouble with command-line usage, then it doesn't make any sense for him NOT to use e.g. Ubuntu, because then he obviously needs the "hand-holding" of an "easier to use" distro like Ubuntu. So he shouldn't complain about it. This is not meant to disrespect the accomplishments of Ubuntu - the most popular OSes/distros are the ones which are easier to use and which abstract a lot of things away, because otherwise they'd just be distros for more tech-savvy people, period. Then again, if he's a dev, he should in theory be more than tech-savvy enough to use Linux as a daily driver.

    Then again, he doesn't like that 99% of apps out there like discord just don't have a good Linux path so you have to randomly trust some potentially bad actor to keep discord updated.

    I recommend using the Flatpak versions of GUI apps in general. It's very easy, and it's a trusted source for tons of applications. For Discord in particular, though, I don't recommend it - I'd just use the web version, tucked away in a browser (ideally sandboxed) without too many permissions on your system. Because Discord is spyware, it's best to keep it in check if you have to use it, and running it in a browser automatically limits the amount of data it can gather about your system.
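    As a sketch of the Flatpak route for GUI apps in general (Flathub is the de facto standard remote; the app ID is just one example):

    ```
    # add the Flathub remote once
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

    # install a GUI app from it, e.g. LibreOffice
    flatpak install flathub org.libreoffice.LibreOffice

    # keep all Flatpak apps up to date
    flatpak update
    ```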





  • SuSE @ 1999, then Slackware in the same year.

    I tried SuSE (bought as a boxed set) as an alternative to the annoying, unstable and insecure Windows 9x; it was also the time when Linux as an alternative desktop OS was starting to get hyped in the media, especially in regards to stability and security. Well, it wasn't hard to beat Win9x in those areas. I tried it a bit, didn't like it that much (I think it was KDE 1.x) and also didn't understand much of it. I was still intrigued though, and wanted to really learn it starting from the command line, but I felt I couldn't with all the SuSE stuff like YaST being preinstalled.

    So I bought a big book (by Michael Kofler), which was the de facto standard book for really learning Linux from the ground up back then, and I chose a distribution which would be much more minimalistic (because I felt that would make it easier to learn). So I installed Slackware. I used it for like 3 years and learned a lot (all the basics). It was a hard journey though, and other distros started appearing which promised to be more modern or better than Slackware.

    So I tried Debian next, then Crux, then Arch. This was all around 2002-2006. I can't remember exactly how long I used each, but I do know I used Slackware for quite a while, then Debian rather briefly, then Crux also not very long (basically I just wanted to test a source-based distro, but compile times were annoyingly long back in the day), and then it was Arch all the way. Arch was fast, rather simple, always up to date, and it had the great AUR. I didn't ever look back.

    I did take a break from Linux as my primary OS from approximately 2009 to 2017, mostly due to playing a ton of video games (Windows only, not runnable at all on Linux back then) and also due to my career path making me work with lots of Windows Servers, Powershell and other Microsoft stuff.

    Since about 2017/2018 I've been back on Linux as my primary OS (Arch, again) and haven't looked back since. I've even managed to fully delete all physical Windows partitions now (I only keep Windows in a VM in case I need to test something).

    I’m testing NixOS on my notebook currently, it seems to be “the future”, but my main desktop will probably stay Arch for a bit longer still.

    Looking back at using Slackware early on, I don't regret it, since I learned a ton, but it was tough using Slackware around the 2000s. I still remember a lot of fighting with programs which wouldn't compile due to dependency errors or other compilation errors, and a lot of Google searches for various compilation errors leading to rare and hard-to-understand solutions in random forum posts. Compared to that, any Linux distro feels mainstream these days. But it was an efficient way to learn.


  • Valve is doing this for strategic reasons, and also because they wanted to kick-start the handheld PC market (Steam Deck). Strategic reasons: Microsoft could at any point buy several important gaming studios or distributors and distribute a lot of games (maybe exclusively) via their own store (they even announced that several years ago, but didn't do it in the end). MS could even implement small things which make Steam perform worse on Windows, as Windows is 100% controlled by MS. If you compete directly with Microsoft on the Windows platform, you will eventually lose, because MS can make some very tiny tweaks which happen to make your product more annoying or slower to use than Microsoft's own, and that way they still fly under the radar for anti-competitive behavior. So Valve has to ensure that their main business model (selling/distributing games on Steam) remains future-proof, and that means becoming more independent from Microsoft's agenda. To do this, they need to push a fully neutral but viable alternative to Windows for gaming. Which is Linux.