  • It’s ironic that a company once well regarded for the quality of the GUI on their OS is so fucking bad at making GUIs now.

    Teams, Windows Settings, Azure, even the Microsoft login page, it’s all godawful.

    There’s some new tech called “Large Language Models”. Apparently it lets people, including programmers, work way faster. These so-called LLMs can ingest plain natural-language instructions, like “make me a UI which is not totally fucked”, or “refactor this dogshit code which we only keep around because it makes us more money than we know what to do with”! Not only that, the LLM will actually respond with code!

    And more code is exactly what the software industry - nay, the whole entire world - needs. Astonishing.

    I’m 100000% confident that Microsoft has not heard of this amazing tech, otherwise we would not see such a total shitshow.



  • No worries! Writing that down actually helped clarify some of my thoughts.

    Something extra: distributed computing.

    Let’s say you have 3 processes that need to communicate with one another. There’s heaps of tooling available in OSs to manage those processes. Logging, networking, filesystem access, privilege separation, resource allocation… all provided by the host OS without installing anything. But what if those 3 processes can’t run on one “machine”? Which process should go where? What if it needs 8GB memory but there’s only 6GB available on some of the machines? Who controls that?

    Systems like Kubernetes, Nomad, Docker Swarm etc. offer a way to manage this. They let us say something like the following (there’s a concrete sketch after the list):

    • run this process (by specifying a container image),
    • give it at least these resources (x GB memory, x vCPUs),
    • let it communicate with these other processes (e.g. via pods, overlay networks…).
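
    To make that concrete, here’s a minimal sketch of what such a declaration can look like, written as a Kubernetes Deployment manifest. The name, image and numbers are all made up for illustration:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: billing-worker               # hypothetical process
    spec:
      replicas: 2                        # run two copies of it
      selector:
        matchLabels:
          app: billing-worker
      template:
        metadata:
          labels:
            app: billing-worker          # label other pods/services use to reach it
        spec:
          containers:
            - name: billing-worker
              image: registry.example.com/billing-worker:1.4.2   # “run this process”
              resources:
                requests:
                  memory: "8Gi"          # “give it at least these resources”
                  cpu: "2"
    ```

    The scheduler then has to find a machine that actually has 8 GiB free and put the process there - that’s the “who controls that?” question from above.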

    These systems manage containers. If you want to do distributed computing and want to take advantage of those systems to manage it, stuff needs to be run in containers.

    Containers are not the only way to do distributed computing - far from it! But over the past few years this particular approach has become popular in the, umm… “commercial software development industry”.

    Opinion: are Linux containers something to look into if you don’t work in the industry? Unless you’re interested in how containers themselves work and/or in distributed computing, frankly, no. Computers are still getting faster and cheaper. So why is all this stuff so popular in the commercial world? I’ll end with some tongue-in-cheek speculation.

    Partly it’s because the software development industry is made up of actual human beings with their own emotions and desires. Distributed computing is a fun idea because it presents tech people with the kinds of challenges tech people find interesting.

    Boring: can we increase our real estate agency brand recognition by 200%? We could provide property listings as both a CSV and PDF to our partners! Our logo could go on the PDF! Wow! Who knows how popular our brand could be?

    Fun: can we increase throughput in this part of the system by 200%? We might need to break that component out to run on a separate machine! Wow! Who knows how fast it could go?



  • Containers are used for a whole bunch of reasons. I’ll address just one: process isolation. I’ll only do one because I’ve run into times when containers were not helpful, and it may lead to some funny stories and interesting discussion from others!

    A rule of thumb for me: if the process is well behaved, has its dependencies under control and doesn’t keep unnecessary state, then it may not need the isolation provided by a container and all the tooling that comes with it.

    On one extreme, should we run ls in a container? Probably not. It doesn’t write to the filesystem and has only a handful of dependencies available on pretty much any Unix-like/Linux system.

    But on the other extreme, what about that big bad internal Node.js application which requires some weird outdated Python dependencies, with hardcoded paths and package versions everywhere? The original developer is long gone. It dumps all sorts of shit to the filesystem, and nobody is really sure whether those files are just a cache or hold some critical state. Who wants to spend the time and money to tidy that thing up? In this scenario containers can be used to hermetically seal a fragile thing. This can come back to bite you: instead of actually improving the software to be portable and robust enough to work in varied execution environments (different operating systems, on a laptop, as a library…), you kick the can down the road.
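
    As a sketch of what “hermetically sealing” can look like, here’s a hypothetical docker-compose.yml for that kind of app. The image, paths and names are invented; the point is that the whole fragile environment gets frozen into a single artifact:

    ```yaml
    services:
      legacy-app:
        # An image built once from the old Node.js + Python environment, then never rebuilt
        image: registry.example.com/legacy-app:frozen-2019
        volumes:
          # Nobody knows if this is a cache or critical state, so persist it just in case
          - legacy-state:/var/lib/legacy-app
        environment:
          # Reproduce the hardcoded paths the app expects
          PYTHONPATH: /opt/legacy/python2.7/site-packages
    volumes:
      legacy-state:
    ```

    It runs, and it keeps running unchanged on any machine with Docker - which is exactly the can-kicking described above.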


  • It’s a nice thought, but the White House encouraging memory safety seems like a relatively insignificant push. It’s the weight of legacy code and established solutions that will hold us back for a long time.

    Absolutely. Memory-safe languages have been around for decades. The reason there is so much poor code out there - including code with manual memory management bugs - is not technical. There are hordes and hordes of programmers, managers, companies etc. who would love to get paid to port this stuff. They’ll do it for 10% of the price those stupid lumbering tech consultancies charge.

    But who gets the contracts in the end? Give me a f’ing break!