I’ve always found this weird. I think that to be a good software developer, it helps to know what’s happening under the hood when you take an action. It certainly helps when you want to optimize memory access for speed, for instance.
I genuinely do know both sides of the coin. But I also know that the majority of my fellow developers at work have no clue how computers work under the hood, or how networking works, for example.
I find it weird because being good at software development requires an understanding of the underlying systems (and by “good” I don’t mean following what computer-science methodology tells you; I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language, and, most importantly, knowing how to optimize that solution, for example in terms of memory access). If you write software that sends or receives network packets, it certainly helps to understand how that works, at least well enough to choose the best protocols.
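To make the memory-access point concrete, here’s a minimal C sketch (the array size and names are mine, purely for illustration): the same sum computed twice, once walking memory in order and once striding across it. On typical hardware the second loop is several times slower, purely because of how the cache is used.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 4096  /* illustrative size: a 64 MB int matrix */

int main(void) {
    int *a = malloc((size_t)N * N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++) a[i] = 1;

    long sum = 0;

    /* Row-major walk: consecutive addresses, so every cache line
       fetched from RAM is fully used before it is evicted. */
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i * N + j];

    /* Column-major walk: a stride of N ints touches a new cache
       line on almost every access; typically several times slower. */
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i * N + j];

    printf("%ld\n", sum);
    free(a);
    return 0;
}
```

Same result, same big-O, very different runtime; that’s the kind of under-the-hood knowledge I mean.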
But it is definitely true.
There are so many layers of abstraction that it becomes impossible to know everything.
Years ago, I dedicated a lot of time to understanding how bytes travel from a server to your router to your computer. Very low-level mastery.
That education is now trivia, because of cloud servers, Cloudflare, regional points of presence, edge servers, company firewalls… all barriers that add more and more layers of complexity that I don’t have direct access to but that can affect the applications I build. And it continues to grow.
Add to this the pile of programming-language updates, new design patterns to learn, operating system and environment changes…
This is why engineers live alone on a farm after they burn out.
It’s not feasible to understand everything under the hood anymore. What’s under the hood grows faster than you can pick it up.
I’d agree that there’s a lot more abstraction involved today. But my main point isn’t that people should know everything; it’s that a baseline understanding of how even a basic microcontroller works would be helpful.
Where I work, people often come to me with weird problems, and the way I solve them is usually rooted in a low-level understanding of what’s really happening when the code runs.
One may also end up developing in the very areas the post above considers inaccessible, where that knowledge is likely still required.
yeah i wish it were a requirement that you’re nerdy enough to build your own computer, or at least able to install an OS, before joining the SWE industry. the non-nerds are too political and can’t figure out basic shit.
This is like saying that before you can be a writer, you need to understand Latin and the history of language.
You should if you want to be a science writer or academic, which, let’s be honest, is a better comparison here. If your job involves Latin names and descriptions, then you probably should take at least a year or two of Latin if you don’t want to make mistakes here and there out of ignorance.
Before you can be a writer, you need to sharpen your own pencil.
I do like the idea of informing yourself a little more about the note-taking app you’re writing with. It sounds kind of obvious, but it can have many advantages.
Personally, though, I don’t really see the upside of building a computer, since you could just research the same things without building one, or vice versa. (Maybe it’s good for making sense of bug reports?)
A 30-minute explanation of how CPUs work that I recently got to listen in on probably had more impact on my C/assembly programming than building my own computer did.
you wouldn’t want somebody that hates animals to become a veterinarian just because of money-lust. the animals would suffer, and the field as a whole, too. maybe they start buying up veterinary offices and squeezing the business for everything they can, resulting in worse outcomes: more animals dying and suffering, workers shorted on benefits and pay.
people chasing money ruin things. we want an industry full of people that want to actually build things.
I don’t really see the connection to my comment.
In this example, wouldn’t the programmer be more of a pharmacist? (With the animal’s body as the computer and its brain as the user?)
Your statement is not wrong, it just seems unrelated.
weird, i studied latin and the history of language just because i found it interesting. i am always seeking to improve my writing skills tho.
I think software was a lot easier to visualise in the past when we had fewer resources.
Stuff like memory becomes almost meaningless when you never really have to worry about it. 64,000 bytes was an amount that made sense to people. You could imagine chunks of it. 64 billion bytes is a nonsense number that people can’t even imagine.
When I was talking about memory, I was thinking more about how it is accessed: for example, exactly which operations are atomic on a given architecture and which are not, since that can cause unexpected interactions in multi-core work, depending on byte alignment. Also, how to make the most of your CPU cache. Those kinds of things.
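A minimal sketch of the atomicity half of that, assuming C11 and pthreads (the counter names and iteration count are just for illustration): a plain int increment is a read-modify-write that two cores can interleave, while an _Atomic increment is indivisible.

```c
/* build: cc -std=c11 -pthread race.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

int plain = 0;        /* unsynchronized: a data race, formally UB */
atomic_int safe = 0;  /* every increment happens indivisibly      */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        plain++;                     /* load, add, store: can interleave */
        atomic_fetch_add(&safe, 1);  /* single indivisible operation     */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* plain usually comes out below 2*ITERS; safe is exactly 2*ITERS. */
    printf("plain=%d safe=%d\n", plain, atomic_load(&safe));
    return 0;
}
```

The cache side of the same point shows up as false sharing: two threads hammering adjacent fields that share a cache line will slow each other down even with no logical data dependency between them.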
But that IS computer science.