Archaeologists have discovered Sumerian cuneiform tablets which complain that software quality isn't what it used to be.
This phenomenon of bad software isn’t new. Vernor Vinge mentioned this in passing in A Deepness In The Sky.
I do agree about the value of self-sufficiency. That is the start of durability. Most people find this revolting, though. The goal, for most people, isn’t stuff that works properly. The goal is inclusion and comfort, a social baseline as opposed to a utility baseline.
Good intention, lacking in detailed follow through.
Until customer Barry chimes in that he wants "this" feature, which you are never going to use, but he is also the customer bringing in 30% of your whole revenue. You can either say no, keeping your ideals while giving an opening to your competitor, or do what he wants.
I hoped it would, above all, mean not using LLMs, but this is good as well.
At some point we might be able to be confident that the current versions of all our dependencies have been carefully reviewed by enough reliable people, but right now we're not even moving in that direction; so minimizing dependencies is the proper thing to do.
Those "overly complex" code bases like curl handle a lot of edge cases. Your 300 lines of C are starting over and relearning what they already learned.
Also, software is hard to write, and second-system syndrome causes loads of loathing and misery even when the goal is to "do it right (i.e., simple) this time."
That said, sometimes we need to take on the risk and effort of making a second system. I have often thought about the relearning/doomed-to-repeat-history problem, and I wonder if software - especially some open source software - might be uniquely positioned to build a second system due to bug trackers.
The bug trackers in software like Firefox effectively capture a large percentage of a project's history and design decisions. It seems to me that the bug tracker for a project's predecessor could lay the proper frame for its successor.
I agree! However, in many cases these edge cases (I'm not speaking of curl now) are not needed for my personal use. E.g., if I use one OS, I don't care how my tool behaves on the others, and I don't want to support every kind of hardware. If I am the only user, I can prune these use cases (and the features I mentioned in the post) significantly. E.g., I'm reimplementing a subset of vim at the moment; since I don't use LSPs or syntax highlighting in my work, I don't need to implement support for them in my editor.
I'm thinking of something like curl specifically, where the edge case isn't your machine, it's the machine you're talking to. Can I write my own curl-like downloader in a few hundred lines of code? Yes. Is it going to work first try with a 30-year-old Apache file server? Probably not. Do I want something that works "good enough" but breaks when I'm in the middle of a time crunch, or do I want something production-tested that's probably not going to fail on me at the worst possible moment?
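To make the "few hundred lines" claim concrete, here is a rough sketch of what such a toy downloader looks like. The `fetch` function is hypothetical, invented for illustration, and it deliberately punts on everything curl actually handles: no HTTPS, no redirects, no chunked transfer encoding, no retries, no proxies. Those omissions are exactly where it would break against an old or quirky server.

```python
# A toy "curl in a few lines" HTTP client over a raw socket.
# Works against a well-behaved server; fails on most real-world edge cases.
import socket
from urllib.parse import urlparse

def fetch(url: str, timeout: float = 5.0) -> bytes:
    """Fetch an http:// URL and naively return the response body."""
    parts = urlparse(url)
    assert parts.scheme == "http", "toy client: plain HTTP only, no TLS"
    host = parts.hostname
    port = parts.port or 80
    path = parts.path or "/"

    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"  # read-until-close shortcut; a real
            "\r\n"                   # client would keep the connection alive
        )
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:  # read until the server closes the connection
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)

    response = b"".join(chunks)
    header_blob, _, body = response.partition(b"\r\n\r\n")
    status_line = header_blob.split(b"\r\n", 1)[0]
    # No redirect following, no chunked decoding, no status handling
    # beyond "was it 200" -- all things curl learned the hard way.
    if b" 200 " not in status_line + b" ":
        raise RuntimeError(f"unexpected status: {status_line!r}")
    return body
```

It really is this short, which is the point of both comments: the happy path is trivial, and the other several hundred thousand lines of curl are the accumulated answers to servers that don't behave like the one you tested against.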
I'm willing to accept a little bloat and pass on reinventing wheels myself if I can grab something reliable off the shelf. I don't think that makes me less self-reliant.
Yeah, I don’t think the curl example was meant as a knock against curl or advice not to use it. I think any command-line user would agree that curl is not bad or bloated software, which is what they criticize.
The point seemed to be that even a rock-solid tool like curl started out tiny — just a few hundred lines — before growing to cover all the edge cases you’re describing. It’s more about showing that you can start with something simple for your own needs and customize it without depending on someone else.
and a lot of legacy baggage.
We should begin collecting and centralizing the insights learned from software development outside the source code of specific projects.
The author is on to something with this essay.
Just another rant