For sure. I'd argue to write the "stupid" code to get started, get that momentum going. The sooner you are writing code, the sooner you are making your concept real, and finding the flaws in your mental model for what you're solving.
I used to try to think ahead, plan ahead and "architect", then I realized simply "getting something on paper" corrects many of the assumptions I had in my head. A colleague pushed me to "get something working" and iterate from there, and it completely changed how I build software. Even if that initial version is "stupid" and hack-ish!
You should always have an architecture in mind. But it should be appropriate for the scale and complexity of your application _right now_, as opposed to what you imagine it will be in five years. Let it evolve, but always have it.
I think this is mostly true, but also I’d highlight the necessity of having a mental model, and iterating.
I think it is common for a programmer to just start programming without coming up with any model, and just try to solve the problem by adding code on top of code.
There are also many programmers who go with their first “working” implementation, and never iterate.
These days, I think the pendulum has swung too far away from thinking about the program, and maybe mapping it out a bit on paper, before writing code.
My philosophy:
1. Get it working.
2. Get it working well.
3. Get it working fast.
This puts "just get it working" as the first priority. Don't care about quality, just make it. Then, and only once you have something working, do you care about quality first. This is about getting the code into something reasonable that would pass a review (e.g., architecturally sound). Finally, do an optimization pass.
This is the process I follow for PRs and projects alike. Sometimes you can mix all the steps into a single commit, if you understand the problem & solution domains well. But if you don't, you'll likely have to split it up.
> Finally, do an optimization pass.
Depending on how low-level your code is, this... may not work out in those terms.
In other words, I’d say that if you actually want good software—and that includes making sure its speed falls within a reasonable factor of the napkin-math theoretical maximum achievable on the platform—your three steps can easily constitute three entire rewrites or at least substantial refactors. You might well need to rearchitect if the “working well” version has multiple small loops split by domain-level concern when the hardware really wants a single large one, or if you’re doing a lot of pointer-chasing and need to flatten the whole thing into a single buffer in preorder, or if your interface assumes per-byte ops where SIMD can be applied.
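To make that concrete, here is a minimal sketch (the Particle/Node types are invented, not from the comment) of the kind of rearchitecting being described: per-concern passes over pointer-chased nodes versus one fused loop over a flat, contiguous buffer that the hardware and the vectorizer much prefer.

    // Hypothetical illustration of "many small loops + pointer chasing" vs "one big loop over a flat buffer".
    #include <vector>

    struct Particle { float pos, vel, accel; };      // flattened, contiguous record

    // A) one small loop per domain concern, each walking pointer-linked nodes
    struct Node { Particle p; Node* next; };
    void step_split(Node* head, float dt) {
        for (Node* n = head; n; n = n->next) n->p.vel += n->p.accel * dt;  // physics pass
        for (Node* n = head; n; n = n->next) n->p.pos += n->p.vel * dt;    // integration pass
    }

    // B) single fused loop over a contiguous buffer: cache-friendly and easy to vectorize
    void step_fused(std::vector<Particle>& ps, float dt) {
        for (Particle& p : ps) {
            p.vel += p.accel * dt;
            p.pos += p.vel * dt;
        }
    }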
This is not a condemnation of the strategy, mind you. Crap code is valuable and I wish I were better at it. I just disagree that the transition from step 2 to step 3 can be described as an optimization pass. If that’s what you limit yourself to, you’ll quite likely be forced to leave at least an order of magnitude’s worth of performance on the table.
And yes, most consumer software is very much not good by that definition.
(For instance, I’m expecting that the Ladybird devs will be able to get their browser to work well for daily tasks—which I would count a tremendous achievement—but I’m not optimistic about it then becoming any faster than the state of the art even ten or fifteen years ago.)
Some optimization problems require an entire PhD dissertation and research budget to actually optimize, so some algorithms require far more effort applied to this than is reasonable for most products. As mentioned, sometimes you can combine these all into one step -- when you know the domains well.
Sometimes, it might even be completely separate people working on each step... separated by time and space.
In any case, most software generally stops at (2) simply because the payoff from (3) isn't worth the effort -- for example, there's very little point in spending two weeks optimizing a report generator that runs in the middle of the night, once a month. At some point, there may be, but usually not anytime soon.
Totally agree. Iteration is key. Mapping things out on paper after you've written the code can also be illuminating. Analysis and design doesn't imply one-and-done Architect -> Implement waterfall methods.
Knowing hard requirements up front can be critical to building the right thing. It's scary how many "temporary" things get built on top of and stuck in production. Obviously loose coupling / clear interfaces can help a lot with this.
But an easy example: "just build the single-player version" (of an application) can be worse than just eating your vegetables. It can be very difficult to tack on multiplayer later, as opposed to building for it up front.
I once retrofitted a computer racing game/sim from single-player to multi-player.
I thought it was a masterpiece of abusing the C pre-processor: all variables used for player physics, game state, inputs, and position outputs to the graphics pipeline were guarded with macros, so that as the (overwhelmingly) single-player titles continued to be developed, the code would remain clean for the two titles we hoped to ship with split-screen support.
All the state was wrapped in ss_access() macros (“split screen access”) and compiled to plain variables for single-player titles, but with the variable name changed so writing plain access code wouldn’t compile.
I was proud of the technical use/abuse of macros. I was not proud that I’d done a lot of work and imposed a tax on the other teams all for a feature that producers wanted but that in the end we never shipped a single split-screen title. One console title was cancelled (Saturn) and one (mine) shipped single-player only (PlayStation).
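A rough reconstruction of what that macro scheme might have looked like (the names below are invented; the comment doesn't give the real ones): in split-screen builds every access routes through the per-player state, and in single-player builds the macro expands to a plain global whose name is deliberately mangled so that bypassing the macro fails to compile.

    // Hypothetical sketch of the ss_access() idea, not the shipped code.
    #ifdef SPLIT_SCREEN
    struct PlayerState { float speed; int lap; };
    extern PlayerState g_players[2];
    extern int g_current_player;
    #define ss_access(field) (g_players[g_current_player].field)   // per-player state
    #else
    extern float ss_speed;   // plain globals, but renamed so "speed = ..." won't compile
    extern int ss_lap;
    #define ss_access(field) ss_##field
    #endif

    // Usage looks identical in both builds:
    //   ss_access(speed) += throttle * dt;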
That's a great point, and I feel like it is relevant for a lot more than games.
We should definitely have a plan before we start, and sketch out the broad strokes both in design and in actual code as a starting point. For smaller things it's fine to just start hacking away, but when we're designing an entire application I think the right way to approach it is to plan it out and then solve the big problems first. Like multiplayer.
They don't have to be completely solved; it's an iterative process, but they should be part of the process from the beginning.
An example from my own work: I took over an app two other developers had started. The plan was to synchronize data from a third party to our own db, but they hadn't done that. They had just used the third party api directly. I don't know why. So when they left and I took over, I ended up deleting/refactoring everything because everything was built around this third party api and there was a whole bunch of problems related to that and how they were just using the third party's data structure directly rather than shaping the data the way we wanted it. The frontend took 30-60+ seconds to load a page because it was making like 7 serialized requests, waiting for a response before sending the next one and the backend did the same thing.
Now it's loading instantly, but it did require that I basically tear out everything they'd done and rewrite most of the system from scratch.
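For what it's worth, a minimal sketch of the kind of seam that story suggests was missing (the names are invented, not from that project): the rest of the app codes against an interface it owns and its own data shape, and the vendor API appears in exactly one adapter that a sync job uses to fill the local db.

    #include <string>
    #include <vector>

    struct Customer { std::string id; std::string name; };   // our shape, not the vendor's

    class CustomerSource {                    // the only thing the rest of the app sees
    public:
        virtual ~CustomerSource() = default;
        virtual std::vector<Customer> fetchAll() = 0;
    };

    class VendorApiSource : public CustomerSource {   // the one place the vendor leaks in
    public:
        std::vector<Customer> fetchAll() override {
            // call the third-party API here and map its records into Customer
            return {};
        }
    };

    class LocalDbSource : public CustomerSource {     // what the frontend actually reads
    public:
        std::vector<Customer> fetchAll() override {
            // read the rows the sync job already copied into our own db
            return {};
        }
    };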
In many projects it's impossible to know the requirements up front, or they are very vague.
Business requirements != programming requirements/features.
Very often both the business requirements and the programming requirements change a lot, since unless you have already written this one thing, in the exact form that you are making it now, you will NEVER get it right the first time.
You are both right, and that's why so many projects are over budget or even fail miserably.
The problem is people don't adapt properly. If the business requirements change so much that it invalidates your previous work, then you need to redo the work. But in reality people just duct tape a bunch of workarounds together and you end up with a frankensystem that doesn't do anything right.
It is possible to build systems that can adapt to change; by decoupling and avoiding cross-cutting concerns, etc., you can make a lot of big, sweeping changes quite easily in a well-designed system. It's just that most developers are bad at software development: they make a horrible mess and then they just keep making it worse while blaming deadlines and management.
This is also why I'm not a fan of the "software architect" that doesn't write code, or at least not the code that they've architected.
> I'd argue to write the "stupid" code to get started, get that momentum going.
Yes and no. Depending on how dependent you become on that first iteration, you might drown an entire project or startup in technical debt.
You should only ever just jump in if:
A) it's a one-off for some quick results or a demo or whatever
B) it's easy enough to throw away and nobody will try to ship it and make you maintain it
That said, having so much friction and analysis paralysis that you never ship is also no good.
Or C): you cultivate a culture of continuous rewrite to match updated requirements and understandings as you code. So, so many people have never learned that, but once you do reach that state, it is very liberating as there will be no more sacred ducks.
That said, it takes quite a bit of practice to become good enough at refactoring to actually practice that.
I frequently do both. It takes longer but leads to great overall architecture. I write a functional thing from scratch to understand the requirements and constraints. It grows organically and the architecture is bad. Once I understand the product better, I think deeply about a better architecture before basically rewriting from scratch. I sometimes need several iterations on the most complex products.
>>> When I finished school in 2010 (yep, along time ago now), I wanted to go try and make it as a musician. I figured if punk bands could just learn on the job, I could too. But my mum insisted that I needed to do something, just in case.
Amusing coincidence. I also wanted to be a rock star, or at least a successful working musician. My mom also talked me out of it. Her argument was: If there's no way to learn it in school, then go to school anyway and learn something fun, like math. Then you can still be a rock star. Or a programmer, since I had already learned programming.
So I went to college as a math major, and eventually ended up with a physics degree.
I still play music, but not full time, and with the comfort of supporting myself with a day job.
> Amusing coincidence. I also wanted to be a rock star, or at least a successful working musician.
> I still play music, but not full time, and with the comfort of supporting myself with a day job.
Some people say:
"Pursue your dream or you will regret it."
This is said by people who regret their own choices.
Other people say:
"Don't make your dream a job, because all it will be is a job and no longer special."
This is said by people who had misconceptions about what pursuing their dream actually entailed.
I say:
"Happiness is found in neither a dream chased nor a chosen profession. It is instead a choice we make each day in what we do, in how we view same, and if we allow ourselves to possess it. What constitutes each day is immaterial."
But that's just me.
Easy to say it's immaterial when you're probably an American or European with plenty of material comfort. When was the last time you didn't eat for lack of food, for instance? Adieu to logic, indeed.
I do think you need the dream to be happy. Money/career is a prerequisite of dreams.
Some want to live on the seas. They can be perfectly happy as a sailor, even if poor and single.
Some want a family, educated children, respect. They would likely need a nice house, enough resources to get a scholarship, a shot at retirement. This is obtainable working in public service, even without money.
But most have multiple dreams. That's what makes things complex. The man who wishes for a wife but also wishes to be on the seas will find far fewer paths available. Sailors also don't generally get respected by most in-laws.
To mix the two, they try to find the dream-job. Perhaps work for a big oil company and be 'forced' to go offshore.
Eventually people learn that desire is suffering in some form and cut down on the number of dreams. They may even see this as mature and try to educate others that this is the way. Those who have kids often are forced to pick kids as the dream. So there's a selection bias as well.
"Don't put one foot in your job and the other in your dream, Ed. Go ahead and quit, or resign yourself to this life. It's just too much of a temptation for fate to split you right up the middle before you've made up your mind which way to go".
I say if you have money to do whatever you want every day, there's an overwhelming chance you'll be happy.
The rest of those sayings are just for us plebs who have to rationalize working 40-60 hours a week.
But that's a trap: the money you "need" is partly decided by how much you have available. Once you're used to the money from a 40-hour-a-week job it's hard to do with less, but other people manage fine because they never got used to having a lot.
My key point is that happiness is a choice. I hope everyone can find a way to choose it.
That's wrong. Plenty of people have "found a way to choose happiness" already, and we've all seen what exactly they've wrought.
I think PP's point was that even if you spend your whole life pressed into laboring to produce a surplus to satisfy the excessive consumption of the elites of your heretical society, in ways that create existential risk for future generations and are at odds with your own inner values and moral compass, you can still see the 'immateriality' of all that in the grand scheme of things and choose to be happy.
No, my point is that happiness is a choice.
There is no sociopolitical statement, no call-to-arms, no pontification as to the measure of one's life, no generational implications. There is an existential consideration, but not of the nature your post implies.
Happiness is an individual choice, available to us all at any time. Full stop.
Yeah, no. What you're promoting is a deeply narcissistic worldview, and I hope either the cure or the consequences reach you soon. Though maybe those are going to present as the same thing.
They're not entirely wrong, and your comment is seemingly unhinged and unprovoked… but there's a lot of literature on stuff like mindfulness, CBT, and the impact thoughts can have on one's emotions, especially happiness.
Luckily you can still pursue being a musician without all the pressure of having to be successful. On this road, one day you are free to declare your own success to yourself
Indeed. On the other hand, I also know my limitations, since roughly half the people I play with are pros with music degrees. And I'm still trying to improve.
I'm inspired by the quote from Pablo Casals when he was in his 90s. They asked him why he still needed to practice, and he said: "Because I'm finally beginning to see some improvement."
Maybe if the internet and piracy hadn't fucked artists over, they could have made decent money as a musician selling their work without having to be a major-label superstar. Alas, we do not live in that timeline.
Did most artists (in whatever form) ever have a good living in and of itself?
No. The best/luckiest did, at least some of the time. Most didn't.
Yes, mostly, until relatively recently on a historical time scale. In the Middle Ages, musicians were employed by towns and had a guild. They also worked for princes, the church, etc. I read an article saying that they often did double duty as cops on market days.
There was perhaps less of a distinction between "arts" and trades. People did all kinds of work on paintings, sculptures, etc., and expected to get paid for it. They rarely put their names on their works.
I've read a bit about Bach's life, and he was always concerned about making money.
One music history textbook I read identified the invention of printed music as the start of the "music industry." Before the recording era, people composed and published sheet music. There were pianos in middle class parlors, and people bought sheet music to play. Two names that come to mind were Scott Joplin and Jelly Roll Morton. Movie theaters hired pianists or organists, though that employment vanished when talking movies came out. The musicians of the jazz era were making their livings from music. One familiar name is Miles Davis. His father was a dentist, and his parents considered music to be a respectable middle class career for their son. People did make a living from recordings before the Internet era. Today, lucrative work in the arts still exists for things like advertising and sports broadcasting.
(Revealing my bias, I'm a jazz musician).
In fact the expectation that an artist should not earn a decent living is kind of a new thing.
Piracy didn't fuck artists over, I think (anecdotal), because it was the precursor to Spotify, which has been great for artist discovery. Until the industry / artists caught on and started pushing shit. And the payment model for Spotify is bad: a million streams earns about $3-5K according to a quick Google, and few actually get that far.
But it's good for discovery, and artists generally don't make much off album sales either; concerts and merchandise is where it's at.
Really still kicking myself for not majoring in robotics in school. I wanted to program, so I studied computer engineering but hadn't really absorbed that much in classes. But I will likely never have access to all the robotics stuff my school had, nor the guided learning.
Never too late to try stuff out, of course, but very little beats structured higher-ed education in relatively small classes (I think there were only about 24 people in the robotics major?).
Nothing beats concentrated work. You can do that without formal education. It might even be easier: you can probably afford pretty good arduinos and raspberries and H-bridges and sensors and actuators and...
It shouldn't be hard to go beyond what almost all universities provide.
On the other hand, the one robotics course I took involved getting access to computers at 3am and doing horrific matrix multiplications by hand that took hours. Of course, this was a long time ago.
I worked with robotics engineers; their code and development methods were poor even though software is essential. You need both sides.
I'm reminded of the quantity vs. quality groups in a photography class:
https://sebastianhetman.com/why-quantity-matters/
Do stuff, and you learn stuff. Go play.
While I generally agree with the conclusion of that, I think it might be a bit too naive.
The quantity group has a trivial way to "hack" the metric. I could just sit there snapping photos of everything, or even set up a camera to automatically snap photos all day and night. To be honest, as long as I'm not pointing it at a stationary wall, there's probably a good chance I get a good photo, since even a tiny per-shot probability becomes an expected result given enough samples.
But I think the real magic ingredient comes from the explanation
> The group never worried about the quality of their work, so they spent time experimenting with lighting, composition, and such.
The way I read this is "The quantity group felt assured in their grade, so used the time to be creative and without pressure." But I think if you modified the experiment so that you'd grade students on a curve and in proportion to the number of photos they took then the results might differ. Their grade might not feel secure as it could take just one person to communicate that they're just snapping photos all day as fast as they can. In this setting I think you'd have even less ability to explore and experiment than the quality group.
I do think the message is right though and I think this is the right strategy in any creative or primarily mental endeavor (including coding). The more the process depends on creativity the more time needs to be allocated to this type of exploration and freedom. I think in jobs like research that this should be the basis for how they are structured and effectively you should mostly remove evaluation metrics and embrace the fundamentally ad hoc nature. In things like coding I think you need more of a mix and the right mix depends highly on the actual objectives. But I wanted to make the above distinction because I think it is important if we're trying to figure out what those objectives are.
Yesterday I spent the entire day working on a lib to create repos in Github from inside Emacs.
It was the first time in 3y that I had touched it.
When googling, I saw potential candidates that were much better than my simple one.
But I kept going, for the pleasure of making my own thing.
I learned a lot, and felt very accomplished, even if, at the end, it was messy, and I'll have to go back and reorganize it.
It feels like making _my_ thing, even if it is drawing my own copy of the Mona Lisa.
I agree, but there are certain types of unnecessary stupidity which feel easier at first but hurt more than they help very quickly (measured in amount of code):
The first one that comes to mind relates closely to naming. If we think about a program in terms of its user-facing domain, then we might start to name and structure our data, functions, and types too specifically for that domain. But it's almost always better to separate computational, generic data manipulation from domain language.
You only need a _little bit_ more time to move much of the domain-specific stuff into your data model. Think of domain language as values rather than field names or types. This makes code easier to work with _very quickly_.
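A small sketch of that idea with an invented example (an invoice of charges), assuming I'm reading the point correctly: the domain words become values (map keys) instead of struct fields, so generic operations like totalling never change as the domain grows.

    #include <map>
    #include <numeric>
    #include <string>

    // Too domain-specific: every new charge type means a new field and new code paths.
    struct InvoiceV1 { double shipping_cost; double tax_amount; double handling_fee; };

    // Generic data, domain words pushed into values.
    using Invoice = std::map<std::string, double>;    // charge name -> amount

    double total(const Invoice& inv) {
        return std::accumulate(inv.begin(), inv.end(), 0.0,
                               [](double acc, const auto& kv) { return acc + kv.second; });
    }

    // Invoice inv{{"shipping", 4.99}, {"tax", 2.10}, {"handling", 1.50}};
    // total(inv) == 8.59, and adding a "discount" entry needs no new code.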
Another stupidity is to default to local state. Moving state up requires a little bit of planning and sometimes refactoring, and one has to consider the overall data model in order to understand each part. But it goes a long way, because you don't end up with entangled, implicit coordination. This is very much true for anything UI-related. I almost never regret doing this, but I have often regretted not doing it.
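A rough sketch of "moving state up" with hypothetical UI types: the selection lives once, in parent-owned state, rather than in per-widget copies that have to be kept in sync through implicit callbacks.

    #include <string>
    #include <vector>

    struct AppState {
        std::vector<std::string> items;
        int selected = -1;                  // single source of truth for the selection
    };

    struct ListView {
        AppState* state;
        void clickRow(int row) { state->selected = row; }    // writes the shared state
    };

    struct DetailPane {
        AppState* state;
        std::string title() const {
            return state->selected >= 0 ? state->items[state->selected] : "nothing selected";
        }
    };

    // Both views stay consistent because neither owns a private copy:
    //   AppState s{{"a", "b"}}; ListView l{&s}; DetailPane d{&s};
    //   l.clickRow(1);  // d.title() == "b"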
A third thing that is unnecessarily stupid is to spread logic around. Harder to explain, but everyone knows the easy feeling of putting an if statement (or any kind of branching, filtering, etc.) that adds a bunch of variables somewhere it doesn't belong. If you feel pressed to do this, reconsider whether your data is rich enough (can it express the thing I need here?) and consistent enough.
> we might start to name and structure our data, functions, types too specifically for that domain.
I once worked on a Perl script that had to send an email to "Harry". (Name changed to protect the innocent). I stored Harry's email address in a variable called "$HARRY".
Later on a second person (with a different name) wanted to get the emails as well. No problem, just turn the scalar into an array, "@HARRIES". I thought it was very funny but nobody else did.
I both agree and disagree with this post, but I might be misunderstanding it.
Near the end, it states:
“Enjoy writing it, it doesn’t have to be nice or pretty if it’s for you. Have fun, try out that new runtime or language.”
It doesn’t have to be nice or pretty EVEN if it’s NOT for you.
The value in prototyping has always been there and it’s been very concrete: to refine mental models, validate assumptions, uncover gaps in your own thinking (or your team’s), you name it.
Unfortunately it feels that the pendulum has swung in the completely opposite direction. There’s a lot of “theatre” in planning, writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that’s actively harmful for building software.
When you get stuck in planning mode you let wrong assumptions grow and get baked into the design, so the sunk cost keeps rising.
Simply have a BASIC and SHARED mental model of the end goal with your team and start prototyping. LLMs have made this RIDICULOUSLY CHEAP. But, the industry is still stuck in all the wrong ways.
I feel like this is the time to mention "How Big Things Get Done", by Bent Flyvbjerg. "Long planning vs. start prototyping" is a false dichotomy. Prototyping IS planning.
Put another way, refining tickets for weeks isn't the problem; the problem is when you do this without prototyping, chances are you aren't actually refining the tickets.
Planning stops when you take steps that cannot be reverted, and there IS value in delaying those steps as much as possible, because once you take them your project becomes vulnerable to outside risk. Long planning is valuable because of this; it's just that many who advocate for long planning would take a long time and not actually use that time for planning.
> It doesn’t have to be nice or pretty EVEN if it’s NOT for you.
> There’s a lot of “theatre” in planning, writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that’s actively harmful for building software.
I'd love to have a "high paying job" where I am allowed to start by prototyping and modelling the problem and then iteratively keep improving it into a fully functional solution.
I won't deny that the snowballing of improvements and functional completeness manifests as acceleration of "delivery speed" and as a code-producing experience is extremely enjoyable. Depth-first traversal into curiosity driven problem solving is a very pleasurable activity.
However, IME in the real world, someone up the chain is going to ask "when will you deliver this?" Only once have I been in a privileged enough position in a job to say "I am on it and I will finish it when I finish it... and it will be really cool".
Planning and task breakdown, as a developer, is pretty much my insurance policy. Because when someone up the chain (all the way down to my direct manager) comes asking "How much progress have you made?", I can say (or "present the data", as it is called in a certain company) "as per the agreed plan, out of the N things, I have done k (< N) things so far. However, at this (k+1)th thing I am slowing down or blocked, because during planning that-other-thing never got uncovered and we have a scope-creep/external-dependency/cattle-in-the-middle-of-the-road issue". At which point a certain type of person will also go all the way to push the blame onto another colleague to make themselves appear better and hence eligible for promotion.
I would highly encourage everyone to participate in the "planning theatre" and play your "role".
OR, if possible start something of your own and do it the way you always wanted to do it.
For my money, certain types of software shouldn't have tests, too much planning, or any maintenance whatsoever
Prototypes (start ups) rarely have the luxury of "getting it right", their actual goal is "getting it out there FAST to capture the market (and have it working enough to keep the market)"
Some game devs (apologies, I'm not enough of a game dev to say exactly which types this applies to) are more or less build it, ship it, and be done with it; players tend to be forgiving of most bugs, and they move on to the next shiny thing long before it's time to fix all the things.
Once the product has traction in the market and you have paying customers, then it's time to deal with the load (scale up) and the bugs. I recall reading somewhere that it's probably best to let the start-up team go -- they did their job (and are now free to move on to the next brilliant idea) -- and replace them with a scale-up team, who will do the planning, architecting, and preparation for the long-term life of the software.
I think that approach would have worked for Facebook (for example): they had their PHP prototype that captured the market very quickly, and (IMO) they should have moved on to a scale-up team, who could have used the original code as a facade, strangling it and replacing it with something funky (Java/C++ would have been what was available at the time, but Go is what I would suggest now).
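A tiny sketch of that facade/strangler idea (routes and handlers invented): one front door dispatches each path either to the legacy code or to its rewritten replacement, so endpoints can be migrated one at a time behind a stable interface.

    #include <functional>
    #include <map>
    #include <string>

    using Handler = std::function<std::string(const std::string&)>;

    std::string new_profile(const std::string&) { return "rewritten profile page"; }
    std::string legacy_feed(const std::string&) { return "old-codebase feed"; }

    class Facade {
        std::map<std::string, Handler> routes;
    public:
        Facade() {
            routes["/profile"] = new_profile;   // already strangled
            routes["/feed"]    = legacy_feed;   // still served by the old code
        }
        std::string handle(const std::string& path, const std::string& req) {
            auto it = routes.find(path);
            return it != routes.end() ? it->second(req) : "404";
        }
    };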
It's like riding a bike. You need to start in a low gear and get some momentum - even if you are going in circles. Starting from zero in the highest gear is difficult and hard to balance. Once you have some speed, everything gets easier.
> It’s small, it’s dumb, and there were probably plenty of options out there.
Oh, this sort of "dumb" code. That is just exercise. It bothers me that in this field we don't think we should rehearse and exercise and instead use production projects for that.
Actual dumb code is one that disregards edge cases or bets on things being guaranteed when they're not.
@author the blog scales poorly on smaller devices. The header doesn't fit the screen, the margin is too big, and the lines are too crammed (the line height needs a bit more love).
I do not believe that the real struggle is "starting" nowadays, since AI can impressively complete 90% of a task. The struggle is in architecting the whole thing we want to build.
Whether the code is for yourself, for a collaborative team on a project, or for a company, quality matters. Replicability, reproducibility, and reliability are also significant indicators of viable code and dependable results.
I like this philosophy. It's interesting to me that the author writes about trying Deno, specifically out of curiosity for compiling binaries with it, because that is something that's been tickling the back of my mind for a while now, but I've had no real reason to try it. I think this gave me the motivation to write some "stupid" code just to play with it.
You should write stupid code, but you should write good code too.
Writing stupid code is like walking to the shop. You're not going to improve your marathon time, but that's not the point. It's just using an existing skill to do something you need to do.
But you should also study and get better at things. If you learnt to cycle you could get to the shop in a third of the time. Similarly, if you learn new languages, paradigms, features etc. you will become a more powerful programmer.
One thing I have found to be a very valuable habit is to first think about what your software has to do on paper and draw some shitty flow charts, lists, and other things, without too much care about whether you will actually do it (especially if it isn't software that you strictly need to build for some reason).
Whether an idea is good or not can often only be judged when it becomes more concrete. The actual finished project is as concrete as it gets, but it takes time and work to get there. So the next best thing is to flesh it out as much as possible ahead and decide based on that whether it is worth doing it that way.
Most people have the bad habit of being too attached to their own ideas. Kill your darlings. Ideas are meant to be either done, shelved or thrown into the bin. It doesn't do any good to roll them around in your head forever.
I feel like people should be writing stupid code, and in the case where it's a compiled language, we should ask the compiler or the language for better optimization. The other day, I was writing a check that a struct has certain values (protobuf probably has something like this):
struct S { int a; int b; int c; int d; int e; /* about 15 more members */ }
so I wrote
const auto match_a = s.a == 10;
const auto match_b = s.c == 20;
const auto match_c = s.e == 30;
/* about 15 more of these */
if (match_a && match_b && match_c) { return -1; }
Turns out compilers (I think because of the language) totally shit the bed at this. It generates a chain of 20 if-elses instead of a mask using SIMD or whatever. I KNOW this is possible, so I asked an LLM, and it was able to produce code that uses SIMD.
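Not the LLM's SIMD output, but a sketch of one way to nudge the compiler in that direction: combine the comparisons with a non-short-circuiting & so there is one combined condition instead of a ladder of conditional jumps, and (only when the tested fields are adjacent and padding-free) compare whole ranges at once, which compilers routinely lower to wide compares.

    struct S { int a, b, c, d, e; /* about 15 more members */ };

    int check(const S& s) {
        // Bitwise & on the comparison results: no short-circuit, so the optimizer sees
        // one combined condition rather than a chain of 20 if-elses.
        const bool match = (s.a == 10) & (s.c == 20) & (s.e == 30);
        return match ? -1 : 0;
    }

    // If the fields you test happen to be contiguous (say a, b, c) and there is no
    // padding between them, memcmp against a constant prefix is another option that
    // compilers often turn into SIMD compares:
    //   static const int expected[3] = {10, 20, 30};
    //   bool match = std::memcmp(&s.a, expected, sizeof expected) == 0;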
Is LLM output the kind of clever we're talking about here? I always thought the quote was about abstraction astronautics, not large amounts of dumb just-do-it code.
It applies to LLM code, but if you take the law at face value, it's a very damaging one. Cleverness should be used to make your code easier to verify, not harder.
He said it with a very specific idea in mind, and like most of software engineering "laws", if you know enough to know when to apply it, you don't need the law.
No, it just means you'll be spending extra time debugging it. The most clever code is often cleverness which isn't from you, but derived from the field over time.
I get more done by writing the stupid code and fixing it than by junking the old code... but every now and then I can clearly see a structure for a rewrite, and then I rewrite. But it's rare.
I'm working on in-kernel ext3/4fs journalling support for NetBSD. The code is hot garbage but I love it because of the learning journey it's taken me on: about working in a kernel, about filesystems, etc. I'm gonna clean it up massively once I've figured out how to make the support complete, and even then I expect to be raked over the coals by the NetBSD devs for my code quality. On top of that there's the fact that real ones use ZFS or btrfs these days, and ext4 is, by comparison, a toy like FAT, so this may not even be that useful. But it's fun and lets me say: hey Ma, I'm a kernel hacker now!
Ext4 is most certainly still in use and not a toy. It's trusted. It takes a lot for folks to adopt a new file system.
I worked on a research topic in grad school and learned about holes in files, and how data isn't removed until the last fd is closed. I use that systems knowledge in my job weekly.
A tip: kernel development can be lonely; share what you are working on and find others.
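On the "data isn't removed until the last fd is closed" point, a tiny userland demo (holes in files aside): the name disappears immediately on unlink, but the open descriptor keeps the data alive until it is closed.

    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        unlink("scratch.tmp");              // name gone; blocks stay until fd closes
        write(fd, "still here", 10);

        char buf[11] = {0};
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, 10);
        printf("read back: %s\n", buf);     // prints "still here"

        close(fd);                          // now the storage is actually released
        return 0;
    }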
Partially true, in that on Safari on iOS, you can use that to enlarge the text. But that doesn't change what's really broken about the layout, which is that it forces the column width to allow only a small number of words-per-line, which is what makes for uncomfortable reading. Another Safari-on-iOS option would be to use the built-in "Reader" function, which re-flows the text into a cleaner layout.
Maybe you don't read much, but it's obvious they weren't making some universal statement about code. They are referring to the code you write when you are just experimenting by yourself, for yourself. The point is to not let irrelevant things like usefulness, quality, conventions, etc. limit just tinkering and learning.
I think the people who think there is no stupid code don't actually ever witness truly bad code. The worst code that they come across is, at worst, below average. And since that's the worst they see, it gets mentally defined as bad.
I think that's basically an impossibility, unless the only code they look at is from people who have 5 minutes of coding experience and attempt to get working code from vibes (without the LLM). Even suggesting this makes me think you haven't even seen truly stupid code.
I'm talking code from people with no programming experience, trying to contribute to open-source mod projects by pattern matching words they see in the file. They see the keyword static a lot, so they just put static on random things.
And reading the Linux kernel mailing list would allow him to... do what exactly? And by when? Compared to writing simple, working, usable apps in TypeScript, immediately after reading about how Deno/TypeScript/etc. work?
Linux still works by email-submitted patches, the workflow for which git was originally designed.
And if an unacceptable patch made it to Linus's desk, someone downstream hasn't been doing their damn job. The submaintainers are supposed to filter the stupid out, perhaps by more gentle guidance toward noob coders. The reason why Linus gets so angry is because the people who let it through should know better.
For sure. I'd argue to write the "stupid" code to get started, get that momentum going. The sooner you are writing code, the sooner you are making your concept real, and finding the flaws in your mental model for what you're solving.
I used to try to think ahead, plan ahead and "architect", then I realized simply "getting something on paper" corrects many of the assumptions I had in my head. A colleague pushed me to "get something working" and iterate from there, and it completely changed how I build software. Even if that initial version is "stupid" and hack-ish!
You should always have an architecture in mind. But it should be appropriate for the scale and complexity of your application _right now_, as opposed to what you imagine it will be in five years. Let it evolve, but always have it.
I think this is mostly true, but also I’d highlight the necessity of having a mental model, and iterating.
I think it is common for a programmer to just start programming without coming up with any model, and just try to solve the problem by adding code on top of code.
There are also many programmers who go with their first “working” implementation, and never iterate.
These days, I think the pendulum has swung too far from thinking about the program, maybe mapping it out a bit on paper before writing code.
My philosophy:
1. Get it working.
2. Get it working well.
3. Get it working fast.
This puts the "just get it working" as the first priority. Don't care about quality, just make it. Then, and only once you have something working, do you care about quality first. This is about getting the code into something reasonable that would pass a review (e.g., architectually sound). Finally, do an optimization pass.
This is the process I follow for PRs and projects alike. Sometimes you can mix all the steps into a single commit, if you understand the problem&solution domain well. But if you don't, you'll likely have to split it up.
> Finally, do an optimization pass.
Depending on how low-level your code is, this... may not work out in those terms.
In other words, I’d say that if you actually want good software—and that includes making sure its speed falls within a reasonable factor of the napkin-math theoretical maximum achievable on the platform—your three steps can easily constitute three entire rewrites or at least substantial refactors. You might well need to rearchitect if the “working well” version has multiple small loops split by domain-level concern when the hardware really wants a single large one, or if you’re doing a lot of pointer-chasing and need to flatten the whole thing into a single buffer in preorder, or if your interface assumes per-byte ops where SIMD can be applied.
This is not a condemnation of the strategy, mind you. Crap code is valuable and I wish I were better at it. I just disagree that the transition from step 2 to step 3 can be described as an optimization pass. If that’s what you limit yourself to, you’ll quite likely be forced to leave at least an order of magnitude’s worth of performance on the table.
And yes, most consumer software is very much not good by that definition.
(For instance, I’m expecting that the Ladybird devs will be able to get their browser to work well for daily tasks—which I would count a tremendous achievement—but I’m not optimistic about it then becoming any faster than the state of the art even ten or fifteen years ago.)
Some optimization problems require an entire PHD dissertation and research budget to actually optimize, so some algorithms require far more effort applied to this than is reasonable for most products. As mentioned, sometimes you can combine these all into one step -- when you know the domains well.
Sometimes, it might even be completely separate people working on each step... separated by time and space.
In any case, most software generally stops at (2) simply due to the fact that any effort towards (3) isn't worth the effort -- for example, there's very little point in spending two weeks optimizing a report generation that runs in the middle of the night, once a month. At some point, there may be, but usually not anytime soon.
Slight variation: - Make it work - Make it right - Make it fast
Totally agree. Iteration is key. Mapping things out on paper after you've written the code can also be illuminating. Analysis and design doesn't imply one-and-done Architect -> Implement waterfall methods.
Knowing hard requirements up front can be critical to building the right thing. It's scary how many "temporary" things get built in top of and stuck in production. Obviously loose coupling / clear interfaces can help a lot with this.
But an easy example is "just build the single player version" (of an application) can be worse than just eating your vegetables. It can be very difficult to tack-on multiplayer, as opposed to building for this up front.
I once retrofitted a computer racing game/sim from single-player to multi-player.
I thought it was a masterpiece of abusing the C pre-processor to ensure that all variables used for player physics, game state, inputs, and position outputs to the graphics pipeline were guarded with macros to ensure as the (overwhelmingly) single-player titles continued to be developed that the code would remain clean for the two titles that we hoped to ship with split-screen support.
All the state was wrapped in ss_access() macros (“split screen access”) and compiled to plain variables for single-player titles, but with the variable name changed so writing plain access code wouldn’t compile.
I was proud of the technical use/abuse of macros. I was not proud that I’d done a lot of work and imposed a tax on the other teams all for a feature that producers wanted but that in the end we never shipped a single split-screen title. One console title was cancelled (Saturn) and one (mine) shipped single-player only (PlayStation).
That's a great point, and I feel like it is relevant for a lot more than games.
We should definitely have a plan before we start, and sketch out the broad strokes both in design and in actual code as a starting point. For smaller things it's fine to just start hacking away, but when we're designing nå entire application i think the right way to approach it is to plan it out and then solve the big problems first. Like multiplayer.
They don't have to be completely solved, it's an iterative process but they should be part of the process from the beginning.
An example from my own work: I took over an app two other developers had started. The plan was to synchronize data from a third party to our own db, but they hadn't done that. They had just used the third party api directly. I don't know why. So when they left and I took over, I ended up deleting/refactoring everything because everything was built around this third party api and there was a whole bunch of problems related to that and how they were just using the third party's data structure directly rather than shaping the data the way we wanted it. The frontend took 30-60+ seconds to load a page because it was making like 7 serialized requests, waiting for a response before sending the next one and the backend did the same thing.
Now it's loading instantly, but it did require that I basically tear out everything they've done and rewrite most of the system from scratch.
In many project it's impossible to know the requirements up front, or they are very vagues.
Business requirements != programming requirements/features.
Very often both the business requirements and programming requirements change a lot since unless you have already written this one thing, in the exact form that you are making it now, you will NEVER get it right the first time.
You are both right and that's why so many projects are over budget or even fail miserably.
The problem is people don't adapt properly. If the business requirements change so much that it invalidates your previous work then you need to re-do the work. But in reality people just duct tape a bunch of workarounds together and you end up with a frankensystem that doesn't do anything right.
It is possible to build systems that can adapt to change, by decoupling and avoiding cross cutting concerns etc you can make a lot of big sweeping changes quite easily in a well designed system. It's just that most developers are bad at software development, they make a horrible mess and then they just keep making it worse while blaming deadlines and management etc.
This is also why I'm not a fan of the "software architect" that doesn't write code, or at least not the code that they've architected.
> I'd argue to write the "stupid" code to get started, get that momentum going.
Yes and no, depending on how dependent you become on that first iteration, you might drown an entire project or startup in technical debt.
You should only ever just jump in if:
A) it's a one off for some quick results or a demo or whatever
B) it's easy enough to throw away and nobody will try to ship it and make you maintain it
That said, having so much friction and analysis paralysis that you never ship is also no good.
or C): You cultivate a culture of continuous rewrite to match updated requirements and understandings as you code. So, so many people have never learned that, but once you do reach that state, it is very liberating as there will be no more sacred ducks.
That said, it takes quite a bit of practice to become good enough at refactoring to actually practice that.
I frequently do both. It takes longer but leads to great overall architecture. I write a functional thing from scratch to understand the requirements and constraints. It grows organically and the architecture is bad. Once I understand better the product, I think deeply on a better architecture first before basically rewriting from scratch. I sometimes need several iterations on the most complex products.
>>> When I finished school in 2010 (yep, along time ago now), I wanted to go try and make it as a musician. I figured if punk bands could just learn on the job, I could too. But my mum insisted that I needed to do something, just in case.
Amusing coincidence. I also wanted to be a rock star, or at least a successful working musician. My mom also talked me out of it. Her argument was: If there's no way to learn it in school, then go to school anyway and learn something fun, like math. Then you can still be a rock star. Or a programmer, since I had already learned programming.
So I went to college as a math major, and eventually ended up with a physics degree.
I still play music, but not full time, and with the comfort of supporting myself with a day job.
> Amusing coincidence. I also wanted to be a rock star, or at least a successful working musician.
> I still play music, but not full time, and with the comfort of supporting myself with a day job.
Some people say;
This is said by people who regret their own choices.Other people say;
This is said by people who had misconceptions about what pursuing their dream actually entailed.I say;
But that's just me.Easy to say it's immaterial when you're probably an american or european with plenty of material comfort. When was the last time you didn't eat for lack of food, for instance? Adieu to logic, indeed.
I do think you need the dream to be happy. Money/career is a prerequisite of dreams.
Some want to live on the seas. They can be perfectly happy as a sailor, even if poor and single.
Some want a family, educated children, respect. They would likely need a nice house, enough resources to get a scholarship, a shot at retirement. This is obtainable working in public service, even without money.
But most have multiple dreams. That's what makes things complex. The man who wishes for a wife but also wishes to be on the seas will find much fewer paths available. Sailors also don't generally get respected by most in laws.
To mix the two, they try to find the dream-job. Perhaps work for a big oil company and be 'forced' to go offshore.
Eventually people learn that desire is suffering in some form and cut down on the number of dreams. They may even see this as mature and try to educate others that this is the way. Those who have kids often are forced to pick kids as the dream. So there's a selection bias as well.
vonnegut said:
"Don't put one foot in your job and the other in your dream, Ed. Go ahead and quit, or resign yourself to this life. It's just too much of a temptation for fate to split you right up the middle before you've made up your mind which way to go".
I say if you have money to do whatever you want everyday, there’s an overwhelming chance you’ll be happy.
The rest of those sayings are just for us plebs that have to rationalize working 40-60 hours a week.
But that's a trap, the money you "need" is partly decided by how much you have available. Once you're used to the money from a 40 day job it's hard to do with less, but other people manage fine because they never got used to having a lot.
My key point is that happiness is a choice. I hope everyone can find a way to choose it.
That's wrong. Plenty of people have "found a way to choose happiness" already, and we've all seen what exactly they've wrought.
I think PP's point was .. that even if you spend your whole life pressed into laboring to produce a surplus to satisfy the excessive consumption of the elites of your heretical society, in ways that create existential risk for future generations, and are at odds with your own inner values and moral compass.. you can still see the 'immateriality' of all that in the grand scheme of things and choose to be happy.
No, my point is happiness is a choice.
There is no sociopolitical statement, no call-to-arms, no pontification as to the measure of one's life, no generational implications. There is an existential consideration, but not of the nature your post implies.
Happiness is an individual choice, available to us all at any time.
Full stop.
Yeah, no.
What you're promoting is a deeply narcissistic worldview, and I hope either the cure or the consequences reach you soon.
Though maybe those are going to present as the same thing.
They’re not entirely wrong, and your comment is seemingly unhinged and unprovoked… but there’s a lot of literature on stuff like mindfulness, CBT, and the impact thoughts can have on one’s emotions, especially happiness.
I really don't see how your comment has to do with his at all. You're both talking about completely different things, I think.
OK, just to make sure we're on the same page here: you've already tried trying to make the connection, right?
Please step right into this here experience machine, it won't hurt you one bit
Luckily you can still pursue being a musician without all the pressure of having to be successful. On this road, one day you are free to declare your own success to yourself
Indeed. On the other hand, I also know my limitations, since roughly half the people I play with are pro's with music degrees. And I'm still trying to improve.
I'm inspired by the quote from Pablo Casals when he was in his 90s. They asked him why he still needed to practice, and he said: "Because I'm finally beginning to see some improvement."
Maybe if the internet and piracy hadn't fucked artists over, they could have made decent money as a musician selling their work without having to be a major-label superstar. Alas, we do not live in that timeline.
Did most artists (in whatever form) ever have a good living in and of itself?
No. The best/luckiest did, at least some of the time. Most didn't.
Yes. Mostly, until relatively recently on a historical time scale. In the middle ages, musicians were employed by towns, and had a guild. They also worked for princes, the church, etc. I read an article saying that they often did double duty as cops on market days.
There was perhaps less of a distinction between "arts" and trades. People did all kinds of work on paintings, sculptures, etc., and expected to get paid for it. They rarely put their names on their works.
I've read a bit about Bach's life, and he was always concerned about making money.
One music history textbook I read identified the invention of printed music as the start of the "music industry." Before the recording era, people composed and published sheet music. There were pianos in middle class parlors, and people bought sheet music to play. Two names that come to mind were Scott Joplin and Jelly Roll Morton. Movie theaters hired pianists or organists, though that employment vanished when talking movies came out. The musicians of the jazz era were making their livings from music. One familiar name is Miles Davis. His father was a dentist, and his parents considered music to be a respectable middle class career for their son. People did make a living from recordings before the Internet era. Today, lucrative work in the arts still exists for things like advertising and sports broadcasting.
(Revealing my bias, I'm a jazz musician).
In fact the expectation that an artist should not earn a decent living is kind of a new thing.
Ah, one of the luxuries of not having to support myself from music... I have no interest in making recordings. That audience means nothing to me.
Piracy didn't fuck artists over I think (anecdotal), because it was the precursor to Spotify which has been great for artist discovery. Until the industry / artists caught on and started pushing shit. And the payment model for Spotify is bad, a million streams earns about $3-5K according to a quick google and few actually get that far.
But it's good for discovery, and artists generally don't make much off album sales either; concerts and merchandise is where it's at.
Really still kicking myself for not majoring in robotics in school. I wanted to program, so I studied computer engineering but hadn't really absorbed that much in classes. But I will likely never have access to all the robotics stuff my school had, nor the guided learnings.
Never too late to try stuff out of course, but very little beats structured higher ed education in relatively small classes (think there was only about 24 people in the robotics major?)_
Nothing beats concentrated work. You can do that without formal education. It might even be easier: you can probably afford pretty good arduinos and raspberries and H-bridges and sensors and actuators and...
It shouldn't be hard to go beyond what almost all universities provide.
On the other hand, the one robotics course I took involved getting access to computers at 3am and doing horrific matrix multiplications by hand that took hours. Of course, this was a long time ago.
I worked with robotics engineers, their code and development methods were poor even though software is essential. You need both sides.
I'm reminded of the quantity vs. quality groups in a photography class:
https://sebastianhetman.com/why-quantity-matters/
Do stuff, and you learn stuff. Go play.
While I generally agree with the conclusion of that, I think it might be a bit too naive.
The quantity group has a trivial way to "hack" the metric. I can just sit there snapping photos of everything. I could just set up a camera to automatically snap photos all day and night. To be honest, if I'm not doing this at a stationary wall there's probably a good chance I get a good photo since even a tiny probability can be an expected result given enough samples.
But I think the real magic ingredient comes from the explanation
The way I read this is "The quantity group felt assured in their grade, so used the time to be creative and without pressure." But I think if you modified the experiment so that you'd grade students on a curve and in proportion to the number of photos they took then the results might differ. Their grade might not feel secure as it could take just one person to communicate that they're just snapping photos all day as fast as they can. In this setting I think you'd have even less ability to explore and experiment than the quality group.I do think the message is right though and I think this is the right strategy in any creative or primarily mental endeavor (including coding). The more the process depends on creativity the more time needs to be allocated to this type of exploration and freedom. I think in jobs like research that this should be the basis for how they are structured and effectively you should mostly remove evaluation metrics and embrace the fundamentally ad hoc nature. In things like coding I think you need more of a mix and the right mix depends highly on the actual objectives. But I wanted to make the above distinction because I think it is important if we're trying to figure out what those objectives are.
Quantity has a quality all its own.
It worked for Garry Winogrand, for one.
Yesterday I spent the entire day working on a lib to create repos in Github from inside Emacs. It was the first time in 3y that I had touched it. When googling, I saw potential candidates that were much better than my simple one. But I kept going, for the pleasure of making my own thing. I learned a lot, and felt very accomplished, even if, at the end, it was messy, and I'll have to go back and reorganize it. It feels like making _my_ thing, even if it is drawing my copy of Monalisa.
I agree but there are certain types of unnecessary stupidity, which feel more easy at at first, but hurt more than they help very quickly (measured in amount of code):
The first one that comes to mind relates closely to naming. If we think about a program in terms of its user facing domain, then we might start to name and structure our data, functions, types too specifically for that domain. But it's almost always better to separate computational, generic data manipulation from domain language.
You only need a _little bit_ more time to move much of the domain specific stuff into your data model. Think of domain language as values rather than field names or types. This makes code easier to work with _very quickly_.
Another stupidity is to default to local state. Moving state up requires a little bit of planning and sometimes refactoring and one has to consider the overall data model in order to understand each part. But it goes a long way, because you don't end up with entangled, implicit coordination. This is very much true for anything UI related. I almost never regret doing this, but I have regretted not doing this very often.
A third thing that is unnecessarily stupid is to spread around logic. Harder to explain, but everyone knows the easy feeling of putting an if statement (or any kind of branching, filtering etc.) that adds a bunch of variables somewhere, where it doesn't belong. If you feel pressed to do this, re-consider whether your data is rich enough (can it express the thing that I need here) and consistent enough.
> we might start to name and structure our data, functions, types too specifically for that domain.
I once worked on a Perl script that had to send an email to "Harry". (Name changed to protect the innocent). I stored Harry's email address in a variable called "$HARRY".
Later on a second person (with a different name) wanted to get the emails as well. No problem, just turn the scalar into an array, "@HARRIES".
I thought it was very funny but nobody else did.
I both agree and disagree with this post, but I might be misunderstanding it. Near the end, it states:
“Enjoy writing it, it doesn’t have to be nice or pretty if it’s for you. Have fun, try out that new runtime or language.”
It doesn’t have to be nice or pretty EVEN if it’s NOT for you. The value in prototyping has always been there and it’s been very concrete: to refine mental models, validate assumptions, uncover gaps in your own thinking (or your team’s), you name it.
Unfortunately it feels that the pendulum has swung in the completely opposite direction. There's a lot of "theatre" in planning, writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that's actively harmful for building software. When you get stuck in planning mode you let wrong assumptions grow and get baked into the design, so the sunk cost keeps rising.
Simply have a BASIC and SHARED mental model of the end goal with your team and start prototyping. LLMs have made this RIDICULOUSLY CHEAP. But, the industry is still stuck in all the wrong ways.
I feel like this is the time to mention "How Big Things Get Done", by Bent Flyvbjerg. "Long planning vs. start prototyping" is a false dichotomy. Prototyping IS planning.
Put another way, refining tickets for weeks isn't the problem; the problem is when you do this without prototyping, chances are you aren't actually refining the tickets.
Planning stops when you take steps that cannot be reverted, and there IS value in delaying those steps as much as possible, because once you take them your project becomes vulnerable to outside risk. Long planning is valuable because of this; it's just that many who advocate for long planning would take a long time and not actually use that time for planning.
> It doesn’t have to be nice or pretty EVEN if it’s NOT for you.
> There’s a lot of “theatre” in planning, writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that’s actively harmful for building software.
I'd love to have a "high paying job" where I am allowed to start prototyping and modelling the problem and then iteratively keep on improving it into a fully functional solution.
I won't deny that the snowballing of improvements and functional completeness manifests as an acceleration of "delivery speed", and as a code-producing experience it is extremely enjoyable. Depth-first traversal into curiosity-driven problem solving is a very pleasurable activity.
However, IME in the real world, someone up the chain is going to ask "when will you deliver this". I have only ever once been in a privileged enough position in a job to say "I am on it and I will finish it when I finish it... and it will be really cool".
Planning and task breakdown, as a developer, is pretty much like my insurance policy. Because when someone up the chain (all the way down to my direct manager) comes asking "How much progress have you made?" I can say (OR "present the data", as it is called in a certain company) "as per the agreed plan, out of the N things, I have done k (< N) things so far. However, at this (k+1)th thing I am slowing down or blocked, because during planning that-other-thing never got uncovered and we have a scope-creep/external-dependency/cattle-in-the-middle-of-the-road issue". At which point a certain type of person will also go all the way to push the blame onto another colleague to make themselves appear better, hence eligible for promotion.
I would highly encourage everyone to participate in the "planning theatre" and play your "role".
OR, if possible start something of your own and do it the way you always wanted to do it.
> ... for WEEKS before actually starting to write code...
I'm curious who is in these kinds of jobs. Because I've never seen this in practice.
For my money, certain types of software shouldn't have tests, too much planning, or any maintenance whatsoever:
Prototypes (startups) rarely have the luxury of "getting it right"; their actual goal is "getting it out there FAST to capture the market (and have it working enough to keep the market)".
Some game devs (apologies, I'm not enough of a game dev to be able to say which types this applies to): they more or less build it, ship it, and are done with it. Players tend to be forgiving of most bugs, and they move on to the next shiny thing long before it's time to fix all the things.
Once the product has traction in the market and you have paying customers, then it's time to deal with the load (scale up) and the bugs. I recall reading somewhere that it's probably best to drop the startup team at that point (they did their job, and are now free to move on to the next brilliant idea) and replace them with a scale-up team, who will do the planning, architecting, and preparation for the long-term life of the software.
I think that approach would have worked for Facebook, for example: they had their PHP prototype that captured the market very quickly, and (IMO) they should have moved on to a scale-up team, who could have used the original code as a facade, strangling it to replace it with something funky (Java/C++ would have been what was available at the time, but Go is what I would suggest now).
It's like riding a bike. You need to start in a low gear and get some momentum, even if you are going in circles. Starting from zero in the highest gear is difficult and hard to balance. Once you have some speed, everything gets easier.
> It’s small, it’s dumb, and there were probably plenty of options out there.
Oh, this sort of "dumb" code. That is just exercise. It bothers me that in this field we don't think we should rehearse and exercise and instead use production projects for that.
Actual dumb code is one that disregards edge cases or bets on things being guaranteed when they're not.
I would say that much of my code starts out stupid and, hopefully, becomes better with refinement.
First do it, then do it right, then do it better.
Or just ship it, always an option
Yeah! That's great, thank you! I will share this quote with a couple people with whom I've discussed this topic recently :)
The more popular quote IME is "make it work, make it right, make it fast" (1983!)
https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast
@author the blog scales poorly on smaller devices. The header doesn't fit the screen, the margins are too big, and the lines are too crammed (the line height needs a bit more love).
https://i.imgur.com/Ev6Ea1b.png
I do not believe that the real struggle is "starting" nowadays, since AI gives the impression, 90% of the time, that it is able to complete a task. We struggle with architecting the whole thing we want to start.
This reminds me of the (excellent!) book by Jamis Buck: https://pragprog.com/titles/jbmaze/mazes-for-programmers/
They write a maze algo in any new language they learn just to learn bits of the language.
A modern variant would be to do a year of Advent of Code in the new language.
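For anyone curious what that exercise looks like, here is a minimal TypeScript sketch of the binary tree algorithm, one of the simplest maze generators covered in the book (the names here are mine):

    // Binary tree maze: each cell randomly carves a passage north or east.
    // The top row can only carve east, the rightmost column only north,
    // and the corner cell carves nothing.
    type Cell = { north: boolean; east: boolean };

    function binaryTreeMaze(rows: number, cols: number): Cell[][] {
      const grid: Cell[][] = Array.from({ length: rows }, () =>
        Array.from({ length: cols }, () => ({ north: false, east: false })),
      );
      for (let r = 0; r < rows; r++) {
        for (let c = 0; c < cols; c++) {
          const canNorth = r > 0;
          const canEast = c < cols - 1;
          if (!canNorth && !canEast) continue;
          if (canNorth && (!canEast || Math.random() < 0.5)) grid[r][c].north = true;
          else grid[r][c].east = true;
        }
      }
      return grid;
    }

Porting something this small to each new language walks you through types, arrays, loops, and randomness without any project pressure.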
Even if the code is for yourself, for a collaborative team on a project, or for a company, quality matters. Software replicability, reproducibility, and reliability are also significant indicators of viable code and guaranteed results.
Also read "stupid" code :)
I didn't know about Deno and streams, but this looks fine
Looks like it's straight out of Dart.
Shame that the writer didn't tie the initial story, about wanting to be a musician without knowing anything about it, back to the end of the piece.
Also, 2010 was just yesterday my young friend :)
I like this philosophy. It's interesting to me that the author writes about trying Deno, specifically out of curiosity for compiling binaries with it, because that is something that's been specifically tickling the back of my mind for a while now, but I've had no real reason to try it. I think this gave me the motivation to write some "stupid" code just to play with it.
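If anyone wants the same excuse, this is roughly the smallest possible thing to point it at (hypothetical file name; as far as I know, deno compile --output is the documented invocation):

    // main.ts: compile with
    //   deno compile --output hello main.ts
    // then run the resulting binary:
    //   ./hello World
    const name = Deno.args[0] ?? "stranger";
    console.log(`Hello, ${name}!`);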
Bun's faster, and the binaries are a bit smaller, last I checked.
You should write stupid code, but you should write good code too.
Writing stupid code is like walking to the shop. You're not going to improve your marathon time, but that's not the point. It's just using an existing skill to do something you need to do.
But you should also study and get better at things. If you learnt to cycle you could get to the shop in a third of the time. Similarly, if you learn new languages, paradigms, features etc. you will become a more powerful programmer.
Where did you study games? Seems like we have similar trajectories.
One thing I have found to be a very valuable habit is to first think about what your software has to do on paper and draw some shitty flow charts, lists, and other things, without too much care about whether you will do it (especially if it isn't software that you strictly need to do for some reason).
Whether an idea is good or not can often only be judged when it becomes more concrete. The actual finished project is as concrete as it gets, but it takes time and work to get there. So the next best thing is to flesh it out as much as possible ahead and decide based on that whether it is worth doing it that way.
Most people have the bad habit of being too attached to their own ideas. Kill your darlings. Ideas are meant to be either done, shelved or thrown into the bin. It doesn't do any good to roll them around in your head forever.
"In the beginning you always want the results. In the end all you want is control."
I feel like people should be writing stupid code, and in the case where it's a compiled language, we should ask the compiler or the language for better optimization. The other day, I was writing a check that a struct's fields have certain values (protobuf probably has something like this):
    struct S {
        int a;
        int b;
        int c;
        int d;
        int e;
        /* about 15 more members */
    };
so I wrote
    const auto match_a = s.a == 10;
    const auto match_b = s.c == 20;
    const auto match_c = s.e == 30;
    /* about 15 more of these */
    if (match_a && match_b && match_c) {
        return -1;
    }
Turns out compilers (I think because of the language) totally shit the bed at this. They generate a chain of 20 if-elses instead of a mask using SIMD or whatever. I KNOW this is possible, so I asked an LLM, and it was able to produce said code that uses SIMD.
Why is this a struct and not an array of ints?
The Kernighan law says debugging code is twice as hard as creating it.
Therefore, if you push yourself to the limit of your abilities to create the most clever code you can, you won't be able to debug it.
> The Kernighan law says debugging code is twice as hard as creating it.
> Therefore, if you push yourself to the limit of your abilities to create the most clever code you can, you won't be able to debug it.
If only advocates of LLM-based code generation understood this lemma.
Is LLM output the kind of clever we're talking about here? I always thought the quote was about abstraction astronautics, not large amounts of dumb just-do-it code.
It applies to LLM code, but if you take the law at face value, it's a very damaging one. Cleverness should be used to make your code easier to verify, not harder.
He said it with a very specific idea in mind, and like most software engineering "laws", if you know enough to know when to apply it, you don't need the law.
No, it just means you'll be spending extra time debugging it. The most clever code is often cleverness which isn't from you, but derived from the field over time.
I get more done by writing the stupid code and fixing it than by junking the old code... but every now and then I can clearly see a structure for a rewrite, and then I rewrite, but it's rare.
Today it's vibe stupid code.
I'm working on in-kernel ext3/4fs journalling support for NetBSD. The code is hot garbage, but I love it because of the learning journey it's taken me on: about working in a kernel, about filesystems, etc. I'm gonna clean it up massively once I've figured out how to make the support complete, and even then I expect to be raked over the coals by the NetBSD devs for my code quality. On top of that, there's the fact that real ones use ZFS or btrfs these days, and ext4 is a toy by comparison, like FAT, so this may not even be that useful. But it's fun and lets me say: hey Ma, I'm a kernel hacker now!
Ext4 is most certainly still in use and not a toy. It's trusted. It takes a lot for folks to adopt a new file system.
I worked on a research topic in grad school and learned about holes in files, and how data isn’t removed until the last fd is closed. I use that systems knowledge in my job weekly.
A tip: kernel development can be lonely; share what you are working on and find others.
He said NetBSD. There I would expect ext4 to be considered a toy, even though it is used a lot in Linux land. Different worlds.
This is a particularly bad mobile layout. Fix your margins.
At least on iOS, if you double tap the text it'll fill the viewport just nicely.
Partially true, in that on Safari on iOS, you can use that to enlarge the text. But that doesn't change what's really broken about the layout, which is that it forces the column width to allow only a small number of words-per-line, which is what makes for uncomfortable reading. Another Safari-on-iOS option would be to use the built-in "Reader" function, which re-flows the text into a cleaner layout.
I appreciate the sentiment, but "There is no stupid code" is the dumbest sentence I've ever read.
Maybe you don't read much, but it's obvious they weren't making some universal statement about code. They are referring to the code you write when you are just experimenting by yourself, for yourself. The point is to not let irrelevant things like usefulness, quality, conventions, etc. limit just tinkering and learning.
Stupid code is fine. Make it work/exist first, you can make it good later.
Yeah, I think he’s trying to equate it to something like “there are no stupid questions.” That’s a pretty silly analogy, but you get the idea.
When you're paralyzed into not putting anything on the page, it's important to just get the dumb idea into the IDE and refactor from there.
He will counter with "There are no stupid sentences"!
I think the people who think there is no stupid code don't actually ever witness truly bad code. The worst code that they come across is, at worst, below average. And since that's the worst they see, it gets mentally defined as bad.
That’s a charitable interpretation. The other more pessimistic one is that they only see stupid code, which cannot be made any stupider.
I think that's basically an impossibility, unless the only code they look at is from people who have 5 minutes of coding experience and attempt to get working code from vibes (without the LLM). Even suggesting this makes me think you haven't even seen truly stupid code.
I'm talking code from people with no programming experience, trying to contribute to open-source mod projects by pattern matching words they see in the file. They see the keyword static a lot, so they just put static on random things.
> Fast forward to today. I’ve been doing a dive on JavaScript/TypeScript and different runtimes like NodeJS and Deno,
That's why. If all the code in a project is stupid, then relatively speaking there's indeed no stupid code.
Go read the Linux kernel mailing list.
And reading the Linux kernel mailing list would allow him to... do what exactly? And by when? Compared to writing simple, working, usable apps in TypeScript, immediately after reading about how Deno/TypeScript/etc. work?
It would allow him to brutally roast anyone who submits a sub-optimal merge request.
Linux still works by email-submitted patches, the workflow for which git was originally designed.
And if an unacceptable patch made it to Linus's desk, someone downstream hasn't been doing their damn job. The submaintainers are supposed to filter the stupid out, perhaps by more gentle guidance toward noob coders. The reason why Linus gets so angry is because the people who let it through should know better.
To avoid writing stupid code, since they will see qualified people give reasons why some code is "garbage" (not that I'm saying this myself).