Always a great read!
However, things have actually changed since then. A lot.
Here is Brooks at a "20 year retrospective" panel[1] at OOPSLA '07:
"When ‘96 came and I was writing the new edition of The Mythical Man-Month and did a chapter on that, there was not yet a 10 fold improvement in productivity in software engineering. I would question whether there has been any single technique that has done so since. If so, I would guess it is object oriented programming, as a matter of fact."
Large-scale reuse, which was still a pipe-dream in 1986 and only just beginning in 1996, is now a reality. One which has brought its own set of problems, among them the potential for a lot of accidental complexity.
We now use 50 million lines of code to run a garage door opener[2]. I can declare with some level of confidence that the non-essential complexity in that example is more than the 90% that Brooks postulated as a safe upper limit: even the remaining five million lines are not the essential complexity of a garage door opener.
And while that is an extreme example, it seems closer to the normal case these days than the comparatively lean systems Brooks was thinking about.
[1] https://www.infoq.com/articles/No-Silver-Bullet-Summary/
[2] https://berthub.eu/articles/posts/a-2024-plea-for-lean-softw...
I have a hypothesis about how so much of this happens. There's a seemingly paradoxical idea at work: simplicity requires complexity, while complexity can be achieved with simplicity.
I think a lot of this complexity happens through everyone's efforts to make things simple. The problem is that there are multiple ways to define "simple". I have a good example of this that happened just last week. I have some Papa John's gift cards from Costco, and was ordering a pizza with a friend. So we put in the gift card amount and find out it is just short of the order total. No problem, we'll pay the difference with another card. Problem is... that's not possible. We had to edit the order to bring the price down. In fact, I think we see things like this happen with pre-paid Visa/MasterCard cards too. The problem was that the programmer made too simple an assumption: customers only need one gift card. How would we fix this? We should probably write a method that accepts an arbitrary number of gift cards and credit cards. Our method would need to allow this arbitrary splitting, and the question then is how much we'd want to expose to the user (do we want them to split a purchase across two credit cards?).
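For what it's worth, here's a minimal sketch (Python) of what "accept an arbitrary list of payment methods" might look like. The names, the greedy split, and the infinite-balance trick for credit cards are all my own assumptions for illustration, not any real ordering system's API:
    from dataclasses import dataclass
    @dataclass
    class PaymentMethod:
        # Hypothetical payment instrument: a gift card with a balance,
        # or a credit card modeled as an unlimited balance.
        label: str
        available: float
    def split_payment(total, methods):
        # Greedily charge each method up to its balance until the total is covered.
        charges, remaining = [], round(total, 2)
        for m in methods:
            if remaining <= 0:
                break
            amount = round(min(m.available, remaining), 2)
            if amount > 0:
                charges.append((m.label, amount))
                remaining = round(remaining - amount, 2)
        if remaining > 0:
            raise ValueError(f"payment short by {remaining:.2f}")
        return charges
    # The gift card covers what it can; the credit card picks up the difference.
    print(split_payment(27.48, [PaymentMethod("gift card", 25.00),
                                PaymentMethod("credit card", float("inf"))]))
    # [('gift card', 25.0), ('credit card', 2.48)]
The point isn't this particular function; it's that a list of payment methods is barely more code than a single one, and it doesn't force the user to go edit their order.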
I think all of us can hear the likely conversation in our heads about implementing such a payment system. Someone is going to say that we're overcomplicating the problem, and that, especially when we're talking about credit cards, it's probably a safe assumption that the order will be completed with a single card. Except it's fucking pizza. How often do you end up Venmoing your buddy after you order a pizza? You didn't actually make things simpler, you just moved the problem elsewhere and actually added complexity. It's definitely more complex to need a whole other app to facilitate a second transaction than it is to just allow arbitrary payment cards.
I see this happen in code all the time, and I think the Unix Philosophy[0] can be really beneficial here: you make your functions do very basic things and then put them together. I try very hard to write flexible code, because the only thing I know is that the goals of the code are going to change over time. So you don't write just to get a specific problem solved; you write to get your problem solved and to let others use your building blocks, and your building blocks should allow things to be modified. How many of those 50 million lines of code are redundant? I'd wager a fair amount![1] This isn't hard, you just use very minimal abstraction. For example, if you write a function to calculate tax, you don't hard-code the sales tax rate even though your business only operates in <state of your choosing>; you just expose the rate as a parameter. Extrapolate this out. This is technically more complex if we're viewing the function in isolation, but it is simpler if we're viewing the program as a whole (and over its lifetime). Of course, you can go too far with this too, and striking that balance is difficult. But typically I see code that is inflexible rather than too flexible.
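To make the tax example concrete, a tiny sketch in Python (the 7.25% default is a made-up placeholder, not a real rate):
    # Inflexible: the rate is baked in, so any change means editing the function.
    def sales_tax_fixed(subtotal):
        return subtotal * 0.0725
    # Flexible: same amount of code, but the rate is a parameter with a default,
    # so other regions (or a config value) can reuse the building block as-is.
    def sales_tax(subtotal, rate=0.0725):
        return subtotal * rate
    print(round(sales_tax(100.00), 2))        # 7.25 -- existing callers unchanged
    print(round(sales_tax(100.00, 0.04), 2))  # 4.0  -- reused without editing it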
[0] https://en.wikipedia.org/wiki/Unix_philosophy
[1] I hear people say they get a lot of utility from coding agents because it takes care of boiler plate and repetition. I think that says something. If you're doing something over and over (even with a bit of variation) then there's probably a better way to do this. Problem is that if you're just rushing to finish Jira tickets you'll probably never have the time to figure that out. If your manager just measures your performance in number of tickets completed they'll never care how many tickets your code created. Sometimes to go fast you gotta slow down. IMO the most time consuming parts of writing code are the planning and debugging (analyzing, rethinking, rewriting, not just fixing errors) stages. Writing lines is much smaller in comparison and can be done in parallel.
To add to this, I think that in the past decades the rise of large software ecosystems (e.g. around Python, Java, Windows, AWS, Azure) has had both positive and negative effects on accidental complexity.
On the one hand libraries and platforms save developers from reimplementing common functionality. But on the other hand, they introduce new layers of accidental complexity through dependency management, version conflicts, rapid churn, and opaque toolchains.
This means that accidental complexity has not disappeared, it has only moved. Instead of being inside the code we write, it now lives in the ecosystems and tools we must manage. The result is a fragile foundation that often feels risky to depend on.
Reminds me of this comic: https://www.monkeyuser.com/2018/implementation/
> this means that accidental complexity has not disappeared, it has only moved
I would argue that it is essential complexity that we reuse (what the library does), at the added cost of some accidental complexity from dependency management, etc.
Which is a fair price when the essential complexity you reuse is larger than the overhead (e.g. it generally makes no sense to bring in isOdd as a separate library, but for larger functionality you are likely better off doing so).
This is a general trend in society and it is psychologically stressful as issues affecting you shift outside of your circle of control or even influence.
Related. Others?
No Silver Bullet with Dr. Fred Brooks (2017) [video] - https://news.ycombinator.com/item?id=40233156 - May 2024 (1 comment)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=32423356 - Aug 2022 (43 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=25926136 - Jan 2021 (9 comments)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=20818537 - Aug 2019 (85 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=15476733 - Oct 2017 (8 comments)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=10306335 - Sept 2015 (34 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=3068513 - Oct 2011 (2 comments)
"No Silver Bullet" Revisited - https://news.ycombinator.com/item?id=239323 - July 2008 (6 comments)
--- and also ---
Fred Brooks has died - https://news.ycombinator.com/item?id=33649390 - Nov 2022 (211 comments)
It's always humbling to re-read Brooks. His central thesis—that the real difficulty is the "essential complexity" of fashioning conceptual structures, not the "accidental" complexity of our tools—has held up for decades. As many in this thread have noted, it feels more relevant than ever.
Brooks masterfully identified the core of the problem, including the "invisibility" of software, which deprives the mind of powerful geometric and spatial reasoning tools. For years, the industry's response has been better human processes and better tools for managing that inherent complexity.
The emerging "agentic" paradigm might offer the first fundamentally new approach to tackling the essence itself. It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.
The idea is to shift the role of the senior developer from a "master builder," who must hold the entire invisible, complex structure in their mind, to a *"Chief Architect" of an autonomous agent crew.*
In this model, the human architect defines the high-level system logic and the desired outcomes. The immense cognitive load of managing the intricate, interlocking details—the very "essential complexity" Brooks identified—is then delegated to a team of specialized AI agents. Each agent is responsible for its own small, manageable piece of the conceptual structure. The architect's job becomes one of orchestration and high-level design, not line-by-line implementation.
It's not that the complexity disappears—it remains essential. But the human's relationship to it changes fundamentally. It might be the most significant shift in our ability to manage essential complexity since the very ideas Brooks himself proposed, like incremental development. It's a fascinating thing to consider.
This is a misunderstanding of what essential complexity is.
If it could be subdivided into its own small, manageable piece, then we wouldn't really have a problem as human teams either.
But the thing is, composing functions can lead to significantly higher complexity than the individual pieces themselves have -- and in the same vein, a complex problem may not be nicely subdivisible; there is a fixed, essential complexity to it on a foundational, mathematical level.
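A small, hypothetical illustration of that first point (toy code, not from the article): each wrapper below is trivial on its own, but the behavior of the composition depends on ordering and interaction in ways neither piece exposes by itself.
    import functools
    def retry(times):
        # Trivially simple on its own: re-run the function on failure.
        def deco(fn):
            @functools.wraps(fn)
            def wrapper(*args):
                for attempt in range(times):
                    try:
                        return fn(*args)
                    except Exception:
                        if attempt == times - 1:
                            raise
            return wrapper
        return deco
    def cache(fn):
        # Also trivially simple: remember previous results by argument.
        memo = {}
        @functools.wraps(fn)
        def wrapper(*args):
            if args not in memo:
                memo[args] = fn(*args)
            return memo[args]
        return wrapper
    calls = 0
    def flaky_fetch(key):
        # Pretend network call that fails once, then succeeds.
        global calls
        calls += 1
        if calls == 1:
            raise ConnectionError("transient failure")
        return f"value-for-{key}"
    # cache(retry(...)) retries through the failure and caches the good result.
    # retry(cache(...)) is a different system: a cache that remembered errors
    # would pin the failure, and neither ordering is "obviously" correct.
    fetch = cache(retry(3)(flaky_fetch))
    print(fetch("a"), fetch("a"), "calls:", calls)  # value-for-a value-for-a calls: 2
The complexity of the composite doesn't live in either piece; it lives in their interaction, which is exactly why subdividing a problem doesn't automatically subdivide its essential complexity.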
Plutonium remains dangerous, although is more easily handled these days with a robot claw controlled by a remote operator.
That's a perfect analogy. The essential complexity (the plutonium) doesn't go away, but our ability to manipulate it from a safer, more strategic distance (the robot claw) is what's changing.
Well put.
I fear for the day that bot responses are not identifiable by em dashes and "not x but y" structures.
> It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.
I think you're right but people are treating it like the silver bullet. They're saying "actually the AI will just eliminate all the accidental complexity by being the entire software stack, from programming language to runtime environment."
So we use the LLM to write Python, and one day hope that it will just also eliminate all the compilers and codegen sitting between the language and the metal. That's silver bullet thinking.
What LLMs are doing is managing some accidental complexity, but they're adding more. "Prompt engineering" and "context engineering" are accidental complexity. The special config files LLMs use are accidental complexity. The peculiarities of how the LLM sometimes hallucinates, can't answer basic questions, and behaves differently based on the time of day or how long you've been using it are accidental complexity. And what's worse, it's stochastic complexity, so even if you get your head around it, it's still not predictable.
So LLMs are not a silver bullet. Maybe they offer a new way of approaching the problem, but it's not clear to me we arrive at a new status quo with LLMs that does not also have more accidental complexity. It's like, we took out the spec sheet and added a bureaucracy. That's not any better.
A classic. I was just getting started, when he wrote that, and that kind of thinking informed a lot of my personal context, throughout my career.
I feel as if a lot of multipliers have happened that he didn't anticipate, but I also feel as if the culture of software engineering has kind of decomposed, since his day.
We seem to be getting a lot of fairly badly-done work out the door, very quickly, these days.
> I feel as if a lot of multipliers have happened that he didn't anticipate
Such as? I think his essay still stands the test of time: no single multiplier is even close to an order-of-magnitude productivity boost, with the exception of reusing already existing code.
LLMs are possibly the biggest change to how software is developed, but they are also nowhere near that magnitude - if they help at all - in the case of more complicated software.
You’re probably right, when I think about it.
I know that OOP was just getting its feet under it, when he wrote that. It turned out to have a huge multiplying effect on productivity, but also introduced a whole new universe of footguns.
Maybe if OOP had been introduced, along with some of the disciplines that evolved, it might have been a big multiplier, but that took time.
I guess, upon reflection, each of our big “productivity boosts” were really evolutionary movements, that took time to evolve.
He really was quite prescient.
> It turned out to have a huge multiplying effect on productivity
In retrospect, I am highly sceptical that OOP has had any positive impact on productivity. Its popularity coincided with the advent of automatic memory management, the World Wide Web and relational databases, and later with dependency management tools such as Maven. It is much more likely that it is these factors that have improved productivity, rather than OOP.
No, it helped. What OOP gave us was the ability to intelligently abstract complexity.
I’ve been coding since we used flint knives, and can tell you, from personal experience, that it did.
However, when it was introduced, it was done so, with some of the most outrageous claptrap I’ve ever heard. It came out of the starting gate, hobbled.
Those of us grizzled vets, with hype-scars, ignored that shit, and did pretty well.
I think we only got to this point because of a near-complete erosion of personal responsibility
- agile and devops both conspire to treat developers as replaceable standins
- we're not even really expected to hang around and see the consequences of our decisions
- on arriving in a new organization, we're presented with a heap of trash and asked to just sort of keep it running, certainly not to fix it
- 'industry standard best practices' win over a well designed bespoke solution every time, developers are just expected to write a little glue at most
- managers aren't expected to know anything about the domain at all, but to track people to make sure they did what they said they were going to
- speed to feature is the only metric. instability can be papered over with bodies
- we pretty much stopped systemic testing a couple decades ago
so given that we've been on autopilot to a vibe-coding wonderland for quite some time, I guess we shouldn't be surprised that we've reached the promised land.
Sadly, I have to agree. I was fortunate to work for a company that absolutely insisted that I take full personal Responsibility and Accountability for my work.
I was there for almost 27 years, so had plenty of time to deal with the consequences of my decisions.
They were insane about Quality, so testing has always been a big part of my work, and still is, though I haven't been at that company for eight years.
> 'industry standard best practices' win over a well designed bespoke solution every time, developers are just expected to write a little glue at most
Sometimes for good reason. "Well designed bespoke solutions" often turn out to be badly designed reinventions of the wheel. Industry standard best practices sometimes prevent problems that you don't yet know you will run into.
And sometimes they just are massively overdesigned overkill. There is a real art to knowing which is which.
> There is a real art to knowing which is which.
Absolutely, but that “art” is really important, and also, fairly rare.
Many folks just jam in any dependency that is the first hit in a search, has more than 50 GH stars, and sports a shiny Web site.
One “red flag” phrase that I’ve learned is “That’s a solved problem!”. When I hear that, I know I should be skeptical of the prescribed “solution.”
That said, there’s stuff that definitely should be delegated to better-qualified folks. One example, that I was just working on[0], is Webauthn stuff.
[0] https://littlegreenviper.com/addendum-a-server-setup/
"agile and devops both conspire to treat developers as replaceable standins "
There is a lot of irony in that, since the first plank of the Agile Manifesto is to put individuals and interactions first.
And I notice you put the blame on the development process/structure rather than on the people who want to treat people as fungible.
Well, to be fair, there's what I call "pure" Agile (as in the Manifesto), and then "real-world Agile" (what has the name, but doesn't really seem to follow the Manifesto).
I always liked the Manifesto, but it's really rather vague, and we engineers don't do "vague" so well, which leaves a lot of room for interpretation.
And authors.
And consultants.
And conference speakers.
Those are the ones that form what is eventually implemented. I'm not really sure any of the original signatories ever rolled up their sleeves, and worked to realize their vision.
It's my experience that this is where the wheels start to come off the grand ideas.
That's one thing that I have to hand to Grady Booch. He came up with the idea, wrote it down, and then started to actually make tools to make it happen. Not sure if they really launched, but he did work to make his ideas into reality.
Classic Brooks piece, always worth a re-read. Still informs how our team makes decisions today.
Though I recommend a full read, for those who want a gloss, I made a mind map that identifies the major questions posed by the essay, with summary answers and supporting rationale: https://app.gwriter.io/#/mindmap/view/b52b7d35-4d8d-4164-ba5...
There is an excellent Future of Coding episode where they cover No Silver Bullet.
https://futureofcoding.org/episodes/062.html
In my life I've seen just two advancements in technology that created a radical improvement in software development productivity:
1. The internet with its forums, nowadays best represented by StackOverflow.
2. LLMs' coding capabilities.
All other things (languages, tools, IDEs, techniques, libraries) are marginal improvements.
Every year or two I like to re-read my 25th Anniversary Edition of Mythical Man Month (which included No Silver Bullet). Every time I pull out something different.
I think that the willingness to rely completely on OSS libraries has fundamentally changed SWE practices since 2004, when I earned my first paycheck. We all just agreed that it's okay to use a library where you don't know who wrote it, there is no contract on it, everyone accepts that it will probably have massive security holes, and you hope the maintainer will still be around when those are announced. This was not true in 1986, and it mostly wasn't true in 2006, but it feels like every week we get more announcements of new CVEs that, it turns out, half the internet (very much including products people paid real hard currency for) was using. And we just accepted it.
And yeah, mostly the ability to CD a new deployment immediately (plus force a download of a patched version) meant we could get away with it, but it trained us all to accept lower quality standards. And I feel it in my bones: the experience of using software is markedly worse than in 1996 or 2006, despite vastly more CPU, RAM, and disk.
Obligatory XKCD: https://xkcd.com/2347/
Before advanced AI, "essential complexity" was a bottleneck, and Brooks was right that there couldn't be continuous exponential gains in software productivity. However, advanced AI will handle essential complexity as well, which can end up making it 10x or 100x faster to develop software. We still need humans currently, but there's no area that one can point to and say we'll always need people for this aspect of software development. The latest coding agents are already reasoning about requirements before they write code, and they will only improve...
AI solves "essential complexity" the same way it solves the Halting problem...
These are fundamental CS concepts, you don't solve them.
Also, I would first wait for LLMs to show reliable reasoning on trivial logic puzzles, like missionaries and cannibals, before claiming they can correctly "reason" about concurrency models and the runtime behavior of million-LOC programs.
Essential complexity is essential to your problem space... AI can't figure that out.