A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.
You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).
The hubris around AI is going to be hard to watch unwind. When that moment will come I can't predict (nor do I care to), but there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign.
Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon as most people won't care until it's too late and will "price themselves out of the market").
I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was about creating a trio of SRE agents: one that did nothing but read logs and report errors, one that analyzed and triaged the errors and proposed fixes, and one that did the work and submitted PRs to your repo.
Then, as part of the session, you would artificially introduce a bug into the system and run into it in your browser. You'd see the failure happen in the browser, and looking at CloudWatch logs you'd see the error get logged.
Two minutes later, the SRE agents had the bug fixed and ready to be merged.
"understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.
AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.
The article gets at this briefly and moves on: "I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time."
I think this dynamic applies to any use of AI, or indeed, any form of outsourcing. You can outsource a task effectively if you understand the complete task and its implementation very deeply. But if you don't, then you don't know if what you are getting back is correct, maintainable, scalable.
There will obviously be companies that build a vibe coded app which too many people depend on. There will be some iteration (maybe feature addition, maybe bug fix) which will cause a catastrophic breakage and users will know.
But there will also be companies who add a better mix of incantations to the prompts, who use version control and CI, who ensure the code is matched with tests, who maintain the prompts and requirements documents.
The former will likely follow your projected path. The latter will do fine and may even thrive better than either traditional software houses or cheap vibe coding shops.
Then again, there are famous instances of companies who have tolerated terribly low investment in IT, including Southwest Airlines.
There are people out there who truly believe that they can outsource the building of highly complex systems by politely asking a machine, and ultimately will end up tasking the same machine to tell them how these systems should be built.
Now, if I were in business with any of these people, why would I be paying them hundreds of thousands, plus the hundreds of thousands in LLM subscriptions they need to barely function, when they cannot produce a single valuable thought?
I find that instructing AI to use frameworks yields better results and sets you up for a better outcome.
I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.
I had the opportunity to test out the latest version of Claude this afternoon. After a very good analysis of the existing code, I asked it to implement an optimisation that it had identified.
It introduced a race condition into the code, which I could tell just by looking at the diff. More worrying is that after I told it there was now a race condition, it provided a solution that was no fix at all. That concerns me.
I'm certain you can work with Claude in such a way that it will avoid those errors but I can't help worry about those developers who don't even know what a race condition is, ploughing on and committing the change.
These tools are very good in many ways and I can see how they can be helpful, but they're being mismarketed in my opinion.
Software engineers have been confidently wrong about a lot of things.
E.g. OOP and "patterns" in the 90s. When was the last time you implemented a "visitor"?
P. Norvig mentioned most of the patterns are transparent in Common Lisp: e.g. you can just use a `lambda` instead of "visitor". But OOP people kept doing class diagrams for a simple map or fold-like operation.
AI producing a flawed code and "not understanding" are completely different issues. Yes, AI can make mistakes, we know. But are you certain your understanding is really superior?
I'm no fan of AI in terms of its long-term consequences, but being able to "just do things" with the aid of AI tools, diving head first into the most difficult programming projects, is going to improve human programming skills worldwide to levels never before imaginable.
Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?
My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."
Instead agent written code will get more and more complex, requiring more and more tokens (& NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of a human understanding even for relatively simple projects (e.g. such as a banking app on your phone).
I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.
The aspect of "potentially secure/stable code" is very interesting to me. There's an enormous amount of code that isn't secure or stable already (I'd argue virtually all of the code in existence).
This has already been a problem, and there are no real ramifications for it. Even something like Cloudflare stopping a significant amount of Internet traffic for a time is not (as far as I know) investigated in an independent way. There's nobody that is potentially facing charges. In other civil engineering endeavors, however, there absolutely is: regular checks, government agencies to audit systems, penalties for causing harm, etc. are expected in those areas.
LLM-generated code is the continuation of the bastardization of software "engineering." Now the situation is not only that nobody is accountable, but a black box cluster of computers is not even reasonably accountable. If someone makes a tragic mistake today, it can be understood who caused it. If "Cloudflare2" comes about which is all (or significantly) generated, whoever is in charge can just throw their hands up and say "hey, I don't know why it did this, and the people that made the system that made this mistake don't know why it did this." It has been and will continue to be very concerning.
Why does there seem to be such a divide in opinions on AI in coding? Meanwhile, those who "get it" have been improving their productivity for literally years now.
This comment ignores the key insight of the article. Design is what matters most now. Design is the difference between vibe coding and software engineering.
Given a good design, software engineers today are 100x more productive. What they produce is high quality due to the design. Production is fast and cheap due to the agents.
You are correct: there will be a reckoning for large-scale systems which are vibe coded. The author is also correct: well-designed systems no longer need frameworks or vendors, and they are unlikely to fail because they were well designed from the start.
You still "find most stuff by building/struggling". You just move up stack.
> there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign
For those who are "vibe code only", perhaps. But it's no different than the "coding bootcamp only" developers who never really learned to think holistically. Or the folks who learned the bare minimum to get those sweet dotcom boom dollars back in the day, and then had to return to selling cars when it all came crashing down.
The winners have been, and will always be, those who can think bigger. The ones today who already know how to build from scratch but then find the superpower is in architecture, not syntax, and suddenly find themselves 10x more productive.
> Good time to be in business if you can see through the bs and understand how these systems actually function
You missed out the most crucial and least likely requirement (assuming you're not self employed); management also need to be able to see through the bs.
I think it will be the opposite and we are all in for a rude awakening. If you've tried playing with Opus 4.6, you know what I am talking about.
Yeah I completely disagree with the author actually, but also with you.
The frameworks are what make the AI write easily understandable code. I let it run Next.js with an ORM, and it almost always creates very well-defined API routes, classes & data models. Better than I would do, often.
I also ask it to be way more rigorous about the validation & error handling than I would ever be. It makes mistakes, I shout at it, and it corrects them quickly.
So the projects I've been "vibe coding" have a much better codebase than I used to have on my solo projects.
Business has been operating on a management/executive culture for many decades now.
These people get paid millions a year to fly around and shake hands with people, aka do fuck all.
At times in the past I have worked on projects that were rushed out and didn't do a single thing that they were intended to do.
And you know what management's response was? They loved that shit. Ooooh it looks so good, that's so cool, well done. Management circle-jerking each other, as if using everyone else's shafts as handles to climb the rungs of the ladder.
It's just...like it kills me that this thing I love, technology/engineering/programming...things that are responsible for many of the best things present in our modern lives, have both been twisted to create some of the worst things in our modern lives in the pursuit of profit. And the people in charge? They don't even care if it works or not, they just want that undeserved promotion for a job that a Simpsons-esque fucking drinking bird is capable of.
The future is already here. Been working a few years at a subsidiary of a large corporation where the entire hierarchy of companies is pushing AI hard, at different levels of complexity, from office work up through software development. Regular company meetings across companies and divisions to discuss methods and progress. Overall not a bad strategy and it's paying dividends.
An experiment was tried on a large and very intractable code-base of C++, Visual Basic, classic .asp, and SQL Server, with three different reporting systems attached to it. The reporting systems were crazy, controlled by giant XML files with complex namespaces and no-nos like the order of the nodes mattering. It had been maintained by offshore developers for maybe 10 years or more. The application was originally created over 25 years ago. They wanted to replace it with modern technology, but they estimated it'd take 7 years(!). So they just threw a team at it and said, "Just use prompts to AI, hand-code minimally, and see how far you get."
And they did wonderfully (and this is before the latest Claude improvements and agents) and they managed to create a minimal replacement in just two months (two or maybe three developers full time I think was the level of effort). This was touted at a meeting and given the approval for further development. At the meeting I specifically asked, "You only maintain this with prompts?" "Yes," they said, "we just iterate through repeated prompts to refine the code."
It has all mostly been abandoned a few months later. Parts of it are being reused, attempting a kind of "work in from the edges" approach to replacing parts of the system, but mostly it's dead.
We are yet to have a postmortem on this whole thing, but I've talked to the developers, and they essentially made a different intractable problem of repeated prompting breaking existing features when attempting to apply fixes or add features. And breaking in really subtle and hard to discern ways. The AI created unit tests didn't often find these bugs, either. They really tried a lot of angles trying to sort it out - complex .md files, breaking up the monolith to make the AI have less context to track, gross simplification of existing features, and so on. These are smarty-pants developers, too, people who know their stuff, got better than BS's, and they themselves were at first surprised at their success, then not so surprised later at the eventual result.
There was also a cost angle that became intractable. Coding like that was expensive. There was a lot of hand-wringing from managers over how much it was costing in "tokens" and whatever else. I pointed out if it's less cost than 7 years of development you're ahead of the game, which they pointed out it would be a cost spread over 7 years, not in 1 year. I'm not an accountant, but apparently that makes a difference.
I don't necessarily consider it a failed experiment, because we all learned a lot about how to better do our software development with AI. They swung for the fences but just got a double.
Of course this will all get better, but I wonder if it'll ever get there like we envision, with the Star Trek "Computer, make me a sandwich" method of software development. The takeaway from all this is you still have to "know your code" for things that are non-trivial, and really, you can go a few steps above non-trivial. You can go a long way not looking too closely at the LLM output, but there is a point at which it starts to be friction.
As a side note, not really related to the OP, but the UI cooked up by the LLMs was an interesting "card" looking kind of thing, actually pretty nice to look at and use. Then, when searching for a wiki for the Ball x Pit game, I noticed that some of the wikis very closely resembled the UI for the application. Now I see variations of it all over the internet. I wonder if the LLMs "converge" on a particular UI if not given specific instructions?
Come to the redteam / purpleteam side. We're having fun times right now. The saying "every piece of software has bugs" is now on another level, because people don't even care about SQL injection anymore. It's built right into every vibecoded codebase.
Authentication and authorization are as simple as POST /api/create/admin with zero checks. Pretty much every slop-coded API looks like this. And if it doesn't, it will forget about security checks two prompts later and reverse the previously working checks.
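A minimal, hypothetical sketch of the failure mode (the names `USERS` and `create_admin_*` are illustrative, not from any real codebase): the "zero checks" handler versus one that verifies the caller's role.

```python
# Illustrative sketch only: a privilege-granting handler with no
# authorization check vs. one that verifies the caller's role first.
USERS = {"alice": {"role": "admin"}, "bob": {"role": "user"}}

def create_admin_unchecked(payload):
    # The slop-coded version: anyone who can reach the route can mint admins.
    USERS[payload["new_admin"]] = {"role": "admin"}
    return 201

def create_admin_checked(payload):
    caller = USERS.get(payload.get("caller"))
    if caller is None or caller["role"] != "admin":
        return 403  # reject: only existing admins may create admins
    USERS[payload["new_admin"]] = {"role": "admin"}
    return 201
```

The checked version is a single `if` statement; the problem is that nothing forces a code generator to keep it there after the next prompt.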
Back in the 00s people like you were saying "no one will put their private data in the cloud!"
"I am sick of articles about the cloud!"
"Anyone know of message boards where discussing cloud compute is banned?"
"Businesses will not trust the cloud!"
Aside from logistics of food and medicine, most economic activity is ephemeral wank.
It's memes. It's a myth. Allegory.
These systems are electrical state in machines and they can be optimized at the hardware layer.
Your Python or Ruby or whatever you ship 9,000 layers of state and abstraction above the OS running in the data center has little influence on how these systems actually function.
To borrow from poker; software engineers were being handed their hat years ago. It's already too late.
> Software engineers are scared of designing things themselves.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
To be blunt, I think it's a form of mania that drives someone to reject human-written code in favor of LLM-generated code. Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM. When they automate this much writing, it makes me wonder how much of their own reading they automate away too.
The below captures this perfectly. The author is trying to explain that vibe-coding their own frameworks lets them actually "understand" the code, while not noticing that the LLM-generated text they used to make this point is talking about cutting and sewing bricks.
> But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
Yeah, the “not invented here” syndrome was considered an anti-pattern before the agentic coding boom, and I don’t see how these tools make it irrelevant. If you’re starting a business, it’s still likely a distraction to write all of the components of your stack from scratch. Agentic tools have made development less expensive, but it’s still far from zero. By the author’s admission, they still need to think through all these problems critically, architect them, pick the right patterns. You also have to maintain all this code. That’s a lot of energy that’s not going towards the core of your business.
What I think does change is that now you can more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It’s nice to have the option to develop alternatives that are as sophisticated as your problem needs and not more. But I’m not convinced that it’s always the right choice to build something custom.
Yeah, I'm huge on using LLMs for coding, but one of the biggest wins for me is that the LLM already knows the frameworks. I no longer need to learn whatever newest framework there is. I'll stick to my frameworks, especially when using an LLM to code.
my problem with frameworks has always been that the moment I want to do something the framework writers aren't interested in, I now have three problems: my problem, how to implement it in the underlying platform and how to work around the framework to not break my feature.
after 3 decades as SWE I mostly found both i) and ii) to not be true, for the most part. a lot of frameworks are not built from the ground up as “i am building a thing to solve x” but “i had a thing and built something that may (or may not) be generally useful.” so a lot of them carry weight from what they were originally built from. then people start making requests to mold the framework to their needs, some get implemented, some don’t. those that don’t good teams will build extensions/plugins etc into the framework and pretty soon you got a monster thing inside of your codebase you probably did not need to begin with. i think every single ORM that i’ve ever used fits this description.
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
It's strange to me when articles like this describe the 'pain of writing code'. I've always found that the easy part.
Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.
'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
'Revise that so that Sam is harsher and Frodo more stubborn.'
Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.
Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I’ve ended up with is - if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!
> It's strange to me when articles like this describe the 'pain of writing code'.
I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.
In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?
Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.
CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?
I was going to work on new code, but now I have to spend between 5 minutes and an hour+ - impossible to predict - solving this new frustration that just cropped up.
I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.
These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
Your comment is spot on, but the nuance people who are still new to these LLMs don't yet see is the real reason "he'd be better off just writing the damned book instead."
1. That prompt is always a slot machine. It's never 100% deterministic and that's why we haven't seen an explosion of claude skills. When it works for you, and it's magical, everyone is wowed. However, there is a set of users who then bang their head, wondering why their identical attempt is garbage compared to their coworker. "It must be a skills issue." No, it's just the LLM being an LLM.
2. Coding agents are hyper localized and refuse to consider the larger project when it solves something. So you end up with these "paper cuts" of duplicated functions or classes that do one thing different. Now the LLM in future runs has to decide which of these classes or functions to use and you end up with two competing implementations. Future you will bang your head trying to figure out how to combine them.
3. The "voice" of the code it outputs is trained on public repositories so if your internal codebase is doing something unique, the LLM will consistently pick the voice it's trained on, forcing you to rewrite behind it to match your internal code.
4. It has no chill. If I set any "important" rules in the prompt then it sometimes adheres to it at the expense of doing the "right" thing in its changes. Or it completely ignores it and does its own thing, when it would have been the perfect time to follow the rule. This is to your point that, if I had just written the code myself, it would have been less words than any "perfect" prompt it would have taken to get the same code change.
I was talking to a coworker that really likes AI tooling and it came up that they feel stronger reading unfamiliar code than writing code.
I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.
I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.
It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.
People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.
I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.
Current models won't write anything new, they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.
Tolkien's book is art; programs are supposed to do something.
Now, some programs may be considered art (e.g. code golf) or considered art by their creators. I consider my programs and code to be only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.
> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
Is exactly what programs are. Not the minutiae of the language within.
Writing the code should be the easy part and one of the smaller time sinks, actually. The fruits of the labour are in the planning, the design, the architecture and the requirements that you want to achieve now and potentially in the future. These all require a serious amount of effort and foresight to plan out.
When you're ready, maybe you've done some POCs in areas you were unsure of, maybe some good skeleton work to see a happy path draw a shadow of a solution, iterate over your plans and then put some real "code"/foundation in place.
It's a beautiful process. Starting out, I used to just jump deep into a project with the code first and hit that workaround button one too many times, and it's far more expensive; we all know that.
I don't find writing code painful, but I do find it tedious. The amount of time wasted on boilerplate keeps me from getting to the good stuff. LLMs let me speed run through all of that.
To take it back to your example, let's imagine Tolkien spending a ton of time on setting up his typewriter, making sure he had his correction tape handy, verifying his spelling and correcting mistakes, ensuring his tab stops were set up to his writing standard, checking for punctuation marks, etc. Now imagine eliminating all that crap so he can focus on the artistic nature of the dialogue.
I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs, what could I accomplish with that saved time, and would I get more value from those other pursuits?
I didn't fully realize how much pain there was until I started delegating the coding to AI. It's very freeing. Unfortunately I think this will soon lead to mass layoffs.
Sometimes you are writing a marketing copy for a new Nissan that's basically the same as last year Nissan, yet you need to sell it somehow. Nobody will REALLY read it more than 2 seconds and your words will be immediately forgotten. Maybe some AI is good then.
“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.
Please forgive me for being blunt, I want to emphasize how much this strikes me.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing.
The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.
You get the privilege of patching Node.js.
Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.
This piece seems to say more about React than it says about a general shift in software engineering.
Don't like React? Easiest it's ever been not to use it.
Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
I wanted to believe this article, but the writing is difficult to follow, and the thread even harder. My main issue is the contradiction about frameworks and using what the large tech companies have built vs real engineering.
The author seems to think that coding agents and frameworks are mutually exclusive. The draw of Vercel/Next.js/iOS/React/Firebase is allowing engineers to ship. You create a repo, point to it, and boom! instant CI/CD, instant delivery to customers in seconds. This is what you're complaining about!? You're moaning that it took 1 click to get this for free!? Do you have any idea how long it would take to set up just the CI part on Jenkins just a few years ago? Where are you going to host that thing? On your Mac mini?
There's a distinction between frameworks and libraries. Frameworks exist to make the entire development lifecycle easier. Libraries are for getting certain things done by people who are better at them than you (encryption, networking, storage, sound, etc.). A framework like Next.js or React or iOS/macOS exists because its authors did the heavy work of building things that need to already exist when building an application. Not making use of it because you want to perform "real engineering" is not engineering at all; that's just called tinkering and shipping nothing.
Mixing coding agents with whatever framework or platform to get you the fastest shipping speed should be your #1 priority. Get that application out. Get that first paid customer. And if you achieve a million customers and your stuff is having scaling difficulties, then you already have teams of engineers to work on bringing some of this stuff in house like moving away from Firebase/Vercel etc. Until then, do what lets you ship ASAP.
This is an interesting idea but the problem is where to stop as you travel down through layers of frameworks?
Say we take it to an absurd extreme: you probably won't have your agent code up Verilog and run your website on an ASIC, and you aren't going to write an assembler to code up your OS and kernel and all the associated hardware support. So you probably want a server and an OS to run your code, and maybe some containers or a process model. The agentic reinvention has to stop somewhere.
One helpful mindset is to choose frameworks and components that avoid rediscovery. Tailwind, for example, ships some very well thought out responsive breakpoints that a ton of design effort went into. With `md:`, which is only a couple of tokens, your agent can make use of all that knowledge without having to reinvent everything that went into those decisions.
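To make that concrete, here is a small Python sketch (not Tailwind itself, just an illustration; the pixel values are Tailwind's documented default breakpoints) of the design knowledge hiding behind a two-token prefix:

```python
# Illustration only: the decisions packed behind a prefix like `md:`.
# These are Tailwind's documented default breakpoints, encoded as data.
TAILWIND_BREAKPOINTS = {
    "sm": 640,    # large phones / small tablets
    "md": 768,    # tablets
    "lg": 1024,   # laptops
    "xl": 1280,   # desktops
    "2xl": 1536,  # large desktops
}

def media_query(prefix: str) -> str:
    """Expand a responsive prefix into the media query it implies."""
    return f"@media (min-width: {TAILWIND_BREAKPOINTS[prefix]}px)"

print(media_query("md"))  # @media (min-width: 768px)
```

In Tailwind proper you never see these numbers at all; `md:flex` is the whole interface, which is exactly the point about avoiding rediscovery.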
I fail to see the obvious wisdom in having AI re-implement chunks of existing frameworks without the real-world battle testing, without the supporting ecosystem, and without the common parlance and patterns -- all of which are huge wins if you ever expand development beyond a single person.
It's worth repeating too, that not everything needs to be a react project. I understand the author enjoys the "vibe", but that doesn't make it a ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.
In fact I would argue that the post reads as though the developer is used to mostly working alone, and often chooses the wrong tool for the job. It certainly doesn't support the claim of the title.
> the supporting ecosystem, ... the common parlance and patterns
Which are often the top reason to use a framework at all.
I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation, and middleware, and worst of all the next person would have to show up and relearn everything I did and understand my choices.
AI has a lot of "leaders" currently working through a somewhat ignorant discovery of existing domain knowledge (ask me how being a designer has felt in the last 15 years of UX Leadership™ slowly realizing there's depth to the craft).
In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!
In the current case: AI can do the tedious things for me -> Huh, discarding vast dependency trees (because I previously wanted the tedious stuff done for me too) lessens my risk surface!
They really are discovered truths, but no one's forcing them to come with an understanding of the tradeoffs happening.
I have been using Cursor w/ Opus 4.x to do extensive embedded development work over the past six months in particular. My own take on this topic is that for all of the chatter about LLMs in software engineering, I think a lot of folks are missing the opportunity to pull back and talk about LLMs in the context of engineering writ large. [I'm not capitalizing engineering because I'm using the HN lens of product development, not building bridges or nuclear reactors.]
LLMs have been a critical tool not just in my application but in my circuit design, enclosure design (CAD, CNC) and I am the conductor where these three worlds meet. The degree to which LLMs can help with EE is extraordinary.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
Where I agree with the author of this post is that I feel like perhaps it's time for a lot of libraries to sunset. I don't think replacing frameworks is the correct abstraction at all but I do think that it no longer makes sense to spend time integrating libraries when what you really need are purpose-built functions that do exactly what you want instead of what some library author thought you should want.
It seems to me that a lot of the discussion stems from different definitions of the word framework, and I believe library is probably the more appropriate term here. I wouldn't replace the .NET Framework with something I vibe coded, but your example of a library of not-so-specific functions is ripe for replacement. If you're only using 5% of a library, you've probably written as much adapter code as you would have if it was just specific code to solve your problem.
I didn’t even give Claude (Opus 4.1) the registers when I did this for a recent ESP32 + ST7789 Rust project. I think I literally just said “make a driver with a double frame buffer for the ST7789 on SPI1, with DMA updates“. And it did it.
In my experience, the libs provided by manufacturers are often thin wrappers over physical interface setup and communication, in the form of a single header and cpp file. Isn't it easier to just use them instead of generating differently phrased copies of them?
This article has some cowboy coding themes I don't agree with. If the takeaway from the article is that frameworks are bad for the age of AI, I would disagree with that. Standardization, and working with a team of developers all using the same framework has huge benefits. The same is true with agents. Agents have finite context, when an agent knows it is using rails, it automatically can assume a lot about how things work. LLM training data has a lot of framework use patterns deeply instilled. Agents using frameworks that LLMs have extensive training on produce high quality, consistent results without needing to provide a bunch of custom context for bespoke foundational code. Multiple devs and agents all using a well known framework automatically benefit from a shared mental model.
When there are multiple devs + agents all interacting with the same code base, consistency and standards are essential for maintainability. Each time a dev fires up their agent for a framework their context doesn't need to be saturated with bespoke foundational information. LLM and devs can leverage their extensive training when using a framework.
I didn't even touch on all the other benefits mature frameworks bring beyond a shared mental model: security hardening, teams providing security patches, performance tuning, dependability, documentation, third-party ecosystems, etc.
I would think that frameworks make more sense than ever with LLMs.
The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.
LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is because they were trained on a lot of it, and they are going to be better at generating based on what they were trained on, than reinventing the wheel.
As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.
If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.
It’s fun to ask the models their input. I was working on diagrams and was sure Claude would want some python / js framework to handle layout and nodes and connections. It said “honestly I find it easiest to just write the svg code directly”.
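The model has a point: for boxes-and-lines diagrams, emitting SVG directly really is just a few string templates, no layout framework required. A minimal sketch (the sizes, colors, and labels here are made up for illustration):

```python
# Hand-rolled SVG for a simple diagram: two labeled boxes and a connector.
# All dimensions and styles are arbitrary illustration values.
def node(x, y, label):
    return (f'<rect x="{x}" y="{y}" width="120" height="40" rx="6" '
            f'fill="#eef" stroke="#336"/>'
            f'<text x="{x + 60}" y="{y + 25}" text-anchor="middle">{label}</text>')

def edge(x1, y1, x2, y2):
    return f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="#336"/>'

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="320" height="120">'
       f'{node(20, 20, "agent")}{edge(140, 40, 180, 40)}{node(180, 20, "repo")}'
       f'</svg>')
print(svg)
```

Once the diagram needs automatic layout of dozens of nodes, a real layout library earns its keep again; the model's instinct holds mainly for small, hand-positioned figures.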
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
Do I live in a different engineering world? Because that's so much not the exhausting labour part of my work, it's not even the same universe. The exhausting manual labour for me is interacting with others in the project, aligning goals and distributing work, reviewing, testing, even coming up with test concepts, and… actually thinking through what the code conceptually will work like. The most exhausting thing I've done recently is thinking through lock-free/atomic data structures. Ouch, does that shit rack your brain.
My biggest concern with AI is that I'm not sure how a software engineer can build up this sort of high-level intuition:
> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.
Without a significant development period of this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.
Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.
I'm not sure. I don't know about deep expertise and mastery, but I can attest that my fluency skyrocketed as a result of AI in several languages, simply because the friction involved in writing them went down by orders of magnitude. So I am writing way more code now in domains that I previously avoided, and I've noticed that I am now much more capable there even without the AI.
What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.
Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.
The hardest part to staying current isn't learning, but unlearning. You must first empty your cup, and all that.
People said the same thing about the transition to higher levels of abstraction in the past. “How will they write good code if they don’t know assembly? How can they write efficient code if they don’t understand how a microprocessor works?”
These arguments basically just amount to the intellectual equivalent of hazing. 90% of engineers don’t need to know how these things work to be productive. 90% of engineers will never work on a global scale system. Doing very basic things will work for those engineers. Don’t let perfect be the enemy of good enough.
Also, I’d argue that AI will advance enough to capture system design soon too.
This is a wild take. Good frameworks come with clever, well-thought-out abstractions and defensive patterns for dealing with common problems in the space the framework covers. Frameworks are also often well-documented and well-supported by the community, creating common ways of doing things with well understood strengths and weaknesses.
In some cases, it's going to make sense to drop your dependency and have AI write that functionality inline, but the idea that the AI coding best practice is to drop all frameworks and build your own vibe-coded supply chain de novo for every product is ludicrous. At that point, we should just cut out the middleman and have the LLMs write machine code to fulfill our natural language product specs.
The other thing that's dumb about this is that frameworks usually consolidate repetitive boilerplate, so it's going to cost a lot more tokens for an AI to inline everything a framework does.
There are a few interesting points in the comments here.
The pro case for getting rid of frameworks: they're bulky, complex, there are security holes, updates to keep up with, things keep changing. LLMs can write you something perfectly customized to what you're doing. You get some free security by obscurity.
The con case: LLMs are excellent at getting you up to speed with a framework and understanding issues. As avidiax says in this thread, "The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing. You get the privilege of patching Node.js." Security by obscurity is generally a bad design. To me, the general architecture and maintainability is a huge issue when you have LLMs write everything from scratch. Not that a Node or React app is a paragon of maintainability or architecture, but it's certainly better than something made from scratch by an LLM. The code quality of a framework is also far higher.
I personally feel like the best path today is to use something lightweight, like Svelte. You get the best of both worlds. Light structure but nothing overbearing.
You can never have a good substitute for a good framework. A good framework would let you skip over boilerplate code, abstract at a higher level, and be dependable.
Context matters most here-- does a solid framework exist for the work you're trying to do? Then use it, otherwise write what you need and understand the risks that come with freshly written code.
I disagree about ditching abstractions. Programmatic abstractions aren't just a way to reduce the amount of code you write, they're also a common language to understand large systems more easily, and a way to make sure systems that get built are predictable.
I share that notion, but I think the right abstractions are the foundational tech stack we have had for decades, like Web standards or even bash. You need constraints, but not the unnecessary complexity that comes with many modern tech stacks (React/Next) that were built around SV's hyper-scalability monopoly mentality. Reach for simple tools if the task is simple: KISS.
This is even more relevant in the context of generated code, where most of the time is spent reviewing rather than writing the code. Abstractions, by allowing the code to be more concise, help.
With LLM code, I'd rather have higher-level abstractions.
Not only that, but a way to factor systems so you can make changes to them without spooky action at a distance. Of course, you have to put in a lot of effort to make that happen, but that's why it doesn't seem to me that LLM's are solving the hard part of software development in the first place.
Using a framework gives you some assurance that the underlying methods are well designed. If you don't know how to spot issues in auth design, then using an LLM instead of a library is a bad idea.
I agree though there's many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply chain risks.
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.
Even with a perfect coding agent, we code to discover what correct even is.
Team decides on vague requirements, then you actually have to implement something. Well that 'implementing' means iterating until you discover the correct thing. Usually in lots of finicky decisions.
Sometimes you might not care about those decisions, so you one shot one big change. But in my experience, the day-to-day on a production app you can 100% write all the code with Claude, but you're still trying to translate high level requirements into "low"-level decisions.
But in the end it's nice not to care about the code monkey work: going all over a codebase, adding a lot of trivial changes by hand, etc.
To the people who are against AI programming, honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?
This is a 100% honest question. Because whatever your justification to this is, it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.
I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
I am not "against" AI programming, although I confess I don't really know what that means... coders and business folk are gonna do whatever gets the job done and opinions by and large matter not a whit.
However:
> ..it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.
Right, but... approximately zero users of AI for coding are setting temperature to 0 not to mention changing temperature at all. So this is a comparison to a world that doesn't really exist.
Additionally, C code compiles much much closer to the same assembly and microcode regardless of compiler as compared to temperature zero prompts across different AIs.
Compiling the symbols into a binary is not the bottleneck. Formalizing the contract for interacting with the real world is and always has been the bottleneck.
I dislike it when rhetorical flourishes start with "honest question...".
Maybe using AI assistant instead of directly writing code is equivalent to using a high level language instead of assembly and maybe it isn't. So at least begin your discussion as "I think programmers who don't use AI are like programmers who insist on assembly rather than a high level language" (and they existed back in the day). I mean, an "honest question" is one where you are honestly unsure whether you will get an answer or what the answer will be. That's completely different from honestly feeling your opponents have no good arguments. Just about the opposite, really.
By the way, the reason I view AI assistants and high level language compilers as fundamentally different is that high level language compilers are mostly deterministic: mostly you can determine both the code generated and the behavior of that code in terms of the high level language. AI-created/assisted code is fundamentally underdetermined relative to its source (a prompt) on a much wider basis than the assembly created by a high level language compiler (whose source is source code).
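For readers unfamiliar with the mechanics being argued about: temperature rescales a model's logits before sampling, and as it approaches zero the distribution collapses onto the argmax, so decoding becomes greedy. A toy sketch of that (not any particular model's implementation):

```python
import math

# Toy illustration: sampling probabilities sharpen toward the argmax
# as temperature falls, which is why "temperature 0" means greedy decoding.
def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax(logits, 1.0))   # probability mass spread across tokens
print(softmax(logits, 0.05))  # nearly all mass on the argmax token

# In the zero-temperature limit, implementations skip sampling entirely:
greedy = logits.index(max(logits))
```

Even so, greedy decoding is not perfectly reproducible in practice across providers, hardware, or batch sizes (floating-point and batching effects), which is part of why the compiler analogy is shaky.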
I found the article interesting, yet my thinking is at the opposite end of the spectrum. I have also spent a lot of time using LLMs, and I am moving away from "no framework" or "library pretending to be a framework".
Not that I was a fan of it, but for work purpose I was using React / Next.js etc.
Now I am using Laravel: lots of magic, pretty much always one recommended way to do things, excellent code generation via the CLI. When you combine it with AI, it follows the framework's guidelines. The AI does not have to think about whether it should locate business logic with the UI, use a hook or not, extract a helper, etc.
It knows how to create routes, controllers, validators, views, models, migrations, whatever.
So the suggestion here is that instead of using battle tested libraries/frameworks, everyone should now build their own versions, each with an unique set of silent bugs?
Right. Let's all write our own Spring Framework / Django / Ruby on Rails. Everyone who contributed to these frameworks was obviously a jackass, but me with my Claude sub can beat everybody while ignoring the actual stuff that I should be doing. Makes for a perfectly great maintenance burden.
Intellectual surrender is exactly the risk I fear with coding agents. Will the next generation of software ‘developers’ still know how to code? Seems coding agents are in a way taking us further from understanding the machine, just like frameworks have in the past.
Software has always been about abstraction. This one, in a way, is the ultimate abstraction. However it turns out that LLMs are a pretty powerful learning tool. One just needs the discipline to use it.
The interesting question here is what replaces frameworks as the unit of leverage.
Frameworks existed because the cost of understanding someone else's abstractions was lower than rebuilding from scratch. With agents, that calculus flips — generating bespoke code from a clear spec is now cheaper than learning a framework's opinions about how your app should work.
But the article buries the key point: "with the experience on my back of having laid the bricks." The author can direct agents effectively because he has two decades of mental models about what good software looks like. The agent is executing his taste and judgment, not replacing it.
The people who will struggle are not the ones who skip frameworks — it is the ones who never built the internal model of how systems fail. Frameworks taught you that implicitly (why does Rails do it this way? Because the alternative breaks at scale). If you skip straight to "agent, build me X," you never develop the instinct for when the output is subtly wrong.
The real unlock is probably closer to what the SRE agent trio example shows: agents handling the mechanical loop (detect, diagnose, fix, PR) while humans focus on system design and invariant definition. The skill shifts from writing code to defining constraints precisely enough that automated systems can maintain them.
There was a time around 2016 where you weren't allowed to write a React application without also writing a "Getting Started with React" blog post. Having trained on all of that, the AI probably thinks React is web development.
A few months ago I did exactly this. But over time I threw away all the generated JS, CSS and HTML. It was an unmaintainable mess. I finally chose Svelte and stuck with it. Now I have a codebase which makes sense to me.
I did ask the AI to generate a landing page. That gave me the initial headers, footers and styles that I used for my webapp, but I threw away everything else.
> We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
I disagree. At least for a little while until models improve to truly superhuman reasoning*, frameworks and libraries providing abstractions are more valuable than ever. The risk/reward for custom work vs library has just changed in unforeseen ways that are orthogonal to time and effort spent.
Not only do LLMs make customization of forks and the resulting maintenance a lot easier, but the abstractions are now the most valuable place for humans to work because it creates a solid foundation for LLMs to build on. By building abstractions that we validate as engineers, we’re encoding human in the loop input without the end-developer having to constantly hand hold the agent.
What we need now is better abstractions for building verification/test suites and linting so that agents can start to automatically self improve their harness. Skills/MCP/tools in general have had the highest impact short of model improvements and there’s so much more work to be done there.
* whether this requires full AGI or not, I don’t know.
Dumbest take I've seen in a while. Really. If anything, AI working with frameworks is making them more effective. Frameworks, by definition, produce more structure, than just the language + libs do, and their entire practical utility is to abstract away complexity and lower the amount of footguns. The ultimate form of abstracting away complexity is an AI agent/coder writing the code for you. But there are gazillions of solutions (and opinionated ones) out there, for gazillions of all kinds of problems... having an AI agent work within the constraints of a framework is going to be a good thing in almost all cases as it will be more focused on your problem space rather than figuring out how to send freaking bits between computers.
How do people not understand that even if AI is writing all your code you still want to have as little code as possible for a given problem solution that you have to manage yourself? Frameworks help with that.
I am a non-coder. For many years I went through various coding tutorials, but was never able to fully build anything other than basic websites. Now with AI I have been able to build useful CLI tools. Instead of using a static site generator such as Hugo, I can now quickly build a website that does what I am looking for. Heck, I just had it build me a website as a presentation instead of doing a slide show. I came up with an outline plus my notes and information, then had it build out the site based on that. I was able to have it create some really cool animations to help explain my ideas.
I have had the same experience when building simple websites for myself and others. I did it as a test to begin with, but it worked out so well that I have kept at it for a while. The core concept for my experiment was to have no dependencies other than PHP and a web server. Longevity is the goal, I should be able to leave a project for years and it should just keep on running.
It is kind of a mini-framework, but really more of a core that can be expanded upon: a few simple ideas that have been codified. It is mainly a router that does very specific things with some convenient features built in, plus the option to build plugins and templates on top of this core. The customization and freedom it enables is fantastic!
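The commenter's core is PHP, but the "tiny router with room for plugins" idea fits in a few lines of any language. A loose Python-flavored sketch of the shape (the names and handlers here are made up, not from the actual project):

```python
# Hypothetical sketch of a minimal router core: a registry, a decorator
# to populate it, and a dispatcher. Plugins and templates would layer
# on top of exactly this kind of seam.
routes = {}

def route(path):
    def register(handler):
        routes[path] = handler
        return handler
    return register

def dispatch(path):
    handler = routes.get(path)
    return handler() if handler else "404 Not Found"

@route("/")
def home():
    return "Welcome"

print(dispatch("/"))         # Welcome
print(dispatch("/missing"))  # 404 Not Found
```

The whole "framework" is a dict and two functions, which is what makes the longevity goal plausible: there is nothing to go out of date.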
I used to worry that AI would lead to a regression toward the mean, but for this specific use case I think it can have the opposite effect. It can cause a flourish of experiments and custom-tailored solutions that enables a richer online experience. It demands a certain discipline in the way you build, to avoid making a long-term mess, but having just a little bit of experience and insight into general web development goes a long way to keep things tidy and predictable.
Has anyone else had similar experiences?
EDIT: One live site where I have built on top of FolderWeb, is https://stopplidelsen.no (Norwegian)
The pendulum swing described here is real but I think the underlying issue is subtler than "AI vs. no AI."
The actual problem most teams have isn't writing code — it's understanding what the code they already depend on is doing. You can vibe-code a whole app in a weekend, but when one of your 200 transitive dependencies ships a breaking change in a patch release, no amount of AI is going to help you debug why your auth flow suddenly broke.
The skill that's actually becoming more valuable isn't "writing code from scratch" — it's maintaining awareness of the ecosystem you're building on. Knowing when Node ships a security fix that affects your HTTP handling, or when a React minor changes the reconciliation behavior, or when Postgres deprecates a function you use in 50 queries.
That's the boring, unsexy part of engineering that AI doesn't solve and most developers skip until something catches fire.
> no amount of AI is going to help you debug why your auth flow suddenly broke.
What? Coding agents are very capable at helping fix bugs in specific domains. Your examples are like, the exact place where AI can add value.
You do an update, things randomly break: tell Claude to figure it out and it can go look up the breaking changes in the new versions, read your code and tell you what happened and fix it for you.
> Since [a few months ago], things have dramatically changed...
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany: since people have been heavily overusing frameworks (making their projects more complex, more brittle, more disorganized, more difficult to maintain) for non-technical reasons, they aren't going to stop just because LLMs make frameworks less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, or better thought through.
Sure, you can skip using frameworks and let AI write the code directly for you, because that's what it was trained on: these very frameworks you think you're omitting.
Now the issue is - if we play with the idea that the revolution is actually going to happen and developers will get replaced with vibe coders in the next 6 months (as has been prophesied for the last 5 years) - then the innovation will stop as there will be no one left to add to the pool.
This whole thing reminds me of the debacle about retirement funds and taxes in my country. People think they are smart by avoiding them, because they suspect the system will fail and they won't get anything back. But by avoiding these taxes they create a self-fulfilling prophecy that is already breaking the system.
Thank you for the insightful feedback :)
If you also have something to say about the point of the article itself, instead of pointing the finger at the person, I'll be happy to respond to that
this is totally backwards to how i've been using agents.
the thing that an agent is really really good at is overcoming the initial load of using a new framework or library. i know, at some level, that using other people's code is going to save me trouble down the road, but there's an initial load to learn how to integrate with it, how to use it, and how to map the way the framework authors think to the way i think and the way my project needs to work. there's always the temptation to just build from scratch instead because it's initially quicker and easier.
letting the AI figure that out, and do the first initial steps of getting the framework to accomplish the task i need, produces a product that is better than what either the AI or i would produce without the framework, and it creates a product that i can then read, understand, and work on. letting the AI go from scratch invariably produces code that i don't want to work with myself.
> They would rather accept someone else’s structure, despite having to force fit it into their product, rather than taking the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one size fits all design and applying it everywhere. That is not simplification. That is intellectual surrender.
Sorry, i don't buy this. There is a very good reason to use tried and tested frameworks. Am I "intellectually surrendering" when I use a compiler/language/framework that has a great track record?
And how is it not "intellectual surrender" to let the AI do the work for you?
> In my mind, besides the self declared objectives, frameworks solve three problems .. “Simplification” .. Automation .. Labour cost.
I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework -- off the shelf or homemade -- creates consistency in the way problems are solved.
This seems even more important with AI, since you lose context on each task, so you need it to live within guardrails and best practices or it will make spaghetti.
Nothing fundamentally changed about frameworks. No need to reconsider every single practice because of AI. I think frameworks actually keep agents in check because they're trained on huge set of conventions.
I vibe coded a few projects in vanilla JS and they eventually became a mess, but with a framework they'd at least be a structured mess
I had called this a while back, since the reasoning is simple: frameworks primarily exist to minimize boilerplate, but AI is very good at boilerplate, so the value of frameworks is diminished.
The larger underlying shift is that the economics of coding have been upended. Since its inception, our industry has been organized around one fundamental principle: code is expensive because coders are expensive. This created several complex dynamics, one of which was frameworks -- massive, painful dependencies aimed at alleviating costs by reducing the repeated boilerplate written by expensive people. As TFA indicates, the costs of frameworks in terms of added complexity (e.g. abstractions from the dependency infecting the entire codebase) are significant compared to their benefits.
But now that the cost of code -> 0, the need for frameworks (and reusability overall) will likely also -> 0.
Our first instinct is to recoil and view this as a bad thing, because it is considered "Tech Debt." But as the word "debt" indicates, Tech Debt is yet another economic concept and is also being redefined by these new economics!
For instance, all this duplicate code would have been terrible if only humans had to maintain it. But for LLMs, it is probably better because all the relevant logic is RIGHT THERE in the code, conveniently colocated with the rest of the functionality where it is used, and not obfuscated behind a dozen layers of abstraction whose (intended) functionality is described in natural language scattered across a dozen different pieces of documentation, each with varying amounts of sufficiency, fidelity and updated-ness. This keeps the context very focused on the relevant bits, which along with extensive testing (again, because code is cheap!) that enables instant self-checking, greatly amplifies the accuracy of the LLMs.
Now, I'm not claiming to say this will work out well long term -- it's too early to tell -- but it is a logical outcome of the shifting economics of code. I always say with AI, the future of coding will look very weird to us; this is another example of it.
> For companies, it is much better having Google, Meta, Vercel deciding for you how you build product and ship code. Adopt their framework. Pay the cost of lock in. Be enchanted by their cloud managed solution to…
Right, a future where you have to pay an AI hyperscaler thousands of dollars a month for access to their closed-source black box that needs a world historical capital moat to operate effectively is actually worse than this. It is baffling to me that more people don’t see this.
I use coding agents almost exclusively now and I’m going to say yes and no on this one.
Yes, I think there's the potential to replace some frameworks that abstract away too many details and make things way too complicated for basic apps. A good example of this is ORMs like SQLAlchemy. Every time I use one I think to myself it would be easier to just write the SQL myself, but it would be a tremendous amount of boilerplate. Nowadays it might be worth having an agent just write the SQL for you instead!
On the other hand, you have frameworks like Django. Sure, an agent _could_ write you your own web server. But wow, would it be a waste of tokens, and your project's surface area would be dwarfed by the complexity of just building your own alternative to Django. I can't see that being the right move for years still.
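To make the ORM point concrete, here is a rough sketch using only Python's stdlib `sqlite3`, with a hypothetical `users` table invented for illustration. It shows the kind of explicit, parameterized SQL an agent could write in place of an ORM query such as SQLAlchemy's `session.query(User).filter_by(active=True)`:

```python
import sqlite3

# Hypothetical schema, invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)"
)
conn.executemany(
    "INSERT INTO users (name, active) VALUES (?, ?)",
    [("ada", 1), ("bob", 0)],
)

# The "boilerplate" an agent can write for you: explicit, parameterized SQL
# instead of an ORM call. The ? placeholders keep it safe from SQL injection.
active_users = conn.execute(
    "SELECT id, name FROM users WHERE active = ? ORDER BY name",
    (1,),
).fetchall()
```

The trade-off described above is exactly this: every query is spelled out by hand, which is tedious for a human but cheap for an agent.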
I'm not sure why this is against 'frameworks' per se; if we were sure that the code LLMs generate was the best possible, we might as well use assembly, no, since that'd lead to the best performance? But we generally don't, because we still need to validate, verify and read it. And in that, there is still some value in using a framework, since the code generated is likely, on the whole, to be shorter and simpler than code that doesn't use one. On top of that, because it's simpler, I've found that there's less scope for LLMs to go off and do something strange.
I choose to use frameworks in the same sense I choose to use crypto libraries. Smarter people have thought long and hard about the problems involved, and came up with the best ways to solve them.
Why have the agents redo all of that if it's not absolutely necessary? Which it probably isn't for ~98% of cases.
Also, the models are trained on code which predominantly uses frameworks, so it'll probably trend toward the average anyway and produce a variant of what already exists in frameworks.
In the cases where it might make sense, maybe the benefit then is the ability to take and use piecemeal parts of a framework or library and tailor it to your specific case, without importing the entire framework/library.
No, it copied some relevant parts of the framework's code, shifting the burden of design, maintenance, debugging and polishing all the corner cases onto your shoulders.
Why wouldn't you instead clone the framework's code into your repository, removing the parts you don't need and modifying the code as you wish?
There is a fourth reason to use a framework: onboarding.
It does not work as well for Django, as every project I've seen using it has a different shape, but it works very well for Rails, as all projects share the same structure. Even for Django, though, there are some practices a newcomer to a project should expect to find in the code, because it's Django. So maybe onboarding on an LLM-coded project is just picking the same LLM as all the other developers, making it read the code, and learning what kind of prompts the other developers use.
By the way, would anybody mind sharing first-hand experiences of projects in which every developer is using agents? How do those agents cope with the code of the other agents?
Frameworks are the reason AI can learn and repeat patterns; without frameworks you will be burning credits just to do things that have already been optimized and completed. Unless you are an Anthropic investor, that's not the way to improve your coding.
I see libraries and frameworks as a way to capture knowledge and best practices so they can be shared with other people. So looking at what an LLM/AI does, this seems like a perfect fit - without the dependency hell, the unresolved GitHub issues, the need to fork, and maintainers leaving. It could be open source on steroids, with far shorter feedback loops (just working in your IDE).
The main burden I see is validating the output and getting reproducible results. As with many AI solutions.
What they are basically saying: a framework built up from bash-or-Makefile ground by an LLM is better than any existing framework. I don't agree. When I use LLMs to generate scripts for me, I often have to adapt them to fit the bigger picture. The more scripts I have, the blurrier what that framework as a whole stands for becomes. Then, to become a usable framework, refactoring is needed, which means the calls to those scripts need rewriting and retesting as well.
I think if anything frameworks will become more important. They are already built into the training data of these models, and they provide guardrails like protection against XSS and SQL injection. They are an architectural decision like anything else, but why reinvent the wheel, even if it's an LLM doing the work?
Strange how many people are comparing code to art. Software engineering has never been about the code written, it’s about solving problems with software. With AI we can solve more problems with software. I have been writing code for 25 years, I love using AI. It allows me to get to the point faster.
The author is right; eliminating all this framework cruft will be a boon for building great software. I was a skeptic, but it seems obvious now that it's largely going to be an improvement.
That took the strangest turn. It started with empowerment to do much more (and that I really agree with) — to then use it to... build everything from scratch? What? Why?
What a framework gives me is mostly other people having done precisely the architectural work that is a prerequisite to my actual work. It's fantastic, for the same reason that automatic coding is. I want to solve unsolved problems asap.
I am so confused by the disconnect that I feel like I must be missing something.
I don't see it as either/or. Frameworks give you a common vocabulary to use with the LLMs; they allow you to organize your thoughts and maintain good git hygiene, and they serve as a useful street map to review and explore what's been built.
You can drop the boilerplate-bit-pushing glue frameworks, but the building-block frameworks are here to stay; LLMs know a lot, but they don't know every solution to every problem. Do not confuse a software-development LLM assistant with an oracle.
A huge advantage of frameworks, to me, is giving newcomers to the code a unified frame of reference. A Rails developer (or even a non-Rails dev who understands MVC) can jump into an unfamiliar Rails codebase a lot more easily than into the custom "from the ground up" thing the author espouses.
It's puzzling to me that the author doesn't even mention this huge and obvious benefit of frameworks.
> But the true revolution happened clearly last year
Oh, that seems like a good bit of time!
> and since December 2025
So, like... 1 or 2 months ago? This is like saying "over half of people who tried our product loved it - all 51% of them!". This article is pushing hype, and is mistaking Anthropic's pre-IPO marketing drive for actual change.
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
I constantly see this and think I must be operating in a different world. This never took significant amounts of time. Are people using React to make text blogs or something?
When you choose the right framework it saves you enormous amounts of time. Sounds like the author has trouble separating hype from fact. Pick the right framework and your LLM will work better, too.
Pretty much completely disagree with the OP. Software Engineering never left, maybe the author moved away from it instead.
> Stop wrapping broken legs in silk. Start building things that are yours.
This, however, is deeply wrong to me. Anyone who writes and reviews code regularly knows very well that reading code doesn't lead to the same deep, intuitive understanding of a codebase as writing that same code.
So no, with AI you are not building things which are yours. You might call them yours, but you lose the deeper understanding of what you built.
You're right, clearly I've tried to be a bit provocative to pass the message, but I'm not religious in this sense. Minimal frameworks that really solve a problem cleanly and are adopted with intention are welcome.
There is yet another issue: the end-users are fickle fashion minded people, and will literally refuse to use an application if it does not look like the latest React-style. They do not want to be seen using "old" software, like wearing the wrong outfit or some such nonsense. This is real, and baffling.
lol ok have fun building from zero _without_ abstractions. It will work for the narrow thing you first tell it to build, the fun comes when you tell it to change in any way.
In big corporations that's how it is. Developers are told to only implement what is in the specs and if they have any objection, they need to raise it to PM who will then forward it to the system architect etc.
So that creates the notion that design is something out of reach. I've met developers who cannot develop anything on their own if there isn't a ticket that explains everything and hand-holds them. If something is not clear, they are stuck and need the help of senior engineers.
This line shows either that he does not get how wrong he is,
or that I do not understand the depth of his enlightenment.
"A simple Makefile covers 100% of my needs for 99% of my use cases".
We've come a long way to replace the simple Makefile with autotools (an incredible monstrosity), cmake, ninja, etc.
I hope he does not propose to ditch *libc.
"Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities?"
Who outside of 'frontend web developers' actually does this?
I don't think this is a good description of, say, Apache Tika or Alembic's Ash.
The author makes a valid observation wrapped in an overstatement. Yes, AI coding agents have changed the economics of building custom tooling. But the conclusion—that frameworks are now obsolete—misses the forest for the trees.
The problem with "framework culture" wasn't that frameworks exist, but that we lost the ability to critically evaluate when they're appropriate. We reached for React for static sites, Kubernetes for three-server deployments, and microservices for monolithic problems—not because these tools were wrong, but because we stopped thinking.
What AI agents actually restore isn't "pure software engineering"—it's optionality. The cost of writing a custom solution has dropped dramatically, which means the decision tree has changed. Now you can prototype both approaches in an afternoon and make an informed choice.
But here's what AI doesn't solve: understanding the problem domain deeply enough to architect a maintainable solution. You can generate 10,000 lines of bespoke code in minutes, but if you don't understand the invariants, edge cases, and failure modes, you've just created a different kind of technical debt—one that's harder to unwind because there's no community, no documentation, and no shared understanding.
Frameworks encode decades of collective battle scars. Dismissing them entirely is like dismissing the wheel because you can now 3D-print custom rollers. Sometimes you want the custom roller. Sometimes you want the battle-tested wheel. AI gives you both options faster—it doesn't make the decision for you.
> The three problems frameworks solve (or claim to) [..] Simplification [..] Automation [..] Labour cost
and he misses _the most important problem frameworks solve_
which is correctness
when it comes to programming, most things are far more complicated in subtle, annoying ways than they seem to be
and worse, while you often can "cut away" these corner cases, doing so tends to lead to obscure, very hard to find bugs, including security issues, which have a tendency to pop up way later when you haven't touched the code for a while and don't remember which corner you cut (and with AI you likely never knew which corner was cut)
like just very recently some very widely used python libraries had some pretty bad bugs wrt. "basic" HTTP/web topics like http/multipart request smuggling, DOS from "decompression bombs" and similar
and while this might look like a counter-argument, it actually speaks for strict code reuse even for simple topics. Because now these bugs have been fixed! And that is a very common pattern for frameworks/libraries: they start out with bugs, sadly often the same repeated common bugs known from other frameworks, and then over time things get ironed out.
But with AI there is an issue: a lot of the data it's trained on is code _which gets many of these "typical" issues wrong_.
And it's non-deterministic, and good at "hiding" bugs, especially the kind of bugs which are prone to pass human review anyway.
So you _really_ would want to maximize the use of frameworks and libraries when using AI, as that addresses a large part of the AI reliability issues.
But what does change is that there is much less reason to give frameworks/libraries "neat compact APIs" (a thing people spend A LOT of time on, and which is prone to be a source of issues, as people insist on making things "look simpler" than they are and in turn accidentally make them not just simpler but outright wrong, or prevent use cases you might need).
Now, depending on your definition of framework, you could argue that AI removes the boilerplate issues in ways which effectively allow replacing all frameworks with libraries.
But you still need to review code, especially AI-generated code. To some degree the old saying that code is far more read than written is even more true with AI (as most isn't "written" (by a human) anymore). Now, you could just not review AI code, but that can easily count as gross negligence, and in some jurisdictions it's not (fully) possible to opt out of damages from gross negligence no matter what you put in the TOS or other contracts. I.e., I can't recommend such negligent actions.
So IMHO there is still use for some kind of frameworks, even if what you want from them will likely start to differ and many of them can be partially or fully "librarified".
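As a concrete illustration of one bug class mentioned above (denial of service via decompression bombs), a minimal guard can cap the inflated size before trusting the input. This is a sketch in Python using only the stdlib; the 10 MiB limit is an arbitrary choice for illustration:

```python
import zlib

MAX_OUTPUT = 10 * 1024 * 1024  # arbitrary cap: refuse to inflate past 10 MiB


def safe_decompress(data: bytes, max_output: int = MAX_OUTPUT) -> bytes:
    """Decompress zlib data, but reject inputs that expand past max_output."""
    d = zlib.decompressobj()
    # max_length bounds how much output decompress() may produce in one call.
    out = d.decompress(data, max_output)
    if d.unconsumed_tail:
        # The output cap was hit with compressed input still pending:
        # exactly the decompression-bomb shape we want to reject.
        raise ValueError("refusing to decompress: output exceeds limit")
    return out
```

This is the kind of subtle corner case a mature library has already ironed out, and that freshly generated code will routinely miss.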
> Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
LLM generated code is the ultimate abstraction. A mess of code with no trusted origin that nobody has ever understood. It's worse than even the worst maintained libraries and frameworks in every way.
honestly this blog post was pretty off base. Current AIs have a limited ability to keep up with complexity, and using known frameworks helps with managing that complexity. If you write everything from scratch, you have to go through the whole process of scaffolding and harnessing the system every time. I don't think it's worth rewriting React from scratch every time you make a browser application; even in the best case it's just a huge waste of tokens.
I never did see any value in monsters like React. I've always used plain JavaScript, written web components, and used some narrow-scope 3rd-party libraries. Works like a charm for me. Now, instead of writing whole web components on my own, I write skeletons with some comments and ask an IDE with AI services (I use IDEs from JetBrains) to complete them. I then do the same with the main application. So far the results are stellar. I do similar things with my backend applications (mostly C++), but there much more work from my side is involved, as the requirements are way stricter, with performance, for example, being a major thing.
Nah. Nothing has changed. To offload the work to an agent and make it a productivity gain it is exactly the same as using a framework, it's a black box portion of your system, written by someone else, that you don't understand.
Unless you are quite literally spending almost the same amount of time you'd spend yourself to deeply understand each component, at which point, you could write it yourself anyway, nothing has changed when it comes to the dynamics of actually authoring systems.
There are exceptions, but generally speaking untempered enthusiasm for agents correlates pretty well with lack of understanding about what engineering software actually entails (it's about relational and conceptual comprehension, communication, developing shared knowledge, and modeling, not about writing code or using particular frameworks!)
EDIT: And to be clear, the danger of "agentizing" software engineering is precisely that it promotes a tendency to obscure information about the system, turn engineers into personal self-llm silos, and generally discard all the second-order concerns that make for good systems, resilience, modifiability, intelligibility, performance.
This is about green field development which is relatively rare. Much of the time the starting point is a bunch of code using React or maybe just a lump of PHP. Business logic ends up plunked down all over the place and LLMs tend to make a huge mess with all this unless kept on a tight leash.
I'm glad this guy is doing well, but I'm dreading the amount of work being created for people who can reverse engineer the mountains of hallucinated bullshit that he and others are now actively producing.
And if the frameworks aren't useful then maybe work up the chain and ditch compilers next?
I feel the same way, but I’m not a traditional software engineer. Just an old-school Webmaster who’s been trying to keep up with things, but I’ve had to hire developers all along.
I’m an ideas guy, and in the past month or so my eyes have also fully opened to what’s coming.
But there’s a big caveat. While the actual grunt work and development is going away, there’s no telling when the software engineering part is going to go away as well. Even the ideas guy part. What happens when a simple prompt from someone who doesn’t even know what they’re doing results in an app that you couldn’t have done as well with whatever software engineering skills you have?
now we get to watch an entire generation of clowns who struggled to create anything at all learn the need for self-discipline in the face of newly accessible NIH traps
A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.
You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).
The hubris around AI is going to be hard to watch unwind. What the moment is I can't predict (nor do I care to), but there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign.
Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon as most people won't care until it's too late and will "price themselves out of the market").
I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was about creating a trio of SRE agents: one that did nothing but read logs and report errors, one that analyzed the errors, triaged them and proposed fixes, and one to do the work and submit PRs to your repo.
Then, as part of the session, you would artificially introduce a bug into the system, then run into the bug in your browser. You'd see the failure happen in the browser, and looking at the CloudWatch logs you'd see the error get logged.
Two minutes later, the SRE agents had the bug fixed and ready to be merged.
"understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.
AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.
The article gets at this briefly and moves on: "I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time."
I think this dynamic applies to any use of AI, or indeed, any form of outsourcing. You can outsource a task effectively if you understand the complete task and its implementation very deeply. But if you don't, then you don't know if what you are getting back is correct, maintainable, scalable.
This sounds entirely too doomer.
There will obviously be companies that build a vibe coded app which too many people depend on. There will be some iteration (maybe feature addition, maybe bug fix) which will cause a catastrophic breakage and users will know.
But there will also be companies who add a better mix of incantations to the prompts, who use version control and CI, who ensure the code is matched with tests, who maintain the prompts and requirements documents.
The former will likely follow your projected path. The latter will do fine and may even thrive better than either traditional software houses or cheap vibe-coding shops.
Then again, there are famous instances of companies that have tolerated terribly low investment in IT, including Southwest Airlines.
There are people out there who truly believe that they can outsource the building of highly complex systems by politely asking a machine, and ultimately will end up tasking the same machine to tell them how these systems should be built.
Now, if I were in business with any of these people, why would I be paying them hundreds of thousands, plus the hundreds of thousands in LLM subscriptions they need to barely function, when they cannot produce a single valuable thought?
I don't think there's going to be any catastrophic collapse but I predict de-slopping will grow to occupy more and more developer time.
Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.
I find that instructing AI to use frameworks yields better results and sets you up for a better outcome.
I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.
I had the opportunity to test out the latest version of Claude this afternoon. After a very good analysis of the existing code, I asked it to implement an optimisation that it had identified.
It introduced a race condition into the code, which I could tell just by looking at the diff. More worrying is that after I told it there was now a race condition, it provided a solution that was no fix at all. That concerns me.
I'm certain you can work with Claude in such a way that it will avoid those errors but I can't help worry about those developers who don't even know what a race condition is, ploughing on and committing the change.
These tools are very good in many ways and I can see how they can be helpful, but they're being mismarketed in my opinion.
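For readers who have never hit one, the classic shape of such a bug is check-then-act without synchronization. A minimal, hypothetical Python sketch (not the parent's actual code) of both the bug and the fix:

```python
import threading


class Account:
    """Toy example of a check-then-act race condition and its fix."""

    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_racy(self, amount: int) -> bool:
        # BUG: another thread can withdraw between the check and the update,
        # so two threads can both pass the check and overdraw the account.
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def withdraw_safe(self, amount: int) -> bool:
        # Fix: the check and the update happen atomically under one lock.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

The danger is that the racy version passes casual testing almost every time; spotting it in a diff requires knowing what to look for.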
Software engineers have been confidently wrong about a lot of things.
E.g. OOP and "patterns" in the 90s. When was the last time you implemented a "visitor"?
P. Norvig mentioned that most of the patterns are transparent in Common Lisp: e.g. you can just use a `lambda` instead of a "visitor". But OOP people kept drawing class diagrams for a simple map- or fold-like operation.
AI producing flawed code and "not understanding" are completely different issues. Yes, AI can make mistakes; we know. But are you certain your understanding is really superior?
I'm no fan of AI in terms of its long-term consequences, but being able to "just do things" with the aid of AI tools, diving head first into the most difficult programming projects, is going to improve human programming skills worldwide to levels never before imaginable
Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?
An HN post earlier this week declared that “AI is killing B2B SaaS”:
https://news.ycombinator.com/item?id=46888441
Developers and businesses with that attitude could experience a similarly rude awakening.
The hubris is with the devs that think like you actually.
My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."
Instead agent written code will get more and more complex, requiring more and more tokens (& NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of a human understanding even for relatively simple projects (e.g. such as a banking app on your phone).
I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.
The aspect of "potentially secure/stable code" is very interesting to me. There's an enormous amount of code that isn't secure or stable already (I'd argue virtually all of the code in existence).
This has already been a problem. There are no real ramifications for it. Even something like Cloudflare stopping a significant amount of Internet traffic for any amount of time is not (as far as I know) independently investigated. Nobody is potentially facing charges. With other civil engineering endeavors, there absolutely is accountability: regular checks, government agencies auditing systems, penalties for causing harm, etc. are expected in those areas.
LLM-generated code is the continuation of the bastardization of software "engineering." Now the situation is not only that nobody is accountable, but a black box cluster of computers is not even reasonably accountable. If someone makes a tragic mistake today, it can be understood who caused it. If "Cloudflare2" comes about which is all (or significantly) generated, whoever is in charge can just throw their hands up and say "hey, I don't know why it did this, and the people that made the system that made this mistake don't know why it did this." It has been and will continue to be very concerning.
Why does there seem to be such a divide in opinions on AI in coding? Those who "get it", meanwhile, have been improving their productivity for literally years now.
I give a year, the realization would be brutal.
This comment ignores the key insight of the article. Design is what matters most now. Design is the difference between vibe coding and software engineering.
Given a good design, software engineers today are 100x more productive. What they produce is high quality due to the design. Production is fast and cheap due to the agents.
You are correct, there will be a reckoning for large-scale systems which are vibe coded. The author is also correct: well-designed systems no longer need frameworks or vendors, and they are unlikely to fail because they were well designed from the start.
> A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.
I pray (?) for times like the ones you predict. But companies can stay irrational longer than the average employee can afford.
You still "find most stuff by building/struggling". You just move up stack.
> there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign
For those who are "vibe code only", perhaps. But it's no different than the "coding bootcamp only" developers who never really learned to think holistically. Or the folks who learned the bare minimum to get those sweet dotcom boom dollars back in the day, and then had to return to selling cars when it all came crashing down.
The winners have been, and will always be, those who can think bigger. The ones today who already know how to build from scratch but then find the superpower is in architecture, not syntax, and suddenly find themselves 10x more productive.
> Good time to be in business if you can see through the bs and understand how these systems actually function
You missed out the most crucial and least likely requirement (assuming you're not self employed); management also need to be able to see through the bs.
What makes you so sure of your statement?
I have been building systems for 20 years and I think the author is right.
I think it would be the opposite and we are all in for a rude awakening. If you have tried playing with Opus 4.6 you would know what I am talking about.
Yeah I completely disagree with the author actually, but also with you.
The frameworks are what make the AI write easily understandable code. I let it run Next.js with an ORM, and it almost always creates very well-defined API routes, classes & data models. Better than I often would, honestly.
I also ask it to be way more rigorous about validation & error handling than I ever would be. It makes mistakes, I shout at it and it corrects them quickly.
So the projects I've been "vibe coding" have a much better codebase than my solo projects used to.
Business has been operating on a management/executive culture for many decades now.
These people get paid millions a year to fly around and shake hands with people, aka do fuck all.
At times in the past I have worked on projects that were rushed out and didn't do a single thing that they were intended to do.
And you know what management's response was? They loved that shit. Ooooh it looks so good, that's so cool, well done. Management circle jerking each other, as if using everyone else's shafts as handles to climb the rungs of the ladder.
It's just... it kills me that this thing I love, technology/engineering/programming, responsible for many of the best things in our modern lives, has been twisted to create some of the worst things in our modern lives in the pursuit of profit. And the people in charge? They don't even care if it works or not, they just want that undeserved promotion for a job that a Simpsons-esque fucking drinking bird is capable of.
I just want to go back to the mid 2000s. ;~;
But by then many of us are already starved. That’s why I always said that engineers should NOT integrate AI with internal data.
The future is already here. Been working a few years at a subsidiary of a large corporation where the entire hierarchy of companies is pushing AI hard, at different levels of complexity, from office work up through software development. Regular company meetings across companies and divisions to discuss methods and progress. Overall not a bad strategy and it's paying dividends.
An experiment was tried on a large and very intractable code-base of C++, Visual Basic, classic .asp, and SQL Server, with three different reporting systems attached to it. The reporting systems were crazy: controlled by giant XML files with complex namespaces and no-nos like the order of the nodes mattering. It had been maintained by offshore developers for maybe 10 years or more. The application was originally created over 25 years ago. They wanted to replace it with modern technology, but they estimated it'd take 7 years(!). So they just threw a team at it and said, "Just use prompts to AI, hand-code minimally, and see how far you get."
And they did wonderfully (and this is before the latest Claude improvements and agents) and they managed to create a minimal replacement in just two months (two or maybe three developers full time I think was the level of effort). This was touted at a meeting and given the approval for further development. At the meeting I specifically asked, "You only maintain this with prompts?" "Yes," they said, "we just iterate through repeated prompts to refine the code."
It has all mostly been abandoned a few months later. Parts of it are being reused, attempting a kind of "work in from the edges" approach to replacing parts of the system, but mostly it's dead.
We are yet to have a postmortem on this whole thing, but I've talked to the developers, and they essentially traded one intractable problem for another: repeated prompting kept breaking existing features when attempting to apply fixes or add new ones, and breaking them in really subtle, hard-to-discern ways. The AI-created unit tests didn't often find these bugs, either. They really tried a lot of angles trying to sort it out: complex .md files, breaking up the monolith so the AI had less context to track, gross simplification of existing features, and so on. These are smarty-pants developers, too, people who know their stuff, with better than BS's, and they themselves were at first surprised at their success, then not so surprised at the eventual result.
There was also a cost angle that became intractable. Coding like that was expensive. There was a lot of hand-wringing from managers over how much it was costing in "tokens" and whatever else. I pointed out if it's less cost than 7 years of development you're ahead of the game, which they pointed out it would be a cost spread over 7 years, not in 1 year. I'm not an accountant, but apparently that makes a difference.
I don't necessarily consider it a failed experiment, because we all learned a lot about how to better do our software development with AI. They swung for the fences but just got a double.
Of course this will all get better, but I wonder if it'll ever get where we envision, with the Star Trek "Computer, make me a sandwich" method of software development. The takeaway from all this is you still have to "know your code" for things that are non-trivial, and really, you can go a few steps above non-trivial. You can go a long way without looking too closely at the LLM output, but there is a point at which it starts to be friction.
As a side note, not really related to the OP, but the UI cooked up by the LLMs was an interesting "card" looking kind of thing, actually pretty nice to look at and use. Then, when searching for a wiki for the Ball x Pit game, I noticed that some of the wikis very closely resembled the UI for the application. Now I see variations of it all over the internet. I wonder if the LLMs "converge" on a particular UI if not given specific instructions?
Come to the redteam / purpleteam side. We're having fun times right now. The definition of "every software has bugs" is now on a next level, because people don't even care about sql injection anymore. It's right built into every vibecoded codebase.
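To make the SQL injection point concrete (a generic sketch, not from any particular codebase), here's string interpolation versus a parameterized query using Python's stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

name = "' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input is spliced into the SQL string and rewrites the query
# into: SELECT * FROM users WHERE name = '' OR '1'='1'
vulnerable = f"SELECT * FROM users WHERE name = '{name}'"
assert len(conn.execute(vulnerable).fetchall()) == 2  # dumps every row

# Safe: a parameterized query treats the input as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
assert rows == []  # no user is literally named "' OR '1'='1"
```

The fix is one character of ceremony (`?` plus a tuple), which is what makes it so galling when generated code skips it.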
Authentication and authorization are as simple as POST /api/create/admin with zero checks. Pretty much every API ever slop-coded looks like this. And if it doesn't, it will forget about security checks two prompts later and reverse the previously working checks.
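For illustration, the minimal shape of the two checks that keep getting dropped, as a hedged stdlib-only sketch (the token scheme and the `admins` set are made up here; a real system should use a vetted auth library, not hand-rolled tokens):

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # assumption: a key held only by the server

def issue_token(user_id: str) -> str:
    # Sign the user id so the server can later verify who is calling.
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def is_authorized_admin(user_id: str, token: str, admins: set) -> bool:
    # The two checks slop-coded endpoints skip: authentication
    # (is the token genuine?) and authorization (is this user an admin?).
    expected = issue_token(user_id)
    return hmac.compare_digest(expected, token) and user_id in admins

admins = {"alice"}
assert is_authorized_admin("alice", issue_token("alice"), admins)
assert not is_authorized_admin("mallory", issue_token("mallory"), admins)  # valid token, wrong role
assert not is_authorized_admin("alice", "forged", admins)                  # bad token
```

Note that authn and authz are separate conditions; the generated endpoints described above typically have neither, and a fix that adds one without the other is still broken.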
>A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.
Correct. Those who wave away coding agents and refuse to engrain them into their workflows are going to be left behind in the dust.
[dead]
Back in the 00s people like you were saying "no one will put their private data in the cloud!"
"I am sick of articles about the cloud!"
"Anyone know of message boards where discussing cloud compute is banned?"
"Businesses will not trust the cloud!"
Aside from logistics of food and medicine, most economic activity is ephemeral wank.
It's memes. It's a myth. Allegory.
These systems are electrical state in machines and they can be optimized at the hardware layer.
Your Python or Ruby or whatever you ship 9,000 layers of state and abstraction above the OS running in the data center has little influence on how these systems actually function.
To borrow from poker; software engineers were being handed their hat years ago. It's already too late.
> Software engineers are scared of designing things themselves.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
To be blunt, I think it's a form of mania that drives someone to reject human-written code in favor of LLM-generated code. Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM. When they automate this much writing, it makes me wonder how much of their own reading they automate away too.
The below captures this perfectly. The author is trying to explain that vibe-coding their own frameworks lets them actually "understand" the code, while not noticing that the LLM-generated text they used to make this point is talking about cutting and sewing bricks.
> But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
Yeah the “not invented here” syndrome was considered an anti pattern before the agentic coding boom and I don’t see how these tools make it irrelevant. If you’re starting a business, it’s still likely a distraction if you’re writing all of the components of your stack from scratch. Agentic tools have made development less expensive, but it’s still far from zero. By the author’s admission, they still need to think through all these problems critically, architect them, pick the right patterns. You also have to maintain all this code. That’s a lot of energy that’s not going towards the core of your business.
What I think does change is that now you can more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It's nice to have the option to develop alternatives that are as sophisticated as your problem needs and not more. But I'm not convinced that it's always the right choice to build something custom.
Yeah, I'm huge on using LLMs for coding, but one of the biggest wins for me is that the LLM already knows the frameworks. I no longer need to learn whatever newest framework there is. I'll stick to my frameworks, especially when using an LLM to code.
> Those beliefs aren't always true, but they're often true.
You can probably tell with a high certainty, from the API in an hour or so.
my problem with frameworks has always been that the moment I want to do something the framework writers aren't interested in, I now have three problems: my problem, how to implement it in the underlying platform and how to work around the framework to not break my feature.
after 3 decades as SWE I mostly found both i) and ii) to not be true, for the most part. a lot of frameworks are not built from the ground up as “i am building a thing to solve x” but “i had a thing and built something that may (or may not) be generally useful.” so a lot of them carry weight from what they were originally built from. then people start making requests to mold the framework to their needs, some get implemented, some don’t. those that don’t good teams will build extensions/plugins etc into the framework and pretty soon you got a monster thing inside of your codebase you probably did not need to begin with. i think every single ORM that i’ve ever used fits this description.
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
It's strange to me when articles like this describe the 'pain of writing code'. I've always found that the easy part.
Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.
'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
'Revise that so that Sam is harsher and Frodo more stubborn.'
Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.
Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I’ve ended up with is - if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!
> It's strange to me when articles like this describe the 'pain of writing code'.
I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.
In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are free'd up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?
Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.
Have you really never found writing code painful?
CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?
I was going to work on new code, but now I have to spend between 5 minutes and an hour+ - impossible to predict - solving this new frustration that just cropped up.
I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.
These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.
Your comment is spot on, but the nuance people who are still new to these LLMs don't yet see is the real reason "he'd be better off just writing the damned book instead."
1. That prompt is always a slot machine. It's never 100% deterministic and that's why we haven't seen an explosion of claude skills. When it works for you, and it's magical, everyone is wowed. However, there is a set of users who then bang their head, wondering why their identical attempt is garbage compared to their coworker. "It must be a skills issue." No, it's just the LLM being an LLM.
2. Coding agents are hyper localized and refuse to consider the larger project when it solves something. So you end up with these "paper cuts" of duplicated functions or classes that do one thing different. Now the LLM in future runs has to decide which of these classes or functions to use and you end up with two competing implementations. Future you will bang your head trying to figure out how to combine them.
3. The "voice" of the code it outputs is trained on public repositories so if your internal codebase is doing something unique, the LLM will consistently pick the voice it's trained on, forcing you to rewrite behind it to match your internal code.
4. It has no chill. If I set any "important" rules in the prompt then it sometimes adheres to it at the expense of doing the "right" thing in its changes. Or it completely ignores it and does its own thing, when it would have been the perfect time to follow the rule. This is to your point that, if I had just written the code myself, it would have been less words than any "perfect" prompt it would have taken to get the same code change.
I was talking to a coworker that really likes AI tooling and it came up that they feel stronger reading unfamiliar code than writing code.
I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.
I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.
It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.
People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.
I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.
Current models won't write anything new, they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.
Tolkien's book is art; programs are supposed to do something.
Now, some programs may be considered art (e.g. code golf), or considered art by their creator. I consider my programs and code only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.
> Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'
Is exactly what programs are. Not the minutiae of the language within.
Writing the code should be the easy part and one of the smaller time sinks, actually. The fruit of the labour is in the planning, the design, the architecture, and the requirements you want to achieve now and potentially in the future. These all require a serious amount of effort and foresight to plan out.
When you're ready, maybe you've done some POC in areas you were unsure about, maybe some good skeleton work to see a happy path draw the shadow of a solution; iterate over your plans and then put some real "code"/foundation in place.
It's a beautiful process. Starting out, I used to just jump deep into a project code-first and hit that workaround button one too many times, and it's far more expensive. We all know that.
I don't find writing code painful, but I do find it tedious. The amount of time wasted on boilerplate keeps me from getting to the good stuff. LLMs let me speed run through all of that.
To take it back to your example, let's imagine Tolkien is spending a ton of time on setting up his typewriter, making sure he had his correction tape handy, verifying his spelling and correcting mistakes, ensuring his tab stops were setup to his writing standard, checking for punctuation marks, etc. Now imagine eliminating all that crap so he can focus on the artistic nature of the dialogue.
I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs, what could I accomplish with that saved time, and would I get more value from those other pursuits?
Isn't that what Tolkien did in his head? Write something, learn what he liked/didn't like then revise the words? Rinse/repeat. Same process here.
“ What’s gone is the tearing, exhausting manual labour of typing every single line of code.”
Yeah, this was always the easy part.
I didn't fully realize how much pain there was until I started delegating the coding to AI. It's very freeing. Unfortunately I think this will soon lead to mass layoffs.
Pain can mean tedium rather than intellectual challenge.
Sometimes you are not writing Lord of the Rings.
Sometimes you are writing a marketing copy for a new Nissan that's basically the same as last year Nissan, yet you need to sell it somehow. Nobody will REALLY read it more than 2 seconds and your words will be immediately forgotten. Maybe some AI is good then.
Claude Opus 4.6:
“He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.
Please forgive me for being blunt, I want to emphasize how much this strikes me.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
The future is exciting.
Bring it.
The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing.
The alternative is that your bespoke solution has undiscovered security vulnerabilities, probably no security community, and no easy fix for either of those.
You get the privilege of patching Node.js.
Similarly, as a hiring manager, you can hire a React developer. You can't hire a "proprietary AI coded integrated project" developer.
This piece seems to say more about React than it says about a general shift in software engineering.
Don't like React? Easiest it's ever been not to use it.
Don't like libraries, abstractions and code reuse in general? Avoid them at your peril. You will quickly reach the frontier of your domain knowledge and resourcing, and start producing bespoke square wheels without a maintenance plan.
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
It's not really easy to not use React, since it was hyped to no end and now is entrenched. Try to get a frontend job without knowing React.
I wanted to believe this article, but the writing is difficult to follow, and the thread even harder. My main issue is the contradiction about frameworks and using what the large tech companies have built vs real engineering.
The author seems to think that coding agents and frameworks are mutually exclusive. The draw of Vercel/next.js/iOS/React/Firebase is allowing engineers to ship. You create a repo, point to it, and boom! instant CICD, instant delivery to customers in seconds. This is what you're complaining about!? You're moaning that it took 1 click to get this for free!? Do you have any idea how long it would take to setup just the CI part on Jenkins just a few years ago? Where are you going to host that thing? On your Mac mini?
There's a distinction between frameworks and libraries. Frameworks exist to make the entire development lifecycle easier. Libraries are for getting certain things that are better than you (encryption, networking, storage, sound, etc.) A framework like Next.js or React or iOS/macOS exist because they did the heavy work of building things that need to already exist when building an application. Not making use of it because you want to perform "real engineering" is not engineering at all, that's just called tinkering and shipping nothing.
Mixing coding agents with whatever framework or platform to get you the fastest shipping speed should be your #1 priority. Get that application out. Get that first paid customer. And if you achieve a million customers and your stuff is having scaling difficulties, then you already have teams of engineers to work on bringing some of this stuff in house like moving away from Firebase/Vercel etc. Until then, do what lets you ship ASAP.
I was thinking the same. On mobile both frameworks and libraries make my life infinitely easier
This is an interesting idea but the problem is where to stop as you travel down through layers of frameworks?
Say we take it to an absurd extreme: You probably won’t have your agent code up Verilog and run your website on an ASIC.. and you aren’t going to write an assembler to code up your OS and kernel and all the associated hardware support, so you probably want a server and an OS to run your code and maybe some containers or a process model.. so the agentic reinvention has to stop somewhere.
One helpful mindset is to choose frameworks and components that avoid rediscovery. Tailwind, for example, contains some very well-thought-out responsive breakpoints that a ton of design work went into. With `md:`, which is only a couple of tokens, your agent can make use of all that knowledge without having to reinvent everything that went into those decisions.
I fail to see the obvious wisdom in having AI re-implement chunks of existing frameworks without the real-world battle testing, without the supporting ecosystem, and without the common parlance and patterns -- all of which are huge wins if you ever expand development beyond a single person.
It's worth repeating too, that not everything needs to be a react project. I understand the author enjoys the "vibe", but that doesn't make it a ground truth. AI can be a great accelerator, but we should be very cognizant of what we abdicate to it.
In fact I would argue that the post reads as though the developer is used to mostly working alone, and often choosing the wrong tool for the job. It certainly doesn't support the claim of the title
> re-implement chunks of existing frameworks without the real-world battle testing
The trend of copying code from StackOverflow has just evolved to the AI era now.
I also expect people will attempt complete rewrites of systems without fully understanding the implications or putting safeguards in place.
AI simply becomes another tool that is misused, like many others, by inexperienced developers.
I feel like nothing has changed on the human side of this equation.
> the supporting ecosystem, ... the common parlance and patterns
Which are often the top reason to use a framework at all.
I could re-implement a web framework in Python if I needed to, but then I would lose all the testing, documentation, and middleware, and worst of all, the next person to show up would have to relearn everything I did and understand my choices.
AI has a lot of "leaders" currently working through a somewhat ignorant discovery of existing domain knowledge (ask me how being a designer has felt in the last 15 years of UX Leadership™ slowly realizing there's depth to the craft).
In recent months, we have MCPs, helping lots of people realize that huh, when services have usable APIs, you can connect them together!
In the current case: AI can do the tedious things for me -> Huh, discarding vast dependency trees (because I previously wanted the tedious stuff done for me too) lessens my risk surface!
They really are discovered truths, but no one's forcing them to come with an understanding of the tradeoffs happening.
I have been using Cursor w/ Opus 4.x to do extensive embedded development work over the past six months in particular. My own take on this topic is that for all of the chatter about LLMs in software engineering, I think a lot of folks are missing the opportunity to pull back and talk about LLMs in the context of engineering writ large. [I'm not capitalizing engineering because I'm using the HN lens of product development, not building bridges or nuclear reactors.]
LLMs have been a critical tool not just in my application but in my circuit design, enclosure design (CAD, CNC) and I am the conductor where these three worlds meet. The degree to which LLMs can help with EE is extraordinary.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
Where I agree with the author of this post is that I feel like perhaps it's time for a lot of libraries to sunset. I don't think replacing frameworks is the correct abstraction at all but I do think that it no longer makes sense to spend time integrating libraries when what you really need are purpose-built functions that do exactly what you want instead of what some library author thought you should want.
It seems to me that a lot of the discussion stems from different definitions of the word framework, and I believe library is probably the more appropriate term to use here. I wouldn't replace .NET Framework with something I vibe coded, but your example of a library of not-so-specific functions is ripe for replacement. If you're only using 5% of a library, you've probably written as much adapter code as you would have if it was just specific code to solve your problem.
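To make the 5% case concrete, here's a hedged sketch of replacing, say, a string-utilities dependency with the one function you actually use (the function and its rules are illustrative, not taken from any particular library):

```python
import re

def slugify(text: str) -> str:
    """Turn arbitrary text into a URL-friendly slug."""
    text = text.lower()
    # replace every run of non-alphanumeric characters with a single hyphen
    text = re.sub(r"[^a-z0-9]+", "-", text)
    # trim leading/trailing hyphens left over from punctuation at the ends
    return text.strip("-")

print(slugify("Hello, World!"))  # → hello-world
```

Ten lines you own and understand, versus an adapter around a dependency that ships a hundred functions you never call.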
I didn’t even give Claude (Opus 4.1) the registers when I did this for a recent ESP32 + ST7789 Rust project. I think I literally just said “make a driver with a double frame buffer for the ST7789 on SPI1, with DMA updates“. And it did it.
In my experience, the libs provided by manufacturers are often thin wrappers over physical interface setup and communication, in the form of a single header and cpp file. Isn't it easier to just use them instead of generating differently phrased copies of them?
This article has some cowboy coding themes I don't agree with. If the takeaway from the article is that frameworks are bad for the age of AI, I would disagree with that. Standardization, and working with a team of developers all using the same framework has huge benefits. The same is true with agents. Agents have finite context, when an agent knows it is using rails, it automatically can assume a lot about how things work. LLM training data has a lot of framework use patterns deeply instilled. Agents using frameworks that LLMs have extensive training on produce high quality, consistent results without needing to provide a bunch of custom context for bespoke foundational code. Multiple devs and agents all using a well known framework automatically benefit from a shared mental model.
When there are multiple devs and agents all interacting with the same code base, consistency and standards are essential for maintainability. Each time a dev fires up their agent on a framework-based project, its context doesn't need to be saturated with bespoke foundational information. Both the LLM and the devs can leverage their extensive training when using a framework.
I didn't even touch on all the other benefits mature frameworks bring outside of shared mental model: security hardening, teams providing security patches, performance tuning, dependability, documentation, 3rd party ecosystems. etc.
I would think that frameworks make more sense than ever with LLMs.
The benefits of frameworks were always having something well tested that you knew would do the job, and that after a bit of use you'd be familiar with, and the same still stands.
LLMs still aren't AGI, and they learn by example. The reason they are decent at writing React code is because they were trained on a lot of it, and they are going to be better at generating based on what they were trained on, than reinventing the wheel.
As the human-in-the-loop, having the LLM generate code for a framework you are familiar with (or at least other people are familiar with) also lets you step in and fix bugs if necessary.
If we get to a point, post-AGI, where we accept AGI writing fully custom code for everything (but why would it - if it has human-level intelligence, wouldn't it see the value in learning and using well-debugged and optimized frameworks?!), then we will have mostly lost control of the process.
It’s fun to ask the models their input. I was working on diagrams and was sure Claude would want some python / js framework to handle layout and nodes and connections. It said “honestly I find it easiest to just write the svg code directly”.
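For simple box-and-line diagrams it has a point: the SVG really is mostly string assembly. A minimal sketch of the direct approach (the helper names are made up, not any library's API):

```python
def node(x, y, label):
    # a box with a centered text label
    return (f'<rect x="{x}" y="{y}" width="120" height="40" rx="6" fill="#eef" stroke="#336"/>'
            f'<text x="{x + 60}" y="{y + 25}" text-anchor="middle" font-size="13">{label}</text>')

def edge(x1, y1, x2, y2):
    # a straight connector between two points
    return f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="#888"/>'

def svg(width, height, *parts):
    body = "".join(parts)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
            f'{body}</svg>')

doc = svg(200, 180,
          node(40, 20, "client"),
          edge(100, 60, 100, 110),
          node(40, 110, "server"))
print(doc)
```

No layout engine, no dependency tree; for two boxes and a line, the model's instinct to write the SVG directly is hard to argue with.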
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
Do I live in a different engineering world? Because that's so much not the exhausting labour part of my work, it's not even the same universe. The exhausting manual labour for me is interacting with others in the project, aligning goals and distributing work, reviewing, testing, even coming up with test concepts, and… actually thinking through what the code conceptually will work like. The most exhausting thing I've done recently is thinking through lock-free/atomic data structures. Ouch, does that shit rack your brain.
My biggest concern with AI is that I'm not sure how a software engineer can build up this sort of high-level intuition:
> I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am.
Without a significant development period of this:
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
A professional mathematician should use every computer aid at their disposal if it's appropriate. But a freshman math major who isn't spending most of their time with just a notebook or chalk board is probably getting in the way of their own progress.
Granted, this was already an issue, to a lesser extent, with the frameworks that the author scorns. It's orders of magnitude worse with generative AI.
I'm not sure. I don't know about deep expertise and mastery, but I can attest that my fluency skyrocketed as the result of AI in several languages, simply because the friction involved in writing them went down by orders of magnitude. So I am writing way more code now in domains that I previously avoided, and I noticed that I am now much more capable there even without the AI.
What I don't know is what state I'd be in right now, if I'd had AI from the start. There are definitely a ton of brain circuits I wouldn't have right now.
Counterpoint: I've actually noticed them holding me back. I have 20 years of intuition built up now for what is hard and what is easy, and most of it became wrong overnight, and is now limiting me for no real reason.
The hardest part to staying current isn't learning, but unlearning. You must first empty your cup, and all that.
People said the same thing about the transition to higher levels of abstraction in the past. “How will they write good code if they don’t know assembly? How can they write efficient code if they don’t understand how a microprocessor works?”
These arguments basically just amount to the intellectual equivalent of hazing. 90% of engineers don’t need to know how these things work to be productive. 90% of engineers will never work on a global scale system. Doing very basic things will work for those engineers. Don’t let perfect be the enemy of good enough.
Also, I’d argue that AI will advance enough to capture system design soon too.
This is a wild take. Good frameworks come with clever, well-thought-out abstractions and defensive patterns for dealing with common problems in the space the framework covers. Frameworks are also often well-documented and well-supported by the community, creating common ways of doing things with well understood strengths and weaknesses.
In some cases it's going to make sense to drop your dependency and have AI write that functionality inline, but the idea that AI coding best practice is to drop all frameworks and build your own vibe-coded supply chain de novo for every product is ludicrous. At that point we should just cut out the middleman and have the LLMs write machine code to fulfill our natural-language product specs.
The other thing that's dumb about this is frameworks are usually consolidating repetitive boilerplate so it's going to cost a lot more tokens for an AI to inline everything a framework does.
There are a few interesting points in the comments here.
The pro case for getting rid of frameworks: they're bulky, complex, there are security holes, updates to keep up with, things keep changing. LLMs can write you something perfectly customized to what you're doing. You get some free security by obscurity.
The con case: LLMs are excellent at getting you up to speed with a framework and understanding issues. As avidiax says in this thread, "The author seems to mistake having to update Node.js for a security patch to be a curse rather than a blessing. You get the privilege of patching Node.js." Security by obscurity is generally a bad design. To me, the general architecture and maintainability is a huge issue when you have LLMs write everything from scratch. Not that a Node or React app is a paragon of maintainability or architecture, but it's certainly better than something made from scratch by an LLM. The code quality of a framework is also far higher.
I personally feel like the best path today is to use something lightweight, like Svelte. You get the best of both worlds. Light structure but nothing overbearing.
You can never have a good substitute for a good framework. A good framework would let you skip over boilerplate code, abstract at a higher level, and be dependable.
Context matters most here: does a solid framework exist for the work you're trying to do? Then use it; otherwise write what you need and understand the risks that come with freshly written code.
I disagree about ditching abstractions. Programmatic abstractions aren't just a way to reduce the amount of code you write, they're also a common language to understand large systems more easily, and a way to make sure systems that get built are predictable.
I share that notion, but I think the right abstractions are the foundational tech stack we have had for decades, like the web standards or even bash. You need constraints, but not the unnecessary complexity that comes with many modern tech stacks (React/Next) that were built around SV's hyper-scalability monopoly mentality. Reach for simple tools if the task is simple: KISS.
This is even more relevant in the context of generated code, where most of the time is spent reviewing rather than writing the code. Abstractions, by allowing the code to be more concise, help.
With LLM code, I'd rather have higher-level abstractions.
Not only that, but a way to factor systems so you can make changes to them without spooky action at a distance. Of course, you have to put in a lot of effort to make that happen, but that's why it doesn't seem to me that LLM's are solving the hard part of software development in the first place.
Using a framework gives you some assurance that the underlying methods are well designed. If you don't know how to spot issues in auth design, then using an LLM instead of a library is a bad idea.
I agree though there's many non-critical libraries that could be replaced with helper methods. It also coincides with more awareness of supply chain risks.
I think this is a subtle but important point.
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine but I still expect to see a lot of code bases saddled with garbage for years to come.
Even with a perfect coding agent, we code to discover what correct even is.
Team decides on vague requirements, then you actually have to implement something. Well that 'implementing' means iterating until you discover the correct thing. Usually in lots of finicky decisions.
Sometimes you might not care about those decisions, so you one shot one big change. But in my experience, the day-to-day on a production app you can 100% write all the code with Claude, but you're still trying to translate high level requirements into "low"-level decisions.
But in the end it's nice not to have to care about the code-monkey work: going all over a codebase, adding a lot of trivial changes by hand, etc.
To the people who are against AI programming, honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?
This is a 100% honest question. Because whatever your justification to this is, it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.
I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
I am not "against" AI programming, although I confess I don't really know what that means... coders and business folk are gonna do whatever gets the job done and opinions by and large matter not a whit.
However:
> ..it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.
Right, but... approximately zero users of AI for coding are setting temperature to 0 not to mention changing temperature at all. So this is a comparison to a world that doesn't really exist.
Additionally, C code compiles much much closer to the same assembly and microcode regardless of compiler as compared to temperature zero prompts across different AIs.
Compiling the symbols into a binary is not the bottleneck. Formalizing the contract for interacting with the real world is and always has been the bottleneck.
I dislike it when rhetorical flourishes start with "honest question...".
Maybe using AI assistant instead of directly writing code is equivalent to using a high level language instead of assembly and maybe it isn't. So at least begin your discussion as "I think programmers who don't use AI are like programmers who insist on assembly rather than a high level language" (and they existed back in the day). I mean, an "honest question" is one where you are honestly unsure whether you will get an answer or what the answer will be. That's completely different from honestly feeling your opponents have no good arguments. Just about the opposite, really.
By the way, the reason I view AI assistants and high-level language compilers as fundamentally different is that high-level language compilers are mostly deterministic: mostly, you can determine both the code generated and its behavior in terms of the high-level language. AI-created/assisted code is fundamentally underdetermined relative to its source (a prompt), on a much wider basis than the assembly created by a high-level language compiler (whose source is source code).
Edit: formatting
I found the article interesting, yet my thinking is at the opposite end of the spectrum. I have also spent a lot of time using LLMs, and I am moving away from "no framework" and from "library pretending to be a framework".
Not that I was a fan of it, but for work purpose I was using React / Next.js etc.
Now I am using Laravel. Lots of magic, pretty much always one recommended way to do things, excellent code generation using the CLI. When you combine it with AI, the AI follows the framework's guidelines. It does not have to think about whether it should co-locate business logic with UI, use a hook or not, extract a helper, etc.
It knows how to create routes, controllers, validators, views, models, migrations, whatever.
So the suggestion here is that instead of using battle-tested libraries/frameworks, everyone should now build their own versions, each with a unique set of silent bugs?
> Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework
Like the vibe coded solution won't be flawed and vulnerable
Exactly, AI will finally put a stop to the "do not implement your own crypto" fad /s
https://security.stackexchange.com/questions/209652/why-is-i...
Right. Let's all write our own Spring Framework / Django / Ruby on Rails. Everyone who contributed to these frameworks was obviously a jackass, but me with my Claude sub can beat everybody, while ignoring the actual stuff I should be doing. Makes for a perfectly great maintenance burden.
Intellectual surrender is exactly the risk I fear with coding agents. Will the next generation of software ‘developers’ still know how to code? Seems coding agents are in a way taking us further from understanding the machine, just like frameworks have in the past.
Software has always been about abstraction. This one, in a way, is the ultimate abstraction. However it turns out that LLMs are a pretty powerful learning tool. One just needs the discipline to use it.
The interesting question here is what replaces frameworks as the unit of leverage.
Frameworks existed because the cost of understanding someone else's abstractions was lower than rebuilding from scratch. With agents, that calculus flips — generating bespoke code from a clear spec is now cheaper than learning a framework's opinions about how your app should work.
But the article buries the key point: "with the experience on my back of having laid the bricks." The author can direct agents effectively because he has two decades of mental models about what good software looks like. The agent is executing his taste and judgment, not replacing it.
The people who will struggle are not the ones who skip frameworks — it is the ones who never built the internal model of how systems fail. Frameworks taught you that implicitly (why does Rails do it this way? Because the alternative breaks at scale). If you skip straight to "agent, build me X," you never develop the instinct for when the output is subtly wrong.
The real unlock is probably closer to what the SRE agent trio example shows: agents handling the mechanical loop (detect, diagnose, fix, PR) while humans focus on system design and invariant definition. The skill shifts from writing code to defining constraints precisely enough that automated systems can maintain them.
Good article: Software Engineering is finally being liberated from the "Middle Work" of the last decade.
The AI tsunami isn't just about coding faster—it’s about reclaiming architectural sovereignty from hyperscaler blueprints.
The future is Just-in-Time and Highly Customized.
My full thoughts here: https://www.linkedin.com/posts/carlcarrie_software-engineeri...
I have to tell Claude specifically to use plain HTML, CSS and JS, or else it goes off building React.
There was a time around 2016 where you weren't allowed to write a React application without also writing a "Getting Started with React" blog post. Having trained on all of that, the AI probably thinks React is web development.
Tell claude to build a functional website using plain html and css and no frameworks and it'll do it in a second. Now try that with a junior dev.
Indeed, this has been one of the first things I've noticed
A few months ago I did exactly this. But over time I threw away all the generated JS, CSS and HTML. It was an unmaintainable mess. I finally chose Svelte and stuck with it. Now I have a codebase that makes sense to me.
I did ask AI to generate a landing page. That gave me the initial headers, footers and styles that I used for my webapp, but I threw away everything else.
Just write up your requirements in an AGENT.md, to avoid repeating yourself. It has worked really well for me on a PHP + Apache project.
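For what it's worth, a hedged sketch of what such an AGENT.md might look like for that kind of project (the specifics are illustrative; there is no standard format):

```markdown
# AGENT.md

## Stack
- PHP 8 on Apache. No framework, no Composer dependencies, no build step.
- Plain HTML/CSS and vanilla JS on the front end.

## Conventions
- One page per PHP file; shared markup lives in includes/.
- Pages should work with JavaScript disabled (progressive enhancement).

## Don'ts
- Do not introduce a framework, bundler, or CSS preprocessor.
- Do not add npm packages; ask first if a dependency seems unavoidable.
```

Once the constraints live in the file, every new session starts from the same guardrails instead of re-litigating the stack.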
> We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
I disagree. At least for a little while until models improve to truly superhuman reasoning*, frameworks and libraries providing abstractions are more valuable than ever. The risk/reward for custom work vs library has just changed in unforeseen ways that are orthogonal to time and effort spent.
Not only do LLMs make customization of forks and the resulting maintenance a lot easier, but the abstractions are now the most valuable place for humans to work because it creates a solid foundation for LLMs to build on. By building abstractions that we validate as engineers, we’re encoding human in the loop input without the end-developer having to constantly hand hold the agent.
What we need now is better abstractions for building verification/test suites and linting so that agents can start to automatically self improve their harness. Skills/MCP/tools in general have had the highest impact short of model improvements and there’s so much more work to be done there.
* whether this requires full AGI or not, I don’t know.
I'm surprised I don't see many (or any) comments mentioning this: this blog post was clearly written with heavy LLM assistance.
Dumbest take I've seen in a while. Really. If anything, AI working with frameworks is making them more effective. Frameworks, by definition, produce more structure, than just the language + libs do, and their entire practical utility is to abstract away complexity and lower the amount of footguns. The ultimate form of abstracting away complexity is an AI agent/coder writing the code for you. But there are gazillions of solutions (and opinionated ones) out there, for gazillions of all kinds of problems... having an AI agent work within the constraints of a framework is going to be a good thing in almost all cases as it will be more focused on your problem space rather than figuring out how to send freaking bits between computers.
How do people not understand that even if AI is writing all your code you still want to have as little code as possible for a given problem solution that you have to manage yourself? Frameworks help with that.
I am a non-coder. For many years I went through various coding tutorials, but I had never been able to fully build anything other than basic websites. Now with AI I have been able to build useful CLI tools. Instead of using a static site generator such as Hugo, I can now quickly build a website that does what I am looking for. Heck, I just had it build me a website as a presentation instead of doing a slide show. I came up with an outline plus my notes and information, then had it build out the site based on that. I was able to have it create some really cool animations to help explain my ideas.
I have had the same experience when building simple websites for myself and others. I did it as a test to begin with, but it worked out so well that I have kept at it for a while. The core concept for my experiment was to have no dependencies other than PHP and a web server. Longevity is the goal, I should be able to leave a project for years and it should just keep on running.
Source code is here: https://forge.dmz.skyfritt.net/ruben/folderweb.
It is kind of a mini-framework, but really more of a core that can be expanded upon: a few simple ideas that have been codified. It is mainly a router that does very specific things, with some convenient features built in and the option to build plugins and templates on top of this core. The customization and freedom it enables are fantastic!
I used to worry that AI would lead to a regression toward the mean, but for this specific use case I think it can have the opposite effect. It can cause a flourish of experiments and custom-tailored solutions that enables a richer online experience. It demands a certain discipline in the way you build, to avoid making a long-term mess, but having just a little bit of experience and insight into general web development goes a long way to keep things tidy and predictable.
Has anyone else had similar experiences?
EDIT: One live site where I have built on top of FolderWeb, is https://stopplidelsen.no (Norwegian)
The pendulum swing described here is real but I think the underlying issue is subtler than "AI vs. no AI."
The actual problem most teams have isn't writing code — it's understanding what the code they already depend on is doing. You can vibe-code a whole app in a weekend, but when one of your 200 transitive dependencies ships a breaking change in a patch release, no amount of AI is going to help you debug why your auth flow suddenly broke.
The skill that's actually becoming more valuable isn't "writing code from scratch" — it's maintaining awareness of the ecosystem you're building on. Knowing when Node ships a security fix that affects your HTTP handling, or when a React minor changes the reconciliation behavior, or when Postgres deprecates a function you use in 50 queries.
That's the boring, unsexy part of engineering that AI doesn't solve and most developers skip until something catches fire.
> no amount of AI is going to help you debug why your auth flow suddenly broke.
What? Coding agents are very capable at helping fix bugs in specific domains. Your examples are like, the exact place where AI can add value.
You do an update, things randomly break: tell Claude to figure it out and it can go look up the breaking changes in the new versions, read your code and tell you what happened and fix it for you.
> Since [a few months ago], things have dramatically changed...
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... since people have been heavily overusing frameworks -- making their projects more complex, more brittle, more disorganized, more difficult to maintain -- for non-technical reasons, they aren't going to stop just because LLMs make frameworks less necessary; the overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, better thought-through.
It's a chicken-and-egg problem.
Sure, you can skip using frameworks and let AI write them directly for you, because that's what they are trained on - these framework you think you're omitting.
Now the issue is - if we play with the idea that the revolution is actually going to happen and developers will get replaced with vibe coders in the next 6 months (as has been prophesied for the last 5 years) - then the innovation will stop as there will be no one left to add to the pool.
This whole thing reminds me of the debacle about retirement funds and taxes in my country. People think they are smart by avoiding them, because they suspect the system will fail and they won't get anything back. But by avoiding those taxes they themselves create a self-fulfilling prophecy that is already breaking the system.
If the author is this Alain di Chiappari, he works for a telehealth and psychology site:
https://theorg.com/org/unobravo-telehealth-psychology-servic...
It is interesting how many telehealth and crypto people are promoting AI (David Sacks being the finest of all specimens).
The article itself is of course an AI assisted mashup of all propaganda talking points. People using Unobravo should take note.
Thank you for the insightful feedback :) If you also have something to say about the point of the article itself, instead of pointing the finger at the person, I'll be happy to respond to that.
This is totally backwards from how I've been using agents.
The thing an agent is really, really good at is overcoming the initial load of adopting a new framework or library. I know, at some level, that using other people's code is going to save me trouble down the road, but there's an initial load to learn how to integrate with it, how to use it, and how to map the way the framework authors think onto the way I think and the way my project needs to work. There's always the temptation to just build from scratch instead because it's initially quicker and easier.
Letting the AI figure that out, and do the first initial steps of getting the framework to accomplish the task I need, produces a product that is better than what either the AI or I would produce without the framework, and it creates a product that I can then read, understand, and work on. Letting the AI go from scratch invariably produces code that I don't want to work with myself.
> They would rather accept someone else’s structure, despite having to force fit it into their product, rather than taking the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one size fits all design and applying it everywhere. That is not simplification. That is intellectual surrender.
Sorry, I don't buy this. There is a very good reason to use tried and tested frameworks. Am I "intellectually surrendering" when I use a compiler/language/framework that has a great track record?
And how is it not "intellectual surrender" to let the AI do the work for you?
> In my mind, besides the self declared objectives, frameworks solve three problems .. “Simplification” .. Automation .. Labour cost.
I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework, off the shelf or home made, creates consistency in the way problems are solved.
This seems even more important with AI, since you lose context on each task, so you need it to live within guardrails and best practices or it will make spaghetti.
Nothing fundamentally changed about frameworks. No need to reconsider every single practice because of AI. I think frameworks actually keep agents in check because they're trained on huge set of conventions.
I vibe coded a few projects in vanilla JS and they eventually became a mess, but with a framework they'd at least be a structured mess
I had called this a while back, since the reasoning is simple: frameworks primarily exist to minimize boilerplate, but AI is very good at boilerplate, so the value of frameworks is diminished.
The larger underlying shift is that the economics of coding have been upended. Since its inception, our industry has been organized around one fundamental principle: code is expensive because coders are expensive. This created several complex dynamics, one of which was frameworks: massive, painful dependencies aimed at alleviating costs by reducing the repeated boilerplate written by expensive people. As TFA indicates, the costs of frameworks in terms of added complexity (e.g. abstractions from the dependency infecting the entire codebase) are significant compared to their benefits.
But now that the cost of code -> 0, the need for frameworks (and reusability overall) will likely also -> 0.
I had predicted that this dynamic will play out widely and result in a lot more duplicative code overall, which is already being borne out by studies like https://www.gitclear.com/ai_assistant_code_quality_2025_rese...
Our first instinct is to recoil and view this as a bad thing, because it is considered "Tech Debt." But as the word "debt" indicates, Tech Debt is yet another economic concept and is also being redefined by these new economics!
For instance, all this duplicate code would have been terrible if only humans had to maintain it. But for LLMs, it is probably better, because all the relevant logic is RIGHT THERE in the code, conveniently colocated with the rest of the functionality where it is used, and not obfuscated behind a dozen layers of abstraction whose (intended) functionality is described in natural language scattered across a dozen different pieces of documentation, each with varying degrees of sufficiency, fidelity, and freshness. This keeps the context very focused on the relevant bits, which, along with extensive testing (again, because code is cheap!) that enables instant self-checking, greatly amplifies the accuracy of the LLMs.
Now, I'm not claiming to say this will work out well long term -- it's too early to tell -- but it is a logical outcome of the shifting economics of code. I always say with AI, the future of coding will look very weird to us; this is another example of it.
> For companies, it is much better having Google, Meta, Vercel deciding for you how you build product and ship code. Adopt their framework. Pay the cost of lock in. Be enchanted by their cloud managed solution to…
Right, a future where you have to pay an AI hyperscaler thousands of dollars a month for access to their closed-source black box that needs a world historical capital moat to operate effectively is actually worse than this. It is baffling to me that more people don’t see this.
I use coding agents almost exclusively now and I’m going to say yes and no on this one.
Yes, I think there’s the potential to replace some frameworks that abstract away too many details and make things way too complicated for basic apps. A good example of this is ORMs like SQLAlchemy. Every time I use one, I think to myself that it would be easier to just write the SQL myself, but it would be a tremendous amount of boilerplate. Nowadays, though, it might be worth having an agent just write the SQL for you instead!
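To make that concrete, here's a sketch (stdlib sqlite3, with a made-up table and columns) of the kind of plain SQL an agent can emit in place of an ORM query chain:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users (name, active) VALUES (?, ?)",
    [("ada", 1), ("bob", 0), ("cam", 1)],
)

# Roughly what an ORM call like
#   session.query(User).filter(User.active == True).order_by(User.name).all()
# compiles down to, except here it is visible and directly editable:
rows = conn.execute(
    "SELECT id, name FROM users WHERE active = 1 ORDER BY name"
).fetchall()
print(rows)  # [(1, 'ada'), (3, 'cam')]
```

The trade is explicitness for boilerplate volume, which is exactly the boilerplate an agent is cheap at producing.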
On the other hand, you have frameworks like Django. Sure, an agent _could_ write you your own web server. But wow, would it be a waste of tokens, and your project's surface area would be dwarfed by the complexity of just building your own alternative to Django. I can’t see that being the right move for years still.
I'm not sure why this is against 'frameworks' per se; if we were sure that the code LLMs generate was the best possible, we might as well use assembly, no, since that'd lead to the best performance? But we generally don't; we still need to validate, verify, and read it. And in that, there is still some value in using a framework, since the generated code is likely, on the whole, to be shorter and simpler than code written without one. On top of that, because it's simpler, I've at least found that there's less scope for LLMs to go off and do something strange.
I choose to use frameworks in the same sense I choose to use crypto libraries. Smarter people have thought long and hard about the problems involved, and came up with the best ways to solve them.
Why have the agents redo all of that if it's not absolutely necessary? Which it probably isn't for ~98% of cases.
Also, the models are trained on code which predominantly uses frameworks, so it'll probably trend toward the average anyway and produce a variant of what already exists in frameworks.
In the cases where it might make sense, maybe the benefit then is the ability to take and use piecemeal parts of a framework or library and tailor it to your specific case, without importing the entire framework/library.
It never left, welcome back to software engineering though!
Thank you, I'm glad to be back!
No, it copied some relevant parts of every framework's code, shifting the burden of design, maintenance, debugging, and polishing all the corner cases onto your shoulders.
Why wouldn't you clone the framework's code into your repository, removing the parts you don't need and modifying the code as you wish?
There is a fourth reason to use a framework: onboarding.
It does not work as well for Django, as every project I've seen using it has a different shape, but it works very well for Rails, as all projects share the same structure. However, even for Django, there are some practices that a newcomer to a project should expect to find in the code, because it's Django. So maybe onboarding on an LLM-coded project is just picking the same LLM as all the other developers, making it read the code, and learning what kind of prompts the other developers use.
By the way, would anybody mind sharing first-hand experiences of projects in which every developer is using agents? How do those agents cope with the code of the other agents?
A framework also gives you someone's expertise in a domain, so you don't have to develop that expertise yourself and can focus on all the other stuff...
...and importantly, neither does the LLM; frameworks are incredibly useful even if you are using generative AI.
LLMs finally deliver on the crochety front end dev's dream of writing everything in vanilla JS. Hallelujah.
Frameworks are the reason AI can learn patterns and repeat them; without frameworks you will be burning credits just to do things that have already been optimized and completed. Unless you are an Anthropic investor, that's not the way to improve your coding.
I see libraries and frameworks as a way to capture knowledge and best practices so they can be shared with other people. So looking at what an LLM/AI does, this looks to me like a perfect fit, without the dependency hell, unresolved GitHub issues, or the need to fork when maintainers leave. It could be open source on steroids, with far shorter feedback loops (just working in your IDE).
The main burden I see is validating the output and getting reproducible results, as with many AI solutions.
What they are basically saying: a framework built up from the bash-or-Makefile ground by an LLM is better than any existing framework. I don't agree. When I use LLMs to generate scripts for me, I often have to adapt them to fit into the bigger picture. The more scripts I have, the blurrier what that framework as a whole stands for becomes. To become a usable framework it then needs refactoring, which means the calls to those scripts need rewriting and retesting as well.
I think if anything frameworks will become more important. They are already built into the training data of these models and they provide guardrails like protection against XSS and SQL injection. They are an architectural decision like anything else, but why reinvent the wheel, even if it's an LLM doing the work?
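The guardrails mentioned above boil down to two mechanical habits that frameworks apply by default; a stdlib-only sketch (table and payload strings invented for illustration):

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# Parameterized query (what ORMs/frameworks do under the hood):
# the input is treated as data, never spliced into the SQL text.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # [] -- the payload matches no row instead of matching every row

# Template auto-escaping (the Django/Jinja2 default) is essentially this:
print(html.escape("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

Hand-rolled or LLM-generated code has to remember to do both of these everywhere; a framework makes forgetting the harder path.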
Yes, that was one of the first aha moments for me; put simply:
It's now cheaper to try diving into a system to change it, as opposed to the 'safe' path of building on top of it and adapting to it.
Latest Opus and Antigravity. Did an insane amount of complex refactoring on a ~500k-line codebase. I saw programming die today.
I will never significantly code by hand again and probably won't be hired in 5 years.
Strange how many people are comparing code to art. Software engineering has never been about the code written, it’s about solving problems with software. With AI we can solve more problems with software. I have been writing code for 25 years, I love using AI. It allows me to get to the point faster.
The author is right, eliminating all this framework cruft will be a boon for building great software. I was a skeptic but it seems obvious now its largely going to be an improvement.
It's actually so over
That took the strangest turn. It started with empowerment to do much more (and that I really agree with) — to then use it to... build everything from scratch? What? Why?
What a framework gives me is mostly other people having done precisely the architectural work that is a prerequisite to my actual work. It's fantastic, for the same reason that automatic coding is. I want to solve unsolved problems asap.
I am so confused by the disconnect that I feel like I must be missing something.
Frameworks help you reduce a problem to its irreducible complexity.
Not using a framework means creating and maintaining a new and bad one.
And the AI doesn't even do that. It repeats and creates new complexity.
Frameworks are stable by design; generated code isn't. Why did people still have to learn math after the calculator was invented?
I don't see it as either/or. Frameworks give you a common vocabulary to use with the LLMs; they allow you to organize your thoughts and maintain good git hygiene, and they serve as a useful street map to review and explore what's been built.
You can drop the boilerplate, bit-pushing glue frameworks, but the building-block frameworks are here to stay; LLMs know a lot, but they don’t know every solution to every problem. Do not confuse a software development LLM assistant with an oracle.
A huge advantage of frameworks, to me, is giving newcomers to the code a unified frame of reference. A Rails developer (or even a non-Rails developer who understands MVC) can jump into an unfamiliar Rails-based codebase a lot more easily than into the custom "from the ground up" thing the author espouses.
It's puzzling to me that the author doesn't even mention this huge and obvious benefit of frameworks.
I feel the opposite. Frameworks and standardization becomes even more important when using AI.
> We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years.
Oh, you accepted that? I feel sorry for you. Many of us never did.
> But the true revolution happened clearly last year
Oh, that seems like a good bit of time!
> and since December 2025
So like... 1 or 2 months ago? This is like saying “over half of people who tried our product loved it - all 51% of them!”. This article is pushing hype, and is mistaking Anthropic's pre-IPO marketing drive for actual change.
> What’s gone is the tearing, exhausting manual labour of typing every single line of code.
I constantly see this and think I must be operating in a different world. This never took significant amounts of time. Are people using react to make text blogs or something?
When you choose the right framework it saves you enormous amounts of time. Sounds like the author has trouble separating hype from fact. Pick the right framework and your LLM will work better, too.
You can also decide to switch frameworks or even languages. I switched a personal app I'm working on from Go to Deno and Hono and it's quite nice.
"Software engineers are scared of designing things themselves."
So the answer is to let AI agents design it for you, trained on the data of the giants of software engineering. Got it!
Pretty much completely disagree with the OP. Software Engineering never left, maybe the author moved away from it instead.
> Stop wrapping broken legs in silk. Start building things that are yours.
This, however, is deeply wrong to me. Anyone who writes and reviews code regularly knows very well that reading code doesn't lead to the same deep, intuitive understanding of a codebase as writing that same code.
So, no, with AI you are not building things which are yours. You might call them yours, but you lose deeper understanding of what you built.
> That adapting layer of garbage we blindly accepted during these years.
Wouldn't everything that agents produce be better described as a "layer of garbage?"
If you must use a framework, best a minimal one built on web standards, e.g. Svelte or https://nuejs.org/.
You're right; clearly I've tried to be a bit provocative to get the message across, but I'm not religious in this sense. Minimal frameworks that really solve a problem cleanly and are adopted with intention are welcome.
Wouldn't frameworks be better for AI?
They're used more frequently; I can't imagine there are more Python examples of web servers written from scratch than ones using Flask or Django.
Frameworks provide a layer of abstraction, so the code is denser, which will use fewer tokens and put less code in the prompt.
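The density point is easy to see by writing the no-framework version. This stdlib-only sketch (handler name and port choice are arbitrary) is roughly what a four-line Flask hello-world expands into once the framework is gone:

```python
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything Flask's `return "hello"` implies: status line,
        # headers, content length, and body writing, done by hand.
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(reply)  # b'hello'
server.shutdown()
```

Every line here is more prompt context an LLM has to carry around, compared with the framework's compact equivalent.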
There is yet another issue: the end users are fickle, fashion-minded people, and will literally refuse to use an application if it does not look like the latest React style. They do not want to be seen using "old" software, like wearing the wrong outfit or some such nonsense. This is real, and baffling.
lol ok have fun building from zero _without_ abstractions. It will work for the narrow thing you first tell it to build, the fun comes when you tell it to change in any way.
"Software engineers are scared of designing things themselves."
what?
Read the following paragraph. The author isn't wrong.
> I want to build X
> "Hey claude, how would you make X"
> Here's how I'd build X... [Plan mode on]
In big corporations, that's how it is. Developers are told to implement only what is in the specs, and if they have any objection, they need to raise it with the PM, who will then forward it to the system architect, etc.
So that creates the notion that design is something out of reach. I've met developers who cannot develop anything on their own if it doesn't have a ticket that explains everything and hand-holds them. If something is not clear, they are stuck and need the help of senior engineers.
With a line like that I wouldn't trust anything this guy has to say.
This line shows either that he does not get how wrong he is, or that I do not understand the depth of his enlightenment: "A simple Makefile covers 100% of my needs for 99% of my use cases". We've come a long way to replace the simple Makefile with autotools (an incredible monstrosity), cmake, ninja, etc. I hope he does not propose to ditch *libc.
Build libraries, not frameworks.
"Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities?"
Who outside of 'frontend web developers' actually does this?
I don't think this is a good description of, say, Apache Tika or Alembic's Ash.
The author makes a valid observation wrapped in an overstatement. Yes, AI coding agents have changed the economics of building custom tooling. But the conclusion—that frameworks are now obsolete—misses the forest for the trees.
The problem with "framework culture" wasn't that frameworks exist, but that we lost the ability to critically evaluate when they're appropriate. We reached for React for static sites, Kubernetes for three-server deployments, and microservices for monolithic problems—not because these tools were wrong, but because we stopped thinking.
What AI agents actually restore isn't "pure software engineering"—it's optionality. The cost of writing a custom solution has dropped dramatically, which means the decision tree has changed. Now you can prototype both approaches in an afternoon and make an informed choice.
But here's what AI doesn't solve: understanding the problem domain deeply enough to architect a maintainable solution. You can generate 10,000 lines of bespoke code in minutes, but if you don't understand the invariants, edge cases, and failure modes, you've just created a different kind of technical debt—one that's harder to unwind because there's no community, no documentation, and no shared understanding.
Frameworks encode decades of collective battle scars. Dismissing them entirely is like dismissing the wheel because you can now 3D-print custom rollers. Sometimes you want the custom roller. Sometimes you want the battle-tested wheel. AI gives you both options faster—it doesn't make the decision for you.
> The three problems frameworks solve (or claim to) [..] Simplification [..] Automation [..] Labour cost
and he misses _the most important problem frameworks solve_
which is correctness
when it comes to programming, most things are far more complicated in subtle, annoying ways than they seem to be
and worse, while you can often "cut away" these corner cases, doing so tends to lead to obscure, very hard-to-find bugs, including security issues, which have a tendency to pop up way later, when you haven't touched the code for a while and don't remember which corner you cut (and with AI you likely never knew which corner was cut)
like just very recently, some very widely used python libraries had some pretty bad bugs wrt. "basic" HTTP/web topics: multipart request smuggling, DoS from "decompression bombs", and similar
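the decompression-bomb case is a good illustration of how subtle the "simple" topics are. A sketch of the kind of limit check a hardened library ends up carrying (stdlib zlib; the limit value is arbitrary):

```python
import zlib

def safe_decompress(payload: bytes, limit: int) -> bytes:
    """Decompress, refusing outputs larger than `limit` bytes."""
    d = zlib.decompressobj()
    data = d.decompress(payload, limit)  # max_length caps the output size
    if d.unconsumed_tail:                # compressed input left over: too big
        raise ValueError("decompressed size exceeds limit")
    return data

# A "bomb": a few KB compressed, 10 MB once expanded.
bomb = zlib.compress(b"\x00" * 10_000_000)

print(safe_decompress(zlib.compress(b"hello"), 1024))  # b'hello'
try:
    safe_decompress(bomb, 1_000_000)
except ValueError:
    print("rejected oversized payload")
```

naive code calls `zlib.decompress(payload)` with no cap at all, which is exactly the corner that gets cut and rediscovered as a DoS later.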
and while this might look like a counterargument, it actually speaks for strict code reuse even for simple topics, because now these bugs have been fixed! And that is a very common arc for frameworks/libraries: they start out with bugs, sadly often the same repeated common bugs known from other frameworks, and then over time things get ironed out.
But with AI there is an issue: a lot of the data it's trained on is code _which gets many of these "typical" issues wrong_.
And it's non-deterministic, and good at "hiding" bugs, especially the kind of bugs which are prone to pass human review anyway.
So you _really_ want to maximize the use of frameworks and libraries when using AI, as that addresses a large part of the AI reliability issues.
But what does change is that there is much less reason to give frameworks/libraries "neat compact APIs" (a thing people spend A LOT of time on, and which is prone to be a source of issues, as people insist on making things "look simpler" than they are, and in turn accidentally make them not just simpler but outright wrong, or prevent use cases you might need).
Now, depending on your definition of framework, you could argue that AI removes the boilerplate issues in ways which allow effectively replacing all frameworks with libraries.
But you still need to review code, especially AI-generated code. To some degree, the old saying that code is far more read than written is even more true with AI (as most of it isn't "written" by a human anymore). Now, you could just not review AI code, but that can easily count as gross negligence, and in some jurisdictions it's not (fully) possible to opt out of damages from gross negligence, no matter what you put in the TOS or other contracts. I.e., I can't recommend such negligent actions.
So IMHO there is still a use for some kinds of frameworks, even if what you want from them will likely start to differ and many of them can be partially or fully "librarified".
> Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
LLM generated code is the ultimate abstraction. A mess of code with no trusted origin that nobody has ever understood. It's worse than even the worst maintained libraries and frameworks in every way.
If you have no idea how to set up the pillars, you're absolutely right; maybe you should try
honestly, this blog post was pretty off base. Current AIs have a limited ability to keep up with complexity, and using known frameworks helps with managing that complexity. If you write everything from scratch every time, you have to go through the process of scaffolding and harnessing the whole system from scratch. I don't think it's worth rewriting react from scratch every time you make a browser application; even in the best case it's just a huge waste of tokens.
Mindblowing observations.
I never did see any value in monsters like React. I always used plain JavaScript, wrote web components, and used some narrow-scope 3rd party libraries. Works like a charm for me. Now, instead of writing whole web components on my own, I write skeletons with some comments and ask an IDE with AI services (I use IDEs from JetBrains) to complete them. I then do the same with the main application. So far the results are stellar. I do something similar with my backend applications (mostly C++), but there much more work from my side is involved, as the requirements are way stricter, performance being a major one.
LinkedIn article?
What frameworks and what have you accomplished with it?
Nah. Nothing has changed. Offloading the work to an agent to make it a productivity gain is exactly the same as using a framework: it's a black-box portion of your system, written by someone else, that you don't understand.
Unless you are quite literally spending almost the same amount of time you'd spend yourself to deeply understand each component (at which point, you could write it yourself anyway), nothing has changed when it comes to the dynamics of actually authoring systems.
There are exceptions, but generally speaking untempered enthusiasm for agents correlates pretty well with lack of understanding about what engineering software actually entails (it's about relational and conceptual comprehension, communication, developing shared knowledge, and modeling, not about writing code or using particular frameworks!)
EDIT: And to be clear, the danger of "agentizing" software engineering is precisely that it promotes a tendency to obscure information about the system, turn engineers into personal self-llm silos, and generally discard all the second-order concerns that make for good systems, resilience, modifiability, intelligibility, performance.
AI rolled cryptographic libraries now make it feasible to just roll your own crypto.
I feel like commenting on the article without reading was really vindicated with the advent of AI slop.
Every day I feel closer to leaving this industry when I see articles like this.
Is software even a real industry, with patterns, safety, design, performance, review, etc.?
Or are we just a hype-generating machine that's happy to ship the most broken stuff possible, the fastest?
Why do we have to constantly relearn the same lessons?
I suggest reading the full article :)
This is about green field development which is relatively rare. Much of the time the starting point is a bunch of code using React or maybe just a lump of PHP. Business logic ends up plunked down all over the place and LLMs tend to make a huge mess with all this unless kept on a tight leash.
I'm glad this guy is doing well, but I'm dreading the amount of work being created for people who can reverse engineer the mountains of hallucinated bullshit that he and others are now actively producing.
And if the frameworks aren't useful then maybe work up the chain and ditch compilers next?
I feel the same way, but I’m not a traditional software engineer. Just an old-school Webmaster who’s been trying to keep up with things, but I’ve had to hire developers all along.
I’m an ideas guy, and in the past month or so my eyes have also fully opened to what’s coming.
But there’s a big caveat. While the actual grunt work and development is going away, there’s no telling when the software engineering part is going to go away as well. Even the ideas guy part. What happens when a simple prompt from someone who doesn’t even know what they’re doing results in an app that you couldn’t have done as well with whatever software engineering skills you have?
Next up "coding agents replaced me"
now we get to watch an entire generation of clowns who struggled to create anything at all learn the need for self-discipline in the face of newly accessible NIH traps
ai slop shit
Interesting analysis