Before LLMs, a friend of mine lamented that all the juniors at his gig were really fast at producing buggy code. The greater lament was that his bosses loved it. And as a dev, you're getting paid to do what your bosses want.
LLMs can really help you get what your bosses want a lot faster.
As an older dev myself, I'd already been bitching about the state of software quality before all of this. Companies just didn't give a shit. Sure, people within them did, but as a whole companies will do the bare minimum to not lose your business (because that's what's best for the bottom line). Can't really fault them for their nature.[1]
And then I step back and look at something like Linux or GNU. Perfect and bug-free? Certainly not. But they're damn fine pieces of software. Many open-source projects have historically been damn fine pieces of software. Because they don't care if they lose your "business". They just want to build something cool that they can be proud of.[2]
It's why so many of us agonize over the details of the things we produce and give away for free. It might not even net us another user, but we have pride in our craft and want to do the best we possibly can.
But that way of thinking is a money loser, at least in the short-term. And companies live in the short-term.
So what's going to stop software from just collapsing into a massive pile of crap?
I don't know. Maybe it just has to get so bad that people start going to the marginally-better competition. Isn't exactly a great consolation to me, that.
[1] Small companies are often idealistic and try to do the Right Thing, admittedly. But the big ones that lead their markets tend not to.
[2] Insert the entire GNU philosophy here because I just glossed over it completely and I don't want to get called out on it. :)
> The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be.
I feel like this is a very profound insight.
Of course, processes like this can become about the immediate utility. Reviewing then becomes checking work so it can be merged and used.
But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.
It’s almost as though some people fall over themselves trying to achieve maximum possible speed without giving any thought to where they want to be heading.
> AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.
And it's not just AI, the removal of friction seems to be pursued mindlessly in all areas, with not even an attempt to understand what value it might be providing.
It shows that previously he likely worked only at companies which catered to him, honestly.
That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.
Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling as devs understand it, but multiplying headcount - because surely throwing n times as many devs at the same software will multiply the velocity by the same factor, amiright?
The biggest value you can get is by having a very small team of extremely capable people (with extremely high bus factor) being fully in control of everything they do.
Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.
You may notice that some employers do, however.
You're just unlikely to get a job there, because their team is already established.
I'm on round 3 of arguing with my boss's LLM about a terrible PR he refuses to review manually. I can tell from the PR that it was 100% generated by Claude code because I've seen identical suggestions in PRs from juniors. But this man is my boss. He won't listen.
Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) I would already have quit. I've been applying places but it's a slaughterhouse out there. I got ghosted after a fourth-round interview at a non-tech company over the winter.
Shit sucks.
I'm immensely jealous of the author; I have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell, and the last year of applying for jobs has not been better.
The way AI is being used feels like it is proving that, in many orgs, what has always mattered has been the appearance of work, not results of work. Will we wake up in a few years and find out we’ve fired all the doers and are now overloaded with the fakers?
I find that to be a very defeatist take. It always mattered how much value you provide to the business. Writing pretty code or arguing about some implementation detail never really mattered. If you are good at coming up with solutions to problems AI is just one additional tool in your toolbox and personally it allows me to do much more than before.
There were fakers before, and there will be fakers after.
> Writing pretty code or arguing about some implementation detail never really mattered.
True, in the same sense that sharpening your tools as a tradesman doesn't matter to your customers: what matters is that the job you deliver is good.
Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.
Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.
Furthermore, when done right, these discussions leave a trail for understanding bugs and for code archeology: when, in the future, you're trying to understand how decisions were made and which tradeoffs were considered, that trail can massively help refactors, rewrites, and decisions to drop certain parts of the code base.
Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending with a better, cleaner, more maintainable and understandable product if you do practice those steps than if you skip them altogether.
Are you willing to wake up at 3 AM when that "valuable" AI-written code pages on-call?
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
I think the last ten-plus years have taught us that massive security breaches are more of an insurance-claim problem and some $4/mo credit-monitoring payouts.
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from junior-level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Excuse me? Amazon lost more money in one day than most companies have in revenue, from dropped orders. I would say that matters. Believe it or not, the systems we work on do things that matter in the real world.
Seems to be an instance of the prevention paradox: security (in general) is taken seriously enough that major incidents are rare enough that people think security does not matter that much.
There has been a shift to software mass production over the last decade or two. AI is now speeding this process up dramatically. Most software will be produced with AI and "cog coders", similar to a production line in manufacturing.
Some few (good ones) will find niches and "hand craft" software, similar to today, when you can still buy hand-forged axes etc. Obviously the market for these products will be much smaller, but it will exist.
If you love programming, you should try to get into the second category. Be a master craftsman.
Actually I think we will see a faker takeover and then a doer reconquest. All those leaving now take the recipe with them and are capable of cooking it elsewhere - elsewhere being a place without AI management.
Imagine that you're given a business problem to solve. You represent the process of writing the code with a graph - each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directed edges, and each edge has a cost: if you are at commit A and you want to go to commit B, you have to pay the cost from A to B. Your goal is to find a relatively short path from the empty commit to any vertex containing code that has some specific observable business properties.
You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate "speed" with each person. The higher the speed, the lower the paid costs when traversing the graph. I'll leave the specifics vaguely undefined.
Since part of the task is to discover information about the graph, we also need to specify that every person has some kind of heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions closer to the ground truth, while stupid people are biased more towards random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.
This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.
Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.
This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".
What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.
From business perspective though, by having more idiots write more slop with more AI we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.
And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
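For fun, the model above can be sketched in code. This is purely a toy of my own making, not anything from the comment: a finite grid of "commits" stands in for the infinite graph, `skill` blends the true remaining cost with noise in the heuristic, and `speed` divides every cost paid. It's an A*-style search, so "paid" here means effort spent examining edges, not the quality of the final path.

```python
import heapq
import random

def dijkstra(graph, src):
    """True shortest-path costs from src: the 'ground truth' heuristic."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def develop(graph, start, goal, speed, skill, rng):
    """A*-style search with a noisy heuristic (assumes an undirected graph).

    skill=1.0 -> heuristic equals the true remaining cost (the expert);
    skill=0.0 -> pure noise (the scenic route). Every edge examined
    costs c / speed, so raising speed cheapens even a terrible route.
    Returns total cost paid to reach the goal.
    """
    truth = dijkstra(graph, goal)
    hmax = max(truth.values())
    h = lambda v: skill * truth[v] + (1 - skill) * rng.uniform(0, hmax)
    paid, g, closed = 0.0, {start: 0.0}, set()
    pq = [(h(start), 0.0, start)]
    while pq:
        _, gd, u = heapq.heappop(pq)
        if u == goal:
            return paid
        if u in closed:
            continue
        closed.add(u)
        for v, c in graph[u]:
            paid += c / speed  # effort spent considering this edge
            if gd + c < g.get(v, float("inf")):
                g[v] = gd + c
                heapq.heappush(pq, (g[v] + h(v), g[v], v))
    return paid

def grid_graph(n, rng):
    """An n x n grid of 'commits' with random edge costs in [1, 10]."""
    g = {(i, j): [] for i in range(n) for j in range(n)}
    for (i, j) in list(g):
        for b in ((i + 1, j), (i, j + 1)):
            if b in g:
                c = rng.uniform(1, 10)
                g[(i, j)].append((b, c))
                g[b].append(((i, j), c))
    return g

graph = grid_graph(5, random.Random(0))
start, goal = (0, 0), (4, 4)
expert = sum(develop(graph, start, goal, 1.0, 1.0, random.Random(s))
             for s in range(20)) / 20
idiot = sum(develop(graph, start, goal, 1.0, 0.0, random.Random(s))
            for s in range(20)) / 20
```

In runs of this sketch, the noisy searcher pays noticeably more at equal speed, and dividing by a big enough speed erases the gap - which is the comment's whole point about AI supercharging idiots more than specialists.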
Your graph model lacks the aspect of increasing complexity. As you traverse the graph, every available node gets increasingly distant - in some areas of the graph less so than others. A good heuristic function not only identifies a single shortest path, but also dense areas of possible value in the graph.
The question is whether blind speed scales faster than distances grow.
That's true, and I guess the reason why we're building so many datacenters is to answer the question how far exactly will blind speed take us, assuming that we fail to make substantial improvements to AI architecture.
I work at MSFT and I feel burnt out too and am in a similar situation where I feel like resigning would be better for my mental health but AI isn’t a big contributing factor. I do have some arguments against speculative uses of AI though.
Experimenting with speculative uses is fine - technological breakthroughs require a lot of iterations and some would naturally never make it - but with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don’t see that happening now; I could be wrong.
1. To me, speculative uses of AI like meeting-notes summarisers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. Add to this, when someone casually pastes the meeting notes from an AI summary and asks the meeting organiser to “pls check for correctness”, my blood just boils. Are we spending billions of dollars of capex for this?
2. Every team builds their own “agent” for diagnosing incidents, which is announced with huge fanfare, but people rarely end up using it irl.
3. Devs and PMs chasing “volume” of work. You prompt GPT about an issue and it is bound to give you pages of text that you can use to show how much output you can churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all this was accepted because “Hey, I used AI for this so it must be good”.
There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and the bottom line, and so far it doesn’t add up.
A lot of these uses are driven by fear - by repeated exhortations from upper management about shoving AI into every nook and cranny when they are just as clueless as we are. People’s mortgages, their children’s education and their retirement - in short, their whole livelihoods - are at stake, even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if at all.
I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.
A lot of this is about knowledge debt if I’m reading it correctly (people not knowing things that they should know, or knowing the wrong things). In my last few jobs, I’ve maintained an Anki deck about facts relevant to my job (who certain people are, how certain systems work, details of the programming languages we use, etc.)
I’ve started kind of a funny rule, which is that when I make a change now, I can use Claude or not. But if I use Claude, some cards have to go into the deck. Both about how the implementation works, and also about anything within the implementation I didn’t know about. It does force you to double-check things before committing them to memory.
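Anki can import plain tab-separated text files, so the "cards as a Claude tax" rule above can be mechanized with nothing fancier than appending to a TSV. A minimal sketch - the filename, helper name, and the sample card are all invented for illustration, not the commenter's actual setup:

```python
import csv
from pathlib import Path

# Hypothetical deck file; Anki imports tab-separated text via File > Import.
DECK_FILE = Path("claude_tax.tsv")

def add_cards(cards, deck_file=DECK_FILE):
    """Append (front, back) pairs in the tab-separated format Anki reads.
    Returns the number of cards written."""
    with deck_file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for front, back in cards:
            writer.writerow([front, back])
    return len(cards)

# Invented example card - the shape is the point: one fact you
# double-checked before letting the AI-assisted change through.
written = add_cards([
    ("What does our /auth/refresh endpoint return on an expired token?",
     "A 401 with a WWW-Authenticate header, not a redirect."),
])
```

The nice side effect the commenter mentions falls out of the workflow: you can't write the back of a card honestly without verifying the fact first.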
I think a lot of people relate with this but kind of sit with this silently for reasons the author mentioned:
“Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?”
Hey OP, I quit my job and said "screw it" at the start of the Year for very similar reasons.
I had a "good" job, it was extremely stable and in the public sector, the work hypothetically mattered... I was miserable because it didn't matter. If I had died in my study, the system would have happily churned on accomplishing nothing without me. There were so many, many obstacles to accomplishing anything too; like, I'm all about "perfect shouldn't be the enemy of good" - but hypothetically we should do something. I went on vacation in November and when I got back, the latest ServiceNow update had nuked a bunch of the changes I had worked for months trying to get done.
I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.
Don't work for the man. Your job cannot love you back, in fact, it actively hates you.
I'm going through something similar. All the symptoms described in the article are present in the company where I work. But I don't blame AI. AI is just a tool. I blame the company culture, because it's the source of those problems.
Can definitely relate. It is no more complicated than I really enjoyed designing and writing code by hand, and get very little joy out of agentic processes. I use the tools and see the velocity increase, but it has just become… bland work. I completely get others’ excitement around the tools and the newfound “super powers”, but it hasn’t much resonated with me.
That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.
This report lists failures of some AI systems. They look consequential, but the company does not seem to care. This is very strange - how can it be? I really like AI products; they help me all the time, but I know I need to take into account their failure modes and be careful. Lots of organisations don't seem to do that calculation, though. Will competition root them out? I don't know. I am enthusiastic about AI, but ever since the LangChain situation I can see that what gets adopted often has a lot of flaws. The more careful developers, who notice the flaws and try to find true workarounds, lose out because it takes time to do the design well. It's not a new thing - there were Betamax mourners for decades - but it seems the hype machine is now more and more powerful.
What I meant was how LangChain dominated the LLM frameworks scene because it was loaded with VC money. It was just at the beginning - now it has normalised - but I believe it did a lot of damage at that early stage by sucking up all the oxygen.
I want to focus on the "colleagues submit thousands of AI generated lines of code for review" comment.
Humanity developed code and programming languages for people. They are supposed to provide sufficient expressiveness, so that we people can understand what is happening, and zero ambiguity, so that the machine can perform its instructions.
But computer code has been a way to communicate among us people on our intentions (what we intend the machine to do). Otherwise, we would still be writing in assembler.
But now, computers are generating code - A LOT of code. So much that it's becoming more and more difficult to stay on top of it with our verbose languages.
We will need to develop a better way for computers to a) produce the instructions to perform the tasks we give them, and b) produce reports or some other accessible way for us people to understand and share what those instructions are doing.
"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally, but ymmv) is indicative more of the culture of the company than of the tool itself. From my experience it seems like the most frontier companies have the best AI-use culture.
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether a feature is possible is first explored by an agent but then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
Odoo suffers from others issues though.
Not sure if this is still the case, but the mix of inline Python 2 Flask + XML was basically tech debt-as-a-service.
Also the very ugly death they gave OpenERP/Odoo on-premise.
Obviously the author's experience is a nightmare but what was this place like pre-AI? I have a hard time believing people who are this willing to hand over all of their thinking to LLMs were doing anything productive beforehand.
I think you must be right to _some_ degree. The article illustrates that this org doesn’t know why they are doing certain things.
But there’s something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.
I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video that you can watch later? Amazing. I can catch up later and ping people async if I need more info; the videos are discoverable/shareable, and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. These generate generic overviews and tend to miss small but key details.
I'd rather (and often do) take notes manually than turn on the notetaker.
- 0:00 - Introductions
- 3:30 - Joe gives a summary of the problem and shows diagrams
- 7:52 - Kim asks clarification questions and introduces relevant infrastructural concepts
- 10:25 - People waffling about unrelated stuff
...
- Put the video and the transcript on the same GUI, where I can shuffle through the timeline, choose chapters, or click the transcript to be taken to the relevant part of the video.
- Bonus points if it highlights the relevant part of the summary as the video is playing.
I see this as a temporary phase driven by AI hype.
In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever.
Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.
AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.
So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.
While I certainly relate to some of your points, and I'm not an AI maximalist by any means, a few thoughts:
> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.
> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
Why would that happen in the Slack channel? Presumably you'd be googling it or reading documentation to do this, not posting in the channel.
> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.
> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1-on-1s with your manager - not as "I want to discourage use of AI" but as "copying and pasting AI responses shows a lack of respect for others' time" and "a lack of due diligence". Show a horror story of an AI deleting someone's PROD database, etc. It's a useful but imperfect tool, not a replacement for thought.
I'm asking myself the same question for a different reason: nobody will even interview me. I've been out of work for a while. Savings are running out. I apparently don't even know how to look for a job anymore.
Yeah. Got word I was being laid off in November. Officially because of restructuring, but after having had some conversations it's clear I've been replaced by a junior with a Claude subscription.
20 years coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software Engineering again because it's my preference) -- and now I can't even get an interview with a human.
Due to unfortunate life events my savings are now all but gone, and I don't even know if I will be able to keep a roof over our heads. It's messed up.
If anyone is hiring, send me a message. I'm an EU citizen but have residency in and work out of Mexico.
My response is probably controversial. But I genuinely think it’s generally helpful advice. Ofc I don’t have any other information than the comment about this person.
I have no advice to offer, I only wish you good luck. I am still lucky enough to be employed, but when this whole parade ends, I have no idea what comes next - my only skill is programming and related knowledge work. I think the only path forward is to try to jump ship to another white or blue collar industry…
I thought along those lines as well. The only thing I could come up with that would be semi-viable was medical school, and I'm not sure I'd survive residency. I definitely would never be able to pay back the debt, if I had to take any.
The era of anyone interested in programming for fun being able to make upper 10% incomes is drawing to a close. You'll unfortunately have to join the rest of us who work for money and program for fun. I suggest engineering (the real kind, not software 'engineering')
Unfortunately, I have a visual-spatial processing disability. You don't want me near anything mechanical, and I can't do visualization-based tasks because I literally can't visualize. That eliminates most engineering jobs.
There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.
Where do you live, what are your skills, and what is your citizenship status?
If you are gunning for a remote job, that's not happening anymore except for the top 5% of candidates.
If you are gunning for a job outside of a Tier 1 tech hub like the Bay, NYC, London, TLV, Beijing, Shanghai, Hangzhou, Singapore, BLR, HYD, etc you will have a hard time.
If you are not up-to-date with modern stacks and the capacities as well as limitations of AI/ML enhanced workflows, you will have a hard time.
Edit: can't reply
> Paul-Craft
Based on your profile below, I am surprised you aren't finding anything in the Bay. It's a hot market right now. Maybe get your resume reviewed?
> Most of the job openings for humans are remote and not in big tech
Absolutely agree about the "not in big tech" part, but remote being the majority of tech hiring is absolutely false in 2026.
> My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness
Go back to using a human written resume. An LLM generated resume is obvious and a negative signal (you could be a bot)
I'm tailoring my resume to individual postings a good portion of the time. My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness. There are no hallucinations in it, and I feel it accurately represents my experience.
It happens to many; it's happened to me three times so far. The mods rate limit (only X comments per Y time period) people who have been flagged, judged, and found to be a bit prone to getting into rapid back-and-forth exchanges that have crossed guidelines.
It can generally be reversed on request via hn email, sometimes it's a blessing, sometimes it's not even something that impacts a user very often unless they find themselves in an interesting exchange.
Most of the job openings for humans are remote and not in big tech, but the pay in absolute terms is significantly lower (same wage percentile for the area you live though).
It's important to understand the world beyond your bubble. If those jobs seem unrealistic as an option, you may need to consider if your cost of living is unrealistic.
I'm fine with "not big tech," along with a "not big tech" salary. In fact, I prefer "not big tech." My cost of living is not absurd for the Bay Area. I'd even be willing to take a little less than what I made before. After all, less than before is still better than 0. I'm using AI to tailor my resume to every posting, and still not getting calls.
You’ve got nine years of experience, so work your network and get referrals. It’s very hard to get mid-career jobs through the front door; most people want someone they trust to vouch for you.
Yeah I was implying you might need to move to optimize for cost of living, but I don't know your situation and am not really asking. It's actually surprising sometimes to hear how long this took to affect some tech workers. You're lucky it's now that housing prices have stabilized (everyone else has stopped moving), and not a few years ago.
Remote work doesn't necessarily mean you aren't still tethered to some radius. Otherwise I'd be living in Monaco or something haha.
The worst part so far has been that some people have Claude write tickets and they don’t check what the very detailed piece-of-crap ticket says. Just tell me the few pieces of true knowledge you have rather than a full page of AI slop with multiple errors in it that causes me to waste hours trying to figure out what’s true.
No comment on the ethics; however, I think when people's instincts to survive kick in, many of these larger goals get sidelined.
There's a growing belief that it's now or never as far as accumulating wealth, securing a house, etc. go because people think once AGI comes their chances of having the lives they want will diminish. The bay area has only gotten more expensive to live in, and that's where all of the AI folks are, so no surprise.
I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.
Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.
I genuinely enjoy software development, but if I could provide for my family, I’d also enjoy selling croissants at a local bakery or filling up shelves at the supermarket.
I don't think the now or never thinking is healthy, but I certainly understand the motivation. I myself have never really fit into a career path climbing the corporate ladder, and entrepreneurship is a skill that takes time to develop. When you're oscillating between stability and bleeding money, it's natural to want to go all in on an opportunity when it presents itself.
You can just... not live in California. Most other places are doing just fine and experiencing the usual moderate economic instability that happens every decade or two along with the rest of the world.
If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.
Their efforts are better rewarded moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers while others stay in it while decoupling their passions from their paycheck.
I actually considered that, myself. The thing is, California is where the jobs are for me. If I move out of California, I may never be able to come back. That could cost me a lot.
Who cares about California? If you don't have family there, just head to Europe as fast as you can, one-way ticket, and don't ever pay the IRS to come back.
I feel like all this hype around generated code overlooks a distinct opportunity for an enterprising focus on excellent, clean, maintainable, curated code - baked by humans, for other humans.
We also haven't really seen how large volumes of generated source code will stand up over time (like, decades) in terms of maintainability. My prediction is you'll encounter a lot more disposable software. That's fine for making general code more of a commodity (cheap and accessible), but where you get commodities you eventually find demand for more premium flavors of product. Those tend to derive from taste and opinion (attributes which, for example, were major success factors of the iPhone at the peak of its design).
The act of software development formalizes paradigms, surfaces unknowns, and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. 3 rewrites - until you land at the kind of high-caliber product that stems from really understanding the problem space, having worked in it extensively enough to have a good mental model, having uncovered the edge cases, and having hammered out an optimal solution.
While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.
What would that look like? In my experience, real production codebases tend to have lots of bugs. Most of them never get prioritized, because features matter more than fixing obscure bugs.
Indeed - one of my biggest pet peeves is when organizations chronically avoid budgeting the time and resources to deal with their technical debt. Or when they lack leadership that is confident and bold enough to make the hard decisions to do so (which requires experience and reputation), or suffer a culture that doesn't tolerate some degree of risk-taking, with contingencies (particularly in schedule and blast radius containment) to safely deal with occasional failure on the road to improvement.
I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.
I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!
Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.
I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.
Another problem the author may be facing is that if they decide to get back into the tech market and get a new job, it may be difficult, with tech still moving forward - not in a meaningful way, as computers still compute as before, but enough that lack of experience with a new tool or framework will make them unattractive compared to other candidates.
Otherwise, if they decide to go into another field, they will be starting from scratch: it will pay only a small fraction, and whatever lifestyle they were used to will have to change.
I really don't see this getting better from the sound of it, at least from all the headlines at the moment. The spending by these big tech companies is alarming, and not only because it's concentrated in a single category by a large percentage. We still don't have a clear picture of the tech landscape yet; yes, there is some great tech innovation taking place in the US.
Being cut off from China, a market that is also advancing in the same sectors as the US, and not allowing that competition to enter the West is a recipe for disaster in the future. The current government is not focused on growth, despite what's being said publicly. Where this will take the US is a place where stagnation is okay, and to make up for it there is a surge of investment in the current AI craze. Feedback is required in order to grow, and that goes for companies too, not just the junior-varsity wrestler at your local high school. Taking an abundance of data and using a summarization tool so it can auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: it's a search bar that, as you type, shows what the database has to offer, with either a weighted or an indexed response depending on settings, along with images and information related to the search query. All that needed to happen was for something to compute this mess of abundant data and project a response from it, not just a search result. Marvelous, you might say, but it has been around for a while now. The idea was there; it just needed an actor to execute it. The firings alone tell you the health and implications of these actions. There were promises behind these investments that this war is interrupting, or severing the deals even post-conflict.
The DotCom bubble was a push on society to use the web and digitize parts of our lives, and the few companies that survived the DotCom era are what's driving the push to the next era of tech. It seems the AI idea was born without a guardian or an owner, leaving the courage to act on it open to any takers. The overwhelming spillover of data had to go somewhere. Searching for useless data ("how fast does a 2001 Porsche 911 go?") had become tiresome.
The education system has already fallen apart in the US, and this only makes things worse. Where is education heading with all of this adoption of AI around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this at the moment, by any means. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.
Automation seems like a very surface-level reading of this article.
Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias in the most vehement AI advocates: the one time a tool worked really well for them makes it worth the dozens of times it blows up in their face and makes the mess someone else's problem. The gain is romanticized and the losses are set aside, without checking the balance or how badly the losses wear on morale.
I’m not part of the owner class, so what a tech job is, and always will be, is a paycheck. Why should I be excited about automating myself into homelessness?
This happened once with open sores; now this behavior has been turned up to 11. People take dependencies they don't even understand - full of incorrect code and vulns, intentional or not - delegate everything, and take no responsibility.
Probably. I hate the AI boom too, and see no need to get all political, or even outright blame the politicians. What'd you expect, politicians with a master's degree in every field there is? Not gonna happen.
If we're putting the blame on anything, it's on us hacker types for going where the money flows and not fighting the corporate overlords tooth and nail.
Of course processes like this can become about the immediate utility. Reviewing then becomes checking work so it can be merged and used.
But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.
There's a Dutch idiom: no shine without friction.
AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.
It’s almost as though some people fall over themselves trying to achieve maximum possible speed without giving any thought to where they want to be heading.
> AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.
And it's not just AI, the removal of friction seems to be pursued mindlessly in all areas, with not even an attempt to understand what value it might be providing.
It shows that previously he likely worked only at companies which catered to him, honestly.
That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.
Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling as devs understand it, but multiplying headcount - because surely throwing n times as many devs at the same software will multiply velocity by the same factor, amiright?
The biggest value you can get is by having a very small team of extremely capable people (with extremely high bus factor) being fully in control of everything they do.
Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.
You may notice that some employers do, however.
You're just unlikely to get a job there, because their team is already established.
It's an extension of the principle that the purpose of writing code is to write it for your successor.
I'm on round 3 of arguing with my boss's LLM about a terrible PR he refuses to review manually. I can tell from the PR that it was 100% generated by Claude code because I've seen identical suggestions in PRs from juniors. But this man is my boss. He won't listen.
Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) I would already have quit. I've been applying places, but it's a slaughterhouse out there. I got ghosted after a fourth-round interview at a non-tech company over the winter.
Shit sucks.
I'm immensely jealous of the author; I have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell, and the last year of applying for jobs has not been better.
The way AI is being used feels like it is proving that, in many orgs, what has always mattered has been the appearance of work, not results of work. Will we wake up in a few years and find out we’ve fired all the doers and are now overloaded with the fakers?
I find that to be a very defeatist take. It always mattered how much value you provide to the business. Writing pretty code or arguing about some implementation detail never really mattered. If you are good at coming up with solutions to problems AI is just one additional tool in your toolbox and personally it allows me to do much more than before.
There were fakers before, and there will be fakers after.
> Writing pretty code or arguing about some implementation detail never really mattered.
True, in the same sense that sharpening your tools if you're a tradesman doesn't matter to your customers: what matters is that the job you deliver is good.
Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.
Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.
Furthermore, when done right, these discussions keep a trace for understanding bugs and for code archeology when in the future you're trying to understand how decisions were made and the tradeoffs considered, which could massively help refactors, rewrites and decisions to drop certain parts of the code base.
Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending with a better, cleaner, more maintainable and understandable product if you do practice those steps than if you skip them altogether.
Are you willing to wake up at 3 AM when that "valuable" AI-written code pages on-call?
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
I think the last ten-plus years have taught us that massive security breaches are more of an insurance-claim problem, settled with some $4/mo credit-monitoring payouts.
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from jr level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Excuse me? Amazon lost more money in one day than most companies have in revenue, from dropped orders. I would say that matters. Believe it or not, the systems we work on do things that matter in the real world.
Seems to be an instance of the prevention paradox: security (in general) is taken seriously enough that major incidents are rare enough that people think security does not matter that much.
I would too. I’m saying businesses don’t seem to. At least not like we assume.
People pushed unread and buggy code to production long before AI.
There has been a shift toward software mass production over the last decade or two, and AI is now speeding that process up dramatically. Most software will be produced with AI and "cog coders", similar to a production line in manufacturing.
Some few (good ones) will find niches and "hand craft" software, similar to today when you still can buy hand forged axes etc. Obviously the market for these products will be much smaller but it will exist.
If you love programming, you should try to get into the second category. Be a master craftsman.
Actually I think we will see a faker takeover and then a doer reconquest. All those leaving now take the recipe with them and are capable of cooking it elsewhere. Elsewhere being a place without AI management.
It feels like it but this is not true.
Imagine that you're given a business problem to solve. You represent the process of writing the code as a graph - each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directed edges, and each edge has a cost: if you are at commit A and you want to go to commit B, you pay the cost from A to B. Your goal is to find a relatively short path from the empty commit to any vertex that contains code with some specific observable business properties.
You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate "speed" with each person. The higher the speed, the lower the paid costs when traversing the graph. I'll leave the specifics vaguely undefined.
Since part of the task is to discover information about the graph, we also need to specify that every person has some kind of heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions that are closer to ground truth, while stupid people are biased more toward random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.
This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.
Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.
This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".
What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.
From a business perspective though, by having more idiots write more slop with more AI, we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.
And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
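The model above is essentially heuristic graph search. As a toy sketch (everything here - the grid "commit graph", the `best_first` function, and the `noise` and `speed` parameters - is my own illustration, not something from the comment), a greedy best-first search where `noise` degrades the heuristic toward randomness and `speed` discounts the cost paid per explored edge behaves exactly as described: a perfect heuristic pays for few, expensive steps, while a noisy one explores far more of the graph but, at high enough speed, still reaches the goal cheaply.

```python
import heapq
import random

def best_first(neighbors, start, goal, true_dist, noise, speed, rng):
    """Greedy best-first search over a toy 'commit graph'.

    noise in [0, 1]: 0 means the heuristic equals ground truth (the
    specialist), 1 means pure random guessing (the 'idiot').
    speed divides the cost paid per explored edge (AI raises speed).
    Returns the total cost paid to reach goal, or None if unreachable.
    """
    seen = {start}
    frontier = [(0.0, start)]
    paid = 0.0
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return paid
        for nxt in neighbors(node):
            if nxt in seen:
                continue
            seen.add(nxt)
            paid += 1.0 / speed  # cost of exploring this edge
            # Blend of ground-truth distance and noise models skill.
            h = (1 - noise) * true_dist(nxt, goal) + noise * rng.random() * 100
            heapq.heappush(frontier, (h, nxt))
    return None

# Toy graph: a 20x20 grid of commits; the goal is the far corner.
W = 20
GOAL = (W, W)

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= W and 0 <= y + dy <= W]

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

rng = random.Random(0)
specialist = best_first(neighbors, (0, 0), GOAL, manhattan,
                        noise=0.0, speed=1.0, rng=rng)
ai_junior = best_first(neighbors, (0, 0), GOAL, manhattan,
                       noise=1.0, speed=20.0, rng=rng)
```

On this grid the specialist must pay for at least the 40 edges of a shortest path at full price, while the noisy searcher can discover at most 440 commits at a twentieth of the price each, so the scenic route still comes out cheaper in paid cost - the "theorem of interns" in miniature.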
Your graph model lacks the aspect of increasing complexity. As you traverse the graph, every available node gets increasingly distant - in some areas of the graph less so than others. A good heuristic function not only identifies a single shortest path but also dense areas of possible value in the graph.
The question is whether blind speed scales quicker than distances grow.
That's true, and I guess the reason why we're building so many datacenters is to answer the question how far exactly will blind speed take us, assuming that we fail to make substantial improvements to AI architecture.
Inshallah.
I work at MSFT and I feel burnt out too and am in a similar situation where I feel like resigning would be better for my mental health but AI isn’t a big contributing factor. I do have some arguments against speculative uses of AI though.
Experimenting with speculative uses is fine; technological breakthroughs require a lot of iterations, and some would naturally never make it. But with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don’t see that happening now; I could be wrong.
1. To me, speculative uses of AI like meeting-notes summarisers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. On top of this, when someone just casually pastes the meeting notes from an AI summary and asks the meeting organiser to “pls check for correctness”, my blood just boils. Are we spending billions of dollars of capex for this?
2. Every team builds their own “agent” for diagnosing incidents which is announced to huge fanfare but people rarely end up using it irl.
3. Devs and PMs chasing “volume” of work. You prompt GPT about an issue and it is bound to give you pages of text that you can use to show how much output you can churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all of this was accepted because “Hey, I used AI for this and it must be good”.
There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and the bottom line, and so far it doesn’t add up.
A lot of these uses are driven by fear - by repeated exhortations from upper management to shove AI into every nook and cranny when they are just as clueless as us. People’s mortgages, their children’s education and their retirement - in short, their whole livelihoods - are at stake, even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if at all.
I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.
CEO see performative work happening as the cut is still not deep enough.
Can you add in the missing words that make this comment make sense, please?
A lot of this is about knowledge debt if I’m reading it correctly (people not knowing things that they should know, or knowing the wrong things). In my last few jobs, I’ve maintained an Anki deck about facts relevant to my job (who certain people are, how certain systems work, details of the programming languages we use, etc.)
I’ve started kind of a funny rule, which is that when I make a change now, I can use Claude or not. But if I use Claude, some cards have to go into the deck. Both about how the implementation works, and also about anything within the implementation I didn’t know about. It does force you to double-check things before committing them to memory.
I think a lot of people relate with this but kind of sit with this silently for reasons the author mentioned:
“ Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care? “
Hey OP, I quit my job and said "screw it" at the start of the Year for very similar reasons.
I had a "good" job, it was extremely stable and in the public sector, and the work hypothetically mattered... I was miserable because it didn't matter. If I had died in my study, the system would have happily churned on accomplishing nothing without me. There were so many obstacles to accomplishing anything, too; like, I'm all about "perfect shouldn't be the enemy of good", but hypothetically we should do something. I went on vacation in November, and when I got back the latest ServiceNow update had nuked a bunch of the changes I had worked for months trying to get done.
I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.
Don't work for the man. Your job cannot love you back, in fact, it actively hates you.
I'm going through something similar. All the symptoms described in the article are present in the company where I work. But I don't blame AI. AI is just a tool. I blame the company culture, because it's the source of those problems.
Can definitely relate. It is no more complicated than I really enjoyed designing and writing code by hand, and get very little joy out of agentic processes. I use the tools and see the velocity increase, but it has just become… bland work. I completely get others’ excitement around the tools and the newfound “super powers”, but it hasn’t much resonated with me.
That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.
This report lists failures of some AI systems. They look consequential, but the company does not seem to care. This is very strange: how can it be? I really like AI products; they help me all the time, but I know I need to take their failure modes into account and be careful. Lots of organisations don't seem to do that calculation, though. Will competition root them out? I don't know. I am so enthusiastic about AI, but ever since the LangChain situation I can see that what gets adopted is always something that has a lot of flaws. The more careful developers who notice the flaws and try to find true workarounds fail, because it takes time to do the design well. It is not a new thing - there were Betamax mourners for decades - but it seems that the hype machine is now more and more powerful.
Which "LangChain situation" are you talking about? Anything specific, or just everything that's happened in the past year or so?
What I meant was how LangChain dominated the LLM frameworks scene because it was loaded with VC money. It was just at the beginning - now things have normalised - but I believe it did a lot of damage at that early stage by sucking up all the oxygen.
I want to focus on the "colleagues submit thousands of AI generated lines of code for review" comment.
Humanity developed code and programming languages for people. They are supposed to provide sufficient expressiveness so that we people can understand what is happening, and zero ambiguity, so that the machine can perform its instructions.
But computer code has been a way to communicate among us people on our intentions (what we intend the machine to do). Otherwise, we would still be writing in assembler.
But now, computers are generating code, A LOT of code. So much that it's becoming more and more difficult to stay on top of it with our verbose languages.
We will need to develop a better way for computers to a) produce the instructions to perform the tasks we give them, and b) produce reports or some accessible way for us people to understand and share what those instructions are doing.
"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally but ymmv) are indicative more of the culture of the company than the tool itself. From my experience it seems like the most frontier companies have the best AI-use culture.
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether features are possible is first explored by an agent, but then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
Are there any companies that aren't AI-pilled at this point?
Odoo, Belgium, cloud ERP. Not very AI pilled, even if AI is considered and used somehow
Odoo suffers from other issues though. Not sure if this is still the case, but the mix of inline Python 2, Flask, and XML was basically tech-debt-as-a-service.
Also the very ugly death they gave OpenERP/Odoo on-premise.
Obviously the author's experience is a nightmare but what was this place like pre-AI? I have a hard time believing people who are this willing to hand over all of their thinking to LLMs were doing anything productive beforehand.
I think you must be right to _some_ degree. The article illustrates that this org doesn’t know why they are doing certain things.
But there's something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.
Thank you for writing this. I didn't realize it, but I feel a lot more of this than I thought.
Good article.
I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video that you can watch later? Amazing. I can catch up later and reach people async if I need more info; the videos are discoverable/shareable, and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. These generate vague overviews and tend to miss small but key details.
I'd rather (and often do) take notes manually than turn on the notetaker.
I'd like for these tools to:
* Cut the video down into chapters, e.g.
* Put the video and the transcript on the same GUI, where I can shuffle through the timeline, choose chapters, or click the transcript to be taken to the relevant part of the video.
* Bonus points if it highlights the relevant part of the summary as the video is playing.
I see this as a temporary phase driven by AI hype.
In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever. Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.
AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.
So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.
While I certainly relate to some of your points, and I'm not an AI maximalist by any means, a few thoughts:
> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.
> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
Why would that happen in the Slack channel? Presumably you'd be googling it or reading the documentation to answer such questions, not posting in the channel.
> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.
> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1-on-1s with your manager. Frame it not as "I want to discourage use of AI" but as "copying and pasting AI responses shows a lack of respect for others' time" and "a lack of due diligence"; show them a horror story of an AI deleting someone's prod database, etc. It's a useful but imperfect tool, not a replacement for thought.
@arcfour - you are absolutely correct, and you will be PIP'd and pushed out if you try this in a FAANG company today, where *everything* is about AI.
I'm asking myself the same question for a different reason: nobody will even interview me. I've been out of work for a while. Savings are running out. I apparently don't even know how to look for a job anymore.
Yeah. Got word I was being laid off in November. Officially because of restructuring, but after having had some conversations it's clear I've been replaced by a junior with a Claude subscription.
20 years coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software Engineering again because it's my preference) -- and now I can't even get an interview with a human.
Due to unfortunate life events my savings are now all but gone, and I don't even know if I will be able to keep a roof over our heads. It's messed up.
If anyone is hiring, send me a message. I'm an EU citizen but have residency in, and work out of, Mexico.
Use AI to mass-apply to all available job postings. It's a numbers game.
The best way to find out: just start. You’ll improve along the way. Questions like this (and anxiety) are best fixed by action.
When someone says “no one will interview me” this is a pretty unhelpful response.
My response is probably controversial. But I genuinely think it’s generally helpful advice. Ofc I don’t have any other information than the comment about this person.
You literally said they should do something.
Yes exactly. I stand by that advice. What’s the alternative? Do nothing?
So you advise that they do not need to change their approach at all, since they’re already doing something: posting on hacker news.
I mean, I am. How else would I know nobody wants to interview me? :)
Fair enough :) wasn’t clear to me from your first comment. It’s definitely pretty tough out there right now.
It was completely clear from the first comment, which is why yours was so clearly unhelpful.
I have no advice to offer, I only wish you good luck. I am still lucky enough to be employed, but when this whole parade ends, I have no idea what comes next - my only skill is programming and related knowledge work. I think the only path forward is to try to jump ship to another white or blue collar industry…
I thought along those lines as well. The only thing I could come up with that would be semi-viable was medical school, and I'm not sure I'd survive residency. I definitely would never be able to pay back the debt, if I had to take any.
The era of anyone interested in programming for fun being able to make upper 10% incomes is drawing to a close. You'll unfortunately have to join the rest of us who work for money and program for fun. I suggest engineering (the real kind, not software 'engineering')
Unfortunately, I have a visual-spatial processing disability. You don't want me near anything mechanical, and I can't do visualization-based tasks because I literally can't visualize. That eliminates most engineering jobs.
There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.
Where do you live, what are your skills, and what is your citizenship status?
If you are gunning for a remote job, that's not happening anymore except for the top 5% of candidates.
If you are gunning for a job outside of a Tier 1 tech hub like the Bay, NYC, London, TLV, Beijing, Shanghai, Hangzhou, Singapore, BLR, HYD, etc you will have a hard time.
If you are not up-to-date with modern stacks and the capacities as well as limitations of AI/ML enhanced workflows, you will have a hard time.
Edit: can't reply
> Paul-Craft
Based on your profile below, I am surprised you aren't finding anything in the Bay. It's a hot market right now. Maybe get your resume reviewed?
> Most of the job openings for humans are remote and not in big tech
Absolutely agree about the "not in big tech" part, but remote being the majority of tech hiring is absolutely false in 2026.
> My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness
Go back to using a human-written resume. An LLM-generated resume is obvious and a negative signal (you could be a bot).
Also, make sure your resume is 1 page.
Huh, weird that you can't reply.
I'm tailoring my resume to individual postings a good portion of the time. My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness. There are no hallucinations in it, and I feel it accurately represents my experience.
> Huh, weird that you can't reply.
It happens to many, it's happened to me three times so far - the mods rate limit (only X comments per Y time period) people who have been flagged, judged, and found to be a bit prone to get in rapid back n forth exchanges that have crossed guidelines.
It can generally be reversed on request via hn email, sometimes it's a blessing, sometimes it's not even something that impacts a user very often unless they find themselves in an interesting exchange.
By hand written, I think he means something like a letter written by hand, or anyway sent via post. Not "chatgpt that is basically handwritten"
Nope. I mean text created by a human not an LLM.
Bay Area, 9 YoE primarily backend, US citizen. I'm familiar with AI coding tools. I've done real work on real systems.
What is your experience in? The company I work for is constantly hiring
At this conversation depth there is no reply button here, but you can open the comment by clicking the timestamp ("8 hours ago") and then reply.
Most of the job openings for humans are remote and not in big tech, but the pay in absolute terms is significantly lower (same wage percentile for the area you live though).
It's important to understand the world beyond your bubble. If those jobs seem unrealistic as an option, you may need to consider if your cost of living is unrealistic.
> Most of the job openings for humans are remote and not in big tech
where do you find these?
I'm fine with "not big tech," along with a "not big tech" salary. In fact, I prefer "not big tech." My cost of living is not absurd for the Bay Area. I'd even be willing to take a little less than what I made before. After all, less than before is still better than 0. I'm using AI to tailor my resume to every posting, and still not getting calls.
You’ve got nine years of experience, so work your network and get referrals. It’s very hard to get mid-career jobs through the front door; most people want someone they trust to vouch for you.
I've tried that. They don't have anything for me.
> not absurd for the Bay Area
Yeah I was implying you might need to move to optimize for cost of living, but I don't know your situation and am not really asking. It's actually surprising sometimes to hear how long this took to affect some tech workers. You're lucky it's now that housing prices have stabilized (everyone else has stopped moving), and not a few years ago.
Remote work doesn't necessarily mean you aren't still tethered to some radius. Otherwise I'd be living in Monaco or something haha.
The worst part so far has been that some people have Claude write tickets and don't check what the very detailed piece-of-crap ticket says. Just tell me the few pieces of true knowledge you have rather than a full page of AI slop with multiple errors in it that cause me to waste hours trying to figure out what's true.
I never got along with tickets, anyway.
No comment on the ethics; however, I think when people's instincts to survive kick in, many of these larger goals get sidelined. There's a growing belief that it's now or never as far as accumulating wealth, securing a house, etc. go because people think once AGI comes their chances of having the lives they want will diminish. The bay area has only gotten more expensive to live in, and that's where all of the AI folks are, so no surprise.
I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.
Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.
I genuinely enjoy software development, but if I could provide for my family, I’d also enjoy selling croissants at a local bakery or filling up shelves at the supermarket.
I don't think the now or never thinking is healthy, but I certainly understand the motivation. I myself have never really fit into a career path climbing the corporate ladder, and entrepreneurship is a skill that takes time to develop. When you're oscillating between stability and bleeding money, it's natural to want to go all in on an opportunity when it presents itself.
You can just... not live in California. Most other places are doing just fine and experiencing the usual moderate economic instability that happens every decade or two along with the rest of the world.
If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.
Their efforts are better rewarded moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers while others stay in it while decoupling their passions from their paycheck.
I actually considered that, myself. The thing is, California is where the jobs are for me. If I move out of California, I may never be able to come back. That could cost me a lot.
Who cares about California? If you don't have family there, just head to Europe as fast as you can, one-way ticket; don't ever pay the IRS to come back.
I feel like all this hype around generated code overlooks a distinct opportunity for enterprising focus on excellent, clean, maintainable, curated code - baked by humans, for other humans.
We also haven't really seen how large volumes of generated sourcecode will stand up over time (like, decades) in terms of maintainability. My prediction is you'll encounter a lot more disposable software. That's fine for making general code more of a commodity (cheap and accessible), but where you get commodities you eventually find demand for more premium flavors of product. Those tend to derive from taste and opinion (attributes which, for example, were major success factors of the iPhone at its peak design).
The act of software development formalizes paradigms, surfaces unknowns, and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. three rewrites - before you land at the kind of high-caliber product that stems from really understanding the problem space: having worked in it extensively enough to build a good mental model, uncovered the edge cases, and hammered out an optimal solution.
While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.
What would that look like? In my experience, real production codebases tend to have lots of bugs. Most of them never get prioritized, because features matter more than fixing obscure bugs.
Indeed - one of my biggest pet peeves is when organizations chronically avoid budgeting the time and resources to deal with their technical debt. Or when they lack leadership that is confident and bold enough to make the hard decisions to do so (which requires experience and reputation), or suffer a culture that doesn't tolerate some degree of risk-taking, with contingencies (particularly in schedule and blast radius containment) to safely deal with occasional failure on the road to improvement.
I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.
I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!
This resonates a lot with me.
Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.
I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.
Good luck!
what a clear reason to never use Vercel or next.js
Another problem the author may face is that if they decide to get back into the tech market and find a new job, it may be difficult, with tech still moving forward - not in a meaningful way, as computers still compute as before, but enough that lack of experience with a new tool or framework will make them unattractive compared to other candidates.
Otherwise, if they decide to go into another field, they will be starting from scratch; it will pay only a small fraction of what they made before, and whatever lifestyle they were used to will have to change.
I really don't see this getting better, from the sound of it, at least judging by all the headlines at the moment. The spending by these big tech companies is alarming, not least because it's concentrated by a large percentage in a single category. We still don't have a clear picture of the landscape for tech yet; yes, there is some great tech innovation taking place in the US.
Being cut off from China, a market that is advancing in the same sectors as the US, and not allowing that competition to enter the West is a recipe for disaster in the future. The current government is not focused on growth, despite what's being said publicly. That leaves the US in a place where stagnation is okay, so to make up for it there is a surge of investment in the current AI craze. Feedback is required in order to grow; that goes for companies, too, not just the junior-varsity wrestler at your local high school. Taking an abundance of data and using a summarization tool to auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: it's a search bar that, as you type, shows what the database has to offer, with either a weighted or an indexed response depending on the settings, and it can also surface images and information related to the search query. All that needed to happen was for something to compute this mess of abundant data and project a response from it rather than just a search result. Marvelous, you might say, but it has been around for a while now. The idea was there; it just needed an actor to execute it. The firings alone tell you about the health and implications of what's taking place. There were promises behind these investments that this war is interrupting, or severing the deals even post-conflict.
The dot-com bubble was a push on society to use the web and digitize parts of our lives, and the few companies that survived that era are what's driving the push toward the next era of tech. The AI idea seems to have been born without a guardian or an owner, leaving the courage to act on it open to any takers. The overwhelming spillover of data had to go somewhere. Searching for useless data ("how fast does a 2001 Porsche 911 go?") was tiresome anyway.
The education system in the US has already fallen apart, and this only makes things worse. Where is education heading with the adoption of AI all around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this at the moment. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.
optimism is clearer without fog.
Bluntly, no, you probably don't belong in tech.
This is what tech has always been. A never (yet) ending race to automate. Our job will be done when there's nothing left to automate.
Automation seems like a very surface-level reading of this article.
Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias among the most vehement AI advocates, where the one time a tool worked really well for them makes it worth the dozen times it blows up in your face and becomes someone else's problem. The gain is romanticized and the losses are set aside, without checking the balance or how badly the losses wear on morale.
I’m not part of the owner class so what tech jobs has and always will be is a paycheck. Why should I be excited about automating myself to homelessness
This happened once with open sores; now this behavior has been turned up to 11. People take dependencies without even knowing what's in them, full of incorrect code and vulns, intentional or not; they delegate everything and take no responsibility.
All kinds of interesting points, and then suddenly a wild Trump card appears.
What does Trump have to do with AI?
> Generative AI tools ... supercharge the spread of disinformation and fascism ... and concentrate wealth in fewer hands
People caught up in this line of beliefs generally tend to be more neurotic and unhappy about most things.
Probably. I hate the AI boom too and see no need to get all political, or even outright blame the politicians. What'd you expect, politicians with a master's degree in every field there is? Not gonna happen.
If we're putting the blame on anything, it's on us hacker types for going where the money flows and not fighting the corporate overlords tooth and nail.