I do have an actual diagnosis, and I had the same experience over the past year: early coding harnesses at the beginning of the year, then Claude Code since its release. But after 1+ year going in that direction I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that an agent has been taking shortcuts on acceptance tests I rely on, and that I didn’t catch it. Or, once again, have to get the agent to understand why and what I want it to do after its context got bloated and it started to drift completely. While I got artifacts I can use (libraries, tools, docs), including some things I’m pretty confident are state of the art, I no longer feel satisfied knowing that I used a model to generate them, even if I was the one designing every part. I feel like I’m lying any time I come to a colleague to share a cool new tool I have made. And I do not feel that relying on AI actually helped me deal with my executive function issues.
YMMV, but I’m personally feeling burnt out on AI coding agents and ready to go back to the old ways for my next personal project.
I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards, more so than extrinsic ones.
I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
Every time I have tried to be extrinsically driven (career- or OSS-wise) it has never worked out anyway. I could have done more to make it successful, but I never cared about getting validation or getting users for my stuff (and the stress that brings).
I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.
LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.
I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).
The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".
I have found the opposite to be true. I really like getting stuff done for people and struggled for years with all of the specific syntax and details of solving any particular problem. I have a relatively in-depth knowledge of computers, how they work, algorithms, and the like, but I always struggled with the exact details of how to do something, so it feels like a blessing to be able to spitball some conceptual understanding and get back real code. I always struggled with making my ideas real before the novelty of the inspiration wore off, unless I happened to get hyperfocused on solving a particular problem.
Now I can step through everything in a way that feels like a superpower. I have enough sense and knowledge to intuit, I think, whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate on it. I've just been using Cursor for work, as I adopted a personal restriction to only use AI I can run on my own devices for personal projects, but if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems in front of me, and so far the LLM-connected IDE has been helpful.
It's best, in my experience, when I use it as a tool to augment troubleshooting and brainstorming, but when you are fixing one-liner bugs in other people's code, it's not like me typing the fix is very different from a machine autocompleting it.
It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.
I think the real risk is if you don't understand conceptually what you are committing anymore. I've tried to make sure that I always understand what the code does and how it works, and also to understand the pitfalls of proposing bullshit hypotheses that the agreeability of the LLM will go along with.
I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090, it hallucinated so much that it wasn't super helpful, but it still spat out boilerplate code that I could then spend time fixing, and that helped me overcome my own task paralysis around getting started.
Almost a decade ago, I moved my career into the management track. I am a director by now and have two more management levels between myself and individual contributors.
I can strongly relate to what you're writing, because I often share that same sentiment in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the "human people and emotions" part: having to explain things properly, the agents doing something different than what you intended, feeling detached from the actual work, focusing only on the bigger picture, and so on.
To me it feels very natural; it is what I do every day. But then again, I made that choice and it wasn't forced on me. So I understand the frustration.
> I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely.
100% agree, neither do I, but I see this as an opportunity to think "how can we gain trust in the outputs AI produced for us?"
Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.
Agentic harnesses go in the exact opposite direction to what I'd want to get from LLMs. I don't want another black box to (poorly) work on a black box for me; I want to be better at reaching into and understanding boxes that I already have in front of me. I don't want tools to autocompact contexts and store generated memories to facilitate long runs I have barely any control over; I want tools that allow me to painlessly craft a more relevant context for short ones. I don't want agents to author commits; I want them to use Git (or other tools) to get the information that I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial part of the job for me; I want them to do the boring parts that I already know how to do but which block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmers' needs.
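The "craft a more relevant context yourself" idea really is scriptable with very little code. A minimal sketch of what such a tool could look like (the function name, CLI shape, and output format are all invented for illustration, not an existing tool):

```python
import pathlib
import sys

def build_context(task: str, paths: list[str]) -> str:
    """Concatenate a task statement and hand-picked files into one prompt-ready block."""
    parts = [f"Task: {task}"]
    for p in paths:
        path = pathlib.Path(p)
        # Label each file so the model can tell the sources apart.
        parts.append(f"--- {path} ---\n{path.read_text()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Hypothetical usage: python build_context.py "fix the tokenizer" src/lexer.py src/token.py
    print(build_context(sys.argv[1], sys.argv[2:]))
```

The point is the opposite of autocompaction: you decide exactly what goes in, paste the result into a fresh session for a short run, and throw the context away when you're done.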
The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.
I have never jumped on the train, but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both and spent 4 hours reading Linux kernel docs and libcamera docs without writing any code. I'm okay with that, and the project has still moved ahead even though I have only written v4l2 sample code.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
My Pro went to Max (5x), then to Max (20x) pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit because I was forced back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safeguards, but that's because I've always had to adapt my environment to control my behaviour, since I know direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can release and b) the craving for it in folks I deemed more stable than myself.
I thought about it a little deeper, and I think software development has always had an addictive tendency. The hunt for the solution to a problem comes with a rush when you complete it.
It’s just that the rush is more frequent now, and addiction intensity scales with dose and frequency.
As someone with ADHD, it’s really a problem. I have so many random documents of random outputs from prompts I didn’t track. It’s honestly accelerated some of my worst habits because it feels like I actually completed a task. The reality is I just have folders of half finished projects, which anyone with ADHD can relate to.
I feel kind of lucky in a way that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting
So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them I'm mad about it and can't wait to be done with it.
I still have that folder of half finished things just like you, though. It's just not AI generated
The counterweight has been, after using it for a bunch of projects, I have internalized that it will very, very quickly get me to maybe 60% and then I'll have to take it the rest of the way mostly by myself (or handholding it tightly for the remaining 40% at a much slower pace).
In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it - it's only worth bothering if I'm prepared to see it through to 100%.
When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.
Instead of jumping from project to project, I focus on one (maybe a few) and let myself free while agents spew out their output.
Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.
My trick is to (try to) do something that requires high focus, on unrelated matters.
To give a practical example: the simple gesture of connecting 2 points on a sheet of paper via a direct, non-trembling line requires high focus. If you try to do it sloppily, the line comes out too long, too short, etc. I need to shadow the moment, gain focus, draw the line.
It keeps my brain in focus, busy and engaged.
Videos, podcasts, and in general anything digital seems to distract me and/or overload me.
Also, I am back to using the Pomodoro Technique more frequently.
Just some pointers, in case you want to try them out, or suggest some you find effective yourself.
I find that the new "drug" is constantly hunting down newer, cheaper models: z.ai/glm, Mistral, DeepSeek. If you need to get your fix, find the cheaper path.
I can relate to this. Last October, I had a real epiphany using Claude Code at work. Suddenly, that initial inertia of starting something, whether it's drafting a JIRA ticket, structuring a PR, or just brainstorming, completely vanished.
I started using Claude exclusively in plan mode, and within minutes I'd have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first-class citizen and another to render Markdown in a less eye-straining way) so I don't have to leave the IDE.
However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.
In a paradoxical way, the amount of stuff you can get done in an hour now is like a firehose, something we rarely experienced earlier in life, and it can overwhelm my brain. So I subconsciously resist starting a session, because I never feel rested, calm, and focused enough to take all that in and process it well.
There are also 10x more "active" projects now, and prioritizing and choosing between them at every moment is still a struggle. The temptation to do the fun and novel thing and avoid important but familiar boring chores pops up every step of the way and can derail you for days.
I am still trying to create a system that works, now using these very tools. Long journey ahead.
EDIT: My experience --
I was paying for both Claude Code and ChatGPT Pro, but was heavily, almost exclusively, using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and it occasionally shines by one-shotting some tasks. This helped me stay afloat with just 2x $20 spend per month without feeling held up for ransom. Also, I have never hit Codex limits so far.
Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety that I was wasting precious quota getting nothing done. I think I am getting over that now.
I've been using Augment's agents (VS Code, CLI) for 8-ish months. It lets me easily switch between GPT and Claude models.
I've found the best results from letting GPT 5.4 code and then asking Opus to write code reviews to a file. I do the review in a different agent session so it's "fresh". Then I review the file, edit until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason. They feel more sloppy.
I've also been doing a very organic spec-driven development process, where I have a markdown file for each non-trivial project update and use it to define the task and address questions or problems the agent has.
I've also found I can give agents conditional instructions, which they will usually apply like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine, with a single AGENTS.md as the entry point.
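To make the "conditional instructions" idea concrete, here is a hypothetical AGENTS.md sketch; every rule and file name below is invented for illustration, not taken from the comment above:

```markdown
# AGENTS.md

## Conditional instructions
- If the task touches database migrations, read docs/migrations.md first.
- If you change a public API, update CHANGELOG.md in the same change.
- If the test suite fails twice in a row, stop and ask before editing the tests themselves.
- If the task mentions performance, run the benchmark suite before and after.
```

Tools that honor the AGENTS.md convention load this at session start; since each rule is gated on a trigger, it only kicks in when the task actually matches, which is what makes them behave like lightweight skills.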
When I don't have time, I just ask AI to summarize the main points and expand on the ones I like. I do this even with HN discussions: I copy the whole HN page, paste it into Claude, and ask it to summarize and deduplicate the talking points.
Some traits I recognized in many excellent coders I worked with (their drive to optimize, intellectual thirst, critical and creative thinking) are attributes I consistently correlated with them being somewhere on a neurodivergence spectrum.
Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs sycophantic behaviours.
My gut feeling is that coupling the dopamine hits of feeling unblocked with the LLM's constant praise of one's abilities is an intro to psychosis and paranoia for them.
I'm wrestling with this right now. I only use LLMs for design and exploration, because I am not employed and can't pay for a subscription right now. They make the design phase feel like less of a fever dream, because checking my ideas doesn't involve hours of scanning search results online, trying to see how my ideas fit with what exists, or trying to evaluate whether they even make sense. So I feel more encouraged to get started on working, but I often wonder if the responses are just sycophantic.
In one case recently, I explained a garbage collector design I had been toying with a while ago, but couldn't find research related to or really evaluate. After enough arguing, the model finally "understood", started praising my "novelty", and when I later asked for related research, I was given a paper that had already implemented most of my idea.
It was a funny moment: seeing how it was clearly trained on too many online forum comments (simply mentioning reference counting got it onto a whole awkward line of false folklore about memory management), then switching to sycophancy, then finally showing me a paper.
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.
So what if you have ideas? Other people have them too. It's not ideas that build businesses, but knowing the right people, or the ability to sell products.
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
>AI development doesn't involve luck to any appreciable degree
Reading this while I'm prompting for the third time to fix a 100+ line function is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.
Then you miss the point. AI use is being compared to gambling because it is addictive, partly via the same mechanism: the results (and rewards) are somewhat random, but it makes you feel as if you're completely in control of the outcome.
It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.
For most people who are not doing this in their day-to-day jobs, it's just a prompt with their idea roughly sketched out, and a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. Hence the addiction and the gambling. Gambling is a lot of things, not only flashing lights and sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the facts.
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
It is different because it takes a human time to produce a result, while AI does it almost instantly. So if you tell a programmer to do X, you have a week for your adrenaline to cool off. If you tell an AI, it will do it in minutes.
> If you're making the argument that LLMs are gambling simply because they're faster than humans
No I am not. It's more addictive because of the timescale. The comparison of AIs to gambling is through addiction mechanism, as I explain elsewhere.
My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.
I don’t like the gambling comparison either. It’s more like smoking or drinking. It’s an addiction you lean on to help you do something- even if that something is just getting through the day.
Yeah but those are classified as addictions because they have a harm component (lung cancer, liver disease, societal impact). LLMs aren't going to kill you. If anything, it might be like gaming addiction.
If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.
Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
> Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
People absolutely do say that video games are slot machines. [0][1]
I'd observe that there are professional gamblers, and there are amateur gamblers.
If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (Professional "Gambler")
If you DON'T know what you're doing, and you're just vibe-coding, then I would argue that it is at least a form of gambling (Amateur "Gambler")
Both of these conditions can also be applied to "hiring people to do a job" however there we can also observe things like reputation, credentials and so on.
"It's just paying to get stuff done..." is, with respect, superflous.
I don't know, I can understand "some people might overdo it and get addicted to LLMs". I can't understand "LLMs are slot machines and that's all they're good for" when I use LLMs every day to do tons of actual work.
The gambling part is because of the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the limits. You don't get that with regular hires.
You usually don't get immediate responses from hires which means delayed gratification and avoiding much of the potential dopaminergic effects you get when engaging with LLMs.
You can play overextending the hire analogy all you want but it is simply not the same.
Addressing the end of the article, I think that we are all very much still learning how to use AI responsibly. It's like we just discovered alcohol and we're going on a rager every night because we don't know any better yet.
It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.
For me it's different. I am not diagnosed, but I think my executive function doesn't work right. It's really hard for me to start a new task, but when it is interesting enough I can hyper focus until it's done. In the past that often happened when I needed to implement something not too trivial. But now that AI does the implementation in minutes I need to switch tasks constantly and it is honestly super exhausting for me.
Sounds to me like what people are describing is dopamine: generating it and enjoying it. I am not educated about brain function, though.
Noticing novelty is beneficial in nature, as it surfaces opportunities to the conscious level: "Squirrel!", famously, from the movie "Up". It feels good to experience. Then, creating one's own dopamine supply can drive behavior, and multiplying behaviors can exhaust one's energy supply along different human dimensions.
So now, managing this process and limiting the dopamine cycle also becomes worthwhile (perhaps to avoid fatigue), while still not negating the attractiveness of the dopamine derivable from the endless opportunities of the world. <3
As someone with ADHD, it’s a lot more nuanced than that. Coding agents can remove task paralysis, but they also introduce many other distractions. Being one prompt away from zero to one is a double-edged sword, because it means any random thought, idea, or side project is also a prompt away.
I've a thought that AI could drive humanity to appreciate humans, as a side effect of its rise.
Nowadays we're bumping up against alternative nonhuman intelligences as we go about our lives. New neighbors, kind of.
And AI has its own idea of 'living' in this world, mainly as a servant to us.
So human life is changing: we now have the opportunity to relate to life (existential) while we're being influenced by the valuable accompaniment of these new docile servants. We're able to "see our plantation and peacocks" if you will.
We experience our life-challenges differently ... now being alive to see our daily labors accomplished by others, and we're able to reap the benefits: more dopamine, resources, whatever.
Our role is changing somewhat, being 'wealthy' or 'elevated'.
I think this implicitly poses new questions, like: do we like our new wealthy-in-productive-results selves? Is this a life worth living?
This resonates. The "idea to result" loop getting shorter with AI is genuinely addictive; I've noticed it in my own workflow too. But there's a flip side nobody talks about: once you get used to that speed, going back to manual implementation feels 10x worse than it did before. The paralysis doesn't go away, it just gets masked. The real question is whether AI is solving the problem or just compressing the dopamine cycle around it.
AI has replaced video games for me. And there are plenty of cheaper models that "do it" for me, I don't have to spend $$$$ just for entertainment. I will step up to the frontier for serious work. But if I'm just playing, I'm going for the free stuff on openrouter.
Also, AI art is fine. It looks better than me using Paint. That said, there are plenty of FOSS and public-domain art pieces you can leverage if all you really need is placeholders, and that is much cheaper.
> What is it good for?
> For me, personally? It helps me overcome my task paralysis. As mentioned earlier: I have a plan. A strategy. An idea. I just need someone (or something), who has fun in churning through the implementation. I have the ideas. But boy is coding exhausting.
I find the same. AI helps me overcome any paralysis. I just think "hey it's cheap to write the prompt" and go on.
- good for me in the short term (e.g., I can fulfill what my company asks from me)
- good for the company in the short term (see above)
- bad for me in the long-term. E.g., I'm starting to become more and more replaceable at my job; I don't have the same depth of understanding of the systems we're building as I used to; my peers and I collaborate way less now (instead of talking to each other, we just ask Claude directly); and there's not much to be proud of in my day-to-day work (we're not building CRUDs, but we're not building Netflix either; it's something in between). The compounding effect worries me too: every shortcut I take today is a piece of context I'm not internalizing, a debugging instinct I'm not sharpening, a tradeoff I'm not learning to weigh. The skills that used to differentiate me are slowly atrophying. We're all individually more "productive" on paper, but collectively I think we're gonna end up with a codebase nobody fully understands and a team that barely knows each other
- good for the company in the long-term: they can fire me easily; they don't need 80% of us anymore. They can just pay Anthropic for the agents instead. They don't need people to maintain or read the codebase either: agents do that now. And executives never really cared about us in the first place, so that part hasn't changed, I guess. The math is simple from their side: headcount is the biggest line item, and agents don't ask for raises, don't burn out, don't go on leave, and don't push back when leadership makes a dumb call. We're the worst part of the business on a spreadsheet, and the tools to replace us are finally cheap enough that someone is gonna pull the trigger
I'm not a superstar engineer. I know that. I'm probably in the 80% bag of engineers out there. Some of you may be in the top 20%, and you're probably gonna keep your job somehow (or not, who knows). But for the rest of us, I think we simply cannot compete anymore.
I regret every single time I've used AI so far. Nothing good has come from it for me; the feeling is so different from any other technology I've used in the past (frameworks, languages, libraries, whatever): it used to be fun, it improved my career prospects, it expanded my knowledge. AI/LLMs are precisely the opposite: it's not fun, it's making my career worse, and it's not expanding my knowledge.
I CANNOT UNDERSTAND HOW MOST OF US, ENGINEERS, ARE OUT HERE VOUCHING FOR AI. WE ARE LITERALLY CHEERING ON THE THING THAT IS COMING FOR OUR JOBS, AND WE'RE DOING IT FOR FREE, POSTING BENCHMARKS AND EVANGELIZING IT TO OUR MANAGERS LIKE WE'RE GETTING A COMMISSION. WE ARE NOT. THE LABS AND THE EXECS GET PAID. WE'RE HANDING THEM THE ROPE.
Another way to put this is that focus is ultimately what matters, when it comes to actually getting stuff done. Choosing what not to do is often more important than what you actually do.
Since AI tools make it extremely easy to get started, it's really easy to begin half a dozen different projects, feel like you're being productive, but actually accomplish nothing.
This accurately described how I used to utilize AI – and my ChatGPT history is filled with all sorts of grandiose project plans. But lately I've been more and more narrow with what I actually prompt.
This leads me to think that a chatbox is not the best UI for using AI, as it's too open-ended and too prone to give you long, broad answers, rather than hyper-specific ones.
Don't know about ADHD and whatnot, but I do feel this "task paralysis" pretty often. One thing that I found works really well for me is to work on multiple projects at once. Go one to two weeks on one, then switch to another. I'm not lacking motivation anymore and it feels great.
I do have an actual diagnostic and I had the same experience over the past year with early coding harness at the beginning of the year, then Claude code since its release date. But after 1+year going that direction I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely. While I got artifacts I can use (libraries, tools, docs), including some things that I’m pretty confident are SoA I do not feel satisfied anymore knowing that I used a model to generate them, even if I was the one designing every part of it. I do feel that I’m lying anytime I come to a colleague to share a new cool tool I have made. And I do not feel that relying on AI actually helped me improve with dealing with my executive function issues.
YMMV, but I'm personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project.
I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards, more so than extrinsic ones.
I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
Every time I have tried to be extrinsically driven (career- or OSS-wise), it's never worked out anyway. I could have done more to make things successful, but I never cared about getting validation or getting users for my stuff (and the stress that brings).
I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.
LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.
I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).
The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".
I have found the opposite to be true. I really like getting stuff done for people, and I struggled for years with the specific syntax and details of solving any particular problem. I have a relatively in-depth knowledge of computers, how they work, algorithms, and the like, but I always struggled with the exact details of how to do something, so it feels like a blessing to be able to spitball some conceptual understanding and get back real code. I always struggled with making my ideas real before the novelty of the inspiration wore off, unless I happened to get hyper-focused on solving a particular problem.
Now I can step through everything in a way that feels like a superpower. I have enough sense and knowledge, I think, to intuit whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate over it. I've just been using Cursor for work, as I adopted a personal restriction to only use AI I can run on my own devices for personal use; but if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems I'm confronted with, and so far the LLM-connected IDE has been helpful.
It's best, in my experience, when I use it as a tool to augment troubleshooting and brainstorming. And when you're fixing one-liner bugs in other people's code, me typing the fix isn't very different from a machine auto-completing it.
It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.
I think the real risk is no longer understanding conceptually what you are committing. I've tried to make sure that I always understand what the code does and how it works, and to stay aware of the pitfall of proposing bullshit hypotheses that the agreeability of the LLM will happily go along with.
I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090, it hallucinated so much that it wasn't super helpful, but it still shot out boilerplate code that I could then spend time fixing, and that helped me overcome my own task paralysis around getting started.
Almost a decade ago, I moved my career into the management track. I am a director by now and have two more management levels between myself and individual contributors.
I can strongly relate to what you're writing, because I often share that same sentiment in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the "human people and emotions" part: having to explain properly, the agents doing something different from what you intended, feeling detached from the actual work, focusing only on the bigger picture, and so on.
To me it feels very natural; it is what I do every day. But then again, I made that choice and it wasn't forced on me. So I understand the frustration.
> I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely.
100% agree, neither do I. But I see this as an opportunity to ask: how can we gain trust in the outputs AI produces for us?
Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think about how to relieve this pain. I think the answer to this question will show the path ahead for agentic coding.
Agentic harnesses go in the exact opposite direction to what I'd want to get from LLMs. I don't want another black box to (poorly) work on a black box for me, I want to be better at reaching into and understanding boxes that I already have in front of me. I don't want tools to autocompact contexts and store generated memories to facilitate long runs I have barely any control over, I want tools that allow me to painlessly craft a more relevant context for short ones. I don't want agents to author commits, I want them to use Git (or other tools) to get the information that I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial part of the job for me, I want them to do the boring parts that I already know how to do which block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmer's needs.
The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.
I have never jumped on the train but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both and spent 4 hours reading linux kernel docs, libcamera docs, and not writing any code. I’m okay with that and the project has still moved ahead even though I only have written v4l2 sample code.
I could have written this article myself.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work lives.
I went from Pro to Max(5) to Max(20) pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens: just me and 4-5 terminals. Sometimes I was happy to hit the limit, because I was forced back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safeguards, but that's because I've always had to adapt my environment to control my behaviour, knowing that direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free, and b) the craving for it among folks I deemed more stable than myself.
I thought about it a little more deeply, and I think software development has always had an addictive tendency: the hunt for the solution to a problem comes with a rush when you complete it.
It's just that the rush is more frequent now, and addiction intensity scales with dose and frequency.
As someone with ADHD, it's really a problem. I have so many random documents of random outputs from prompts I didn't track. It's honestly accelerated some of my worst habits, because it feels like I actually completed a task. The reality is I just have folders of half-finished projects, which anyone with ADHD can relate to.
I’ll finish modding that Dreamcast one day…
I feel kind of lucky, in a way, that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting.
So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them I'm mad about it and can't wait to be done with it.
I still have that folder of half-finished things, just like you, though. It's just not AI-generated.
The counterweight has been that, after using it for a bunch of projects, I have internalized that it will very, very quickly get me to maybe 60%, and then I'll have to take it the rest of the way mostly by myself (or handhold it tightly through the remaining 40% at a much slower pace).
In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it - it's only worth bothering if I'm prepared to see it through to 100%.
When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.
Instead of jumping from project to project, I focus on one (maybe a few) and let myself roam free while the agents spew out their output.
Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.
My trick is to (try to) do something that requires high focus, on unrelated matters.
To give a practical example: the simple gesture of connecting two points on a sheet of paper with a direct, non-trembling line requires high focus. If you try to do it sloppily, the line comes out too long, too short, etc. I need to settle into the moment, gain focus, draw the line.
It keeps my brain focused, busy, and engaged. Videos, podcasts, and anything digital in general seem to distract me and/or overload me.
Also, I am back to using the Pomodoro technique more frequently.
Just some pointers, in case you want to try them out; maybe you can suggest some you find effective yourself.
I find that the new "drug" is constantly hunting down newer, cheaper models: z.ai/GLM, Mistral, DeepSeek. If you need to get your fix, find the cheaper path.
Might call it the OnlyFans model of Software Development.
I can relate to this. Last October, I had a real epiphany using Claude Code at work. Suddenly, the initial inertia of starting something (whether drafting a JIRA ticket, structuring a PR, or just brainstorming) completely vanished.
I started using Claude exclusively in plan mode, and within minutes I'd have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive, because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem-solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first-class citizen and another to render Markdown in a less eye-straining way) so I don't have to leave the IDE.
However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.
Resonates with me.
In a paradoxical way, the amount of stuff you can get done in an hour now is like a firehose (something we rarely experienced earlier in life), and it can be overwhelming to my brain. So I subconsciously resist starting a session, because I never feel rested, calm, and focused enough to take all that in and process it well.
There are also 10x more "active" projects now, and prioritizing and choosing between them at every moment is still a struggle. The temptation to do the fun and novel thing and avoid the important but familiar boring chores pops up every step of the way and can derail you for days.
I am still trying to create a system that works -- now using the very tools. Long journey ahead.
EDIT: My experience --
I was paying for both Claude Code and ChatGPT Pro, but was heavily, almost exclusively, using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and it occasionally shines by one-shotting some tasks. This helped me stay afloat on just a 2x$20 monthly spend without feeling held for ransom. I've also never hit Codex limits so far.
Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety: that I was wasting precious quota getting nothing done. I think I am getting over that now.
I've been using Augment's agents (VS Code, CLI) for 8-ish months. It lets me easily switch between GPT and Claude models.
I've found the best results from letting GPT 5.4 code and then asking Opus to write a code review to a file. I do the review in a different agent session so it's "fresh". Then I go over the file, editing until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason; they feel sloppier.
I've also been doing a very organic spec-driven development process where I have a md file for each non-trivial project update and use that to define the task and address questions or problems the agent has.
I've also found I can give agents conditional instructions, which they will usually apply like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine, with a single AGENTS.md as the entry point:
https://github.com/rsyring/agent-configs/blob/main/default.m...
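To illustrate the pattern (a hypothetical sketch, not the contents of the linked file), conditional instructions in an AGENTS.md might read:

```
## Conditional instructions
- If the task touches the database schema, read docs/migrations.md first.
- If you modify anything under src/api/, run the API test suite before
  declaring the task done.
- If the task needs a new dependency, stop and ask for approval.
```

The agent loads the whole file once, but only the branches whose conditions match the current task come into play, which is what makes them behave like skills.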
This has all been very effective, more than I would have predicted a year ago.
Nitpick: Stop the throat clearing and get to the point. The final paragraph is the whole point of the article.
It's a real turnoff when I have to scroll past a moral lecture on artistry and piracy when I just want to hear your thoughts on task paralysis.
---
To the author's point though, AI is incredible at building some initial momentum on a task. The initialization energy is basically zero.
When I don't have time, I just ask AI to summarize the main points and expand on the ones I like. I do this even with HN discussions: I copy the whole HN page, paste it into Claude, and ask it to summarize and deduplicate the talking points.
You didn't have to read the article.
IP law is incompatible with AI. It's an important point, but not here.
Not a nitpick, but justified criticism of the post. The technical term is "burying the lede", and it is incompetence at best and malice at worst.
It's absolutely awful. It's not a novel or entertainment. Don't "foreshadow" or "set the scene". Just get to the fucking point.
Some traits I recognized in many excellent coders I worked with (their drive toward optimization, intellectual thirst, and critical and creative thinking) are attributes I consistently correlated with their being somewhere on the neurodivergence spectrum.
Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs' sycophantic behaviours. My gut feeling is that pairing the dopamine hit of feeling unblocked with constant praise of one's abilities is an on-ramp to psychosis and paranoia for some people.
I'm wrestling with this right now. I only use LLMs for design and exploration, because I am not employed and can't pay for a subscription right now. They make the design phase feel like less of a fever dream: checking my ideas no longer means hours of scanning search results, trying to see how my ideas fit with what exists, or trying to evaluate whether they even make sense. So I feel more encouraged to get started, but I often wonder if the responses are just sycophancy.
In one case recently, I explained a garbage collector design I had been toying with a while ago; I couldn't find research related to the idea or really evaluate whether it would work. After enough arguing, the model finally "understood", started praising my "novelty", and, when I later asked for related research, gave me a paper that had already implemented most of my idea.
It was a funny arc: first seeing how clearly it was trained on too many online forum comments (simply mentioning reference counting set it off on a whole awkward line of false folklore about memory management), then the switch to sycophancy, then finally being shown the paper.
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.
So what if you have ideas? Other people have them too. It's not ideas that build businesses, but knowing the right people or the ability to sell products.
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
>AI development doesn't involve luck to any appreciable degree
Reading this while I'm prompting for the third time to fix a 100+ line function is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.
That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way. This works well for me:
https://news.ycombinator.com/item?id=48083267
What's your monthly token spend?
I have a $100 Claude sub and a $20 OpenAI sub.
> certainly not more than hiring people to do a job can be considered "gambling"
It's quite possible that being a business manager/owner is itself addictive (having power over people); we just don't recognize it as such.
All gambling addiction is addiction, not all addiction is gambling.
Then you miss the point: AI use is being compared to gambling because it is addictive, partly through the same mechanism. The results (and rewards) are somewhat random, but it makes you feel as if you're completely in control of the outcome.
Yeah, that hasn't been my experience. The outcome, for me, is extremely consistent. I ~never have to "reroll" by wiping work and doing it again.
Strange. I tell Claude Code to do things differently all the time.
I'd recommend a different workflow, with extensive upfront planning. This works extremely well for me:
https://www.stavros.io/posts/how-i-write-software-with-llms/
It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.
For most people, who are not doing their day-to-day jobs with it, it's just a rough sketch of their idea in a prompt, and then a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. Hence the addiction, and hence the gambling comparison. Gambling is a lot of things, not only flashing lights and slot-machine sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the fact.
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
It is different because it takes a human time to produce a result, while AI does it almost instantly. So if you tell a programmer to do X, you have a week for your adrenaline to cool off; if you tell an AI, it does it in minutes.
I don't think the difference between a designer and a slot machine is that one gives you results more slowly, "therefore it's not gambling".
If you're making the argument that LLMs are gambling simply because they're faster than humans, I'd like to see some evidence.
> If you're making the argument that LLMs are gambling simply because they're faster than humans
No I am not. It's more addictive because of the timescale. The comparison of AIs to gambling is through addiction mechanism, as I explain elsewhere.
My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.
I don’t like the gambling comparison either. It’s more like smoking or drinking. It’s an addiction you lean on to help you do something- even if that something is just getting through the day.
Like the internet!
Yeah but those are classified as addictions because they have a harm component (lung cancer, liver disease, societal impact). LLMs aren't going to kill you. If anything, it might be like gaming addiction.
If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.
Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
> Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
People absolutely do say that video games are slot machines. [0][1]
0: https://lvl-42.com/2018/11/06/video-games-as-slot-machines/
1: https://www.psu.com/news/three-ways-casino-games-are-similar...
Hence the parenthesized section of the part of my comment you quoted.
I'd observe that there are professional gamblers, and there are amateur gamblers.
If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (Professional "Gambler")
If you DON'T know what you're doing, and you're just vibe-coding, then I would argue that it is at least a form of gambling (Amateur "Gambler")
Both of these conditions can also be applied to "hiring people to do a job" however there we can also observe things like reputation, credentials and so on.
"It's just paying to get stuff done..." is, with respect, superflous.
I don't know, I can understand "some people might overdo it and get addicted to LLMs". I can't understand "LLMs are slot machines and that's all they're good for" when I use LLMs every day to do tons of actual work.
The gambling part is because of the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the limits. You don't get that with regular hires.
Really? All the hires I've seen had an 8-hour/5-day limit, or you had to pay through the nose for extended usage outside that window.
Where do you get your 24/7 hires from?
You usually don't get immediate responses from hires, which means delayed gratification and avoids much of the dopaminergic effect you get when engaging with LLMs.
You can play overextending the hire analogy all you want but it is simply not the same.
Addressing the end of the article, I think that we are all very much still learning how to use AI responsibly. It's like we just discovered alcohol and we're going on a rager every night because we don't know any better yet.
It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.
For me it's different. I am not diagnosed, but I think my executive function doesn't work right. It's really hard for me to start a new task, but when it is interesting enough I can hyper-focus until it's done. In the past that often happened when I needed to implement something non-trivial. But now that AI does the implementation in minutes, I need to switch tasks constantly, and that is honestly super exhausting for me.
Sounds to me like what people are identifying is dopamine: generating it and enjoying it. I am not educated about brain function, though.
Noticing novelty is beneficial in nature, as it surfaces opportunities to the conscious level ("Squirrel!", famously, from the movie "Up"). It feels good to experience. But creating one's own dopamine supply can drive behavior, and multiplying behaviors can exhaust one's energy along several human dimensions.
So managing this process and limiting the dopamine cycle also becomes worthwhile (avoiding fatigue, perhaps), while still not negating the attractiveness of the dopamine derivable from the endless opportunities of the world. <3
As someone with ADHD, it’s a lot more nuanced than that. Coding agents can remove task paralysis, but they also introduce many other distractions. Being one prompt away from zero to one is a double edged sword, because it means any random thought, idea and side project is also a prompt away.
I have a thought that AI could, as a side effect of its rise, drive humanity to appreciate humans.
We're now bumping up against alternative, nonhuman intelligences as we go about our lives. New neighbors, kind of.
And AI has its own way of 'living' in this world, mainly as a servant to us.
So human life is changing: we now have the opportunity to relate to life (existentially) while being influenced by the valuable accompaniment of these new, docile servants. We're able to "see our plantation and peacocks", if you will.
We experience our life-challenges differently, now that we are alive to see our daily labors accomplished by others, and we're able to reap the benefits: more dopamine, resources, whatever.
Our role is changing somewhat; we are becoming 'wealthy' or 'elevated'.
I think this implicitly poses new questions, like: do we like our new wealthy-in-productive-results selves? Is this a life worth living?
This resonates. The "idea to result" loop getting shorter with AI is genuinely addictive; I've noticed it in my own workflow too. But there's a flip side nobody talks about: once you get used to that speed, going back to manual implementation feels 10x worse than it did before. The paralysis doesn't go away, it just gets masked. The real question is whether AI is solving the problem or just compressing the dopamine cycle around it.
Does one also get dopamine from using LLMs to write comments on Hacker News?
Re: Claude usage limits
There was a comment the other day that explained how to use the new DeepSeek V4 with Claude Code.
I mention it because it's roughly fifty times cheaper than Claude, and the quality gap is closing.
Which is the difference between "I don't use it for anything serious because I constantly run into limits" and "I can actually use the thing..."
https://news.ycombinator.com/item?id=48002640
It seems "Sonnet-ish" in quality so far, but I haven't tested it much yet.
AI has replaced video games for me. And there are plenty of cheaper models that "do it" for me, I don't have to spend $$$$ just for entertainment. I will step up to the frontier for serious work. But if I'm just playing, I'm going for the free stuff on openrouter.
Also, AI art is fine; it looks better than me using Paint. That said, there are plenty of FOSS and public-domain art pieces you can leverage if all you really need is placeholders, and that is much cheaper.
I've come to the conclusion that using AI is:
- good for me in the short term (e.g., I can fulfill what my company asks from me)
- good for the company in the short term (see above)
- bad for me in the long term. E.g., I'm becoming more and more replaceable at my job; I don't have the same depth of understanding of the systems we're building as I used to; my peers and I collaborate way less now (instead of talking to each other, we just ask Claude directly); and there's not much to be proud of in my day-to-day work (we're not building CRUDs, but we're not building Netflix either; it's something in between). The compounding effect worries me too: every shortcut I take today is a piece of context I'm not internalizing, a debugging instinct I'm not sharpening, a tradeoff I'm not learning to weigh. The skills that used to differentiate me are slowly atrophying. We're all individually more "productive" on paper, but collectively I think we're gonna end up with a codebase nobody fully understands and a team that barely knows each other
- good for the company in the long term: they can fire me easily; they don't need 80% of us anymore. They can just pay Anthropic for the agents instead. They don't need people to maintain or read the codebase either: agents do that now. And executives never really cared about us in the first place, so that part hasn't changed, I guess. The math is simple from their side: headcount is the biggest line item, and agents don't ask for raises, don't burn out, don't go on leave, and don't push back when leadership makes a dumb call. We're the worst part of the business on a spreadsheet, and the tools to replace us are finally cheap enough that someone is gonna pull the trigger
I'm not a superstar engineer. I know that. I'm probably in the 80% bag of engineers out there. Some of you may be in the top 20%, and you'll probably keep your job somehow (or not, who knows). But for the rest of us, I think we simply cannot compete anymore.
I regret every single time I've used AI so far. Nothing good has come from it for me; the feeling is so different from any other technology I've used in the past (frameworks, languages, libraries, whatever): it used to be fun, it improved my career prospects, it expanded my knowledge. AI/LLMs are precisely the opposite: it's not fun, it's making my career worse, and it's not expanding my knowledge.
Another way to put this is that focus is ultimately what matters, when it comes to actually getting stuff done. Choosing what not to do is often more important than what you actually do.
It is really weird reading this, but I guess it's normal? It seems many feel this way, including me. AI just compounds this behavior even more! Darn.