I'm curious. Vibe coding seems to be all the rage on HN these days. And yet many that discuss it are unhappy. My question, seriously, is why did you go into the software development field if you were willing to surrender your autonomy to an LLM? I can't think of anything more demoralizing.
Well I started out as an idea guy. I just got into programming because (1) I thought it was interesting, (2) I wanted to be able to implement my own ideas, and (3) I figured that if I ever ended up as an idea guy working with programmers, they'd know I could speak their language.
So yea not demoralizing to me at all. I've been a SWE for 5 years now and studied for 8 years before that (2 bachelors, 2 masters - most CS related).
I have a lot of small apps nowadays. One of them is an HN dark mode Chrome extension that I actually like. Another one exports my emails in bulk. Another one tracks what Wi-Fi networks I connected to on a given day. Small apps that make my life a bit easier. Also a lot of apps that I'd rather keep to myself. One that's on the edge of that: certain companies have this math test, and I recreated it pretty well, I think. Oh, and I implemented this thing I call a "personal coach". It's a GraphRAG on my whole journal (all local). It has all the features I want and is great at answering questions solely by combining my notes.
Yea I was on the doom spiral thinking vibe coding/agentic engineering is the future. I didn’t love my results. I’m back to hand coding things and my quality of life is so much better.
Because I want the result, not the journey.
I code to build things.
The journey is made up of little results. If you like having results, that implies liking the journey as well.
LLMs take the "little results" away, and that ruins the whole fun. And sometimes the final result takes you somewhere you didn't want to go.
The reason for the things you've described is that LLMs are forgetful. They can't remember context and have to re-research the code almost every time you prompt, even code they wrote themselves. This leads to re-implementing the same features with different code, duplicated code, missed corner cases, etc.
Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.
Here it's the same: the moment things get too much, it starts hallucinating and missing important things. It also depends on what model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can drop to 25% of its productivity as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb.
Other models are just asking too many questions...
There are some tips and tricks you can follow, and they're similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly telling it to read that information before it starts.
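A minimal sketch of that "save and re-use" workflow, assuming a plain NOTES.md file as the session memory; the helper names and file name are made up for illustration, not any tool's actual API:

```python
# Sketch: persist what the model learned, then prepend it to the next
# session's prompt. NOTES.md and both helpers are hypothetical.
from pathlib import Path

NOTES = Path("NOTES.md")

def save_learnings(text: str) -> None:
    """Append what the model learned this session to the notes file."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n")

def build_prompt(task: str) -> str:
    """Prepend saved notes so the next session starts with prior context."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    preamble = f"Read these notes from earlier sessions first:\n{notes}\n" if notes else ""
    return preamble + f"Task (keep it small): {task}"

save_learnings("- auth lives in src/auth.py; tokens expire after 15 min")
print(build_prompt("add a logout endpoint"))
```

The same idea works with any agent tool that supports a project instructions file: the point is just that the knowledge survives outside the model's context window.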
>Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.
LLMs don't hallucinate because they get overwhelmed and tired JFC.
Try this - For the same task, try the same prompt three times with totally different framing - do it fast, be comprehensive, find stuff I’ve missed, etc.
Then throw away the ones you don’t like.
It also prevents reinforcement of your incoming pov.
I’ve found this has made me way way better at steering.
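The three-framings trick above can be sketched as a simple fan-out; `call_llm` here is a stand-in placeholder, not a real API, so the sketch runs offline:

```python
# Sketch: run the same task under three different framings, then keep
# only the drafts you like. call_llm is a hypothetical stand-in.
def call_llm(prompt: str) -> str:
    # Placeholder: echo the framing so the sketch is runnable offline.
    return f"[draft for: {prompt}]"

TASK = "review this function for bugs"
FRAMINGS = [
    "Do it fast, just the obvious issues: ",
    "Be comprehensive, list everything: ",
    "Find stuff I've missed, be adversarial: ",
]

drafts = [call_llm(framing + TASK) for framing in FRAMINGS]
for draft in drafts:
    print(draft)
```

Because each framing pushes the model in a different direction, comparing the drafts also surfaces where your own initial framing was biasing the answer.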
I definitely feel this exact sentiment. I’m wondering if it’s actually the model quality degrading or if it’s me lol.
> It understands and acknowledges every request, idea, vision, flaw, structure, requirement, needs and just ignores and fails to implement it and cannot consistently think through it. I just can’t believe that.
Believe it. You're anthropomorphizing. It doesn't understand anything. There is no "thinking" going on. Yes, the point of LLMs as a service is to make money. Yes, the service is designed to maximize profit. Yes, there are dark patterns baked into the system. Yes, keeping you addicted and using the service is part of the business model. This isn't human instrumentality, it's just capitalism.
Until you realize the machine isn't qualitatively superior to your own mind and your own efforts, you're just going to keep torturing yourself because your nature forces you to maximize your productivity at any cost, which given your false assumptions about LLMs means ceding as much of yourself to the machine as possible and suffering its inadequacies. I use "you" collectively here because it seems like a lot of people have worked themselves into this corner where they don't like what LLMs do for them but feel compelled to use them anyway.
It's just a tool. If you don't like the tool, don't use the tool.
totally! lol