This was an insightful read despite sounding like the usual shallow AI-pocalypse posting. I had a suspicion that the initial velocity coding assistants made achievable would cool down rather quickly, as we are all aware of the shortcomings of coding agents. The illustrated K confirms my gut feeling that the amount of steering, handholding, and double-checking would eventually consume more human resources than a slower build process.
Perhaps current incentives and agent behavior will reshape in the near future - right now even the SOTA models give me Golden Retriever vibes in their eagerness to please and work for me. The constant paranoia I developed when working in larger or complicated code bases still has me leading with "don't code, talk to me", even after the advent of system prompts and MD-based skills.
Nice collection of sources, and the author doesn't make logical leaps or start evangelizing the way the usual developer crowd tends to when writing.