Before I read the article I thought this meant programming with "async".
Just call it Agent-based programming or somesuch, otherwise it's really confusing!
Exactly. I think the traditional meaning of “asynchronous programming” was coined first. So, let’s stick with that.
(Author here) Haha that is a great point. I was trying to come up with a term that described my personal workflow and specifically felt different than vibe coding (because it's geared towards how professional programmers can use agents). Very open to alternative terms!
I want to understand the distinction you're making against vibe coding.
In vibe coding, the developer specifies only functional requirements (what the software must do) and non-functional requirements (the qualities it must have, like performance, scalability, or security). The AI delivers a complete implementation, and the developer reviews it solely against those behaviors and qualities. Any corrections are given again only in terms of requirements, never code, and the cycle repeats until the software aligns.
But you're trying to coin a term for the following?
In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.
Did I understand it right?
If so, I've most often seen the latter called AI pair-programming or AI-assisted coding. And I'd agree with the other commenters: please DO NOT call it async programming (even "async AI programming" is too confusing).
> In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.
Yes
> If so, I've most often seen the latter called AI pair-programming or AI-assisted coding.
I specifically considered both terms and am not a fan:
* "pair-programming" is something that involves two people paying attention while writing code, and in this case, I'm not looking at the screen while the AI system writes code
* "AI-assisted coding" is generally anchored to copilots/IDE-style agents where people are actively writing code, and an AI assists them.
I totally hear you on conflating async. However, I think the appropriate term would clearly indicate that this happens without actively watching the AI write code. Unfortunately I think other terms like "background" may also be confusing for similar reasons.
Same! I was hoping this would have some insights into pitfalls or the like with javascript promises or python async, but alas no such luck.
Same here. I read the author's braintrust.dev as "brain - Rust - dev", so I was expecting a discussion on Rust async development.
This vision of AI programming is DOA.
The first step is "define the problem clearly".
This would be incredibly useful for software development, period. A 10x factor, all by itself. Yet it happens infrequently, or, at best, in significantly limited ways.
The main problem, I think, is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.
I guess maybe the context is cranking out REST endpoints or some other constrained detail of a larger thing. Then, sure.
> is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.
My experience is different. I find that AI-powered coding agents drop the barriers to experimentation drastically, so that ... yes, if I don't know what I want, I can go try things very easily, and learn. Exploration just got soooo much cheaper. Now, that may be a different interaction than what is described in this blog post. The exploration may be a precursor to what is happening in this blog post. But once I'm done exploring I can define the problem and ask for solutions.
If it's DOA you'd better tell everyone who is currently doing this, that they're not really doing this.
I disagree with being detailed; many times I want the AI to think of things itself, and half the time it comes up with something I like that I wouldn't have thought of.
The thing I would add is to retry the prompt, don't tell it to fix a mistake. Rewind and change the prompt to tell it not to do what it did.
I agree there is a lot of value to have it do, what it considers, the obvious thing.
It is almost by definition what the average programmer would expect to find, so it's valuable as such.
But the moment you want to do something original, you need to keep high-level high-quality documentation somewhere.
(Author here) I can certainly appreciate having an alternate perspective, but I think it's unfair to say it's DOA. I've personally used this workflow for the last 6 months and shipped a lot of features into our product, including the lowest levels of infra all the way to UI code. I definitely think there is a lot to improve. But it works, at least for me :)
Figuring out what you want is the hard part about programming. I think that's where AI augmentation will really shine, because it lowers the time between iterations and experiments.
That said, this article is basically describing being a product owner.
I did this early in my career as a product owner with an offshore team in India... Write feedback/specs, send them over at end of day US time. Have a fresh build ready for review by start of business.
Worked amazingly when it worked. Really stretched things out when the devs misunderstood us or got confused by our lack of clarity and we had to find time for a call... Also eventually there got to be some gnarly technical debt and things really slowed down.
I think it can only work if the product owner literally owns the product as in has FULL decision making power about what goes or doesn't go, etc. It doesn't work when a product manager is a glorified in-between guy, dictating the wishes of the CEO through a game of telephone from management.
You’ll have to be more specific about what you mean by “product owner”, because that’s a very nebulous job title. For example, how technical is this product owner? Are they assumed to “just know” that they’re asking for an overly complex, expensive technical solution?
I'd guess they'd be assumed to "just know" to trust the developers working with them on that?
Agreed. A glorified go between person is rarely going to succeed at delivering something good.
> it can only work if the product owner literally owns the product as in has FULL decision making power
This seems like a fairly rare situation in my experience.
It's not uncommon in the sort of solo-dev bootstrapped startup that is going wild for AI coding right now though.
This works until you get to the point that your actual programming skills atrophy due to lack of use.
Face it, the only reason you can do a decent review is because of years of hard won lessons, not because you have years of reading code without writing any.
You're right, reviews aren't the way forward. We don't do code reviews on compiler output (unless you're writing a compiler). The way forward is strong static and analytic guardrails plus stochastic error correction: multiple solutions proposed with an LLM as judge before implementation, and multiple code-review agents with different personas prompted to be strict/adversarial but not nit-picky. Back that with robust test suites that have themselves been through multiple passes of audits and red-teaming by agents. You should rarely have to look at the code, it should be a significant escalation event, like when you need to coordinate with Apple due to Xcode bugs.
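Not endorsing it, but for concreteness, here's a minimal TypeScript sketch of the loop being described. `complete()` is a hypothetical placeholder for whatever model call you'd use, not a real API:

    // Hypothetical "propose N candidates, LLM-as-judge, adversarial review" loop.
    // complete() is a placeholder; wire it to your actual model provider.
    async function complete(prompt: string): Promise<string> {
      throw new Error("not implemented: call your LLM provider here");
    }

    async function proposeAndJudge(task: string, maxRounds = 3): Promise<string> {
      for (let round = 0; round < maxRounds; round++) {
        // 1. Sample several independent candidate implementations.
        const candidates = await Promise.all(
          [0, 1, 2].map(() => complete(`Implement this task:\n${task}`)),
        );

        // 2. Judge model picks the strongest candidate before anything lands.
        const verdict = await complete(
          `Task:\n${task}\n\n` +
            candidates.map((c, i) => `--- Candidate ${i} ---\n${c}`).join("\n") +
            `\n\nReply with only the number of the strongest candidate.`,
        );
        const best = candidates[Number(verdict.trim())] ?? candidates[0];

        // 3. Adversarial reviewer persona: strict, but told not to nit-pick style.
        const review = await complete(
          `You are a strict, adversarial code reviewer. Ignore style nits.\n` +
            `List real defects, or reply exactly LGTM:\n${best}`,
        );
        if (review.trim() === "LGTM") return best; // tests/static analysis gate it next
      }
      throw new Error("escalate to a human"); // the rare look-at-the-code event
    }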
> You should rarely have to look at the code, it should be a significant escalation event
This is the bit I am having problems with: if you are rarely looking at the code, you will never have the skills to actually debug that significant escalation event.
Good fucking luck writing adequate test suites for qualitative business logic.
If it's even possible, it will be more work than writing the code manually.
Coding interview of the future: "Show us how you would prompt this binary sort."
not a joke.
Also, the future you are referring to is... like... 6 weeks from now.
(Author here) Personally, I try to combat this by synchronously working on 1 task and asynchronously working on others. I am not sure it's perfect, but it definitely helps me avoid atrophy.
For generative skills I agree, but for me the real change is in how I read and debug code. After reading so much AI-generated code with subtle mistakes, I can spot errors much quicker even in human-written code. And when I can't, that usually means the code needs a refactor.
I'd compare it to gym work: some exercises work best until they don't, and then you switch to a less effective exercise to get you out of your plateau. Same with code and AI. If you're already good (because of years of hard won lessons), it can push you that extra bit.
But yeah, default to the better exercise and just code yourself, at least on the project's core.
What do you mean you can spot errors much quicker?
I mean that I've read so much AI-generated code with subtle mistakes that my brain jumps straight to the likely failure point, and I've noticed it generalizes. Even when I look at an OSS project I'm not super familiar with, I can usually spot the bugs faster than before. I'll edit my initial response for clarity.
> subtle mistakes that my brain jumps straight to the likely failure ... I can usually spot the bugs faster than before
doubt intensifies
Doubt accepted. A spot-the-bug challenge on real OSS/prod code would be fun.
What the article describes is:
1. Learn how to describe what you want in an unambiguous dialect of natural language.
2. Submit it to a program that takes a long time to transform that input into a computer language.
3. Review the output for errors.
Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.
No, it sounds like the work of a product manager, you’re just working with agents rather than with developers.
Product managers never get that right though. In practice it always falls back on the developer to understand the problem and fill in the missing pieces.
In many cases it falls on the developer to talk the PM out of the bad idea and then into a better solution. Agents aren’t equipped to do any of that.
For any non-trivial problem, a PM with the same problem and 2 different dev teams will produce drastically different solutions 99 times out of 100.
Agree with the last bit, dev teams are even more non-deterministic than LLMs.
Is it the work of a product manager? I believe the latter only specifies features and business rules (and maybe some other specifications like UX and performance), but no technical details at all. That would be like an architect reviewing the brand of nails used in a house's framing.
Tech Lead, not PM. (in my experience)
so... normal team lead -> manager pipeline?
Agreed.
> Hand it off. Delegate the implementation to an AI agent, a teammate, or even your future self with comprehensive notes.
The AI agent just feels like a way to create tech debt on a massive scale while not being able to identify it as tech debt.
I have a static analysis and refactoring tool that does wonders identifying duplication and poor architecture patterns and providing a roadmap for agents to fix the issues. It's like magic: just point it at your codebase, then tell the agent to grind away at the output (making sure to come up for air and rerun tests regularly), and it'll go for hours.
This is what a lot of business leaders miss.
The benefit you might gain from LLMs is that you are able to discern good output from bad.
Once that's lost, the output of these tools becomes a complete gamble.
The business leaders already can't discern good from bad.
I don't know why we need a term like "Async AI programming." This is literally what you would do if you were a Tech Lead directing a team of other developers. You define what you want and hand it to one of your devs.
This is just being a TL. The agent is an assistant or a member of the team. I don't know why we need to call it "Async AI programming", unless we want to shy away from or obscure the idea that the agent is actually performing the job a human used to perform.
That's not the kind of async programming I was expecting.
I was ready for a deep-dive into things like asyncio in python; where it came from and what problems it promised to solve!
(Author here)
Hi everyone, thanks for the spirited debate! I think there are some great points in the discussion so far. Some thoughts:
* "This didn't work for offshoring, why will it work all of a sudden?" I think there are good lessons to draw from offshoring around problem definition and what-not but the key difference is the iteration speed. Agents allow you to review stuff much faster, and you can look at smaller pieces of incremental work.
* "I thought this would be about async primitives in python, etc" Whoops sorry, I can understand how the name is confusing/ambiguous! The use of "async" here refers to the fact that I'm not synchronously looking at an IDE while writing code all the time.
* "You can only do this because you used to handwrite code". I don't think this workflow is a replacement for handwriting code. I still love doing that. This workflow just helps me do more.
I do think that AI will work well compared to the low end of offshoring, where, to get good results, you need people who could do the work themselves tightly involved. AI will give you slop code faster and cheaper, and that is sometimes enough.
The question is how it compares to the medium level of offshoring. Near term, I think that at comparable cost (hundreds of dollars per week), it'll give faster results at an acceptable tradeoff in quality for most uses. I don't think most companies want to spend thousands of dollars a month on developer tools per developer though... even though they often do.
It's just a different workflow IMO. AI is effectively real-time, whereas offshoring, no matter the quality, is something you have to do in batches.
Idk if I'm a luddite or what
I actually like writing code. It does get tedious, I get that, when you're making yet another component. But I don't feel joy when you just will a bunch of code into existence with words. Typing it out is like actively participating in development. Which, yeah, people already shortcut with libraries/frameworks/boilerplate.
My dream is to not be employed in software and do it for fun (or work on something I actually care about)
Even if I wrote some piece of crap, it is my piece of crap
He says define the problem like it's the easy part. If we had full specs, life would be a lot easier.
They don't, really: once the spec gets detailed enough, it becomes so large and unwieldy that nobody with any actual power reads the thing.
An executive at a large company once told me about something where a spec had been written and reviewed by all relevant stakeholders: "That may be what I asked for, but it's not what I want."
I am teaching asynchronous programming in TypeScript to junior developers.
And I find it really tricky to tell them that async and await do MAJOR magic behind their back to make their code read like synchronous code.
And then, I need to detail very precisely what "Promise.all()" (and "return") really mean in the context of async/await. Which is something that (I feel) could have been abstracted away when the async/await syntax was defined, making the full magic much more natural.
Async/await themselves are not that much magic really, it's a bit of syntactic sugar over promise chains. Of course, understanding promises is its own bag.
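For the curious, a quick sketch of roughly what the sugar expands to (the endpoint is made up; both functions behave the same):

    // With async/await: the function suspends at each await.
    async function getUserName(id: string): Promise<string> {
      const res = await fetch(`/api/users/${id}`);
      const user = await res.json();
      return user.name;
    }

    // Roughly the equivalent promise chain, written out by hand.
    function getUserNameChained(id: string): Promise<string> {
      return fetch(`/api/users/${id}`)
        .then((res) => res.json())
        .then((user) => user.name);
    }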
ChatGPT explanation: https://chatgpt.com/share/68c30421-be3c-8011-8431-8f3385a654...
During my interviews, maybe I should ask them to read and understand this:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
prior to any dev they plan to do in JS/TS.
PS: 10 bucks that none of them would stay.
That reminds me of my Unix guru of the '90s: "man pages ARE easy to read".
[spoiler: "when you are already an expert in the tool detailed in them"]
To elaborate a bit, telling them that you should not "aList.forEach(asyncMethod)", but you'd better do "Promise.all(aList.map(asyncMethod))", is NOT very easy for them.
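For anyone following along, a minimal sketch of the trap (top-level await, in a module; the URLs are made up):

    const urls = ["/a", "/b", "/c"];

    // BAD: forEach discards the promises the async callback returns,
    // so nothing is awaited and rejections go unhandled.
    urls.forEach(async (url) => {
      await fetch(url); // the surrounding code has already moved on
    });

    // GOOD: map collects the promises; Promise.all awaits them all
    // and rejects if any single fetch fails.
    await Promise.all(urls.map((url) => fetch(url)));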
Man are you going to be disappointed when you read the article.
Man, for the first time on HN, I am teased to actually read the article.
Update: oh my god, I read the article. And feel completely cheated!!!!
Note for my future self: continue to read only the HN comments
Sounds great in principle, but I have been trained to value individuals and interactions over processes and tools, and working software over comprehensive documentation.
I think there is a confusion here between Coding and Programming. I think what is described here as "Async Programming" is just programming the way it should be, which is different than coding. This is what Leslie Lamport pointed out a while back [1] and recently [2]. According to him, programming has 3 stages:
1- Define what task the program should perform
2- Define how the program should do it
3- Write the code that does it
Most SWEs usually skip straight to step 3, without giving 1 and 2 much thought, and implement their code iteratively. I think step 3 also includes testing, review, etc.
With AI, developers are forced to think about the functionality and the specs of their code in order to pass it to AI to do the job, and can no longer just jump to step 3. Delegating to other devs requires the same process: senior engineers usually create design docs and pass them to junior engineers.
IMO automated verification and code reviews are already part of many developers' workflows, so it's nothing new.
I get the point of the article though, that there are new requirements for programming and things are different in terms of how folks approach programming. So I do not agree that the method is new or should be called "async", it's the same method with brand new tools.
Off topic, but I assume the name braintrust comes from Creativity, Inc. Amazing book by Pixar co-founder Edwin Catmull.
Those of us who worked in hardware, or are old programmers, will find this familiar. Chip/board routing jobs that took days to complete. Product build/test jobs that took hours to run.
See also that movie with Johnny Depp where AI takes over the world.
Oh, async?
> This version of "async programming" is different from the classic definition. It's about how developers approach building software.
Oh, async = you wait until it is done. How interesting.
Redefining commonly understood phrases to mean something else in your own little world makes you look ignorant.
Indeed! Why not just call it "asynchronous software development" or something similar? "asynchronous programming" is a bad choice, partly because it will be un-googleable.
The intent is more likely clickbait.
Not that catchy (even in fewer words).
When this bubble finally pops, someone is going to have to clean up all the nonsense AI code out there.
Dream on!
This kind of workflow really doesn't appeal to me in the slightest. Maybe it works for some people, but it just seems to drain all the pleasure out of programming. For me, at least, solving the little problems is like solving little satisfying puzzles, which makes it easier to maintain motivation.
It takes me longer to thoroughly review code I didn't write, especially code written by a junior developer.
Why would I choose to slow myself down in the short term and allow my skills to atrophy in the long term (which will also slow me down)?
I actually enjoy writing code... most of the time. I find myself turning to AI to write code I have an aversion to writing, not as a substitute for my own practice, but to get code that I would not have written in the first place. Like benchmarks, bash scripts, dashboards, unit tests, etc.
I can live without these things, but they're nice to have without expending the effort to figure out all the boilerplate necessary for solving very simple problems at their core. Sometimes AI can't get all the way to a solution, but usually it sets up enough of the boilerplate that only the fun part remains, and that's easy enough to do.
That sounds reasonable and similar to how I use it.
Managing a team of interns isn't fun, and I have no idea why someone who is a competent developer would choose to do that to themselves.
Effective async programming specs read like technical documentation.
The thing I like least about software engineering will now become the primary task. It's a sad future for me, but maybe a great one for some different personality type.
Awful font.