Yeah an exec asked me about the METR study, somewhat alarmed. I told him they were measuring productivity of open source developers. He didn't understand the implications. He was relieved when I informed him that he should still expect productivity gains because his devs were bad enough at their jobs to see a boost.
Rates need to be higher, we are clearly in stagflation outside of AI. Once AI hype cools the stock market will correct. But we need to suck out all of the excess cash, cool consumer demand, which in turn will force businesses to lower their prices to attract customers.
I am not sure why you are being downvoted, but raising interest rates does not get you out of the 'stag' in stagflation; it actually makes it worse. See EU governments circa 2008 for a recent example of where that kind of economic policy gets you.
I think what many people are missing is that everybody values AI companies as if they're special, while I'm seeing customers approach it as a commodity - once a task is done, it's done. It has about as much lock-in as a brand of toilet paper, and the economics look more like steel production than VPSes or social media.
There are some qualitative differences, but none that seem to last for more than 6 months. On the flip side, the costs are energy and location, and scaling won't bring those down.
In a twist of irony, big tech might structurally reduce their high profit margins because of AI.
I'm bullish on AI, and competition is great for consumers in the end.
But first the bubble has to pop, and it is going to hurt.
The post talks about how a lot of AI investment could just be a waste which would be bad for the economy and stocks.
My comment is really off-topic; it's that if AI really does work well, it'll put half of us out of work, leaving us to do the non-knowledge work that robots can't do yet.
>Business spending on AI is more than all consumer spending combined as a share of GDP.
Your phrasing there is misleading. The article says, "In the first half of this year, business spending on AI added more to GDP growth than all consumer spending combined," and the key is the "added more to GDP growth" part.
Growth in consumer spending was sluggish, while growth in business AI spending was insane, so in terms of how much the economy grew, the rise in AI spending exceeded the rise in consumer spending. Which is pretty amazing, actually. But, as far as the total amount of spending, business AI is at most only a few percentage points of GDP (and that's if you interpret "business AI spending" very broadly), while consumer spending is somewhere around 67-70% of GDP.
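The distinction between "share of GDP" and "contribution to GDP growth" can be made concrete with made-up round numbers (none of these figures come from the article; they are purely illustrative):

```python
# Made-up round numbers, in $ trillions, for one period of growth.
gdp = 30.0
consumer_prev, consumer_now = 20.00, 20.10   # ~67% of GDP, sluggish growth
ai_prev, ai_now = 0.30, 0.45                 # ~1.5% of GDP, rapid growth

consumer_contribution = consumer_now - consumer_prev   # +0.10
ai_contribution = ai_now - ai_prev                     # +0.15

# AI spending "added more to GDP growth" than consumer spending...
print(ai_contribution > consumer_contribution)   # True
# ...while remaining a tiny share of total spending:
print(ai_now / gdp, consumer_now / gdp)          # 0.015 vs 0.67
```

A small, fast-growing category can dominate the *change* in GDP while barely registering in the *level* of GDP, which is exactly the distinction the article's phrasing turns on.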
I don't think it's doing that much propping up, because the spend is being thrown into a couple of piles. When companies hire some AI superstar for millions, it isn't as if much of that money circulates in the wider economy, the way it would if you invested it in a diversified portfolio or hired a lot of lower-wage people, who spend a much higher share of their paycheck on things that keep it circulating in local economies. Likewise if that spend goes to server racks or electricity. The money just doesn't have many avenues to spill out into different subsets of the economy.
This is probably what is going on when we see McDonald's executives say there are basically two economies in the US at the moment: one doing very well for itself and the other experiencing a lot of hardship.
It's a good thought exercise, but the opening hinges on a premise that feels obviously wrong: that experts in doing work by hand will do that work faster with AI.
My experience is that, in many cases, people who are very good at doing something by hand are excellent at the process of doing that thing by hand, not generally excellent at doing that thing or talking about that thing or teaching others how to do that thing. And I've found it to be true (sometimes particularly true) for people who have done that thing for a long time, even if they're actively interested in finding new and better ways to do the work. Their skills are fine-tuned for a manual way of working.
Working with AI feels very, very different from writing software by hand. When I let myself lean into AI's strengths, it allows me to move much faster and often without sacrificing quality. But it takes constant effort to avoid the trap of my well-established comfort with doing things by hand.
The people who are best positioned to build incredible things with AI do not have or do not fall into that comfortable habit of manual work. They often aren't expert engineers (yet) in the traditional sense, but in a new sense that's being worked out in realtime by all of us. Gaps in technical understanding are still there, but they're being filled extremely fast by some of the best technology for learning our species has ever created.
It's hard for me to watch them and not see a rapidly approaching future where all this AI skepticism looks like self-soothing delusion from people who will struggle to adjust to the new techniques (and pace) of doing work with these tools.
This is the trouble with AI as an "assistant". If it's good enough to do the job unattended, then there's a big increase in productivity. If it needs constant handholding, maybe not.
I believe it’s a bubble, and that it will burst, but I’m only a little worried about the market.
Yeah, there could be a sell-off of tech, but the people who sold will reinvest elsewhere.
Also, 401(k)s still exist. Every week people get paychecks that automatically buy stocks, no matter the market situation. The only thing that can bring that down is fewer people having jobs with retirement plans. If AI busts, there will be some amount of re-hiring somewhere to cover it.
Could be scary for people with tech stocks, but less scary with index funds going long term.
I think it may be good for jobs in the tech sector when it bursts. The $300B invested in hardware and data centers was not being invested in labourers like ourselves.
I think that will happen once execs learn that all the programmers they fired need to be rehired, cause AI ain't going to replace them, and that the AI tools need to be integrated with all the current tech.
This article presents the apparently widespread, but incorrect and, frankly, boring view that "coding" is the bottleneck in development. The following statement summarizes this view best:
> Many knowledge-work tasks are harder to automate than coding, which benefits from huge amounts of training data and clear definitions of success.
which implies that "coding" is not knowledge work. If "coding" is understood as the mere typing of requirements into executable code, then that simply is not a bottleneck and the gains to be had there are marginal (check out Amdahl's law). And if "coding" is to be understood as the more general endeavour of software development, then the people making these statements have no fucking idea what they're talking about and cannot be taken seriously.
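The Amdahl's law point above can be sketched numerically (the 20% and 5x figures are assumptions picked for illustration, not claims from any study):

```python
# Amdahl's law: overall speedup when only a fraction p of the total
# work is accelerated by a factor s; the remaining (1 - p) is unchanged.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Suppose the mere typing of code is 20% of development work, and AI
# makes that part 5x faster; design, review, debugging, and
# communication are untouched.
print(amdahl_speedup(0.2, 5))   # ~1.19, i.e. only ~19% overall gain
# Even an infinitely fast code-typer caps out at 1 / (1 - p):
print(amdahl_speedup(0.2, 1e9))  # ~1.25
```

Under these assumed numbers, even a huge speedup on the "typing" fraction yields marginal end-to-end gains, which is the sense in which coding per se is not the bottleneck.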
It's a familiar refrain: "We have a ton of tickets, each pretty simple, but not enough hands to devote to all of them." It usually turns out that "pretty simple" bears a lot of weight there, and that each ticket assumes more than meets the eye.
So was the article conflating the two, or was it just pointing out that software developer roles typically involve many hats that can be, and sometimes are, held by others in the process, i.e. systems analyst, business analyst, architect, designer, PM, QA, etc.?
I don't know exactly where AI is going to go, but the fact that I keep seeing this one study of programmer productivity, with like 16 people with limited experience with Cursor, uncritically quoted over and over assures me that the anti-AI fervor is at least as big of a bubble as the AI bubble itself.
The joy of this particular study (the METR one) is that it maps pretty well to a lot of professional programming work, hence why it gets reported a lot.
Do you have some other, better study in mind that people should be talking about? My sense is that comparative data is scarce, because leaders who believe in AI tend to believe in it so strongly that they mandate adoption rather than collecting data from organic growth.
To be fair, when you look at most studies that are hemmed and hawed over in the press, they tend to look like that. Well controlled, high-n, replicated in independent cohorts: the readership of mass-market media doesn't understand any of this nuance. The bar is merely whatever seems to validate an existing bias, whatever the topic, since the incentives are built around engagement, not scientific truth.
> uncritically quoted over and over assures me that the anti-AI fervor is at least as big of a bubble as the AI bubble itself.
Counterpoint: the AI fanboys and AI companies, with all their insane funding, couldn't come up with a better study and a bigger sample size, because LLMs simply don't help experienced developers.
What follows is that the billion-dollar companies just couldn't create a better study, either because they tried and didn't like the productivity numbers not being in their favor (very likely), or because they are so sloppy and vibey that they don't know how to run a proper study (I wouldn't be surprised; see ChatGPT's latest features, like "study mode", which got its own blog post! You know the level is not very high).
Again, until there is a better study, the consensus is that LLMs are a 19% productivity drain for experienced developers, and if they help certain developers, then those developers are most likely not experienced.
I mean, surely, if and when they demonstrably work, the sceptic can just adopt them, having lost nothing? There seems to be a new one every month anyway, so it’s not like experience from using the one from three years ago is going to be particularly helpful.
There seems to be an attitude, or at least a pretended attitude, amongst the true believers that the heretics are dooming themselves, left behind in a glorious AI future. But the AI coding tools du jour are completely different from the ones a year ago! And in six months they'll be different again!
LLMs have existed in some shape or form for 5-6 years already. How long do I have to wait for Claude to actually do something, and for me to start seeing it in OSS?
Because I am pretty sure NFTs still have people who swear by them and say "just give it time". At what point can we confidently declare that NFTs are useless without the cultist fanbase going hurr durr? What about LLMs?
AI is currently being treated as if it's a multi-trillion dollar market. What if it turns out to be more of a, say, tens of billions of dollars market?
> What if it turns out to be more of a, say, tens of billions of dollars market?
If it was treated as a multi-trillion dollar market, and that was necessary to justify the current investments, then it turning out to be a tens of billions of dollar market would make it not useful.
We can go to the most extreme example: human life, which is presumably invaluable. That would mean that, no matter what, if we have an effective treatment for a life-threatening disease, it's useful. But it clearly is not: if the single treatment cost the GDP of the entire country, we should clearly not use it, even if we technically could. The treatment is simply not useful.
For AI the case is much simpler: If the AI, that we are currently building, will in effect have destroyed economic value, then it will not have been useful (because, as far as I can tell, at a minimum the promise of AI has to be positive economic value).
This is like the dot-com bubble. The web was ultimately transformative, but the early days saw all kinds of wild things that didn't work so well, and massive investment in them.
It is genuinely a useful technology. But it can't do everything, and we will have to figure out where it works well and where it doesn't.
For myself, I am not a huge user of it. But on my personal projects I have:
1) built graphing solutions in JavaScript in a day despite not really knowing the language or the libraries. This would have taken me weeks (elapsed) rather than one Saturday.
2) finished a large test suite, again in a day that would have been weeks of elapsed effort for me.
3) figured out how to intercept messages to alter default behaviour in a Java Swing UI. Googling didn't help.
So I have found it to be a massive productivity boost when exploring things I'm not familiar with, or automating boring tests. So I'm surprised that the study says developers were slower using it. Maybe they were holding it wrong ;)
A few possibilities for me are: over-engineering, rabbit holes, trying to build on top of something that only 80% works, and trying to fix something when you don't understand how it works. Also integrating with existing code bases: it will ignore field-name capitalization, forget about fields, things like that.
I prefer working with AI, but it ain't perfect, for sure.
Either AI is a total fraud, completely useless (at least for programming), or it's a deus ex machina.
But reality has more than one bit with which to answer questions like "Is AI all hype?"
Despite only recently becoming a father and feeling like I am in my prime, I've seen many hype cycles.
And IT is an eternal cycle of hype. Every few years a new holy cow is driven through the village to bring salvation to us all and rid us of every problem (un)imaginable.
To give a few examples:
Client-Server
SPAs
Industry 4.0
Machine Learning
Agile
Blockchain
Cloud
Managed Languages
To me LLMs are nice, though no revelation.
I can use them fine to generate JS or Python code, because apparently the training sets were big enough, and they help me by writing boilerplate code I was gonna write anyway.
When I try to use them to help write Rust or Zig, though, they fall extremely short.
LLMs are extremely overhyped.
They made a few people very rich by promising too much.
They are not AI by any means; "AI" is marketing.
But they are a tool. And as such they should be treated. Use them when appropriate, but don't hail them...
"Is it hype?" is a different question, one that's just not very interesting to me. Anything will be described as both hyped and good, depending on who you ask.
You were previously talking about AI being both a bubble and useful. For reference, Wikipedia defines a bubble as "a period when current asset prices greatly exceed their intrinsic valuation". I find that hard to reason about.

One way to think about it is that all AI does is create economic value, and for it to be useful it would have to create more economic value than it destroys. But that's hard to reason about without knowing the actual economics of the business, which the labs are not very transparent about.

On the other hand, all the infra building by the big players shows some level of confidence that we are way past "is it going to be useful enough?". That is not what reasonable people do when they think there's a bubble; at least, it would be unprecedented.
Ah, this is such a silly non-issue at this point in time. As long as I have to wait ~10 minutes for my agent to return code, we are ~10 minutes of compute per request away from a bubble, and 10 minutes covers a lot of compute. Everything we are building right this second will be instantly eaten up by improving already existing use cases (code, image generation, video generation) for a very long time.
The effects of the technology would have to rival those of, I don't know, soap and the locomotive combined in order for us not to be in a bubble.
It has swallowed nearly all discourse about technology and almost all development; nearly every single area of technology is showing markers of recession... except AI, which has massively inflated salaries and valuations.
I can't even think of a scenario where we don't look back on this as a bubble. What does the tech need to be able to do in order to cover its investment?
Replace a significant portion of the workforce reliably (IMHO). And if that happens, I wonder what's next. If we automate, say, 30% of jobs, these people won't be employable anymore without a drastic change in their careers' trajectories (they have been automated out of work once; any company would eventually do the same).
When that happens, they will end up overcrowding other job sectors, pushing salaries down for people already in those fields. Once that happens, the loss of purchasing power will hit every sector and drag down economies anyway.
So, to summarize my thoughts: either we are in a bubble that will pop and drag down AI-related stocks, or it is not a bubble because the tech will actually deliver on what CEOs are touting, and there will be high unemployment and lower salaries for many, which in turn will mess up other parts of the economy.
Happy to be wrong, but given the hype on AI the winning conditions have to be similarly high, and millions losing their jobs will for sure have huge repercussions.
Look at history: so many jobs have been automated, yet we still have 95%+ employment in most countries, and we've effectively "doubled" the workforce by pushing dual-income households as a standard.
I’m not sure how our obscenely wealthy overlords think things will play out when we’re all wage-slaves barely able to scrape by. It hasn’t worked out for any society historically.
I think that misses the nuance of what happens as this shakes out. We have ghost towns all over the west and cities with a fraction of their historic population in the middle of the country. Maybe employment is 95% in whatever county that ghost town is in, but that is only because people have left for other jobs and not stuck around to be destitute. And this was made possible thanks to there being other jobs available someplace else.
Now, what happens if there are no jobs available someplace else? Would the sort of leadership we have in power these days consider a New Deal-esque plan for mass public works and employment opportunities when the private sector has none available? Or would they see it as an opportunity to bring back a sort of feudalism or plantation economy where people aren't really compensated at all and are allowed to starve if not immediately economically useful?
That said: In principle, if the tech did what the strongest proponents forecast it would do, it would change "the economy" in a manner for which we don't even have a suitable metaphor, let alone soap and locomotives.
Now, I do not believe the AI on the horizon at the moment[0] will do any of that. The current architectures are making up for being too stupid to live[1] by doing its signal processing faster than anything alive by the same degree to which jogging is faster than continental drift[2].
As for "minimum" needed for this to not be a bubble, it needs to provide economic value in the order of 0.1 trillion USD per year, but specifically in a way that the people currently investing the money can actually capture that as profit.
The first part of that, providing economic value, I can believe: being half-arsed with basic software development, being first-line customer support, supplying tourist-grade translation services, etc. I can easily believe this adds up to a single percentage point of world GDP, 10x what I think it needs to be.
Capturing any significant fraction of that, though? Nah. I think this will be like spreadsheets (Microsoft may be able to charge for Office, but Google Docs and LibreOffice are free) or Wikipedia. Takes an actual business plan to turn any of this into income streams, and there's too many people all competing for the same space, so any profit margin is going to be ground to nothing until any/all of them actually differentiate themselves properly.
[0] Not that this says very much given how fast the space is moving; it's quite plausible a better architecture has already been found but it isn't famous yet and nobody's scaled it up enough to make it look shiny and interesting, after all Transformers took years before anyone cared about them.
[1] Literally: no organic brain could get away with being this slow vs. number of examples needed to learn anything
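The value-vs-capture arithmetic above, with rough assumed figures (world GDP is approximated; the $0.1T/year bar is the commenter's own threshold):

```python
# Rough, assumed figures for illustration only.
world_gdp = 110e12                  # USD, approximate world GDP
value_created = 0.01 * world_gdp    # "a single percentage point of world GDP"
value_needed = 0.1e12               # ~$0.1T/year needed to justify the spend

# Plenty of headroom on value creation...
print(value_created / value_needed)   # 11.0, i.e. ~10x the bar
```

The catch, per the argument above, is that creating ~$1T/year of diffuse value says nothing about whether the investors can capture even the ~$0.1T/year slice of it as profit.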
AI can be a bubble and a real technological revolution at the same time; there is absolutely no reason these two things need to be mutually exclusive. The most recent example is the Internet in the late '90s and early 2000s. It was a bubble, it did crash, and it did eventually fulfill all of its promises and more. There is no contradiction here.
1) Overhype can cause a "winter" where the word itself becomes toxic once the bubble pops, leading to significant underinvestment in the period following. This has already happened twice with AI, and is part of why we use "Machine Learning" for concepts that used to be called AI: because AI was a toxic word that investors ran from.
2) A bubble sucks all the oxygen out of the room for all other technological endeavours. It’s strictly a bad thing as it can crush the entire technology sector (or potentially even the economy).
3) Bubbles might cause a return to form, but the internet was more like the railroads: once built, the infrastructure largely remained, and its being sold off "for cheap" led to a resurgence. AI has fewer of these core bits of infrastructure that will become cheap in a bust.
The difference from the dot-com era is that a website was immediately useful when it came out. Yes, there was too much speculative investment, but the inherent merits of the technology were plainly obvious.
This is not the case with AI. It is not even clear if these tools are useful. This really is much more an “Emperor has no clothes” situation, which is why this bubble is perhaps more dangerous.
> The effects of the technology would have to rival that of.. idk, soap and the locomotive combined? In order for us not to be in a bubble.
Can you quantify this, or are you going on vibes? If I compare this to previous bubbles (e.g. crypto, the dot-com bubble), in valuation-multiple terms we haven't even gotten started. And a number of tech companies eventually grew into and surpassed their bubble valuations.
Soap was essentially the onset of what we'd consider urban living. Before soap (and even for a long time after, to be fair), medical operations had a stupidly high morbidity and city life was also quite... unsurvivable. Soap itself may not have been an investment, but it unlocked a lot of capabilities for our civilisation; it's almost impossible to quantify how much.
Locomotives, unlike soap, had a definite upfront cost, and we invested enormous sums (up to 3% of GDP in some years in the 1800s), but the economic benefits were ridiculous.
I don't know why you keep bringing up soap. Soap, something anyone with the knowledge can make at home, is not comparable to GPUs and AI models, which both require massive capital investments and skills to produce and are thus able to command massive margins.
> Spending on AI data centers is so massive that it's taken a bigger chunk of GDP growth than shopping
You haven't compared this to previous bubbles like the internet, smartphones, and personal computing in general. Your argument isn't convincing.
But AI investment is outpacing all the things you mentioned.
I mentioned soap not because of capital investments, but because of the GDP growth that followed its invention. If AI's effect is qualitatively lower than the GDP growth caused by soap, then we're going to see a lot of unemployment. So it's a bubble: nothing could possibly match soap.
http://archive.today/anbRZ
I would see AI as a multiplying factor, and 3 times a negative number is 3 times worse.
The FAQ of the actual study contains a lot of useful info that addresses some of the comments here, for anyone curious:
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
When your markets are hyper-monopolized, they can just produce less and jack prices up even more.
> The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.
This feels like confusing the economy with the markets. Now, a market crash would cause economic fallout, but the markets are not the economy.
https://fortune.com/2025/08/06/data-center-artificial-intell...
> The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.
or, even worse perhaps the productivity gains will be high.
I'm not sure if this point is essential or pedantic, but they're not claiming that the economy is being propped up, just stock prices.
They are claiming both. Business spending on AI is more than all consumer spending combined as a share of GDP. That’s propping up the whole economy.
Thanks I did misunderstand that.
We might have some reserve power generation capacity after the bubble.
To me, the biggest problem with AI is verifying that the stuff it generates works before you build on top of it.
AI is not ready to be an "assistant". It can be used as a "tool" though. Far from self-driving, more like auto-parking and cruise control.
People with 401Ks can move money from stocks to bonds or money market accounts. Poof!
Gift link, as opposed to archive: https://www.theatlantic.com/economy/archive/2025/09/ai-bubbl...
(@dang, why can't I link it under https://news.ycombinator.com/item?id=45161656 ?)
This article presents the apparently widespread, but incorrect and, frankly, boring view that "coding" is the bottleneck in development. The following statement summarizes this view best:
> Many knowledge-work tasks are harder to automate than coding, which benefits from huge amounts of training data and clear definitions of success.
which implies that "coding" is not knowledge work. If "coding" is understood as the mere typing of requirements into executable code, then that simply is not a bottleneck and the gains to be had there are marginal (check out Amdahl's law). And if "coding" is to be understood as the more general endeavour of software development, then the people making these statements have no fucking idea what they're talking about and cannot be taken seriously.
It's a familiar approach. "We have a ton of tickets, each pretty simple, but not enough hands to devote to all of them." It usually turns out that the "pretty simple" bears a lot of weight there, and assumes more than meets the eye reading the ticket.
So was the article conflating the two, or was it just pointing out that software developer roles typically involve many hats that can be, and sometimes are, held by others in the process? I.e. systems analyst, business analyst, architect, designer, PM, QA, etc.
I don't know exactly where AI is going to go, but the fact that I keep seeing this one study of programmer productivity, with like 16 people with limited experience with Cursor, uncritically quoted over and over assures me that the anti-AI fervor is at least as big of a bubble as the AI bubble itself.
Obviously, more studies would be better, and that one study certainly isn't conclusive, but for now it is pretty much what is _available_.
The joy of this particular study (the METR one) is that it maps pretty well to a lot of professional programming work, hence why it gets reported a lot.
Do you have some other, better study in mind that people should be talking about? My sense is that comparative data is scarce, because leaders who believe in AI tend to believe in it so strongly that they mandate adoption rather than collecting data from organic growth.
To be fair, when you look at most studies that are hemmed and hawed over in the press, they tend to look like that. Well controlled, high n, replicated in independent cohorts: the readership of mass market media doesn't understand any of this nuance. The bar is merely whatever seems to validate an existing bias, whatever the topic, since the incentives are built around engagement and not scientific truth.
> uncritically quoted over and over assures me that the anti-AI fervor is at least as big of a bubble as the AI bubble itself.
Counterpoint: the AI fanboys and AI companies, with all their insane funding couldn't come up with a better study and bigger sample size, because LLMs simply don't help experienced developers.
What follows is that the billion dollar companies just couldn't create a better study, either because they tried and didn't like the productivity numbers not being in their favor (very likely), or because they are that sloppy and vibey that they don't know how to make a proper study (I wouldn't be surprised, see ChatGPT's latest features: "study mode" which had a blog post! and you know that the level is not very high).
Again, until there is a better study, the consensus is that LLMs are a 19% productivity drain for experienced developers, and if they help certain developers, then most likely those developers are not experienced.
How's that for an interpretation?
I never tell anyone they have to use AI tools. You do you. In a few years we will see who is better off.
I mean, surely, if and when they demonstrably work, the sceptic can just adopt them, having lost nothing? There seems to be a new one every month anyway, so it’s not like experience from using the one from three years ago is going to be particularly helpful.
There seems to be an attitude, or at least a pretended attitude, amongst the true believers that the heretics are dooming themselves, left behind in a glorious AI future. But the AI coding tools du jour are completely different from the ones a year ago! And in six months they'll be different again!
It has already been a couple of years. What time period should we revisit? And also how would we measure success?
LLMs have existed for 5-6 years already in some shape and form. How long do I have to wait for Claude to actually do something before I start seeing it in OSS?
- Cause currently what we see in OSS is LLM trash. https://www.reddit.com/r/webdev/comments/1kh72zf/open_source...
- And a large majority of users don't want that copilot trash in their default github experience: https://www.techradar.com/pro/angry-github-users-want-to-dit...
At what point that trash will become gold? 5 more years? And if it doesn't, at what point trash stays trash?
- When there is a study showing that trash is actually sapping 19% of your performance? https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
- When multiple studies show using it makes you dumb? https://tech.co/news/another-study-ai-making-us-dumb
Cause I am pretty sure NFT still has people who swear by them and say "just give it time". At what point can we confidently declare that NFTs are useless without the cultist fanbase going hurr durr? What about LLMs?
Something can be useful and still be a bubble at the same time.
AI is such a thing.
Care to explain?
Railways are clearly useful. However: https://en.wikipedia.org/wiki/Panic_of_1873#United_States (Interestingly, there were quite a few railway bubbles.)
AI is currently being treated as if it's a multi-trillion dollar market. What if it turns out to be more of a, say, tens of billions of dollars market?
> What if it turns out to be more of a, say, tens of billions of dollars market?
If it was treated as a multi-trillion dollar market, and that was necessary to justify the current investments, then it turning out to be a tens of billions of dollar market would make it not useful.
We can go to the most extreme example: human life, which is presumably invaluable. That would mean that, no matter what, if we have an effective treatment for a life-threatening disease, that's useful. But it clearly is not: if the single treatment cost the GDP of the entire country, we should clearly not do it, even if we technically could. The treatment is simply not useful.
For AI the case is much simpler: if the AI we are currently building turns out, in effect, to have destroyed economic value, then it will not have been useful (because, as far as I can tell, at a minimum the promise of AI is positive economic value).
This is like the dot com bubble. The web was ultimately transformative, but the early days saw all kinds of wild things that didn't work so well, and massive investment in them.
It is genuinely a useful technology. But it can't do everything and we will have to figure out where it works well and where it doesn't
For myself, I am not a huge user of it. But on my personal projects I have:
1) built graphing solutions in JavaScript in a day despite not really knowing the language or the libraries. This would have taken me weeks (elapsed) rather than one Saturday.
2) finished a large test suite, again in a day that would have been weeks of elapsed effort for me.
3) figured out how to intercept messages to alter default behaviour in a Java Swing UI. Googling didn't help.
So I have found it to be a massive productivity boost when exploring things I'm not familiar with, or automating boring tests. So I'm surprised that the study says developers were slower using it. Maybe they were holding it wrong ;)
A few possibilities for me are: over-engineering, rabbit holes, trying to build on top of something that only 80% works, and trying to fix something when you don't understand how it works. Also integrating with existing code bases: it will ignore field-name capitalization, forget about fields, other things like that.
I prefer working with AI, but it ain't perfect for sure.
I mostly see two extreme positions:
Either AI is a total fraud, completely useless (at least for programming), or it's deus ex machina.
But reality has more than one bit to answer questions like "Is AI a hype?"
Despite only recently becoming a father and feeling like I am in my prime, I've seen many hype cycles.
And IT is an eternal cycle of hypes. Every few years a new holy cow is sent through the village to bring salvation to all of us and rid us of every problem (un)imaginable.
To give a few examples:
Client-Server, SPAs, Industry 4.0, Machine Learning, Agile, Blockchain, Cloud, Managed Languages.
To me LLMs are nice, though no revelation.
I can use them fine to generate JS or Python code, because apparently the training sets were big enough, and they help me by writing boilerplate code I was gonna write anyway.
When I try them to help me write Rust or Zig, they fall extremely short though.
LLMs are extremely overhyped. They made a few people very rich by promising too much.
They are not AI in any meaningful sense; that's just marketing.
But they are a tool. And as such they should be treated. Use them when appropriate, but don't hail them...
"Is it a hype?" is a different question, that's just not very interesting to me. Anything will be described as both hyped and good, depending on who you ask.
You were previously talking about AI being a bubble and also useful. For reference, Wikipedia defines a bubble as "a period when current asset prices greatly exceed their intrinsic valuation". I find that hard to reason about.
One way to think about it is that all AI does is create economic value, and for it to be useful it would have to create more economic value than it destroys. But that's hard to reason about without knowing the actual economics of the business, which the labs are not super transparent about. On the other hand, I would imagine that all the infra building by all the big players shows some level of confidence that we are way past "is it going to be useful enough?". That is not what reasonable people do when they think there's a bubble; at least, that would be unprecedented.
And that's why I was asking.
Ah, this is such a silly non-issue at this point in time. As long as I have to wait ~10 minutes to get my agent to return code, we are ~10 minutes of compute per request away from a bubble. 10 minutes covers a lot of compute. Everything we are building right this second will instantly be eaten up by improving already existing use cases (code, image generation, video generation) for a very long time.
Grok fast..
?
Please wait, the ai is typing a response...
Is it even a question that we’re in a bubble?
The effects of the technology would have to rival that of... idk, soap and the locomotive combined, in order for us not to be in a bubble.
It has swallowed nearly all discourse about technology and almost all development, nearly every single area of technology is showing markers of recession.. except AI, which has massively inflated salaries and valuations.
I can’t even think of a scenario where we don’t look back on this as a bubble. What does the tech need to be able to do in order to cover its investment?
Replace a significant portion of the workforce reliably (IMHO). And if that happens, I wonder what's next. I mean, if we automate say 30% of jobs these people won't be employable anymore without a drastic change in their careers' trajectories (I mean, they have been automated out of work once, any company would eventually do the same).
When that happens, they will end up overcrowding other job sectors, pushing salaries down for people already in those fields. Once that happens, the loss of purchasing power will hit every sector and drag down economies anyway.
So, if I have to summarize my thoughts, we are either in a bubble that will pop and drag down AI related stocks, or it is not a bubble because the tech will actually deliver on what CEOs are touting and there will be high unemployment/lower salaries for many which in turn will mess up other parts of the economy.
Happy to be wrong, but given the hype on AI the winning conditions have to be similarly high, and millions losing their jobs will for sure have huge repercussions.
I always love that the solution thrown out is that we're all going to be plumbers...
Yeah, this basically continually happens.
Look at history, there’s so many jobs that have been automated yet we still have 95%+ employment in most countries and effectively “double” the workforce as we’re pushing dual-income households as a standard.
I’m not sure how our obscenely wealthy overlords think things will play out when we’re all wage-slaves barely able to scrape by. It hasn’t worked out for any society historically.
I think that misses the nuance of what happens as this shakes out. We have ghost towns all over the west and cities with a fraction of their historic population in the middle of the country. Maybe employment is 95% in whatever county that ghost town is in, but that is only because people have left for other jobs and not stuck around to be destitute. And this was made possible thanks to there being other jobs available someplace else.
Now, what happens if there are no jobs available someplace else? Would the sort of leadership we have in power these days consider a New Deal esque plan for mass public work projects and employment opportunities when the private sector has none available? Or would they see it as an opportunity to bring back a sort of feudalism or plantation economy where people aren’t really compensated at all and allowed to starve if not immediately economically useful?
I think we're in a bubble.
That said: In principle, if the tech did what the strongest proponents forecast it would do, it would change "the economy" in a manner for which we don't even have a suitable metaphor, let alone soap and locomotives.
Now, I do not believe the AI on the horizon at the moment[0] will do any of that. The current architectures are making up for being too stupid to live[1] by doing its signal processing faster than anything alive by the same degree to which jogging is faster than continental drift[2].
As for "minimum" needed for this to not be a bubble, it needs to provide economic value in the order of 0.1 trillion USD per year, but specifically in a way that the people currently investing the money can actually capture that as profit.
The first part of that, providing economic value, I can believe: being half-arsed with basic software development, being first line customer support, supplying tourist-grade translation services, etc. I can easily believe this adds up to a single percentage point of the world GDP, 10x what I think it needs to be.
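As a quick sanity check on those numbers (assuming a world GDP of roughly 100 trillion USD, a figure I'm supplying, not from the comment):

```python
# Rough arithmetic behind "a single percentage point of world GDP is 10x
# the ~0.1 trillion USD/year needed to justify the investment".
# The world GDP figure is an assumption for illustration.
world_gdp_usd = 100e12            # ~100 trillion USD
one_percent = 0.01 * world_gdp_usd  # 1 trillion USD
needed = 0.1e12                   # 0.1 trillion USD per year

print(one_percent / needed)       # -> 10.0
```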
Capturing any significant fraction of that, though? Nah. I think this will be like spreadsheets (Microsoft may be able to charge for Office, but Google Docs and LibreOffice are free) or Wikipedia. Takes an actual business plan to turn any of this into income streams, and there's too many people all competing for the same space, so any profit margin is going to be ground to nothing until any/all of them actually differentiate themselves properly.
[0] Not that this says very much given how fast the space is moving; it's quite plausible a better architecture has already been found but it isn't famous yet and nobody's scaled it up enough to make it look shiny and interesting, after all Transformers took years before anyone cared about them.
[1] Literally: no organic brain could get away with being this slow vs. number of examples needed to learn anything
[2] Also literally
> would have to rival that of.. idk, soap
soap + water = bubble
AI can be a bubble and a real technological revolution at the same time; there is absolutely no reason these two things need to be mutually exclusive. The most recent example is the Internet in the late 90s-early 2000s. It was a bubble, it did crash, and it did eventually fulfill all of its promises and more. There is no contradiction here.
The issue is:
1) Overhype can cause a "winter" where the word itself becomes toxic once the bubble pops, leading to significant underinvestment for the period following. This has actually already happened twice with AI, and is part of why we use "Machine Learning" for concepts that used to be called AI... because AI was a toxic word that investors ran from.
2) A bubble sucks all the oxygen out of the room for all other technological endeavours. It’s strictly a bad thing as it can crush the entire technology sector (or potentially even the economy).
3) Bubbles might cause a return to form, but the internet was more like the railroads: once built, the infrastructure largely remained, and its being sold "for cheap" led to a resurgence. AI has fewer of these core bits of infrastructure that will become cheap in a bust.
The difference with the .com era is that a website is and was immediately useful when it came out. Yes there was too much speculative investment, but the inherent merits of the technology were plainly obvious.
This is not the case with AI. It is not even clear if these tools are useful. This really is much more an “Emperor has no clothes” situation, which is why this bubble is perhaps more dangerous.
> The effects of the technology would have to rival that of.. idk, soap and the locomotive combined? In order for us not to be in a bubble.
Can you quantify this or are you going based on vibes? If I were to compare this to previous bubbles (ie crypto, dot com bubble), in valuation multiple terms we haven't gotten started. And a number of tech companies eventually grew into and surpassed their bubble valuations.
Soap was essentially the onset of what we'd consider urban living. Before soap (and even for a long time after, to be fair), medical operations had a stupidly high morbidity, and city life was also quite... unsurvivable. Soap itself may not have been an investment, but it unlocked a lot of capabilities for our civilisation. Almost impossible to quantify how much.
Locomotives were something that had a definite upfront cost, unlike soap, and we invested so much money - up to 3% of GDP in some years in the 1800s.. but the economic benefits were ridiculous.
Spending on AI data centers is so massive that it's taken a bigger chunk of GDP growth than shopping! (according to Yahoo!: https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-s...)
I mean, it’s really apples to oranges, but I can’t imagine us getting the returns anywhere close to what rail gave us in the 1800s.
I don't know why you keep bringing up soap. Soap, something anyone with the knowledge can create in their own home, is not comparable to GPUs and AI models, which both require massive capital investment and skill to produce and are thus able to command massive margins.
> Spending on AI data centers is so massive that it's taken a bigger chunk of GDP growth than shopping
You haven't compared this to previous bubbles like the internet, smartphones, and personal computing in general. Your argument isn't convincing.
I have in my other comments.
But AI investment is outpacing all the things you mentioned.
I mentioned soap not because of capital investment, but because of the GDP growth that followed the invention. If AI delivers qualitatively less than the GDP growth caused by soap, then we're going to see a lot of unemployment. So it's a bubble: nothing could possibly match soap.