https://archive.is/2rFK4
This technology demos incredibly well and you can just see how everyone gets giddy with excitement around using it. I watch colleagues and executives proudly show off what they can do with it, or make endless jokes about it. It reminds me of when people first got their phones and couldn't stop showing everyone how cool they were.
This leads to an over-rotation in the perceived value: the value is significant, just as it was with the mobile phone, but it's not going to live up to the hype in the near term.
It's definitely interesting how in anonymous forums there's a lot more people pointing out that they think this is hype whereas when we wear our professional hats many of us join in. It's like we all want to keep the party going, even though we all know what's going to happen.
>> in anonymous forums there's a lot more people pointing out that they think this is hype whereas when we wear our professional hats many of us join in
When your boss is hyping it up and demanding all hands on deck full steam ahead on the Good Ship AI, lots of people join in out of fear, particularly in the currently awful job market which is partly being ruined by AI itself.
Some of us just stay quiet, keep our heads down, and plug away using tools actually fit for purpose, like LSPs and refactoring tools.
Very few have the courage to stand up in a professional setting and say the emperor has no clothes.
No developer job has yet been lost to AI. We are in a huge recession, and companies like to have an excuse to fire people that can be spun as a positive.
> No developer job has yet been lost to AI.
https://content.techgig.com/technology/developer-fires-entir...
Sounds like those jobs still exist, from the article.
That makes the fear worse, not better. It's clear that fake AI enthusiasm is a litmus test for how much shit you'll eat in order to toe the company line so that you can avoid layoffs in the worst jobs market since the GFC.
* toe not tow
Thanks, I actually looked it up because I can never remember, but then autocorrect got me anyway!
At my company at least four developer jobs were lost because of AI tooling, but keep spouting off about the entire economy at once.
I still believe the hype around LLMs killed way more jobs than the LLMs themselves. I wouldn't be surprised if they actually create jobs in the next few years, given the awful shit that's being deployed these days.
Four out of how many?
Absolutely, and you see this in non-Mag7/non-tech industries where CEOs are all announcing a parade of "head of AI" hires. Will it end up being a permanent role like CISO, or is this just faddish follow-along? I know some of the guys being hired into these roles and, lol.. let's just say they are not experts.
My spouse works at a large (50k-100k) org in a program management role where she is getting a lot of pressure to organize various AI evangelism efforts aimed at developers. Workshops, bake offs, demo days, etc.
I mean sounds neat, but is this being done because it's useful or because someone up high needs to justify their AI budget spend with AI usage metrics?
Do we believe that ICs are actually so stupid/stubborn they need to be mandated, coaxed, coached, bullied and bribed to use something that makes their jobs easier?
Doesn't most of the best tech end up being bottom-up?
Most of us who were around 15+ years ago recall that a lot of BigCorp had to be dragged unwillingly into mobile by internal users/devs who got their first iPhone and saw the light. A lot of stuff starts as small-team internal skunk works / unofficial projects working around productivity drains. I am highly suspicious that the C-suite knows what people 10 levels down actually need for productivity enhancement.
> Most of us who were around 15+ years ago recall that a lot of BigCorp had to be dragged unwillingly into mobile by internal users/devs who got their first iPhone and saw the light.
Yeah, thanks guys. Now I have Outlook/Teams on my phone and am expected to be reachable 24/7. If not, I'm expected to respond to texts and to share my phone number with my colleagues. Those I don't directly share it with will get it from someone who knows me.
This is true, but it is also true that traditionally, people whose jobs are most likely to be affected by technological change are not the most dispassionate, objective judges of the technology.
I sympathize with this; I would hate having some cargo cult bullshit foisted upon me by disingenuous management. However, I'm not sure that it's AI that is "the emperor with no clothes". What I think _is_ such a facade is oftentimes the company itself, its business model, the product it is trying to build and the way it's trying to build it. I have sometimes spoken my mind about these things -- petulantly, on my way out the door.
I think it's easy to forget how much low-hanging fruit there still is, in terms of taking full advantage of this technology.
People are still figuring out very basic integrations, and even now, at this early stage, the things I can do with LLMs are pretty incredible. For example, I was able to set Cursor to work finally dragging an old codebase out of the dark ages, and it then built new features that I've long wanted. It took a few hours on my end, but would have been at least a week or two without it.
I'm not exposed to much of the hype, though, so maybe my calibration of what the hype is, is wrong.
"I think it's easy to forget how much low-hanging-fruit there still "
Such as?
So many things, once you start looking. However, most of the critics seem to focus on what it can't do currently, which seems to turn off their brains to the possibilities.
Just look at what Cursor (and similar) have done in terms of the tooling for LLMs. There's still tons of progress to be made there, but similar tooling can happen across a variety of industries and categories.
For example, I run a database of information that needs constant updating. I set up automated fact checking (with a human in the loop) that enables nearly live updates, which would be incredibly expensive without an LLM. There are so many projects, big and small, just like that one, being created right now. The low hanging fruit is extremely abundant, for those who are able and willing to find it.
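For what it's worth, a minimal sketch of what that kind of pipeline can look like (the model name, prompt, and the 0.9 threshold are illustrative assumptions, not the commenter's actual setup):

```python
# Human-in-the-loop fact checking, sketched: an LLM screens each proposed record
# update; anything it cannot confidently verify is held in a queue for a human.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def llm_fact_check(claim: str, source_text: str) -> dict:
    """Ask the model whether source_text supports claim; return its JSON verdict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'You verify database updates. Reply with JSON: '
                        '{"supported": true|false, "confidence": 0.0-1.0, "notes": "..."}'},
            {"role": "user", "content": f"Claim: {claim}\n\nSource:\n{source_text}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

def process_update(claim: str, source_text: str, review_queue: list) -> bool:
    """Auto-apply well-supported updates; route everything else to a human."""
    verdict = llm_fact_check(claim, source_text)
    if verdict.get("supported") and verdict.get("confidence", 0) >= 0.9:
        return True                      # caller applies the update immediately
    review_queue.append({"claim": claim, "verdict": verdict})
    return False                         # held for human review

if __name__ == "__main__":
    queue: list = []
    ok = process_update(
        "Acme Corp moved its headquarters to Austin in 2024.",
        "Press release, Jan 2024: Acme Corp announced its new Austin headquarters.",
        queue,
    )
    print("applied:", ok, "| held for review:", len(queue))
```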
"The low hanging fruit is extremely abundant, for those who are able and willing to find it."
Ok fella. It's so abundant, right? So why not go ahead, start your own firm, and profit from this opportunity that, according to you, exists? lol.
I'm using LLM tools to do quite a lot of things, and am very happy with the results in my business. So yeah, I am indeed running my own firm and profiting from this opportunity.
Link?
Apple has still not integrated AI anywhere into their OS.
> It's definitely interesting how in anonymous forums there's a lot more people pointing out that they think this is hype whereas when we wear our professional hats many of us join in
I have seen that in anonymous forums there's a lot more people pointing out that they think this is transformative like say early smartphones.
When it comes to wearing professional hats people fall into two categories.
First, people who are using it outside their area of expertise, for things they previously had to rely on others for, like back-end operations. They gush about it, praising the technology as if they have discovered smartphones. Only when they use it in their own area of expertise do they realize that AI might not be all it is hyped to be and that they need to be cautiously optimistic.
Then there are people who are using it within their area of expertise, like vibe coding. Some of them gush about it, but more are cautiously optimistic. They will tell you it works, but only in a contained environment.
There are no error bars, no confidence intervals. Just a one-trick pony that pastes tokens together to give you something that may look good to many people. Sure, there are many good use cases, but there are still enough unpatchable holes of indeterminable size in the watering pail to limit its effectiveness.
Is there any error bar or confidence intervals in stackoverflow answers?
Absolutely. Votes, comments, competing answers, posted/edited dates, and the context of being on stack overflow all provide useful signals about how reliable a bit of information is.
Are you saying the Internet is a fad?
Its value certainly has declined in recent years.
Conventional AI and statistics have many opportunities for bounding answers, so a person less knowledgeable in the subject can still do something with the result. But if an LLM returns THE INTERNET IS A FAD, someone may run off and post the result on HN, which would be embarrassing.
What even is the internet anymore? Do you mean the thing people use to access Facebook, Instagram, TikTok and "X"?
Just like those newfangled computer thingamadjigies. Overhyped, no long term value.
Yes
One thing I find fascinating: go to any forum/subreddit/whatever for any LLM thing, and it will be full of people complaining that it's not as good as it used to be, and that OpenAI/Anthropic/Google/whoever is intentionally degrading it, because they are so evil and want their products to be worse.
Then a new model or tool comes out, all is wonderful for a bit, then repeat (except for GPT-5, oddly; that one seems to have inspired hatred from the start).
It rarely seems to occur to people that familiarity breeds contempt; once the novelty wears off people start noticing the problems. The model isn't getting magically worse, you're just _noticing_ more.
I can't speak for all models but I can tell you for an absolute fact that Claude has degraded recently. Yes I've noticed it subjectively but Anthropic have also straight up come out and said "lol oops we 'accidentally' made it way worse, now that you've all noticed we'll roll that back".
https://www.reddit.com/r/ClaudeAI/comments/1nc4mem/update_on...
When GPT 5 came out, I was using GPT 4 mini for a regular automated task. This had been working quite well for some time. It stopped working right when GPT 5 came out. I switched to GPT 5 mini, and it started working exactly the same as it did previously. So yeah, I'm pretty sure they nuked GPT 4 when they launched GPT 5.
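For anyone in a similar spot, the usual fix is to treat the model name as configuration so it can be swapped without touching the task logic. A minimal sketch; the model identifiers are assumptions, so check them against the provider's current list:

```python
# Recurring automated task with the model name pulled from configuration,
# so a degraded or retired model can be swapped out with a one-line change.
import os
from openai import OpenAI

MODEL = os.environ.get("TASK_MODEL", "gpt-5-mini")  # previously e.g. "gpt-4o-mini"

client = OpenAI()

def run_task(document: str) -> str:
    """The recurring job: summarize an input document into three bullet points."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Summarize the document in three bullet points."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(run_task("Quarterly report text goes here..."))
```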
>It reminds me of when people first got their phones and couldn't stop showing everyone how cool they were.
Your comparison to smart phones is interesting. Smart phones are definitely transformative. There was a lot of hype, but still transformative.
Do you believe that LLMs and AI is also going to be transformative?
People keep rediscovering the trough of disillusionment and mistaking it for a dead end.
Likewise with the dot com bubble and the web - it was a bubble and it was overhyped, but it was still transformative if you look back 20 years as to how things are different in terms of media, and commerce.
Both smartphones (and tablets, and smart watches) and the internet went through the hype cycle [0], and the sentiments I've been reading lately indicate AI is in the "trough of disillusionment" right now. That said, I don't believe AI will ever reach the heights (i.e. the measurable ones: how much it penetrates our lives, how much money goes into it) that either smartphones or the internet did. Probably higher than VR / AR, but nowhere near the other ones.
[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle
It will if they can actually make it think better than we do. Whether they ever will is hard to say, but it feels pretty clear that throwing more money at LLMs isn't going to get us there.
Transformative, but not necessarily in a good way: likely to lead to the end of the open internet, along with all sorts of weird social effects from lowering the cost of convincing fakes.
I think the transformation will primarily be in search personally. As in Google search type experiences.
What that means is the ad model of the internet will come apart.
And what that means is that the LLMs will need to charge for answer optimization to plug the ads hole.
And so where this is going is basically a whole cottage industry around that. Around controlling and shaping knowledge in other words.
Yes frightening politically more so than economically. At least from my view.
And if it dumbs us down and erodes critical thinking then maybe it will have negative effects economically and politically long term.
I don't think it was about smart phones.
> how in anonymous forums there's a lot more people pointing out that they think this is hype whereas when we wear our professional hats many of us join in
Different speeches for different audiences. On HN, for all its faults, people don't need to be told that yes, SOTA LLM can somewhat help you with code, parsing documentation, etc. A lot of people in the "real world" are still grossly underestimating this technology.
>> A lot of people in the "real world" are still grossly underestimating this technology.
Did you mean "overestimating"? "somewhat help" is putting it strongly, IMO.
Most businesses don't need Google or Netflix level engineering. People on HN are outliers. The corporate world needs simple CRUD apps so tech-savvy but non-developer employees can stop grinding away at Excel.
On the one hand I see posts here by people who have no idea what they are talking about.
On the other I get 5 hours of work done in 5 minutes every other day.
Worst I can see happening is a dot-com crash. pets.com will go out of business, but Amazon won't.
> This technology demos incredibly well
You just summed up machine learning, not just AI/LLMs. My domain is very far from LLMs, but even in my domain, you can build a really cool demo, that is entirely misleading.
> It's definitely interesting how in anonymous forums there's a lot more people pointing out that they think this is hype whereas when we wear our professional hats many of us join in. It's like we all want to keep the party going, even though we all know what's going to happen.
Well, when people's financial and employment stability are dependent on placating the overlords who are entranced...
I've been following progress on self-driving cars since the 2004 and 2005 Darpa Challenges. Things were very exciting in the 2011-2015 timeframe. Demos were mindblowing. Engineers were giddy. The auto industry went from laughing to shaking in their boots, and then scrambling to make it appear as though they had a horse in the self-driving car race. Hundreds of billions in investment were flying around. Now here we are in 2025, things aren't dead at all, but also nobody with skin in the game is under any delusions about how hard it is to build and scale a viable robotaxi operation.
So far the self-driving-car hype cycle has served as a useful reference for understanding the hype with LLMs. The main difference with LLMs is that there is 10 times as much money flying around.
But I can actually use an LLM to do useful work for me and no one gets hurt.
I think there’s too much hype and money, but LLMs are useful right now.
SAE Level 2 automated vehicles are useful now, just not so useful that it's safe to take one's eyes off the road and hands off the wheel.
LLMs are in a similar place, and still a long way from doing the whole job reliably by themselves. The current state of AI is one or several unknown unknowns away from real AGI. We're missing something fundamental.
Likewise with autonomous vehicles, the mythical SAE level 5 vehicle that goes anywhere and everywhere there is a road is still very much science fiction.
I see you were very precise with what you wrote about SAE 2.
I ride in waymo all the time, they are SAE 4.
I know many people who take a waymo every day.
What percentage of driving represents the delta between 4 and 5?
An SAE Level 4 LLM/AI (if we can really even make that comparison) would have far less difficulty in deployment and would be far more disruptive in a far shorter period of time than SAE Level 4 self-driving cars.
An L4 LLM, in my mind, would be one that can perform fully autonomously at some domain-specific job, a job being a collection of tasks that have to come together coherently to achieve a desired goal.
Waymo's robotaxis, likewise, are able to do the driving task in geo-fenced areas. Waymo does a lot of hand-built code and testing to deal with particular problems, such as the 5-points in the Cairo district of SF, where it's a completely unique intersection, there's no other intersection quite like it. A ton of effort went into dealing with just that one intersection, and the bespoke effort doesn't generalize to any other intersection.
So if you want to, say, have a platform for producing working video games out of prompts, well, I believe that can be done with our current AI, but it will depend on a lot of hard work making tools and hand-built code that do not generalize to other domain-specific jobs.
Now if you want to make a movie worth watching out of prompts, that could be done too, but it depends on solving a whole different set of bespoke problems that once again need to be solved the hard way using more conventional software.
What's the path to recouping that money?
Even if every major company in the US spends $100,000 a year on subscriptions and every household spends $20/month, it still doesn't seem like enough return on investment when you factor in inference costs and all the other overhead.
New medical discoveries, maybe? I saw OpenAI's announcement about gpt-bio and iPSCs which was pretty amazing, but there's a very long gap between that and commercialization.
I'm just wondering what the plan is.
Wasn't the plan AGI, not ROI on offering services based on current-gen AI models? AGI was the winner-takes-all holy grail, so all this money was just buying lottery tickets in hopes of striking AGI first. At least that's how I remember it, but AGI dreams may have been hampered by the lack of exponential improvement in the last year.
I’m sure somebody believed that? But I never met them.
> I’m sure somebody believed that?
“Somebody” like… Sam Altman? Because he said that’s what he actually believes.
https://www.startupbell.net/post/sam-altman-told-investors-b...
As a sibling commenter mentions, Zuckerberg is currently dropping billions on AGI (or "super human intelligence", whatever the difference is). And, I don't have time to find it, but Sam Altman might've said AGI is the ultimate goal at some point - idk, I don't pay too much attention to this stuff tbh, you'll have to look it up if you're interested.
Oh and John Carmack, of Doom fame, went off to do AGI research and raised a modest 20(?) million last I heard.
I want to say Mark Zuckerberg but I think Meta's investment is also targeted at creating their own social media content
The "game plan" is, and always was, to target human labor. Some human labor is straight up replaceable by AI already, other jobs get major productivity boosts. The economic value of that is immense.
We're not even at AGI, and AI-driven automation is already rampaging through the pool of "the cheapest and the most replaceable" human labor. Things that were previously outsourced to Indian call centers are now increasingly outsourced to the datacenters instead.
Most major AI companies also believe that they can indeed hit AGI if they sustain the compute and the R&D spending.
If AI Doesn’t Fire You, It Can’t Pay For Itself https://esborogardius.substack.com/p/if-ai-doesnt-fire-you-i...
They just have to be positive revenue on inference and run it for a long time. Why do you think they can’t recoup it?
If LLMs could double the efficiency of white-collar workers, major companies would be asked for far more than $100,000 a year. If LLMs could cut their expensive workforce in half and they then paid even 25% of their savings, it could easily generate enough revenue to make that valuation look cheap.
Unfortunately for the LLM vendors, that's not what we're seeing. I guess that used to be the plan, and now they're just scrambling around for whatever they can manage before it all falls apart.
Ok but then another AI company would just offer the same thing at a lower cost.
How much lower though?
$100k/year is literally nothing.
Think of it as maybe $10k/employee, figuring a conservative 10% boost in productivity against a lowball $100k/year fully burdened salary+benefits. For a company with 10,000 employees that’s $100m/year.
Even at $10k/yr/employee, you'd need 30 million people on the 10k/yr plan to hit 300B ARR. I think that's a hell of a big swing. 3 million, recoup over ten years? Maybe, but I still don't think so. And then competition between 4 or 5 vendors, larger customers figuring out it's cheaper to train their own models for one thing that gives them 90% of the productivity gains, etc.
But rather than speculating, I'm generally curious what the companies are saying to their investors about the matter.
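A quick back-of-the-envelope check on the figures in the two comments above; every input is one of the commenters' assumptions, not data:

```python
# Back-of-the-envelope on the thread's own assumptions.
salary = 100_000           # lowball fully burdened salary+benefits, $/yr
productivity_gain = 0.10   # "conservative" 10% boost
value_per_employee = salary * productivity_gain          # $10,000 per employee per year

company_headcount = 10_000
company_value = company_headcount * value_per_employee   # $100M/yr for that company

target_arr = 300e9                                       # hypothetical $300B ARR target
seats_needed = target_arr / value_per_employee           # 30 million seats at $10k/yr

print(f"value per employee: ${value_per_employee:,.0f}/yr")
print(f"10,000-employee company: ${company_value:,.0f}/yr")
print(f"seats needed for $300B ARR: {seats_needed:,.0f}")
```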
I don’t get why you need 300B arr?
That's literally not how the word "literally" works.
That’s literally how the English language works. It literally evolves
It literally is: https://www.merriam-webster.com/dictionary/literally
But we won’t get there unless the company integration failure rate falls below 95%
Eh, it seems likely to me that existing companies are structured for human labor in a way that's really hard to untangle: smart individuals can level up with this stuff, but remaking an entire company demands human-level AI (not there yet) or a mostly AI-fluent team (working with/through AI is a new skill and few workers have developed it).
New companies built by individuals who get AI are best positioned to unlock the dramatic effects of the technology, and it's going to take time for them to eclipse incumbent players and then seed the labor market with AI-fluent talent.
Major companies will spend 10-100x that if it resulted in real tangible productivity gains for their businesses.
I think it's "scam everyone into giving us lots of money, then run before the bills come".
How big is $344B?
Apparently the total market capitalisation of the US stock market is $62.8 trillion. Shiller's CAPE ratio for the S&P index is currently about 38; CAPE is defined as current price / (earnings averaged over the trailing 10 years).
That suggests that over the last 10 years, the average earnings of the US stock market is about $1.7 trillion annually.
So $344B of spending is about 1/5 of the average earnings of the total US stock market.
Still hard to interpret that, but 1/5 is an easier number to think about.
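Spelled out, that arithmetic is just the quoted market cap divided by the CAPE multiple, then compared to the $344B figure:

```python
# Implied trailing-average earnings from the quoted market cap and CAPE ratio.
market_cap = 62.8e12   # total US market capitalisation, as quoted above
cape = 38              # Shiller CAPE: price / 10-year average earnings
ai_capex = 344e9       # the article's one-year capex figure

avg_earnings = market_cap / cape                  # ~$1.65T/yr, i.e. roughly $1.7T
print(f"implied average earnings: ${avg_earnings / 1e12:.2f}T per year")
print(f"capex / earnings: {ai_capex / avg_earnings:.2f}")  # ~0.21, about 1/5
```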
These days I am looking at managing my own portfolio. I go off expected returns (using GDP as a component) plus dividends, and adjust for risk to compute which country/region has good expected returns. This I adjust a couple of times a year.
If one assumes it's nearly all a bubble, how would you correct earnings for the US? I am interested in applying it to any investment that tracks AI-heavy companies in the US.
If you believe in long term mean reversion of CAPE ratio for US stocks, you'd expect price/earning multiples to contract by a factor of 2, over some hard to predict time frame, where CAPE reduces from 38 back to about 20. If we arbitrarily guess that contraction happens over 10 years, that'd be -6.7% / yr for 10 years, from 0.5^(1/10). Then add the return components you mentioned from dividends and earnings growth.
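The annualized drag from that contraction works out as follows (the 10-year horizon and the rough halving are the arbitrary assumptions stated above):

```python
# Annualized price drag if the CAPE multiple contracts from 38 toward 20 over 10 years.
start_cape, end_cape, years = 38, 20, 10

exact_drag = (end_cape / start_cape) ** (1 / years) - 1   # about -6.2%/yr
halving_drag = 0.5 ** (1 / years) - 1                     # the "factor of 2" shortcut, about -6.7%/yr

print(f"exact contraction drag: {exact_drag:.1%} per year")
print(f"halving approximation:  {halving_drag:.1%} per year")
```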
One approach I've seen a few folks do is to fit a regression model of annualized real stock market returns over the next 10 years as some function of CAPE or 1/CAPE or log(CAPE).
It doesn't give a very good fit on training data, R^2 in the range of 0.2-0.3, i.e. it cannot "explain" most of the variation in 10 year returns.
CAPE based regression models like that have said the US stock market has been overpriced for the last decade! But investors in the US stock market have done pretty well over that period, with really good returns. Maybe these models are accurate but we've just gotten lucky? Maybe these models aren't very good. Hard to tell.
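For concreteness, the kind of fit being described looks roughly like this; the two arrays below are placeholder values standing in for the real Shiller series, so the printed numbers mean nothing until actual data is substituted:

```python
# Regression of 10-year forward annualized real returns on log(CAPE).
# The arrays are PLACEHOLDERS, not the Shiller data; swap in the real series.
import numpy as np

cape = np.array([12.0, 15.0, 18.0, 22.0, 25.0, 28.0, 32.0, 38.0])            # placeholder
fwd_10y_return = np.array([0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02])  # placeholder

x = np.log(cape)
slope, intercept = np.polyfit(x, fwd_10y_return, 1)

pred = intercept + slope * x
ss_res = np.sum((fwd_10y_return - pred) ** 2)
ss_tot = np.sum((fwd_10y_return - fwd_10y_return.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot   # on the real series this reportedly lands around 0.2-0.3

print(f"predicted 10y return at CAPE 38: {intercept + slope * np.log(38):.1%} per year")
print(f"in-sample R^2: {r_squared:.2f}")
```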
Elm capital publish estimates of expected returns of a few asset classes quarterly: https://elmwealth.com/capital-market-assumptions/
How do you plan to adjust for government actions that affect purchasing power of the currency?
But the $344B figure is not annualized, it’s cumulative.
The two nested Bloomberg articles say:
> This year the world’s four largest tech firms will spend $344 billion on AI
> Altogether, the four companies are expected to spend more than $344 billion for the year, with much of it going to the data centers necessary to run AI models.
So both articles frame that $344B as an estimate of capex within one year.
I think what we have right now is an excellent interface for low-friction, non-specialised interaction by humans (work or personal use) with a vast array of highly specialised and highly-complex systems.
What it isn't is the actual final "thing" itself. It's just the thin veneer right now.
I'm not convinced that that revolution was worth whatever trillions we'll end up spending, but fortunately that's not on my shoulders to be worried about.
There is clearly overhype. Given the influx of people building here (and I am not aware if it happened previously too), in a bid to differentiate from other startups building the same thing, many have just stripped the nuance out of any technical idea and made it a simple marketing term. As an exec faced with ten startups promising you "training on your data" to provide the best chatbot, it's hard to tell the difference between who would actually be training and who would just be tweaking prompts. This has happened to other concepts too (anecdotally, the most famous is how everyone is offering deep research). Yes, it helps growth, but it comes at the cost of trust. There is a startup which promised "experience based learning" when all they were doing was adding memory to the prompt to get it to perform better. (You can look it up; it recently raised a Series A.)
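For anyone wondering, "adding memory to the prompt" typically amounts to something like the following; this is a bare-bones sketch of the general pattern, not that startup's actual system (the model name is an assumption):

```python
# "Memory" by prompt stuffing: persist prior turns and prepend them to each new
# request. Nothing is learned; the model simply sees more context every call.
from openai import OpenAI

client = OpenAI()
memory: list[dict] = []   # grows across calls; real systems summarize or trim it

def ask(question: str) -> str:
    messages = (
        [{"role": "system", "content": "You are a helpful assistant."}]
        + memory                                  # the "experience": prior turns pasted in
        + [{"role": "user", "content": question}]
    )
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    memory.extend([
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ])
    return answer
```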
This does not mean the ideas are not working. I personally think pretraining has done its job. We did not know what that job was previously, but now we do, given the way RL works. Pretraining and test-time compute enable models to develop a generalized prior they can use to solve any given problem (much like how humans solve such problems). Sometimes the priors are lacking, so you need to train more using RLVR. It's still early days, but directionally I think we have another scaling curve here.
This “article” is clickbait. Controversial title with no substance, asking “why are companies investing heavily in a technology that works for some (limited but valuable) use cases, when they could invest in pure R&D for something that might be better someday”.
Oracle's mega moves in the market (a couple hundred $B in market cap), due to their claim that OpenAI was doing a multi-year commitment to move much of their workloads to their cloud... likely a heavy revenue play rather than profit, with an aspiration to push as much CapEx depreciation to out-years as possible (btw Oracle, where is all the capex?), AKA financial engineering... just shows how overly leveraged this bubble has become.
The Economist recently featured a piece pointing out that it's no longer risk that drives the market but a balance of fear of loss and fear of missing out (https://www.economist.com/finance-and-economics/2025/08/06/w...). FOMO is out of control right now
This Oracle surge and revenue predictions really feels like jumping the shark. I mean, it's Oracle.... I've never felt confident enough to bet against a company, but a short position on Oracle may well be too tempting for me.
I lost money betting against Oracle. It taught me to never underestimate the power of a sales and marketing organization, even if the product they’re selling is technically backwards.
You probably saw in the news yesterday that Oracle stock shot up because they are scoring large AI deals: https://www.wsj.com/business/earnings/oracle-stock-orcl-ai-d...
> FOMO is out of control right now
Exactly. The whole stock market is currently behaving like the crypto bubbles.
Most of us believed that cryptocurrencies were trash. Look how valuable they are now.
LLMs are a million times better than cryptocurrencies.
Crypto never ended up fulfilling those original goals of being an actual currency and of smart contracts actually doing things. Blockchains and NFTs were never used to solve any of the problems they claimed to.
It's just that none of that had any bearing on the value of the coins.
There is a distinction between "vehicle for a large volume of speculation" and "valuable". Cryptocurrency has not created any "value".
Pokemon cards are also super valuable these days, it says more about some people having way too much money for their own good than anything
"LLM's are the next crypto!" isn't the glowing endorsement you seem to think it is.
It's obviously a lot of money, and of course it's too much money, but I think we can still get LLMs much further and I think they're probably the currently most interesting approach.
I don't even care about multimodality etc. I think pure text models are a very appealing idea.
One can call into question the paucity of AI-critical posts & comments here on HN. Not much is being said about the economics, everybody's living off the hope that "they'll figure it out." And AGI is just a few hundred-billion-dollar loans away, so why quit while we're ahead?
If they do "figure it out" (both AGI and a viable business model), a lot of people here will likely be out of a job. If they don't, the whole thing will come crashing down, taking our invested savings with it.
The comparison to crypto keeps coming up. Not everyone's savings went into crypto, but a lot of people's savings and retirement funds are being invested in funds tied to the stock market. And right now its growth depends on pumping cash into the AI bubble.
The answer is so easy to solve.
Do I find value in paying 20 dollars a month for ChatGPT? Yes. Do others? As far as I can see, yes. Most people are happy with the value.
Are AI companies profitable if they stop R&D? Yes.
Where’s the skepticism coming from?
In the same way people thought the train would hit them when cinema first debuted, so too do we believe the machine is thinking.
> Hallucinations haven’t gone away, muddying the path to adoption for companies in healthcare or legal analysis
... I mean, of course they haven't. They are a natural consequence of how the things work!
They MUST. It doesn't matter if it looks fragile or how much money it is.
LLMs put most of those companies' businesses at risk; they can't afford to not be ahead. If they aren't ahead, there's a risk that the entire US economy would be in terrible shape.
American Big Tech companies that make plenty of INTERNATIONAL revenue from ads (Meta, Google) can quickly become shells of their former selves.
How? Countries and economic blocs could quickly replace American products with their own counterparts if the American ones have nothing more to offer and they can roll out their own.
The US economy has become very dependent on FAANG cashflow; it's what gets other parts of the economy moving.
No wonder they had a dinner with Trump. If this fades away, the US will look very weak, with a terrible economic outlook.
Fiat money in a credit-based inflationary environment is free. Oil money is free; it literally comes from the ground. There are no "losses".
Not sure what this has to do with the article.
Oil isn't "free" anyway, it takes energy to make energy, and EROEI has been going down for some time as the easy oil is extracted.