Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet.
I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck it takes long enough for even a good developer to realize it and re-engage their critical thinking that it can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive to quality, but that’s hard to sell as a benefit and most people seem to feel like just using the LLM to review things means they aren’t using it enough.
Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.
My guess is that, like a lot of other technology driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.
Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted the Jira ticket description that I wrote, along with a few examples, and the script works perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well.
It seems that the smaller the task and the more tightly defined the input and output, the better the LLMs are at one-shotting.
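For concreteness, here is a minimal sketch of the kind of one-shot script being described, assuming the stale resources are unattached EBS volumes and that the AWS CLI is already configured; the actual ticket and generated scripts aren't shown here, so treat the volume cleanup below as a hypothetical illustration only:

    #!/usr/bin/env bash
    # Hypothetical sketch: remove unattached ("available") EBS volumes.
    # DRY_RUN=1 (the default) only prints what would be deleted.
    set -euo pipefail

    DRY_RUN="${DRY_RUN:-1}"

    # Collect the IDs of volumes that are not attached to any instance.
    volumes=$(aws ec2 describe-volumes \
      --filters Name=status,Values=available \
      --query 'Volumes[].VolumeId' \
      --output text)

    for vol in $volumes; do
      if [ "$DRY_RUN" = "1" ]; then
        echo "Would delete stale volume: $vol"
      else
        echo "Deleting stale volume: $vol"
        aws ec2 delete-volume --volume-id "$vol"
      fi
    done

Running it once with the default DRY_RUN=1 shows what would be removed before anything is actually deleted.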
> the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves
Exactly my experience. I feel like LLMs have potential as Expert Systems/Smart websearch, but not as a generative tool, neither for code nor for text.
You spend more time understanding stuff than writing code, and you need to understand what you commit, with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a negative impact.
I have found that the limit of LLMs' useful coding ability is basically what can reasonably be done as a single copy-paste. Usually only individual functions.
I basically use it as Google on steroids for obscure topics; for simple stuff I still use normal search engines.
Our new CTO was remarking that our engineering team's AI spend is too low. I believe we have already committed a lot of money but are only using 5% of the subscription.
This is likely why there is a lot of push from the top. They have already committed the money and now have to justify it.
Wish my company did this. I would love to learn more about AI but the company is too cheap to buy subscriptions
Can you buy a subscription and see if it benefits you?
At my job this would get you disciplined for leaking proprietary data to an unapproved vendor. We have to buy AI from approved vendors that keep our data partitioned from training data.
> They have already committed the money and now have to justify it.
As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.
First, these AI subscriptions are usually month-to-month, and these days with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.
Second, the vast majority of companies understand sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.
The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work.
Agreed. I think that many companies force people to use AI in hopes that somebody will stumble upon a killer use case. They don't want competitors to get there first.
> but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
This makes no sense for coding subscriptions. Just how far behind can you be in skills by taking a wait-and-see position?
After all, it's not like this specific product needs more than a single day for the user to get up to speed.
I disagree; agentic coding is a very different skill set. When you are talking about maintaining massive corporate code bases, it's not an instant-gratification activity like vibe coding a small prototype; a lot of guardrails and, frankly, a new level of engagement in code review become necessary. Ultimately I think this will change the job enough that many folks won't make the transition.
> Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
This doesn't make a huge amount of sense, because the stuff is changing so quickly anyway. It's far from clear that, in the hypothetical future where this stuff is net-useful in five years, experience with _today's_ tools will be of any real use at all.
> The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
Yes, this is the correct answer.
Companies do not necessarily understand sunk cost fallacy.
> ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage
But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation.
yeah, I’ve been in so many companies where “sweetheart deals” force the use of some really shitty tech.
Both when the money has been actually committed and when it’s usage based.
I have found that companies are rarely rational and will not “leave money on the table”
That is how the sausage is made. Ironically, this is what democratic institutions like county admins etc. get ridiculed for, because they have more transparency than the private sector.
> this is definitely not it.
> is probably because
I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this.
Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that employ some engineers but don't do engineering as their business. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess; we do shoes. If someone says I can spend less on the thing that isn't my business (engineering), then yes, I want to do that."
>> this is definitely not it.
>> is probably because
> I don't mean to be contrary, but these statements stand in opposition
No, they don't. It's perfectly consistent to say one reason is certainly wrong without saying another much more likely reason is definitely right.
Do you have any data to backup the claim: “vast majority of companies understand suck cost fallacy.”
I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.
>I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.
There was no need to post this.
As a small business owner in a non tech business (60 employees, $40M revenue), AI is definitely worth $20/month but not as I anticipated.
I thought we'd use it to reduce our graphics department but instead we've begun outsourcing designers to Colombia.
What I actually use it for is to save time and legal costs. For example, a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But we can easily ask ChatGPT to summarize legal notices and advise us what to do next as a creditor.
Which summarizes the one useful property of LLMs: a slightly better search engine that, on top of that, doesn't populate the first 5 result pages with advertisements - yet, anyway ;)
No doubt in ten years chatgpt will mostly be telling you things it was paid to say.
The saddest part is we used to have highly functional search engines two decades ago, where you would get results from subject matter experts.
Today it's only the same SEO-formatted crap with no answers.
I am working on a solution.
> Not worth hiring an attorney to walk us through bankruptcy filings
The AI doesn't carry professional liability insurance, so this is about as good as asking one of the legal subreddits. It's probably fine in this case since the worst case is not getting the money that you were at risk of not getting anyway.
The claim that big US companies “cannot explain the upsides” of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction. These are significant operational changes.
It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.
The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest...
Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries.
Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...).
The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs.
Mentioning the mid-1990s' internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is provided for LLM efforts.
(You're responding to an LLM-generated comment, btw.)
The comment was definitely not LLM generated. However, I certainly did use search for help in sourcing information for it. Some of those searches offered AI generated results, which I cross-referenced, before using to write the comment myself. That in no way is the same as “an LLM-generated comment”.
It's popular now to level these accusations at text that contains em dashes.
An LLM would “know” not to put spaces around an em dash. An en dash should have spaces.
The use of “ instead of ", two different types of hyphens/dashes, and the specific wording and sentence construction are clear signs that the whole comment was produced by ChatGPT. How much of it was actually yours (people sometimes just want an LLM to rewrite their thoughts) we will never know, but it's the output of an LLM.
Well, I use an iPhone, and “ is the default on my keyboard.
Tell me, why should I not use a hyphen for hyphenated words?
I was schooled in British English, where the spaced en dash - is preferred.
Shall I go on?
> Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction.
But most AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.
> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.
Bold claim. Toxic positivism seems to be too common in AI evangelists.
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
If the financial crisis taught me anything, it is that if one company jumps off a bridge, the rest will follow. Assuming that there must be some real value "because capitalism" misses the main proposition of capitalism: companies will make stupid decisions and pay the price for them.
This comes to mind: "MIT Media Lab/Project NANDA released a new report that found that 95% of investments in gen AI have produced zero returns" [0]
Enterprise is way too cozy with the big cloud providers, who bought into it and sold it on so heavily.
0: https://fortune.com/2025/08/18/mit-report-95-percent-generat...
I wonder if people ever read what they link.
> The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.
The 95% isn't a knock on the AI tools, but that enterprises are bad at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools because they are leading to enterprises trying to integrate faster than they normally would.
"AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press.
If the theory is that 1% will be unicorns that make you a trillionaire, I think investors would be OK with that.
The real question is: do those unicorns exist, or is it all worthless?
Have to pay the power bill for the data centers for GAI. Might not be profitable.
Fun fact: the report was/is so controversial that the link to the NANDA paper cited in Fortune has been put behind a Google Form you now need to complete before you can access it.
Doubt the form has anything to do with how "controversial" it is. NANDA is using the paper's popularity to collect marketing data.
This reminds me of the internet in 2000. Lots of companies were doing .COM stuff, but many didn't understand what they were doing or why they were doing it. But in the end the internet was a huge game changer. I see the same with AI. There will be a lot of money wasted, but in the end AI will be a huge transformation.
I completely agree. I think the financial bubble will also burst soon. Doesn't mean it won't keep on slowly eating the world.
https://archive.is/133z6
Thank you
AI isn't about what you are able to do with it. AI is about the fear of what your competitors can do with it.
I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete.
AI provides cover to lay people off, or else commit constructive dismissal.
Constructive dismissal and layoffs are mutually exclusive.
https://en.wikipedia.org/wiki/Constructive_dismissal
>In employment law, constructive dismissal occurs when an employee resigns due to the employer creating a hostile work environment.
No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.
AI is what you tell the board/investors is the reason for layoffs and attrition.
Layoffs and attrition happen for reasons that are not positive; AI provides a positive spin.
> No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.
No, but some are resigning when they're told their bonus is being cut because they didn't use enough AI.
Using AI makes me want to resign from life, it removes all the fun and joy from coding.
I absolutely will resign if my job becomes 100% generating and reviewing AI generated slop, having to review my coworker's AI slop has already made my job way less fun.
Agreed! The people who did not work hard but were kept employed à la “bullshit work” are being removed.
Eh, I have plenty of "bullshit work". Only that, actually, for the foreseeable future.
Building clusters six servers at a time... that last on the order of weeks, appeasing “stakeholders” that are closer to steaks.
Whole lot of empty movement and minds behind these 'investments'. FTE that amounts to contracted, disposed, labor to support The Hype.
The problem with AI is how confidently wrong it is. In Lisbon I uploaded a picture of myself on some church steps and asked ChatGPT where that is.
It came up with a place I was sure I'd never been. Then I asked if it could be part of some other place, and it said sure, it's inside the main church. The pic was clearly outside. Next it gave me a random famous stair that is so clearly different a human could never be fooled.
Each of these lies was extremely elaborate, citing sources and describing all the things that matched.
The only matching experience was with a taxi in Delhi some 20 years ago, where the driver pretended to know where he was going, and when I questioned him further he said the 40-story hotel I was looking for had been demolished 5 years after opening. At least he had a monetary interest in lying to me so that I would enter his cab.
Well, as anecdotal data, have you folks noticed ads lately pushing Gemini/Claude/xx on both legacy media and online? If AI (and these products) is sooo great, why do these companies have to advertise to sell their wares?
And Google and Microsoft are hellbent on pushing AI into everything. Even if users don't want it. Nope, we're gonna throw the kitchen sink at you and see if it sticks.
In the non-tech world, nobody gives a shit about AI. People and businesses go about their daily lives without thinking about things like "Hmmm...maybe I could have prompted that LLM a different way..."
Computers being able to digest vision, audio and other input into text and back has tremendous value.
You can’t convince me otherwise, we just haven’t found a ‘killer app’ yet.
I believe this is the correct way of seeing things. We may not need a killer app though, since AI is not a platform but a core technology. It’s more about the evolution of IT infrastructure and SW systems. Non-tech companies/people don’t need to do anything really. AI will just come to them.
For most companies AI is a subscription service you sign up for. Because of great marketing campaigns, it has become a necessary tax. If you don't pay, the penalty is you lose value because it doesn't look like you are embracing the future. If you pay, well it's just a tax that you hope your employees will somehow benefit from.
If we ignore the coding/tech industry for a minute: other companies keep demanding new reports on certain things, and AI is doing that. Is it productive? Probably not. But do execs love it? Yes.
In non-startup, bureaucratic companies, these reports are there as a cover-up, basically to cover everyone's ass so no one is doing anything wrong, because the report said so.
Sounds like blockchain all over again. Reminds me of an essay from two product managers at AWS who talked to clients all over the US and couldn't get any business to clearly articulate why they needed blockchain.
Note: AWS has a hosted blockchain that you can use. [1]
PS: If anyone has read that essay, please do share the link. I can't really locate it but that's a wonderful read.
[1]. https://aws.amazon.com/managed-blockchain/
Tim Bray wrote about it: https://www.tbray.org/ongoing/When/202x/2022/11/19/AWS-Block...
That's exactly it is! Thank you!
One of the great benefits of AI so far has been the push for more plain-text documentation and opening up API access via MCP. Let's enjoy it while it lasts, until we are forced back into walled gardens and microtransactions.
The AI umbrella has been helpful to my BigCorp to justify more machine learning work, as well as discrete optimisation and scheduling problems.
Agentic AI, which is a huge buzz in enterprise, feels more like workflow and RPA (again), with people misunderstanding that getting the happy flow working is only 20% of the job.
For the type of work I typically do, AI is hopelessly terrible. Not too surprising because there is zero training data.
So how do you learn?
Trial and error?
I mean I'm in a similar situation in that I'm a games developer and no AI system has been trained on the details of PS5/Xbox/Switch development since those are heavily NDA'd and not available to the public. So I learn by reading docs which are available to me as a registered developer, but AI doesn't have that ability and it hasn't been trained on this.
Because AI is a financial bubble, and it is the only thing holding up the entire US stock market. But the day of reckoning is near.
Are you using it? I genuinely don't understand how people who are experimenting with the tool can feel this way
It's not inconsistent to say there's a financial bubble and also genuinely think it's a new era for software development.
There aren't enough programmers to justify the valuations and capex
Dot-com was a financial bubble, but the internet was still very useful. Financial markets can become (and often are) dislocated from reality.
I sometimes use Grok, but not much. Your confusion is strange. I never said the tech is a bubble (it can be used today, although in a very limited manner compared to how it is being sold to the public), just the financial aspect of it. If you were more educated in investing, economics, or geopolitics, you'd understand what is going on. I am not being hyperbolic here. Even Altman admitted AI is a bubble. It's really no secret to anyone. But bubbles will be ridden no matter what, all the way up, until they pop. So knowing it is a bubble does not change much. We just know what we can expect once it pops.
tl;dr I was merely answering the question the article poses.
Simple fact: AI is extremely powerful in the hands of experts who have invested time in deeply understanding it and in learning how to actually use it well, and who are then willing to commit more time to build an actually sustainable solution.
Alas, many members of the C-suite do not exactly fit that description. They have just typed in a prompt or three, marveled that a computer can reply, and fantasized that it's basically a human replacement.
There are going to be a lot of (figurative, incorporated) dead bodies on the floor. But there will also be a few winners who actually understood what they were doing, and the wins will be massive. Same as it was post dot-com.
Something I've noticed is that LLMs seem to be able to answer questions on everything, in quite a lot of detail. But I can't seem to get them to actually do anything useful; you basically have to hand-hold them the entire way, to the point that they don't really add value. I'm sure there is plenty of research into this, but there does seem to be a big difference between being able to answer questions and actual intelligence.
For example, I have some product ideas in my head for things to 3D print, but I don't know enough about design to come up with the exact mechanisms and hinges for them. I've tried the chatbots, but none of them can really tell me anything useful. Once I already know the answer, they can list all kinds of details and know all about the specific mechanisms, but they are completely unable to suggest them to me when I don't mention them by name in the prompt.
AI is useful to people who read and understand the answers and who would have eventually come up with a similar result on their own.
They have judgement. They can improve what was generated. They can fix a result when it falls short of the objective.
And they know when to give up on trying to get the AI to understand: when rephrasing won't improve next-word prediction, which happens when the situation is complex.
> AI is useful to people who read and understand the answers and who would have eventually come up with a similar result on their own.
I am such a one, and AI isn't useful to me. The answers it gives me are routinely so bad, I can just answer my own questions with a search engine or product documentation faster than I can get the AI to give me something. Often enough I can never get the AI to give me something useful. The current products are shockingly bad relative to the level of hype being thrown about.
It also happens when you ask it to do simple things like create comments. At least 400 words and still it will regurgitate/synthesize, often with a "!", the content you're asking it to comment on.
A big benefit of coding agents: for things to work, there had better be documentation. Humans usually aren't given any, so anything which forces documentation is good.
Large language models are a deeply impressive technology, but they're not artificial general intelligences, because you need to supervise them. Like everything else that has been called 'artificial intelligence' since the 1950s, I think we'll find some niches that they're good for, and that'll be the end of the hype bubble.
The hype does serve a purpose, though: it motivates people to try to find more possible uses for LLMs. However, as with all experiments, we should expect most of these attempts to fail.