Claude Code is extremely easy to set up and use. I suspect it has already saturated the majority of its addressable market among software professionals.
What if there are no other killer apps for Enterprise? Only CC will produce the level of token churn that could drive huge profits for model providers.
The Enterprise market is not as substantial as the rapid success of CC makes it seem.
What about "cowork", aiming to be the claude code of excel files and pdfs and screenshotting your desktop to tell you what's wrong?
Like, that feels like it's also a huge amount of token churn ("sure, I can search every xls file on your machine to find the 2023 invoice from that company"), and very early in its adoption curve.
Most people are still using AI as a webpage chatbot to ask questions and copy+paste between, but an "openclaw"-like assistant that can access your files and email (and opens you up to wild security attacks) seems like a really big killer app.
Cowork to me also seems like it'll take longer to reach the broader market since the models are less good at "use the mouse and keyboard to do this repetitive task" than "write code", but I see it as having killer-app potential with lots of token churn.
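To make the "search every xls file" example above concrete, here's a minimal local sketch of that brute-force sweep, just to show why the token churn is huge: every workbook the agent inspects costs tokens. The paths, the search string, and the use of openpyxl are my own illustrative assumptions, not how Cowork actually works.

    # Scan every .xlsx under a directory for a string (openpyxl doesn't
    # read legacy .xls files). An agent doing this via tool calls pays
    # tokens for each workbook it opens and inspects.
    from pathlib import Path
    from openpyxl import load_workbook  # pip install openpyxl

    def workbook_mentions(path: Path, needle: str) -> bool:
        """True if any cell in the workbook contains the needle string."""
        wb = load_workbook(path, read_only=True, data_only=True)
        try:
            for ws in wb.worksheets:
                for row in ws.iter_rows(values_only=True):
                    if any(needle in str(cell) for cell in row if cell is not None):
                        return True
            return False
        finally:
            wb.close()

    def find_mentions(root: str, needle: str) -> list[Path]:
        return [
            p for p in Path(root).expanduser().rglob("*.xlsx")
            if workbook_mentions(p, needle)
        ]

    print(find_mentions("~/Documents", "Acme Corp"))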
I think The Verge said it best. Taking maximum advantage of these tools requires a "software brain", which the average person does not have. They struggle to set up a simple automation in their smart home platform of choice. There is little reason to believe they will take the leap to use such tools to simplify daily tasks, because that requires thinking about which daily tasks can be simplified and automated.
I don't think 'software brain' is required for non-coding tasks. Rather, it requires 'manager brain', the ability to delegate, direct, and review the output. Manager brain is more prevalent than software brain and likely learnable by many knowledge workers who don't yet have it.
I think you still need software brain, because ultimately, this stuff still has limitations driven by software constraints, and having the AI try to explain it to them doesn't necessarily help.
I think we all have had experiences with people treating their computers as magic boxes and not understanding why certain requests simply are not possible to satisfy.
A growing number of non-technical managers are now using Claude Code to build small custom software. A larger share will use Cowork to automate routine business tasks. Claude Cowork will become easier to use and more automated over time, as it learns the user's preferences, just like a good executive assistant does.
Granted, it's possible that a majority of people will not acquire a proper 'manager brain' either, and we'll see how that pans out. Evolutionarily, managerial skills are much more aligned with what many hunter-gatherers might have learned as they matured and became advisors more than doers.
Even if only 10-20% of people end up using multiple autonomous agents regularly for their work and business, that will change the economy. Contrast this with <1% of people who develop software professionally.
You also need the brain of not giving up after 2/3/10 tries. I don't know what the exact numbers are, but if something doesn't work properly after the second or third try, a huge percentage of people give up.
You have to recognize that something is a problem you can delegate in the first place. One example I love to trot out: do you have any toilet seats in your life that kinda slide around a bit and don't seem securely attached? It's absolutely trivial to fix, and it's really annoying when it happens, yet with shocking frequency I encounter people who've just been living with the annoyance because they didn't process it as something they could solve.
It's not that easy to fix, and it can be kinda gross, and once it happens once, it tends to happen again in fairly short order. I'm someone that's fixed those loose seats countless times, and continues to do so, but the gap between me noticing it and fixing it is consistently growing.
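You've never tried to train the average admin.
Basic forms can be a challenge. Even things like selecting a dropdown menu or pushing a button can be surprisingly hard.
Most people here have no idea what works for the majority of people - who don't want to spend time figuring stuff out.
I'm sure many here live in delulu land wondering why everyone doesn't find the open claw stuff as fascinating as they do.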
Yes. And that's not a criticism of average people. Tools should fit the user, not the other way around. Designers systematically removed shadows and visual cues. Developers render buttons off the screen, requiring a scroll to submit. It's hard to criticize the user under those circumstances. But there are people with art brains, and math brains, and software brains. So it may be the case that AI adoption is limited by how it expects the user to relate to the tool.
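How do you delegate, direct, and validate results if you have no idea what you're looking at?
This is the same issue many managers of people have for the same reason.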
The whole point of point-and-click (GUI) was that one barely had to engage the brain compared with using a terminal.
The ideal experience is one where your resources can be allocated so that you achieve a goal with minimal effort. We are very far away from this ideal with LLMs, and absurd amounts of money have already been spent.
The point of AI is that it's supposed to be intelligent. Why silo it in an app? Instead of telling it what to automate, shouldn't it sit at the OS level, watch everything you do, and figure out what to automate by itself?
Most people don’t have good enough hardware to run a decent model. I’m not even sure if any local models can handle image input (but I’m by no means an expert in local models).
So if you’re going to need the data center to process it, then you run into the same issue Microsoft did when they announced the OS feature where they took screenshots of your desktop all the time for advanced search or whatever. People consider it to be a privacy issue.
Humans do not want something sitting at the OS level, watching everything you do. Microsoft, famously, tried this and the backlash was immediate and intense.
If you believe you can do better, then build it! I don't think the tide has changed though.
> shouldn't it sit at the OS level, watch everything you do, and figure out what to automate by itself?
Read that again and really ask yourself if you want a private company to have access to all that and the ability to do whatever it wants with your system at the OS level.
On a smartphone, you're trusting Apple or Google to make the OS. They already can do anything they want with your system. Do you read every line of code in every security update?
An AI that consumes every document on the system in response to a simple search request is going to be fired just as quickly as a human who does the same, once replacements that can use conventional search tools to accomplish the task efficiently are widely available.
Similarly, customers who rely on AI cowork tools will come to favor systems and applications that expose AI-friendly interfaces. That shouldn't be difficult to implement in most cases, given that the models in question are already good at consuming API documentation and writing code (and, for that matter, writing API documentation, refactoring, and generating relatively straightforward wrapper code).
I have less faith in the market's ability to effectively respond to security threats in a timely fashion, alas.
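To picture what such an "AI-friendly interface" could look like, here is a minimal hypothetical sketch: one narrow, typed, documented entry point, plus the JSON-schema-style tool description that most function-calling APIs accept. Every name here (Invoice, find_invoices, the sample index) is invented for illustration.

    # A hypothetical AI-friendly interface: a narrow, documented call an
    # agent can make instead of crawling the filesystem document by document.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        vendor: str
        year: int
        path: str

    # Stand-in for a real backing store, invented for the example.
    _INDEX = [
        Invoice("Acme Corp", 2023, "/invoices/acme-2023-03.pdf"),
        Invoice("Acme Corp", 2024, "/invoices/acme-2024-01.pdf"),
    ]

    def find_invoices(vendor: str, year: int) -> list[Invoice]:
        """Return invoices matching a vendor and year."""
        return [i for i in _INDEX if i.vendor == vendor and i.year == year]

    # The matching tool schema handed to the model:
    FIND_INVOICES_TOOL = {
        "name": "find_invoices",
        "description": "Look up invoices by vendor name and year.",
        "parameters": {
            "type": "object",
            "properties": {
                "vendor": {"type": "string"},
                "year": {"type": "integer"},
            },
            "required": ["vendor", "year"],
        },
    }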
"Push buttons for me" in the most common ways I see it used ("add this ticket to Jira so I don't have to") is a nice timesaver for being lazy but it's not a 10x multiplier to justify the subscribe-forever cost.
I think it's more likely that the companies that employ large numbers of people to perform manual push-the-button-then-the-other-button workflows will replace the tools that need button-pushing with other sorts of automation.
And outside of work I wouldn't spend any money on something to save myself the ten minutes of logging in to pay my credit cards or check my bank statements once a month or so. I have no real need for an always-running assistant and even the things that it seems most useful for today (beating unassisted humans to the punch for limited-quantity things) are only something it could help with as long as only a very few people have access.
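Cowork is a dead end. Most people can't operate OneDrive.
Tools like Claude are best at answering things when the user understands the question.
Why did they even bother putting resources into that project? Bizarre.
It's telling how scarce vision is.
It's an incredibly useful product for the people who can use it.
It just isn't the next Microsoft Office. A market of 10M people vs 2B!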
> What about "cowork", aiming to be the claude code of excel files and pdfs and screenshotting your desktop to tell you what's wrong?
I've been using these types of functions for a while for some specific use cases, and they're super useful for this. E.g., go into my budgeting app and explain why a certain discrepancy between forecast and actual occurred, which would otherwise cost me a huge amount of time.
I’ve also been using Cowriter AI, which actively learns from what you’re doing by taking screenshots of your screen every few seconds.
These types of utilities are just starting, they’re underexplored, and will definitely burn lots of tokens (while creating value).
It's been pretty funny seeing people who did not predict Claude Code's success, and who previously said the whole sector was a nonsense dead end, now saying: well, okay, there's one massively successful killer app, but what if that's the only one ever?
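It's the second killer app. The first was AI Chat. It was genuinely game changing and still is.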
The scrutiny is because the actions of the company suggest that the company itself has no idea what another killer app could be, let alone one big enough to justify a $1T valuation.
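The whole sector is still quite likely a nonsense dead end.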
Claude Code is a rare product that is both beneficial and economically addictive: its use increases demand for itself, at least in the supply/demand range for code we are accustomed to. It makes making software so much easier that Claude-coding custom software becomes a solution to all sorts of past annoyances. Maintaining the software is easy enough thanks to Claude Code.
I have now witnessed first hand what the unexpected benefits might be. I expected CC to be a boon to overburdened teams, because it's now possible to spend $2 on compute and have it write a mostly-one-off tool that nobody would ever otherwise have the capacity or time for.
Sure, that's happening too, but to a lesser degree than I thought. CC with a number of "enterprise integrations" (really: corporate MCPs) is a pretty hefty force-multiplier for operations teams. "Go summarise and dissect this weird client request for me. Documentation is spread across at least $THESE_ENTERPRISE_DATA_SILOES." Saves a bunch of time pinging the different people across continents who happen to know intimate details. That was not entirely unexpected.
It's the technically minded but not necessarily otherwise technical people who keep surprising me in weird and wacky ways. People are building themselves and their immediate peers disposable dashboards. Who needs a service to pull data for a real-time display when CC can collect the necessary information and construct a local, static HTML file with all the info neatly in one place? I'm sure there will be a hangover because the compute cost for doing these in JIT fashion will surely feel like death by a thousand cuts at some point, but the ability to really quickly validate whether certain types of data aggregations are useful is proving to be ... a positive development.
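For a flavor of that disposable-dashboard pattern, here's a toy sketch of the kind of throwaway script involved: aggregate a CSV and emit one static HTML file. The file name and columns (tickets.csv, team, open_tickets) are invented for illustration.

    # Aggregate a CSV and write a single static HTML "dashboard".
    import csv
    from collections import Counter

    def build_dashboard(csv_path: str, out_path: str) -> None:
        totals: Counter = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # Sum a numeric column per category; adjust names to your data.
                totals[row["team"]] += float(row["open_tickets"])
        table_rows = "".join(
            f"<tr><td>{team}</td><td>{count:.0f}</td></tr>"
            for team, count in totals.most_common()
        )
        html = (
            "<html><body><h1>Open tickets by team</h1>"
            "<table border='1'><tr><th>Team</th><th>Tickets</th></tr>"
            f"{table_rows}</table></body></html>"
        )
        with open(out_path, "w") as f:
            f.write(html)

    build_dashboard("tickets.csv", "dashboard.html")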
I disagree about the ease of maintaining the software, though. You still need the skills to really understand what the code is doing, and with the original "why" possibly lost in the adrenaline haze, the maintenance effort floor has shifted.
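> Claude Code is a rare product that is both beneficial and economically addictive
I'm in the film and engineering spaces, and I can honestly say the same about image and video models.
There is so much fun in all of these tools, and the productivity gains are insane.
I shoot film, but I never would have been able to do anything like this before:
https://www.youtube.com/watch?v=HDdsKJl92H4
https://www.youtube.com/watch?v=oqoCWdOwr2U
Today, I saw AI OR DIE with this banger:
https://www.youtube.com/watch?v=CNbmoVdirxw
Gossip Goblin is doing incredible work as usual. Dude is a savant and would have killed it in Hollywood if he'd had a chance before:
https://www.youtube.com/watch?v=-Rzl7nUdEs4
Corridor Crew is leaning in and building new tools:
https://www.youtube.com/watch?v=Y3Dfw969itU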
There's just so much incredible stuff being made by really brilliant people that never would have had the chance before. And these tools are literally brand spanking new. We're just getting started.
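That's my view too. I love what people are doing with this stuff. I really want to get a decent rig to start doing this stuff locally someday.
Thanks, these are a real trip, especially loved Pi Hard.
Uh, how is this possible? Is this all Veo 3? How are they getting such fantastic continuity across clips?
They aren't one shotting these.
You would be surprised.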
Some shots are indeed impossible to one shot, but others can serendipitously turn out better than you wanted.
I'd say it averages 2.5 generations per shot: a lot of one-off shots that land on the first try, and some (few) shots that just won't work no matter how hard you prompt.
That said, it's likely you'll find usable footage even in the losses. Videos are meant to be cut. A failed generation might still have salvageable content you can cut to/from.
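Editors are the super-powered folks in AI video.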
Veo 3 is one of the weaker video models, surprisingly!
Google and OpenAI are both really far behind the Chinese at this point. Perhaps Google will unveil something groundbreaking at Google I/O, but both companies have been trailing for well over a year at this point.
One of the reasons OpenAI gave up was that not only were they losing money, they were also ridiculously far behind (11th or below in the rankings).
The models most professionals use are Kling o3, Kling 3.0 and, more recently, Seedance 2.0. These are all Chinese models.
Seedance 2.0 stands out as an almost order-of-magnitude improvement over everything else in the industry. It's truly the SOTA model: it blows everything else out of the water, and it's remarkable to experience in use.
On April 30th, Alibaba's new Happy Horse model rolls out. They poached folks from Kling to build it. It's supposedly 2-5x cheaper than Seedance 2.0, and its ELO scores rank it as the new highest performing model.
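https://artificialanalysis.ai/video/leaderboard/text-to-vide...
https://artificialanalysis.ai/video/leaderboard/image-to-vid...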
> Only CC will produce the level of token churn that could drive huge profits for model providers.
Are they actually driving any profit? I mean actual profit: not tokens, not users, not "profit" that ignores inference costs, training, R&D, etc. I'm not arguing against how useful it is, nor how popular, just asking about the basic total spent minus total earned.
Us devs have shiny object syndrome. We will use whatever we perceive to be best at the moment and move on. People are already souring on Opus 4.6 due to what appear to be opaque changes to it by Anthropic. For any of these companies to be successful, they need to get to a point where their models stop growing and compute gets multiples cheaper.
Missing the Claude Code market was the biggest swing and miss ever.
Too busy trying to make TikTok for preteens with $4-per-generation videos that lost their novelty the minute IP was off the table. They didn't even identify that the professional market was the correct place to invest in video, like Kling and ByteDance did.
Chasing consumers killed their ascendancy.
Sam is a ruthless leader and knows how to build an empire, but he's also a distracted leader who chases too many flights of fancy. Without a golden goose like Zuckerberg has, every mistake is a knife wound.
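They get exactly what they deserve imo.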
It's pretty embarrassing how they have blown the lead. Instead of finding a pathway toward selling tokens in volume (software production), they spread themselves thin and tried to hype up research, Sora, a web browser... blah blah.
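Again - they get what they deserve.
What evidence is there that he knows how to build an empire outside of fundraising?
The 750+ million users of ChatGPT might count for something...
If they are able to monetise them.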
It amazed me that no one picked up on Codex last summer when it was effectively unlimited. I must have burnt through £10k worth of inference whilst still paying £20 a month
Last summer it was good for certain tasks, but letting it run wild was a recipe for a huge mess where you'd spend more time unraveling it than writing the whole thing by hand. That was my experience, at least.
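It's still great. I've pumped out 5 apps this month on the $200.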
What is incredibly disgusting to me is the idea that there can be only one winner in the stock market, which spits in the face of free-market competition.
This is not an AI thing, this is a stocks thing, which I've been complaining about incessantly.
If a given domain, like AI, has competition, that means you have to sell things at cost plus margin, and rush ahead or be crushed by competitors. That will definitely make you good money, but it won't make you a king.
This is not the kind of money people involved with these kinds of companies are looking for.
AI right now looks like a competition, with many horses in the race, all more or less building the same product.
It will be hard to squeeze and enshittify this considering people can just jump to another vendor; thus, if the current market structure were to prevail, investors would go.
Thus competition has to go.
Altman knows this, and tried to position OpenAI as the obvious winner in this competition, but I guess in the process he managed to alienate people, so now he's not doing so well.
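But who knows what the future will bring?
I don't think it's true there can only be one winner. Lots of industries have multiple successful large companies.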
HN is a bubble. I hear people from outside of Silicon Valley that only just started trying out Claude Code recently. There's still a ton of developers yet to jump on board.
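CC kinda sucks compared to opencode & kimi / minimax. It's slow and annoying and the UI is subpar.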
Nothing is worth $852B in that span of time unless they are printing more than half of that in cash, which, to be clear, they are not: they are burning it at that rate. It's a valuable company, a valuable product, a valuable technology. It set the trend for the next phase of computer usage. But it's not worth $852B in that span of time, and when it goes public that reality will bear down on them quickly.
It's a falling knife. Don't try to catch it on the way down. That valuation might be justified in another 10 years.
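> justified in another 10 years.
Hard to imagine when they don't have any moat.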
Sure they have ... I don't know how many users, but it's not like a social network. Instagram was valued at $10B with ~10 employees very, VERY fast, not because of its tech or employees but mostly, IMHO, because of the number of locked-in users ... because of OTHER users.
Here, if one wants to move from OpenAI to Anthropic, they can, and they do. You might have difficulty exporting history, context, etc., but you manage.
Even basic email has more lock-in than any of the model providers. They arguably did have some moat a few years ago, but now there is no differentiator that would justify such a valuation.
They are no Meta/Google/Microsoft/Oracle, not because of their size or technology, but only because their customers can swap providers.
> Even basic email has more lock-in than any of the model providers.
History has proven the average person has very little ability to discern which products have lock-in.
Everyone was confidently predicting Uber would dominate over all the regional ride sharing apps because ride sharing is a commodity and subsidies were enough to shift user behavior.
The lock-in thesis from AI providers has always been coherent: increased personalization and learning of workflows over time would make switching to a new AI increasingly worse than staying with an AI that already knows you. If you look at the human virtual-assistant world, stickiness is incredibly high once you are happy with your onboarded assistant, because onboarding a new person unavoidably sucks.
Is this thesis correct? We don't know; it took Uber billions of burnt cash to discover their thesis was incorrect.
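Falling knife or not, if you own an index fund, or if your 401k owns one, you're buying a piece of it at IPO prices. The exit scam is almost complete.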
Index funds won't get in at IPO prices. They wait a year or so before including new stocks, so the price is guaranteed to have settled by then. OpenAI also isn't profitable yet, so that's another point against them in terms of being included in index funds.
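NASDAQ just changed some rules recently concerning exactly this.
https://finance.yahoo.com/news/new-rule-could-fast-track-spa...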
As someone working in the enterprise space with OAI, this still feels like we're in the top of the first inning.
Many teams remain anchored on equating AI with chat experiences, while a growing share of enterprise value is emerging from leasing compute clusters to run agentic workloads in containerized environments.
OpenAI has built a cloud-first architecture that supports this model. The desktop experience and applications are sexy, but enterprise usage will likely skew heavily toward asynchronous, background processing.
I know that people keep saying "we're early on here", but I take it as a negative signal that people keep thinking we are in the early innings. Compared to previous generations of technology change, a great deal of time has passed; it should be a bit disconcerting that no one seems to have found a way to make money out of this yet.
Look at previous killer apps: they came out quickly and were raking in money very quickly. The Apple II went on sale on June 10th, 1977. VisiCalc went on sale October 17th, 1979: 860 days separate the two. Apple IPO'd in 1980 with a 21% operating margin! Netscape Navigator 1.0 released December 15th, 1994; Amazon.com made its first sale July 16th, 1995, 214 days later. AMZN IPO'd May 15th, 1997, 883 days after Netscape 1.0 was released to the public (they had raised <10 million dollars to that point, and chose not to show a profit because they kept re-investing all of their earnings into expanding the business).
We are already 1232 days out from ChatGPT 1.0, so we're roughly 40% farther along than either of those killer apps, and no one has figured out as good a business model for Generative AI as either of those were.
To use the other great technology transformation of the past 50 years, cell phones: I have a bit of trouble picking the right comparison for ChatGPT 1.0, but working backwards from today by the time since ChatGPT 1.0 opened to the public spans roughly the gap from the launch of the Motorola Razr to the iPhone 3G (the first one with an App Store, the real killer app), to give you an idea of how fast mobile technology moved.
Do note that the Razr and the iPhone, like VisiCalc, the Apple II, and Netscape 1.0, were hugely profitable for their companies, in a way that no one has demonstrated with Generative AI. Amazon is a bit of a special case: they were not raising money, they were re-investing the cash the business threw off into expansion rather than booking it as profit. I don't believe that any AI company is generating cashflow the way Amazon was in 1997, and the other companies mentioned here were GAAP-profitable.
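The day counts above check out with stdlib date arithmetic (the comment's figures run one day higher, counting inclusively); launch dates are as given in the comment, with 2022-11-30 for ChatGPT:

    from datetime import date, timedelta

    gaps = {
        "Apple II -> VisiCalc": (date(1977, 6, 10), date(1979, 10, 17)),
        "Netscape 1.0 -> first Amazon sale": (date(1994, 12, 15), date(1995, 7, 16)),
        "Netscape 1.0 -> AMZN IPO": (date(1994, 12, 15), date(1997, 5, 15)),
    }
    for label, (start, end) in gaps.items():
        print(f"{label}: {(end - start).days} days")  # 859, 213, 882 days

    # 1232 days after ChatGPT's launch, about 1.4x the longest gap above:
    print(date(2022, 11, 30) + timedelta(days=1232))  # 2026-04-15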
There is some revenue in copywriting, translation, and generating images. But that is probably $20-per-month-per-seat enterprise plans with limited use, with the cost of inference possibly varying enough that there are real marginal costs...
Is it actually profitable? That the presumed market leader, Anthropic, changed their business model just today to kill off their buffet monthly plans and switch to a la carte for Enterprise makes me doubt they are making money off of selling tokens to software developers.
On top of that, the APIs/Tools/Function Calls into the real world don't exist yet. But consumer products are going to start eventually exposing functionality to these LLMs. By that time, I wonder if we'll all have an edge-inference box sitting in every one of our houses that we buy from a consumer products company like Apple or from Amazon, or directly from OpenAI or Anthropic. These little brains will be the low latency central nervous system of a lot of things in our homes, and gateways to the larger models in the cloud. Or at least that's how I imagine it sorting out in the future.
Previous generations of technological change of the calibre we are told AI will be also required major changes to the real world and new products to be built: new cell towers had to be constructed, fibre cables laid, data centers built, personal computers produced, warehouses established. And software needed to be fundamentally rewritten to support each of these generations too. And yet the companies doing that in those previous generations managed to produce huge profits significantly faster than Generative AI has.
That's my biggest concern with it, I don't see the business case closing anywhere, and without businesses that actually make money all the technology in the world doesn't actually do anything.
> And yet the companies doing that in those previous generations managed to produce huge profits significantly faster than Generative AI has.
Have you considered a simple answer to this inconsistency? The market and investors do not demand that these AI companies make a profit. The only reason companies are expected to make profits is that either those who own shares in the company expect it, or those willing to invest in the company expect it.
> There just isn't enough compute right now to realize the larger monetization strategies.
How can this be relevant? Why isn't the compute we have available right now sufficient for turning a profit?
Is this another one of those "We lose money on each sale but make it up in volume" things?
I mean, if much much larger investments are needed before current LLM providers can turn a profit, that's not a good indicator that they have any sort of sustainable business, is it?
Comparing the IPO market today to the IPO market in the late 90s is not very instructive. You could have IPO'd a lemonade stand in 1998 and raised $10 million.
I'm using that only for AMZN because they seem to have made a choice not to turn a profit and instead to expand their business. The other companies I mentioned were directly profitable by this point in their respective revolutions. For Amazon, I'm using the IPO as proof that they had a sustainable business even if it wasn't precisely profitable: they were generating enough cash to be profitable, they just chose to reinvest it into the business. I don't see any evidence that any of the major Generative AI companies are in that position, or the position that Apple, Netscape, Motorola, etc. were in.
And that's the weird one, all of the other examples I provided were booking real profits by this point in their technology cycle.
I think the fact that IPOs have come slower over the years is more about larger VC markets that can fund valuations up to hundreds of billions, rather than anything to do with adoption.
As you note, Netscape and Amazon IPOed fairly quickly.
Google took 6 years (1998 to 2004)
Facebook took 8 years (2004 to 2012)
Alibaba Group took 15 years (1999 to 2014)
Claude Code is at $30B annual recurring revenue, and it launched in Feb 2025; OpenAI is at $25B (although they measure partner revenue differently). By comparison, the iPhone made $630M in revenue in the 12 months after it launched.
> Claude Code is at $30B annual recurring revenue, and it launched in Feb 2025; OpenAI is at $25B (although they measure partner revenue differently). By comparison, the iPhone made $630M in revenue in the 12 months after it launched.
What does revenue have to do with it? Companies usually want to IPO with a decent profit margin showing on the books, revenue doesn't usually come into it.
I was convinced they were going to go the openclaw (or something similar) route... pivoting into cybersec/enterprise makes sense if they are trying to copy Anthropic, but it doesn't really telegraph any sort of differentiator.
This is the key. These frontier model companies are funneling all of their time and resources into scaling, how could they possibly be researching the next phase of AI? Once scaling hits the limit the money is gonna dry up.
I suspect that Google is working on improving their models for coding behind the scenes. Hopefully they release something soon to compete with Codex and CC. To be candid, I use CC, but have not tried out Codex.
There are roughly 8B people in the world, and somewhere between 2B and 3B have never used the internet. If OpenAI manages to capture the 6B internet users while growing at 100% per year, they have at most 3 years of user growth left. Then what?
What makes GPT 5.4 bad to chat with? To me it seems smart and does the job, albeit a bit slowly. I'm using it only/mostly with the "pro"/xhigh reasoning.
The way it converses is the least human-like of all the models. It communicates like it's writing markdown documents instead of just conversing normally like every other model does. You ask a question and it spits out a design doc instead of just answering the question like a normal human would.
There does seem to be like a 1% chance (maybe 0.5%) that this turns into a WeWork situation. It's a product that users love, but the company leadership is so used to lying and deceiving and being loose with numbers that the IPO filing could be a pretty big shock. Either they'll have to tell the truth, which will be much less rosy than the lies, or they'll lie and turn everyone off.
Just maths: if they're as capable as each other, then product X cannot be worth multiples above product Y unless there's a clear USP. Arguably OpenAI's is brand recognition, but given Anthropic's recent growth, that's less certain than it was a quarter ago.
I don't believe that either Anthropic or OpenAI are going to survive the AI valuation crunch. Google, Meta and Microsoft will because they're not AI-only companies. There are four reasons why I believe this:
1. I honestly don't think that AI is all that useful for anything other than suppressing labor costs and I don't expect that to change in the short to medium term;
2. I really don't think Anthropic or OpenAI can ever satisfy their stratospheric valuations. I foresee no possible cash flow that will arrive quickly enough to make that happen;
3. Hardware costs will devalue the trillions invested in AI data centers. By 2030 the GPUs will probably be at least 3x as good. Bear in mind, it's just over 4 years between the 3090 and 5090 and that's 3x TFLOPS; and
4. China or other actors will make sure that proprietary LLMs won't be dominant. DeepSeek was a shot across the bow. China in particular won't want a US tech company to dominate this space. The increasing RAM in local, relatively cheap computers will make this more and more viable.
Bonus prediction: I think China will be making their own homegrown NVidia equivalent GPUs on homegrown EUV by 2030.
For some reason, the latest Claude Desktop release from Anthropic threw off its Claude branding and charm to chase the bland Codex Desktop look and feel.
What happens if OpenAI collapses at this point? Is it just too big to fail given defense contracts and Microsoft?
The Sora sunsetting marked a big shift towards enterprise focus and meeting Anthropic on the enterprise battlefield, but almost all engineers I work with or know are using Claude at this point exclusively.
There has been a stream of HN posts (I've noticed this mainly in the past few weeks) implying some people prefer ChatGPT/Codex to Claude.
Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting, often stopping in the middle of a query. ChatGPT/Codex doesn't have this problem.
My 2 cents: Claude is more expensive, but it has something that Codex/GPT lacks that's not easy to quantify. Opus is probably a bigger model (my guess) and trained on code and technical writing (books?) of better quality compared to GPT.
However, once you learn how to deal with the laziness (which can be handled with some CLAUDE.md instructions and context docs), Claude shows better taste for coding. It replicates patterns from the repo, writes more readable/maintainable code, follows instructions, and captures implicit information.
GPT/Codex is not a bad model/agent, but it lacks something. It's amazing for code reviews, but it writes code with zero regard to your existing codebase or SOLID/DRY principles. It just likes to output code (a lot of it) that works for the task you gave it right now, with zero regard for maintenance later. And also over-uses defensive programming in a way that quickly makes the codebase unreadable for dynamic languages.
Claude is not perfect, I still have to steer it sometimes to prevent overengineering or duplicate code, but a lot less than when I try Codex (and the built-in /simplify does half of the work for me).
>>Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting, often stopping in the middle of a query.
The free version is pretty much unusable. Not a single query completes. You get only one query every 4 hours; given about 12 waking hours, that's 3 queries a day, none of which complete.
$20 plan gets you only a small distance from there.
Looks like the focus is entirely on Enterprise customers these days. They don't even bother with their regular users anymore. CC is entirely an enterprise product.
> Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting
Utterly not my experience. I use opus near daily for long research sessions (not all agent based). Are you throwing in 100k input tokens to every query?
What the hell kind of queries are you running? I use Claude Pro all the time for asking questions, doing data analysis, writing side projects, and I very rarely get rate limited.
I use Claude Max 20x at work and I rarely hit 10% session utilization, which implies even using Claude to write code all day only uses 2x the Pro token limit.
Are you just telling it to try again when you get a response you don't like?
The amount of peripheral growth around it is even larger: tons of construction, utility-company upgrades, etc. There are more data centers under construction than there are currently operational data centers, so the current buildout alone represents a more-than-doubling of capacity. If you include projects in the planning phase, it is something like a 4-6x expected capacity increase. The utility infrastructure buildout to meet this demand is equally huge.
The warning signs are already starting to show up, though: projects are being stalled or not filled out, with delays blamed on China, etc. But the funding is still present, and construction keeps going on the next building even as the last one sits vacant and offline. The sky-high purchases of property from connected individuals by site developers continue, even as pushback mounts and many places pass anti-datacenter ordinances.
Claude Code definitely has a head start, but there have been a few HN posts about a perceived nerfing of the intelligence and settings in the past month or so. Codex could capitalize on that weakness. They just introduced a $100 monthly 5x plan so they are at parity with the Claude Code plans. If Anthropic fiddles too much more with the settings then people will start to switch to Codex.
Codex is just better than Claude, but Claude is faster and has the better UI for VS Code. That's why I use Claude as my main coder, with Codex (5.4 with xhigh effort) as an MCP reviewer, etc. It is clear to me that Codex is a better programmer, but the UI and speed are too much of a con to use it exclusively. Claude is just clumsy.
If DoD systems are running on OpenAI infrastructure, you can't just pause them for 6 months during an acquisition. This gets far more complex than just "liquidation of assets".
Because their assets would have been vastly overvalued. The bailout is when the government buys those assets at as close to that fictional valuation as they can, and likely then sells them back at their actual worth.
> Absolutely no reason for a bail out.
There's never been any reason for a bailout. It's just handing tax money to wealthy people who have made bad decisions.
At some point you reach a size when too many politicians and the people who own them have invested so much money that they're willing to take any size political hit in order to save themselves from personal losses when you fail.
It's an absolutely hilarious/absurd valuation for a company that has no path to doing anything other than lighting money on fire, forever. I'd call it nonsense, but Tesla's valuation proves the market runs on shenanigans at this point, so whatever.
If people want to meme OpenAI into a trillion dollar market cap, I guess let them?
Ah yes, the weekly "ChatGPT is definitely going to fail, for real!" post, with absolutely no substance whatsoever. Still, they know it will definitely be on the front page, regardless. Make sure you subscribe to their pub!
> "You have ChatGPT, a 1 billion-user business growing 50-100% a year, what are you doing talking about enterprise and code?" an early backer of OpenAI told FT. "It's a deeply unfocused company."
This is exactly the dynamic I've been worried about.
If you go to OpenAI's site to learn what they're all about, they're pretty clear about it: "ensure that artificial general intelligence benefits all of humanity", "Join us in shaping the future of technology". They think and I agree that ChatGPT is great, but the future of humanity does not depend on precisely how successful this one consumer chatbot is, and so it is not the company's focus. Anyone who understands OpenAI at even a basic level would recognize this, it's neither new nor subtle.
I'm not sure how to avoid the conclusion that OpenAI investors do not understand OpenAI and are just revenue growth junkies.
The deeper issue is structural, not just investor misunderstanding. OpenAI converted from a nonprofit-controlled entity to a PBC specifically to attract this kind of capital. When you take $6.6B from investors expecting returns, you create fiduciary pressures that are hard to keep out of strategic decisions regardless of what the mission statement says.
The 2023 board fight illustrated exactly this conflict in real time: the board tried to exercise mission-aligned oversight and was effectively overruled by capital. The new governance structure gave investors more influence, not less.
"We take the mission seriously" and "we need to justify an $852B valuation" can coexist for a while, but not forever. The investors may be revenue-focused, but they were invited in under terms that make their expectations structurally legitimate — which is what makes this more than just a perception problem.
> ensure that artificial general intelligence benefits all of humanity
Thus far based on their actions, a reasonable read would be that they believe “humanity” would be better off with fewer people. Whoever you think OpenAI is or was, you’d have to be willfully ignorant of the actions of those who run it to believe it and Sam now.
So now they are realizing that they are indeed in a bubble and OpenAI was extremely overvalued?
Anthropic is also overvalued. Their revenue is not even recurring. It’s now “Annualised Revenue” due to token spend.
These two companies are just vehicles for a pump-and-dump scheme. OpenAI is already offloading shares with "acquisitions" that make no sense, because investors already think they are about to IPO and are not worth the price.
Claude Code is extremely easy to set up and use. I suspect its saturation among software professionals is at the majority of the addressable market.
What if there are no other killer apps for Enterprise? Only CC will produce the level of token churn that could drive huge profits for model providers.
The Enterprise market is not as substantial as the rapid success of CC makes seem.
What about "cowork", aiming to be the claude code of excel files and pdfs and screenshotting your desktop to tell you what's wrong?
Like, that feels like it's also a huge amount of token churn ("sure, I can search every xls file on your machine to find the 2023 invoice from that company"), and very early in its adaption curve.
Most people are still using AI as a webpage chatbot to ask questions to and copy+paste between, but running an "openclaw" like assistant, which can access your files, email, and opens you up to wild security attacks, that seems like a really big killer app.
Cowork to me also seems like it'll take longer to reach the broader market since the models are less good at "use the mouse and keyboard to do this repetitive task" than "write code", but I see it as having killer-app potential with lots of token churn.
I think The Verge said it the best. Taking advantage of these tools to the maximum requires you to have "software brain" which the average person does not have. They struggle to set up a simple automation in their smart home platform of choice. There is little reason to believe they will take the leap to use such tools to simplify daily tasks because it requires people to think about which daily tasks can be simplified and automated.
I don't think 'software brain' is required for non-coding tasks. Rather, it requires 'manager brain', the ability to delegate, direct, and review the output. Manager brain is more prevalent than software brain and likely learnable by many knowledge workers who don't yet have it.
I think you still need software brain, because ultimately, this stuff still has limitations driven by software constraints, and having the AI try to explain it to them doesn't necessarily help.
I think we all have had experiences with people treating their computers as magic boxes and not understanding why certain requests simply are not possible to satisfy.
A growing number of non-technical managers are now using Claude Code to build small custom software. A larger share will use Cowork to automate routine business tasks. Claude Cowork will become easier to use and more automated over time, as it learns the user's preferences, just like a good executive assistant does.
Granted, it's possible that a majority of people will not acquire proper 'manager brain' either and we'll see how that pans out. Evolutionarily, managerial skills are much more aligned with what many hunter-gatherers might learn as they mature and become more of an advisor than a doer.
Even if only 10-20% of people end up using multiple autonomous agents regularly for their work and business, that will change the economy. Contrast this with <1% of people who develop software professionally.
You also need the brain of not giving up after 2/3/10 tries. I don't know what the exact numbers are but if something doesn't work properly after the second or third try a huge percentage of people give up.
You have to recognize that it's a problem to delegate in the first place. One example I love to trot out is, do you have any toilet seats in your life that kinda slide around bit and don't seem securely attached? It's absolutely trivial to fix this, and it's really annoying when it happens, yet with shocking frequency I encounter people who've just been dealing with the annoyance because they didn't process it as something they could solve.
It's not that easy to fix, and it can be kinda gross, and once it happens once, it tends to happen again in fairly short order. I'm someone that's fixed those loose seats countless times, and continues to do so, but the gap between me noticing it and fixing it is consistently growing.
You’ve never tried to train the average admin.
Basic forms can be a challenge. Even things like selecting a dropdown menu or pushing a button can be surprisingly hard.
Most people here have no idea what works for the majority of people - who don’t want to spend time figuring stuff out.
I’m sure many here live in delulu land wondering why everyone doesn’t find the open claw stuff as fascinating as they do.
Yes. And that’s not a criticism of average people. Tools should fit the user not the other way around. Designs systematically removed shadows and visual clues. Developers render buttons off the screen requiring a scroll to submit. Hard to criticize the user under those circumstances. But there are people with art brains, and math brains, and software brains. So it may be the case that AI adoption is limited by how it expects the user to relate to the tool
How do you delegate, direct, and validate results if you have no idea what you're looking at?
This is the same issue many managers of people have for the same reason.
The whole point of click and point (gui) was that one barely had to engage the brain vs using a terminal.
The ideal experience is where one’s resources are able to be allocated such that one can achieve some goal with minimal effort. We are very far away from this ideal with llm’s and absurd amounts of money has already been spent.
The point of AI is that it's supposed to be intelligent. Why silo it in an app? Instead of telling it what to automate, shouldn't it sit at the OS level, watch everything you do, and figure out what to automate by itself?
Most people don’t have good enough hardware to run a decent model. I’m not even sure if any local models can handle image input (but I’m by no means an expert in local models).
So if you’re going to need the data center to process it, then you run into the same issue Microsoft did when they announced the OS feature where they took screenshots of your desktop all the time for advanced search or whatever. People consider it to be a privacy issue.
Humans do not want something sitting at the OS level, watching everything you do. Microsoft, famously, tried this and the backlash was immediate and intense.
If you believe you can do better, then build it! I don't think the tide has changed though.
> shouldn't it sit at the OS level, watch everything you do, and figure out what to automate by itself?
Read that again and really ask yourself if you want a private company to have access to all that and the ability to do whatever it wants with your system at the OS level.
On a smartphone, you're trusting Apple or Google to make the OS. They already can do anything they want with your system. Do you read every line of code in every security update?
An AI that consumes every document on the system in response to a simple search request is going to be fired just as quickly as a human who does the same thing not long after replacements able to use conventional search tools to efficiently accomplish the same task are widely available.
Similarly, customers who rely on AI cowork tools will come to favor systems and applications that expose AI-friendly interfaces, which shouldn't be difficult to implement in most cases under the assumption that the models in question are already good at consuming API documentation and writing code (and, for that matter, writing API documentation, refactoring, and generating relatively straightforward wrapper code).
I have less faith in the market's ability to effectively respond to security threats in a timely fashion, alas.
"Push buttons for me" in the most common ways I see it used ("add this ticket to Jira so I don't have to") is a nice timesaver for being lazy but it's not a 10x multiplier to justify the subscribe-forever cost.
I think it's more likely that the companies that employ large numbers of people to perform manual push-the-button-then-the-other-button workflows will replace the tools that need button-pushing with other sorts of automation.
And outside of work I wouldn't spend any money on something to save myself the ten minutes of logging in to pay my credit cards or check my bank statements once a month or so. I have no real need for an always-running assistant and even the things that it seems most useful for today (beating unassisted humans to the punch for limited-quantity things) are only something it could help with as long as only a very few people have access.
Cowork is a dead end. Most people can’t operate onedrive.
Tools like Claude are best at answering things when the user understands the question.
Why did they even bother putting resources into that project? Bizarre.
It’s telling how scarce vision is.
It’s an incredibly useful product for the people who can use it.
It just isn’t the next Microsoft Office. A market of 10M people vs 2B!
> What about "cowork", aiming to be the claude code of excel files and pdfs and screenshotting your desktop to tell you what's wrong?
I’ve been using these types of functions for a while for some specific use cases, and it’s super useful for this. Eg go into my budgeting app and explain to me why a certain discrepancy between forecast and actual occurred, which would otherwise cost me a huge amount of time.
I’ve also been using Cowriter AI, which actively learns from what you’re doing by taking screenshots of your screen every few seconds.
These types of utilities are just starting, they’re underexplored, and will definitely burn lots of tokens (while creating value).
it's been pretty funny seeing people who did not predict Claude Code's success and previously said the whole sector was a nonsense dead end now saying, well okay there's one massively successful killer app, but what if that's the only one ever?
It’s the second killer app. The first was AI Chat. It was genuinely game changing and still is.
The scrutiny is because the actions of the company suggest that the company itself has no idea what another killer app could be. Let alone enough to reach a 1T valuation.
The whole sector is still quite likely a nonsense dead end.
Claude Code is rare product that is both beneficial and economically addictive, where its use increases demand for itself, at least in the supply / demand range for code we are accustomed to. It makes making software so much easier that Claude coding custom software becomes a solution to all sorts of past annoyances. Maintaining the software is easy enough thanks to Claude code.
I have now witnessed first hand what the unexpected benefits might be. I expected CC to be a boon to overburdened teams, because it's now possible to spend $2 on compute and have it write a mostly-one-off tool that nobody would ever otherwise have the capacity or time for.
Sure, that's happening too, but to a lesser degree than I thought. CC with a number of "enterprise integrations" (really: corporate MCPs) is a pretty hefty force-multiplier for operations teams. "Go summarise and dissect this weird client request for me. Documentation is spread across at least $THESE_ENTERPRISE_DATA_SILOES." Saves a bunch of time pinging the different people across continents who happen to know intimate details. That was not entirely unexpected.
It's the technically minded but not necessarily otherwise technical people who keep surprising me in weird and wacky ways. People are building themselves and their immediate peers disposable dashboards. Who needs a service to pull data for a real-time display when CC can collect the necessary information and construct a local, static HTML file with all the info neatly in one place? I'm sure there will be a hangover because the compute cost for doing these in JIT fashion will surely feel like death by a thousand cuts at some point, but the ability to really quickly validate whether certain types of data aggregations are useful is proving to be ... a positive development.
I disagree about the ease of maintaining the software, though. You still need the skills to really understand what the code is doing, and with the original "why" possibly lost in the adrenaline haze, the maintenance effort floor has shifted.
> Claude Code is rare product that is both beneficial and economically addictive,
I'm in the film and engineering spaces, and I can honestly say the same about image and video models.
There is so much fun in all of these tools, and the productivity gains are insane.
I shoot film, but I never would have been able to do anything like this before:
https://www.youtube.com/watch?v=HDdsKJl92H4
https://www.youtube.com/watch?v=oqoCWdOwr2U
Today, I saw AI OR DIE with this banger:
https://www.youtube.com/watch?v=CNbmoVdirxw
Gossip Goblin is doing incredible work as usual. Dude is a savant and would have killed it in Hollywood if he'd had a chance before:
https://www.youtube.com/watch?v=-Rzl7nUdEs4
Corridor Crew is leaning in and building new tools:
https://www.youtube.com/watch?v=Y3Dfw969itU
There's just so much incredible stuff being made by really brilliant people that never would have had the chance before. And these tools are literally brand spanking new. We're just getting started.
That's my view too. I love what people are doing with this stuff. I really want to get a decent rig to start doing this stuff locally someday.
Thanks, these are a real trip, especially loved Pi Hard.
Uh, how is this possible? Is this all Veo 3? How are they getting such fantastic continuity across clips?
They aren't one shotting these.
You would be surprised.
Some shots are indeed impossible to one shot, but others can serendipitously turn out better than you wanted.
I'd say it averages 2.5 generations per shot. A lot of single one off one shots, and some (few) shots that just won't work no matter how hard you prompt.
That said, it's likely you'll find usable footage even in the losses. Videos are meant to be cut. A failed generation might still have salvageable contents you can cut to/from.
Editors are the super powered folks in AI video.
Veo 3 is one of the weaker video models, surprisingly!
Google and OpenAI are both really far behind the Chinese at this point. Perhaps Google will unveil something groundbreaking at Google I/O, but both companies have been trailing for well over a year at this point.
One of the reasons OpenAI gave up was not only were they losing money, but they were also ridiculously far behind (11th or below in the rankings).
The models most professionals use are Kling o3, Kling 3.0 and, more recently, Seedance 2.0. These are all Chinese models.
Seedance 2.0 stands out as an almost order of magnitude improvement over everything else in the industry. It's truly the SOTA model. It blows everything else out of the water, and it's truly remarkable to experience it in use.
On April 30th, Alibaba's new Happy Horse model rolls out. They poached folks from Kling to build it. It's supposedly 2-5x cheaper than Seedance 2.0, and its ELO scores rank it as the new highest performing model.
https://artificialanalysis.ai/video/leaderboard/text-to-vide...
https://artificialanalysis.ai/video/leaderboard/image-to-vid...
> Only CC will produce the level of token churn that could drive huge profits for model providers.
Are they actually driving any profit? I mean actual profit, not "tokens" or users or profit but ignoring inference costs, same ignoring training, R&D, etc. I'm not arguing against how useful it is, nor how popular, just the basic total spent - total earned.
Us devs have shiny object syndrome. We will use whatever we perceive to be be best at the moment and move on. People are already souring on Opus 4.6 due to what appears to be opaque changes to it by Anthropic. For any of these companies to be successful they need to get to a point where their models stop growing and compute gets multiples cheaper.
Missing the Claude Code market was the biggest swing and miss ever.
Too busy trying to make TikTok for preteens with $4/generation videos that lost their novelty the minute IP was off the table. Didn't even identify the professional market in video was the correct place to invest, like Kling and ByteDance did.
Chasing consumer killed their ascendency.
Sam is a ruthless leader and knows how to build an empire, but he's also a distracted leader who chases too many flights of fancy. Without a golden goose like Zuckerberg, every mistake is a knife wound.
They get exactly what they deserve imo.
Its pretty embarassing how they have blown the lead. Instead of finding a pathway toward selling tokens in volume (software production) they spread themselves thin and tried to hype up research, sora, web browser... blah blah.
Again - they get what they deserve.
What evidence is there that he knows how to build an empire outside of fundraising?
The 750+ million users of ChatGPT might count for something...
If they are able to monetise them.
It amazed me that no one picked up on Codex last summer when it was effectively unlimited. I must have burnt through £10k worth of inference whilst still paying £20 a month
Last summer it was good for certain tasks, but letting it run wild was a recipe for a huge mess where you'd spend more time unraveling it than writing the whole thing by hand. That was my experience, at least.
It’s still great. I’ve pumped out 5 apps this month on the $200.
What is incredibly disgusting for me, is the idea that there can be only one winner in the stock market, which spits in he face of free market competition.
This is not an AI thing, this is a stocks thing, which Ive been complaining about incessantly.
If a given domain, like AI, has competition, that means you have to sell things at cost + margin, and rush ahead or be crushed by competitors. Will definitely make you good money, but wont make you a king.
This is not the kind of money people involved with these kinds of companies are looking for.
AI right now looks like a competiton, with many horses in the race, which are more or less building the same product.
It will hard to squeeze and enshittify this considering people can just jump to another vendor, thus if the current market structure were to prevail, investors would go.
Thus competition has to go.
Altman knows this, and tried to position OpenAI as the obvious winner in this competition, but I guess in the process he managed to alienate people, so now he's not doing so well.
But who knows what the future will bring?
I don't think it's true there can only be one winner. Lots of industries have multiple successful large companies.
HN is a bubble. I hear people from outside of Silicon Valley that only just started trying out Claude Code recently. There's still a ton of developers yet to jump on board.
CC kinda sucks compared to opencode & kimi / minimax. It’s slow and annoying and the UI is subpar.
[dead]
Nothing is worth $852B in that space of time unless they are printing more than half of that in cash which to be clear they are not. They are burning it at that rate. Let's be clear. It's a valuable company, a valuable product, a valuable technology. It set the trend for the next phase of computer usage. But it's not worth $852B in that span of time and when it goes public that reality will bear down on them quickly.
It's a falling knife. Don't try catch it on the way down. That valuation might be justified in another 10 years.
> justified in another 10 years.
Hard to imagine when they don't have any moat.
Sure they have ... I don't know how many users, but it's not like a social network. Instagram was valued at $10B with ~10 employees very, VERY fast, not because of its tech or employees but mostly, IMHO, because of the number of locked-in users ... because of OTHER users.
Here, if one wants to move from OpenAI to Anthropic, they can, and they do. You might have difficulty exporting history, context, etc., but you manage.
Even basic email has more lock-in than any of the model providers. They did arguably have some moat a few years ago, but now there's no differentiator that would justify such a valuation.
They are no Meta/Google/Microsoft/Oracle not because of their size or technology but only because their customers can swap providers.
> Even basic email has more lock-in than any of the model providers.
History has proven the average person has very little ability to discern which products have lock-in.
Everyone was confidently predicting Uber would dominate over all the regional ride sharing apps because ride sharing is a commodity and subsidies were enough to shift user behavior.
The thesis from AI providers about lock-in has always been coherent: increased personalization and learning of workflows over time would make switching to a new AI increasingly worse than sticking with an AI that already knows you. If you look at the human virtual assistant world, stickiness is incredibly high once you are happy with your onboarded assistant, because onboarding a new person unavoidably sucks.
Is this thesis correct? We don't know; it took Uber billions of burnt cash to discover their thesis was incorrect.
Falling knife or not, if you own an index fund, or if your 401k owns one, you're buying a piece of it at IPO prices. The exit scam is almost complete.
Index funds won't get in at IPO prices. They wait a year or so before including new stocks, so the price will have settled by then. OpenAI also isn't profitable yet, so that's another point against them in terms of being included in index funds.
NASDAQ just changed some rules recently concerning exactly this.
https://finance.yahoo.com/news/new-rule-could-fast-track-spa...
As someone working in the enterprise space with OAI, this still feels like we're in the top of the first inning.
Many teams remain anchored on equating AI with chat experiences, while a growing share of enterprise value is emerging from leasing compute clusters to run agentic workloads in containerized environments.
OpenAI has built a cloud-first architecture that supports this model. The desktop experience and applications are sexy, but enterprise usage will likely skew heavily toward asynchronous, background processing.
I know that people keep saying "we're early on here", but I take it as a negative signal that people keep thinking we are in the early innings. Compared to previous generations of technology change, a great deal of time has passed; it should be a bit disconcerting that no one seems to have found a way to make money out of this yet.
Look at previous killer apps: they came out quickly and were raking in money very quickly. The Apple II went on sale on June 10th, 1977; VisiCalc went on sale October 17th, 1979, 860 days later. Apple IPO'd in 1980 with a 21% operating margin! Netscape Navigator 1.0 released December 15th, 1994; Amazon.com made its first sale July 16th, 1995, 214 days later. AMZN IPO'd May 15th, 1997, 883 days after Netscape 1.0 released to the public (they had raised <10 million dollars to that point, and chose not to show a profit because they kept re-investing their cash into expanding the business).
We are already 1232 days past ChatGPT 1.0, about 40% farther along than either of those killer-app gaps. No one has figured out as good a business model for Generative AI as either of those companies had.
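For anyone who wants to check the arithmetic, a quick sketch (the dates are the ones stated above; note that Python's date subtraction gives the exclusive difference, one day less than the inclusive counts used here):

    from datetime import date, timedelta

    # Launch-to-killer-app gaps cited above (exclusive day counts)
    gaps = {
        "Apple II -> VisiCalc": (date(1977, 6, 10), date(1979, 10, 17)),
        "Netscape 1.0 -> first Amazon sale": (date(1994, 12, 15), date(1995, 7, 16)),
        "Netscape 1.0 -> AMZN IPO": (date(1994, 12, 15), date(1997, 5, 15)),
    }
    for label, (start, end) in gaps.items():
        print(f"{label}: {(end - start).days} days")  # 859, 213, 882

    # 1232 days from ChatGPT's Nov 30, 2022 launch lands in spring 2026
    print(date(2022, 11, 30) + timedelta(days=1232))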
To use the other great technology transformation of the past 50 years, cell phones, I have a bit of trouble picking the right comparison point for ChatGPT 1.0. But working backwards from today to ChatGPT opening up to the public spans about the same time as from the launch of the Motorola Razr to the iPhone 3G (the first one with an App Store, the real killer app), to give you an idea of how fast mobile technology moved.
Do note that the Razr and the iPhone, like VisiCalc, the Apple II, and Netscape 1.0, were hugely profitable for their companies, in a way that no one has demonstrated with Generative AI. Amazon is a bit of a special case, but they were not raising money; they were just re-investing the cash the business threw off into expansion instead of booking it as profit. I don't believe that any AI company is generating cash flow the way Amazon was in 1997, and the other companies mentioned here were GAAP-profitable.
Most people don't want to accept that the only viable revenue stream is selling tokens for software development.
All the other stuff is nice… but you will keep losing money and eventually die.
Now you can’t come out and say this because there’s a whole bunch of investments that depend on hype - think about the robotics nonsense.
There is some revenue in copywriting, translation, and generating images. But that is probably $20-per-month-per-seat enterprise plans with limited use, and the cost of inference varies enough that there are real marginal costs...
Is it actually profitable? That the presumed market leader, Anthropic, changed their business model just today to kill off their buffet monthly plans and switch to a la carte for Enterprise makes me doubt they are making money off of selling tokens to software developers.
I never commented on profitability, only revenue.
And I’m referring to selling tokens to enterprises that produce software.
"previous killer apps" - exactly. That's the point. Everyone is anchored in AI as being the next desktop app. It's not.
We're only using 1% of what these models will ultimately do when they're running 24/7 as utilities serving new economic models.
There just isn't enough compute right now to realize the larger monetization strategies.
On top of that, the APIs/tools/function calls into the real world don't exist yet. But consumer products are eventually going to start exposing functionality to these LLMs. By then, I wonder if we'll all have an edge-inference box sitting in every one of our houses, bought from a consumer products company like Apple or Amazon, or directly from OpenAI or Anthropic. These little brains will be the low-latency central nervous system of a lot of things in our homes, and gateways to the larger models in the cloud. Or at least that's how I imagine it sorting out.
Previous technological changes of the calibre we are told AI will reach also required major changes to the real world and new products to be built: new cell towers constructed, fibre cables laid, data centers built, personal computers produced, warehouses established. And software needed to be fundamentally rewritten to support each of those generations too. Yet the companies doing that work managed to produce huge profits significantly faster than Generative AI has.
That's my biggest concern with it, I don't see the business case closing anywhere, and without businesses that actually make money all the technology in the world doesn't actually do anything.
> And yet the companies doing that in those previous generations managed to produce huge profits significantly faster than Generative AI has.
Have you considered a simple answer to this inconsistency? The market and investors do not demand that these AI companies make a profit. The only reason companies are expected to make profits is that either those who own shares in the company expect it, or those willing to invest in the company expect it.
More likely people will delegate their agents to run in the cloud.
Edge AI on iPhone, however... many potential applications around vision, hearing, interpreting your surroundings in real-time.
> There just isn't enough compute right now to realize the larger monetization strategies.
How can this be relevant? Why isn't the compute we have available right now sufficient for turning a profit?
Is this another one of those "We lose money on each sale but make it up in volume" things?
I mean, if much much larger investments are needed before current LLM providers can turn a profit, that's not a good indicator that they have any sort of sustainable business, is it?
Profit is not the goal in large transformational tech cycles.
See Bezos' playbook for Amazon. They weren't profitable for years.
Comparing the IPO market today to the IPO market in the late 90s is not very instructive. You could have IPO'd a lemonade stand in 1998 and raised $10 million.
I'm using that only for AMZN because they seem to have made a choice not to turn a profit and instead to expand their business. The other companies I mentioned were directly profitable by this point in their respective revolutions. For Amazon, I'm using the IPO as proof that they had a sustainable business even if it wasn't precisely profitable: they were generating enough cash to be profitable, they just chose to reinvest it into the business. I don't see any evidence that any of the major Generative AI companies are in that position, or the position that Apple, Netscape, Motorola, etc. were in.
And that's the weird one: all of the other examples I provided were booking real profits by this point in their technology cycle.
I think the fact that IPOs have come more slowly over the years is more about larger VC markets that can fund valuations up to hundreds of billions, rather than anything to do with adoption.
As you note, Netscape and Amazon IPOed fairly quickly.
Google took 6 years (1998 to 2004)
Facebook took 8 years (2004 to 2012)
Alibaba Group took 15 years (1999 to 2014)
Claude Code is at $30B annual recurring revenue, and it launched in Feb 2025, and OpenAI is at $25B (although they measure partner revenue differently). By comparison, the iPhone made $630M in revenue in the 12 months after it launched.
> Claude Code is at $30B annual recurring revenue, and it launched in Feb 2025, and OpenAI is at $25B (although they measure partner revenue differently). By comparison, the iPhone made $630M in revenue in the 12 months after it launched.
What does revenue have to do with it? Companies usually want to IPO with a decent profit margin showing on the books, revenue doesn't usually come into it.
> Companies usually want to IPO with a decent profit margin showing on the books, revenue doesn't usually come into it.
Untrue.
Read what the OP said about Amazon.
And Figma - the most recent high-profile IPO I could think of - isn't profitable.
I was convinced they were going to go the openclaw (or something similar) route... Pivoting into cybersec/enterprise makes sense if they are trying to copy Anthropic, but it doesn't really telegraph any sort of differentiator.
It's telegraphing that they have no vision.
This is the key. These frontier model companies are funneling all of their time and resources into scaling; how could they possibly be researching the next phase of AI? Once scaling hits its limit, the money is gonna dry up.
AGI is not gonna come from these companies
The duopoly slowly turning into a monopoly won't be pretty.
I suspect that Google is working on improving their models for coding behind the scenes. Hopefully they release something soon to compete with Codex and CC. To be candid, I use CC, but have not tried out Codex.
I've tried them all and have come to slightly prefer Codex (with CC a close second).
Google has the benefit of being insanely profitable though.
"You have ChatGPT, a 1 billion-user business growing 50-100% a year, what are you doing talking about enterprise and code?"
The ironic part about this is that GPT models are by far the worst models to chat with.
I think I'd rather talk to a wall than to GPT-5.4. It's so unpleasant. I feel bad for anyone whose only experience with AI is ChatGPT.
There are roughly 8b people in the world, and somewhere between 2-3b have never used the internet. If OpenAI manages to capture the 6b internet users while growing at 100% per year, they have 3 years of user growth left, max (quick check below). Then what?
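The arithmetic behind "3 years max", for what it's worth (assuming they start near 1b users and double every year, per the quote above):

    users_bn = 1.0          # rough current base from the quote
    ceiling_bn = 6.0        # ~6b internet users as the ceiling
    years = 0
    while users_bn < ceiling_bn:
        users_bn *= 2       # 100% annual growth
        years += 1
    print(years)            # 3: saturation within about three doublings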
They do what Facebook and Google did. Ads.
ChatGPT already has ads.
What makes GPT-5.4 bad to chat with? To me it seems smart and does the job, albeit a bit slow. I'm using it only/mostly with the "pro"/xhigh reasoning.
In my experience, within one reply ChatGPT (compared to Claude) tends to:
* generate a lot of text
* answer at least a few of what it thinks might be follow up questions
* restate its original answer a few times
* suggest a follow up “if you want, I can turn that into…”
It feels very tedious and noisy.
Yeah, this perfectly sums it up.
Especially the “If you want” at the end of every single reply.
The way it converses is the least human-like out of all the models. It communicates like it's writing markdown documents instead of just conversing normally like every other model does. You ask a question and it spits out a design doc instead of just answering the question like a normal human would.
But why would you want it to converse like a human? What I need is concise information, not a buddy to talk to. If anything, I see it as a feature.
I think it’s the default system prompts of ChatGPT. I do think it’s better with the “Professional” tone with “Less Emoji” but that’s just me.
I completely disagree. I like the responses from ChatGPT, Gemini just seems off and I feel like I can’t get what I want from it.
There does seem to be like a 1% chance (maybe 0.5%) that this turns into a WeWork situation. It's a product that users love, but the company leadership is so used to lying and deceiving and being loose with numbers that the IPO filing could be a pretty big shock. Either they'll have to tell the truth, which will be much less rosy than the lies, or they'll lie and turn everyone off.
Probably won't happen. But not definitely.
Just maths: if they're as capable as each other, then product X cannot be worth multiples above product Y unless there's a clear USP. Arguably OpenAI's is brand recognition, but given Anthropic's recent growth that's less certain than it was a quarter ago.
I don't believe that either Anthropic or OpenAI are going to survive the AI valuation crunch. Google, Meta and Microsoft will because they're not AI-only companies. There are four reasons why I believe this:
1. I honestly don't think that AI is all that useful for anything other than suppressing labor costs and I don't expect that to change in the short to medium term;
2. I really don't think Anthropic or OpenAI can ever satisfy their stratospheric valuations. I foresee no cash flow possible that will arrive quickly enough to make that happen;
3. Hardware costs will devalue the trillions invested in AI data centers. By 2030 the GPUs will probably be at least 3x as good; bear in mind it's just over 4 years between the 3090 and the 5090, and that's 3x the TFLOPS (a rough extrapolation is sketched after this list); and
4. China or other actors will make sure that proprietary LLMs won't be dominant. DeepSeek was a shot across the bow. China in particular won't want a US tech company to dominate this space. The increasing RAM in local, relatively cheap computers will make this more and more viable.
Bonus prediction: I think China will be making their own homegrown NVidia equivalent GPUs on homegrown EUV by 2030.
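A rough sanity check on point 3 (my own back-of-envelope extrapolation; the approximate launch dates and the ~3x FP32 figure are assumptions, not precise benchmarks):

    # RTX 3090 launched Sept 2020, RTX 5090 Jan 2025: ~3x FP32 TFLOPS in ~4.3 years
    years_observed = 4.3
    speedup = 3.0
    annual = speedup ** (1 / years_observed) - 1
    print(f"implied annual improvement: {annual:.0%}")              # ~29%
    print(f"projected 2025->2030 gain:  {(1 + annual) ** 5:.1f}x")  # ~3.6x, consistent with "at least 3x"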
For some reason the latest Claude Desktop release from Anthropic threw off its Claude branding and charm to chase after the bland Codex Desktop app look and feel.
Maybe they think OpenAI is doing something right?
What happens if OpenAI collapses at this point? Is it just too big to fail given defense contracts and Microsoft?
The Sora sunsetting marked a big shift towards enterprise focus and meeting Anthropic on the enterprise battlefield, but almost all engineers I work with or know are using Claude at this point exclusively.
Anyone seeing differently?
> Anyone seeing differently?
There has been a stream of HN posts (I've noticed this mainly in the past few weeks) implying some people prefer ChatGPT/Codex to Claude.
Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting, often stopping in the middle of a query. ChatGPT/Codex doesn't have this problem.
My 2 cents: Claude is more expensive, but it has something that Codex/GPT lacks that's not easy to quantify. Opus is probably a bigger model (my guess), trained on code and technical writing (books?) of better quality than GPT's.
HOWEVER, it has a flaw that makes some people prefer Codex: out of the box, it's lazy: https://x.com/i/status/2044126543287300248
However, once you learn how to deal with the laziness (which can be handled with some CLAUDE.md instructions and context docs; a sketch below), Claude shows better taste for coding. It replicates patterns from the repo, writes more readable/maintainable code, follows instructions, and captures implicit information.
GPT/Codex is not a bad model/agent, but it lacks something. It's amazing for code reviews, but it writes code with zero regard for your existing codebase or SOLID/DRY principles. It just likes to output code (a lot of it) that works for the task you gave it right now, with zero regard for maintenance later. It also over-uses defensive programming in a way that quickly makes a dynamic-language codebase unreadable.
Claude is not perfect, I still have to steer it sometimes to prevent overengineering or duplicate code, but a lot less than when I try Codex (and the built-in /simplify does half of the work for me).
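For illustration, the kind of CLAUDE.md instructions people mean; the exact wording below is my own made-up sketch, not official guidance:

    # CLAUDE.md (hypothetical anti-laziness section)
    - Finish the whole task before stopping; no TODO stubs or "rest left as an exercise".
    - If a change spans multiple files, edit all of them in the same pass.
    - Read the relevant existing modules first and reuse the repo's patterns.
    - Don't ask whether to continue mid-task; continue unless genuinely blocked.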
That's crazy talk. I am on the $20 plan and I do hit the limit occasionally, but I get a few hours of usage before I do.
The rate limit is clearly stochastic. They've made their availability a Skinner box and everyone is falling for it.
>>Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting, often stopping in the middle of a query.
The free version is pretty much unusable. Not a single query completes. You get only one query every 4 hours; given ~12 waking hours, that's 3 queries a day, none of which complete.
The $20 plan gets you only a small distance from there.
Looks like the focus is entirely on Enterprise customers these days. They don't even bother with their regular users anymore. CC is entirely an enterprise product.
> Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting
Utterly not my experience. I use opus near daily for long research sessions (not all agent based). Are you throwing in 100k input tokens to every query?
Skinner box
What the hell kind of queries are you running? I use Claude Pro all the time for asking questions, doing data analysis, writing side projects, and I very rarely get rate limited.
I use Claude Max 20x at work and I rarely hit 10% session utilization, which implies even using Claude to write code all day only uses 2x the Pro token limit.
Are you just telling it to try again when you get a response you don't like?
I get rate limited after about 1-2 hours having it generate, troubleshoot, and fix things running on k8s (Opus)
We have Claude Teams at work and I don't think I've had issues there.
The amount of peripheral growth around it is even larger: tons of construction, utility company upgrades, etc. There are more data centers under construction than there are currently operational data centers, so there is a more-than-doubling of capacity coming from the current buildout. If you include projects in the planning phase, it is something like a 4-6x expected capacity increase. The utility infrastructure buildout to meet this demand is equally huge.
The warning signs are already starting to show up, though: projects are being stalled or not filled out, with the delays blamed on China, etc. But the funding is still present, and construction keeps going on the next building even as the last one sits vacant and offline. The sky-high purchases of property from connected individuals by site developers continue, even as pushback mounts and many places pass anti-datacenter ordinances.
Nothing of note would happen if OpenAI collapsed. The same prompts will work with claude or gemini and the outputs will be good enough.
I've also noted that 90% of technical users I encounter are on claude or mostly-claude via cursor (switching models here-and-there).
Claude Code definitely has a head start, but there have been a few HN posts about a perceived nerfing of the intelligence and settings in the past month or so. Codex could capitalize on that weakness. They just introduced a $100 monthly 5x plan so they are at parity with the Claude Code plans. If Anthropic fiddles too much more with the settings then people will start to switch to Codex.
Codex is just better than Claude, but Claude is faster and has the better UI for VS Code. That's why I use Claude as the main coder with Codex (5.4 with xhigh effort) as an MCP reviewer, etc. It is clear to me that Codex is a better programmer, but the UI and speed are too much of a con to use it exclusively. Claude is just clumsy.
Seeing the same, Claude with every engineer. Even some non technical people moved to Claude from ChatGPT recently.
All our engineers use Claude but all the AI features in our app are built on OpenAI models
At work there’s only codex right now (no approval yet for anthropic - OpenAI access was easier/faster through Microsoft)
It was a pretty straightforward transition going from mostly using claude code, to now exclusively codex
The world would barely notice if both OpenAI and Anthropic “fail”. There is a lot of competition in this space
> too big to fail
genuine question, what do you think these words mean?
I think it's code for "the government will have to bail them out".
Seems like Sam was angling for that with some of his China vs USA rhetoric
Why would they need a bail out? Their assets can be sold off, they can be taken over or be absorbed by another American entity.
Absolutely no reason for a bail out.
It may hurt the ego of Altman and Brockman - but that's their problem.
If DoD systems are running on OpenAI infrastructure, you can't just pause them for 6 months during an acquisition. This gets far more complex than just "liquidation of assets".
Because their assets would have been vastly overvalued. The bailout is when the government buys those assets at as close to that fictional valuation as they can, and likely then sells them back at their actual worth.
> Absolutely no reason for a bail out.
There's never been any reason for a bailout. It's just handing tax money to wealthy people who have made bad decisions.
The contract with the Pentagon is a good first step. Being a government contractor is pretty fail safe.
At some point you reach a size when too many politicians and the people who own them have invested so much money that they're willing to take any size political hit in order to save themselves from personal losses when you fail.
and claude is actually meh too!
It’s an absolutely hilarious/absurd valuation for a company that has absolutely no path to do anything other than lighting money on fire, forever. I’d call it nonsense, but Tesla’s valuation proves the market runs on shenanigans, at this point, so whatever.
If people want to meme OpenAI into a trillion dollar market cap, I guess let them?
Let them. And have pension funds and 401Ks automatically go for the ride too.
Ah yes, the weekly "ChatGPT is definitely going to fail, for real!" post, with absolutely no substance whatsoever. Still, they know it will definitely be on the front page, regardless. Make sure you subscribe to their pub!
To be fair, your comment doesn't have much substance either.
> "You have ChatGPT, a 1 billion-user business growing 50-100% a year, what are you doing talking about enterprise and code?" an early backer of OpenAI told FT. "It's a deeply unfocused company."
This is exactly the dynamic I've been worried about.
If you go to OpenAI's site to learn what they're all about, they're pretty clear about it: "ensure that artificial general intelligence benefits all of humanity", "Join us in shaping the future of technology". They think and I agree that ChatGPT is great, but the future of humanity does not depend on precisely how successful this one consumer chatbot is, and so it is not the company's focus. Anyone who understands OpenAI at even a basic level would recognize this, it's neither new nor subtle.
I'm not sure how to avoid the conclusion that OpenAI investors do not understand OpenAI and are just revenue growth junkies.
They’re investors. More or less by definition they only care about revenue growth.
The deeper issue is structural, not just investor misunderstanding. OpenAI converted from a nonprofit-controlled entity to a PBC specifically to attract this kind of capital. When you take $6.6B from investors expecting returns, you create fiduciary pressures that are hard to keep out of strategic decisions regardless of what the mission statement says.
The 2023 board fight illustrated exactly this conflict in real time: the board tried to exercise mission-aligned oversight and was effectively overruled by capital. The new governance structure gave investors more influence, not less.
"We take the mission seriously" and "we need to justify an $852B valuation" can coexist for a while, but not forever. The investors may be revenue-focused, but they were invited in under terms that make their expectations structurally legitimate — which is what makes this more than just a perception problem.
> ensure that artificial general intelligence benefits all of humanity
Thus far, based on their actions, a reasonable read would be that they believe "humanity" would be better off with fewer people. Whoever you think OpenAI is or was, you'd have to be willfully ignorant of the actions of those who run it to believe in it, or in Sam, now.
OAI has zero focus. How many acquisitions (value destructive it seems) and projects have they killed?
What's comical is that Steve Jobs preached the notion of focus decades ago.
Why can't people follow simple advice from someone who already acquired the scar tissue? It's literally madness.
Sam should've been fired and stayed fired. He's great at raising money, but at running the firm? An absolute basket case of a CEO in that regard.
So now they are realizing that they are indeed in a bubble and OpenAI was extremely overvalued?
Anthropic is also overvalued. Their revenue is not even recurring. It’s now “Annualised Revenue” due to token spend.
These two companies are just vehicles of a pump-and-dump scheme. OpenAI is already offloading shares with "acquisitions" that do not make any sense, because investors already think they are about to IPO and are not worth the price.
Also, one more thing… and it is called DeepSeek.