Not egregious API spending, but ChatGPT Pro has been one of the best investments our company has made.
It is fantastic at reasonably scaled ports / refactors, even with complicated subject matter like insurance. We have a project at work where Pro has saved us hours just trying to understand the overcomplicated codebase that is currently in place.
For context, it’s a salvage project with a wonderful mix of Razor pages and a partial migration to Vue 2 / Vuetify.
It’s best with logic, but it doesn’t do great with understanding the particulars of UI.
How are you getting these results? Even with grounding in sources, careful context engineering, and every other technique you can think of, we are just getting sloppy junk out of all the models we have tried.
The sketchy part is that LLMs are super good at faking confidence and expertise while randomly injecting subtle but critical hallucinations. This ruins basically all significant output. Double-checking and babysitting the results is a huge time and energy sink; human post-processing negates nearly all the benefits.
It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".
I genuinely think the biggest issue with LLM tools is that most people expect magic, because first attempts at some simple things feel magical. however, the tools take an insane amount of time to build expertise in. what is confusing is that SWEs generally spend immense amounts of time learning the tools of the trade, but this seems to escape a lot of people when it comes to LLMs. on my team, every developer is using LLMs all day, every day. on average, based on sprint retros, each developer spends no less than an hour each day experimenting/learning/reading about how to make them work. the realization we made early is that when it comes to LLMs there are two large groups:
- a group that sees them as invaluable tools capable of being an immense productivity multiplier
- a group that tried things here and there and gave up
we collectively decided that we wanted to be in the first group and were willing to put in the time to get there.
I'm persisting: I've been using LLMs quite a bit for the last year, and they're now where I start with any new project. Throughout that time I've been experimenting constantly and have made significant workflow improvements.
I've found that they're a moderate productivity increase, i.e. on a par with, say, using a different language, using a faster CI system, or breaking down some bureaucracy. Noticeable, worth it, but not entirely transformational.
I only really get useful output from them when I'm holding _most_ of the context that I'd be holding if writing the code, and that's a limiting factor on how useful they can be. I can delegate things that are easy, but I'm hand-holding enough that I can't realistically parallelise my work that much more than I already do (I'm fairly good at context switching already).
I have been on teams that do this and on teams that don't.
I have not seen any tangible difference in the output between the two.
year-over-year we are at around a 45% increase in productivity, and the trajectory is still upward
How are you measuring increased productivity? Honest question, because I've seen teams claim more code, but I've also seen teams say they're seeing more unnecessary churn (which is more code).
I'm interested in business outcomes, is more code or perceived velocity translating into benefits to the business? This is really hard to measure though because in pretty much any startup or growing company you'll see better business outcomes, but it's hard to find evidence for the counterfactual.
same as we have before LLMs for a decade - story points. we move faster now, we have automated stuff we could never automate before. same project, largely same team since 2016, we just get a lot more shit done, a lot more
So something like: automating unit tests, where the tests are X points and you'd not have done them before?
Not snarking, but if they are automated away, isn't that effectively 0 story points of effort/complexity?
hehe, not snarky at all - great question. this was heavily discussed, but in order to measure productivity gains we kept the estimations the same as before (we are phasing this out now). as my colleague put it, you don't estimate based on a "10x developer", so we applied the same concept. now that everyone is "on board", we are phasing it out.
Thanks. I'm probably a kook, but I've never wanted to put tasks that aren't product-facing, user-visible features (tests, code cleanup, etc.) on the board with story points; I've just folded them into the related user work (mainly to avoid some product person thinking they "own" that and can make technical decisions).
So product velocity didn't exactly go up, but you are now producing less technical debt (hopefully) at a similar velocity. Sounds reasonable.
This reads like the bullshit bullet points people write on their CV.
comments like this give me a warm and fuzzy feeling that theoretically we compete for the same jobs - no worries about job security for the foreseeable future :)
Someone's ego got hurt.
talking to yourself in third person? :)
You keep coming back to these fights online because they're the only real interactions you can have with people outside of work.
You will live the rest of your life like that, because nobody likes you. Enjoy.
don't you think we would be better off getting that expertise in actual system design, software engineering, and all the programming-related fields? by involving ChatGPT to write code, we'll eventually lose the skill to sit and craft code like we have all these years. after all, the brain's neural pathways only retain what you put to work daily
Where are you finding the best material for reading/learning?
- everything that simon writes (https://simonwillison.net/)
- anything that goes deep into issues (I seldom read “i love llms” type posts like this is great: https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/)
- lots of experimentation - specifically, I have spent hours and hours building the exact same feature repeatedly (my record is 23 times).
- if something "doesn't work" I immediately create a task to investigate and understand it. even for the smallest thing that bothers me, I will spend hours figuring out why it might have happened (this is sometimes frustrating) and how to prevent it from happening again (this is fun)
My colleague describes the process as a JavaScript developer trying to learn Rust while tripping on mushrooms :)
What are you trying to use LLMs for and what model are you using?
It depends a lot. I use it for one-off scripts, particularly for anything Microsoft 365 related (expanding SharePoint drives, analyzing AWS usage, general IT stuff). Where there is a lot of heavy, context-based business logic it will fail, since there's too much context for it to be successful.
I work in custom software, where the gap between non-LLM users and those who at least roughly know how to use it is huge.
It largely depends on the prompt, though. Our ChatGPT account is shared, so I get to take a gander at the other usages, and it's pretty easy to see: "okay, this person is asking the wrong thing". The prompt and the context have a major impact on the quality of the response.
In my particular line of work, it's much more useful than not. But I've been focusing on helping build the right prompts with the right context, which makes many tasks actually feasible where before they would have been way out of scope for our clients' budgets.
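As a concrete illustration of "right prompt, right context", here is a minimal sketch of prompt assembly. The section names, field names, and example task are my own illustrative assumptions, not anyone's actual template; the point is just pinning down task, context, and output format instead of asking an open-ended question.

```python
# Minimal prompt-assembly sketch: separate the task, the authoritative
# context, and the required output format into labeled sections, so the
# model has less room to improvise. All names here are illustrative.
from textwrap import dedent

def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a prompt with explicit context and output constraints."""
    return dedent(f"""\
        ## Task
        {task}

        ## Relevant context (authoritative; do not invent beyond this)
        {context}

        ## Output format
        {output_format}
    """)

prompt = build_prompt(
    task="Summarize the coverage rules below for a policy renewal page.",
    context="Policies lapse after 30 days of non-payment.",
    output_format="A bulleted list, one rule per bullet, no speculation.",
)
print(prompt)
```

The same skeleton works regardless of vendor, since it only manipulates strings before they reach any API.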
> It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".
Most likely by trying to get a promotion or bonus now and getting the hell out of Dodge before anyone notices those subtle landmines left behind :-)
Cynical, but maybe not wrong. We are plenty familiar with ignoring technical debt and letting it pile up. Dodgy LLM code seems like more of that.
Just like tech debt, there's a time for rushing. And if you're really getting good results from LLMs, that's fabulous.
I don't have a final position on LLMs, but it has only been two days since I worked with a colleague who had no idea how to proceed once they were off the "happy path" of LLM use, so I'm sure there are plenty of people getting left behind.
Wow, the bad faith is quite strong here. As it turns out, small to mid-sized insurance companies have some ridiculously poorly architected front ends.
Not everyone is the biggest cat in town with infinite money and expertise. I have no intention of leaving anytime soon, and I am confident that the code the AI generated (after confirming with our guy who is the insurance OG) is a solid improvement over what was there before.
A lot of programmers who say that LLMs are awesome tend to be inexperienced, not good programmers, or they just gloss over the significant amount of extra work that using LLMs requires.
Programmers tend to overestimate their knowledge of non-programming domains, so the OP probably just doesn't realize that there are serious issues with the LLM's output for complicated subject matter like insurance.
Could you give an example of a prompt?
You are a top Stack Overflow contributor with 20 years of experience in...
I meant an example of the prompts he was attempting, in case it helped provide advice.
It's not like anyone is going into debt to pay for GPUs, though. So it's probably OK. Now, if banks start selling 30-year mortgages for GPUs, I might get a little worried.
Oracle is going to use debt to finance the buildout of AI cloud infrastructure to meet their obligations to customers. They’re the first hyperscaler to do so. Made the news two weeks ago.
Yikes. Oracle just issued 40-year mortgages, I mean bonds. OK, you can be a little worried now. Their balance sheet looks a lot like CoreWeave's or MSTR's. I guess the market will allow it for some more time. Nvidia GPU treasury companies can sell a dollar's worth of future GPUs for $2 for a while, I guess. Of course, I won't be buying Oracle or MSTR... but Bitcoin and GPUs are still fine.
People act like big tech didn't have a mountain of cash it didn't know what to do with. Each of the big players has around 100 billion dollars just sitting there doing nothing.
Well, Apple spends some of that pre-paying TSMC for their next node in exchange for exclusivity...
I don't know if that was epic sarcasm, but companies are doing exactly that. CoreWeave has taken on something like $30b of debt against the value of their GPUs. https://www.forbes.com/sites/rashishrivastava/2025/09/22/cor...
They aren't the only company doing this.
A lot of us use it in a very structured manner for code and other areas. It absolutely is value for money. I don't really get what people keep complaining about. I think the complaints mostly come from people trying to embed LLMs and expecting human-like output.
For decision support, coding, and structured outputs, I love it. I know it's not human, and I write instructions that are specific to the way it reasons.
This is sort of useful, and it is how I have been using LLMs.
I don't think the companies betting on AI are burning mountains of cash because they think it will be a moderately useful tool for decision support, coding, and such. They are betting this will be "The Future™" in their search for perpetual growth.
Is it possible all this capital would be better deployed creating value through jobs that leverage human creativity?
Meat-based LLMs trained for billions of years are underrated! Too bad they need healthcare (and sleep).
No. The wet dream of the elites is to get rid of the pesky underclass that provides labor.
They have a visceral hatred of workers. The sooner people accept that as reality, the better.
(1999) - "Spending on Amazon warehouses Is at Epic Levels. Will It Ever Pay Off?"
Amazon weren't spending a single digit percentage of GDP on GPUs with a shelf life measured in just a few years though.
but collectively there was single-digit-percentage spending on things like fiber that ended up paying off for the public later
The ongoing costs via power consumption are on a completely different scale
(2000) - “Spending on Kozmo warehouses is at epic levels. Will it ever pay off?”
I believe the relevant term here is “survivorship bias”.
I'd suggest a better analogy would be telecommunications fiber[1].
[1] https://internethistory.org/wp-content/uploads/2020/01/OSA_B...
Fiber is a decades-long investment in hardware - one that I would argue we hardly needed. Google Fiber started with the question: what would people do with super high speed? The answer was stream higher-quality video, and that's about it. In fact, by the time fiber became widespread, many people had moved off PCs and did the majority of their Internet use on cell phones.
That said, the fiber will be good for many years. None of the LLM models or hardware will be useful for more than a few years, with everything being replaced by newer and better on a continual basis. They're stepping stones, not infrastructure.
We replaced one technology that was used by literally the whole world, paired copper wires, with something orders of magnitude better and future-proof. My PC literally can't handle the bandwidth of my fiber connection.
We didn't need it? Did you ever use DSL?
What is AI replacing? People?
> Did you ever use DSL?
Where I live (Germany), lots of people have VDSL at advertised speeds of 100 Mbit/s, over paired copper wires. I'm not saying that fiber isn't better; it obviously is, and hence the government is subsidizing large-scale fiber buildouts. But as it stands right now, I'm confident that for 99% of consumers, VDSL is indeed enough.
In the 90s and 2000s, I remember our (as in: tech nerds') argument to policy-makers being "just give people more bandwidth and they will find a way to use it", and in that period, that was absolutely true. In the 2000s, lots of people got access to broadband internet, and approximately five milliseconds later, YouTube launched.
But the same argument now falls apart, because we have the hindsight of seeing lots of people with hundreds of megabits or even gigabit connections... and yet the most bandwidth-demanding thing most of them do is video streaming. I looked at the specs for GeForce Now, and they say that to stream the highest quality (a 5K video feed at 120 Hz), you should have 65 Mbit/s downstream. You can literally do that on a VDSL line. [1] Sure, there are always people with special use cases, but I don't recall any tech trend in the last 10 years that was stunted because not enough consumers had the bandwidth required to adopt it.
[1] Arguably, a 100 Mbit/s line might end up delivering less than that, but I believe Nvidia has already factored this into their advertised requirements. They say you need 25 Mbit/s to sustain a 1080p 60fps stream, but my own stream recordings in the same format are only about 5 Mbit/s. They might encode at higher quality than I do, but I doubt it's five times the bitrate.
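The arithmetic above can be sketched as a back-of-the-envelope check: do the quoted bitrates fit in a 100 Mbit/s VDSL downstream? The numbers are the ones cited in the comment (Nvidia's advertised GeForce Now requirements plus the commenter's own recording bitrate), taken at face value rather than measured.

```python
# Compare quoted streaming bitrates against a 100 Mbit/s VDSL downstream.
VDSL_DOWNSTREAM_MBIT = 100

workloads = {
    "GeForce Now, 5K @ 120 Hz": 65,    # Nvidia's stated requirement
    "GeForce Now, 1080p @ 60 fps": 25, # Nvidia's stated requirement
    "Own 1080p60 recordings": 5,       # the commenter's measurement
}

for name, mbit in workloads.items():
    headroom = VDSL_DOWNSTREAM_MBIT - mbit
    print(f"{name}: needs {mbit} Mbit/s, "
          f"fits: {mbit <= VDSL_DOWNSTREAM_MBIT}, "
          f"headroom: {headroom} Mbit/s")
```

Even the most demanding quoted workload leaves 35 Mbit/s of headroom, which is the footnote's point about VDSL being sufficient.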
It's not similar at all.
Even the smallest and poorest countries in the world invested in their fiber networks.
Only China and the US have the money to create models.
In that, it's closest to the semiconductor situation.
Few companies and very few countries have the bleeding edge frontier capabilities. A few more have "good enough to be useful in some niches" capabilities. The rest of the world has to get and use what they make - or do without, which isn't a real option.
No
(Violins playing) and now what?
Agreed
Someone should create a tracker that measures the number of bearish AI takes that make the front page of HN each day.
I look forward to the cheap compute flooding the market when the music stops.
People still waiting for GPUs to be cheap after the blockchain bubble.
We created the LLM bubble to prop up those investments.
Touché. I was just about to comment on snapping up the cheap GPUs.
While it has its uses, I have yet to see a single use case, or combination of use cases, that warrants the insane spending. Not to mention the environmental damage and the widespread theft and copyright infringement required to make it work.
The people funding this seem to believe, firstly, that text inference and gradient descent can synthesize a program that can operate on information tasks as well as or better than humans; secondly, that the only way of generating the configuration data for those programs is by powering vast farms of processors doing matrix arithmetic, requiring the world's most complex supply chain, tethered to a handful of geopolitically volatile places; thirdly, that those farms have power demands comparable to our biggest metropolises; and finally, that if they succeed, they'll have unlocked economic power amplification that hasn't been seen since James Watt figured out how to move water out of coal mines a bit quicker.
Oh, and the really fucky part is that half of them just want to get a bit richer, while the other half seem to be in a cult that thinks AI's gross disruption of human economies and our environment is actually the most logically ethical thing to do.
I thought what you wrote sounded crazy wasteful; then I remembered the world runs on JS.
After seeing kids use LLMs for companionship, I think many of the new generation will grow up shelling out for a subscription if companies decide to wholly gate it behind a paywall. Given where this world is heading, it's cheaper than therapy. It's going to be indispensable while old men yell at clouds; think cell phone plan, not Spotify or Netflix, once companies start squeezing.
Yes
And..?
It's almost tiresome to keep citing Betteridge's law of headlines, but editors at legacy publications keep it relevant. If there was any compelling evidence, they wouldn't have to phrase it as a hypothetical.
Related:
Cost of AGI Delusion
https://news.ycombinator.com/item?id=45395661
AI Investment Is Starting to Look Like a Slush Fund
https://news.ycombinator.com/item?id=45393649
Are we still getting AGI in 2026, per OpenAI?
Based on "AGI by 2026", they convinced the US government to block high-end GPU sales to China. They said they only needed 1-2 more years to hold China off; then AGI arrives, and OpenAI/the US rules the world. Is this still the plan? /s
If AGI does not materialize in 2026, I think there might be trouble, as China develops alternative GPUs and NVIDIA loses that market.
Altman says that in a few years ChatGPT 8 will solve quantum physics
"Solve quantum physics" meaning generating closed-form solutions to the Schrödinger equation for atoms of any composition? For arbitrary molecules? Good luck with that... Even for the hydrogen atom, the textbook said "it so happens that <some polynomial function I'd never heard of before> solves this equation", instead of the derivation one would normally expect. I doubt we have even invented the math to solve the equations for much beyond the hydrogen atom, assuming a closed-form solution is even theoretically possible.
I think Altman has been getting mentored by Musk. I think we'll get full self-driving Teslas before quantum mechanics is "solved", though, and I am not expecting that in the foreseeable future.
My mistake, he did not say "solve quantum physics".
He did say that if ChatGPT 8 creates a theory of quantum gravity... I can't... that will mean we have reached AGI.
https://m.youtube.com/watch?v=TMoz3gSXBcY
I mean, I'll take it if it comes true.
insert Rick & Morty "Show me what you got" gif here
Pay off for who? After learning about "The Gospel" [0], does anyone else wonder if spending on AI is actually just an arms race?
[0] https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...