> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
These are probably the same people who are "surprised" when 100 offshore agency dredgings don't magically build the app 10x faster than 10 expensive onshore workers.
To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.
It's almost like the people in charge of these businesses have no goddamn clue what they're actually selling, how it works, or why it's good (or isn't).
It's almost like, and stay with me here, but it's almost like the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.
It's almost like that's a consequence of not enforcing anti-trust vigorously enough, allowing capital to accrete in Big Tech, and creating extremely large tech companies whose primary innovation comes from acquiring rather than building.
The reason a competitive ecosystem of tech companies is effective has less to do with invisible-hand magic and more to do with big companies being dumb and conservative, largely as a consequence of their leader selection criteria.
Do you think you could do a better job than a CEO of public company [x] from a technical standpoint - in other words, omitting the connections and public-facing charisma that they typically bring as part of the package?
I genuinely do, but kind of paradoxically also suspect I'm wrong. It's simply that it's something so far outside my domain that I just can't really appreciate their skills honed over many years of practice and training, because all I get to externally see are their often ridiculous limitations, failures, and myopia.
I imagine this is, in many ways, how people who have no understanding of e.g. software, let alone software development, see software engineers. I don't think it's uncharitable, it's just human nature. Imagine if we were the ones hiring CEOs. 'That guy's a total asshat, and we can get ten guys in India - hard working, smart guys, standouts in 1.4 billion people - for the same price.' Go go go.
Alternatively, when your outsourcing agency finds they have accidentally assigned you an actually good engineer, term used loosely, that you're not paying for (they know this happens when the engineer gets up to speed with your codebase at all), "your guy" is replaced with another guy who inherits the name, the email address and the SSH key.
And if the agency doesn't do that, the good engineer will figure out he's being underpaid as slop-for-hire cannon fodder and move on of his own accord.
No engineer smart and capable enough to be called a 10x engineer fails to realize what he's worth at Western rates. And that's just the corporate cogs; the truly brilliant simply start their own gigs.
This is no different from the personal computer, and it is to be expected.
The initial years of adopting new tech have no net return because it's an investment. The money saved is offset by the cost of setting up the new tech.
But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.
Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.
No "oof" at all. This is how investing in new transformative business processes works.
Many new ideas came along promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web. Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.
My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.
[0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.
> This is no different from the personal computer, and it is to be expected.
What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.
With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.
You can see where this falls apart in the Dotcom Bubble. There are very clear pitches; "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (Not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones)
But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.
The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make the "catalogue store but on the computer" work, but Amazon focussed on broad inventory and UX. They had losses, but managed to contain them and became profitable quickly (Q4 2001). Amazon's losses shrank as revenue grew.
Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" "We'll shift into selling less expensive goods." "How are you going to do that?" "Uhhh....."
...
The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.
What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference costs down (never mind per-task token usage skyrocketing, so even that appears moot), what's the pitch? Who's going to pay for this?
Who's going to pay for a Personal Computer? Your accountant.
I highly doubt that the return on investment was seen immediately for personal computers. Do you have any evidence? Can you show me a company that adopted personal computers and immediately increased its profits? I’ll change my mind.
I'm sorry but you're asking me here to dig up decades old data to justify my claim that "The spreadsheet software has an immediately identifiable ROI".
I am not going to do that. If you won't take it at my word that "computer doing a worksheet's worth of calculations automatically" is faster & less error-prone than "a human [with electronic calculator] doing that by hand", then that's a you problem.
An Apple II cost $1300. VisiCalc cost $200. An accountant in that time would've cost ~10x that annually and would either spend quite a bit more than 10% of their time doing the rote work, or hire dedicated people for it.
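The back-of-envelope here can be made explicit. All the figures below are the comment's own rough assumptions (hardware and software prices, "~10x that annually" for the accountant, 10% of time on rote work), not historical data:

```python
# Payback sketch for a 1979-era spreadsheet setup, using the comment's
# assumed figures rather than real historical accounting data.
apple_ii = 1300          # hardware cost, USD (assumed from the comment)
visicalc = 200           # software cost, USD
setup_cost = apple_ii + visicalc

accountant_salary = 10 * setup_cost   # "~10x that annually" => $15,000/yr
rote_fraction = 0.10                  # assume >=10% of the year on rote recalculation

labour_saved_per_year = accountant_salary * rote_fraction
payback_years = setup_cost / labour_saved_per_year
print(f"Payback: about {payback_years:.1f} year(s)")
```

On those numbers the setup pays for itself in about a year, which is why the commenter treats the ROI as self-evident; the result is only as good as the assumed salary and rote-work share.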
>If you won't take it at my word that "computer doing a worksheet's worth of calculations automatically" is faster & less error-prone than "a human [with electronic calculator] doing that by hand", then that's a you problem.
Reality is complicated and messy. There are many hurdles to overcome, many people to convince and many logistics to handle. You can't just replace accountants with computers - it takes time. You can understand why I find it hard to believe the ROI was immediate; a huge jump like the one with software can take time as well.
> What are you talking about? The return on investment from computers was immediate and extremely identifiable.
It is well-documented, and called the "productivity paradox of computers" if you want to look it up. It was identified in 1987, and economic statistics show that personal computing didn't become a net positive for the economy until around 1995-1997.
And like I said, it's very dependent on the individual company. But consider how many businesses bought computers and didn't use them productively. Where it was a net loss because the computers were expensive and the software was expensive and the efficiency gained wasn't worth the cost -- or worse, they weren't a good match and efficiency actually dropped. Think of how many expensive attempted migrations from paper processes to early databases failed completely.
It's well documented. It's also quite controversial and economists still dispute it to this day.
It's economic analysis of the entire economy, from the "outside" (statistics) inward. My point is that the individual business case was financially solvent.
Apple Computer did not need to "change the world"; it needed to sell computers at a profit, enough of them to cover their fixed costs, and do so without relying on other people just setting their money on fire. (And it succeeded on all three counts.) Whether or not they were a minute addition to the entire economy or a gigantic one is irrelevant.
Similarly with AI. AI does not need to "increase aggregate productivity over the entire economy", it needs to turn a profit or it dies. Whether or not it can keep the boomer pension funds from going insolvent is a question for economics wonks. Ultimately the aggregate economic effects follow from the individual one.
Thus the difference. PCs had a "core of financial solvency" nearly immediately. Even if they weren't useful for 99.9% of jobs that 0.1% would still find them useful enough to buy and keep the industry alive. If the hype were to run out on such an industry, it shrinks to something sustainable. (Compare: Consumer goods like smartwatches, which were hyped for a while, and didn't change the world but maintained a suitable core audience to sustain the industry)
With AI, even AI companies struggle to pitch such a core, nevermind actually prove it.
The productivity paradox isn't disputed by any mainstream economists. What is debated is its exact timing, size, and exactly which parts of businesses are most responsible (i.e. was eventual growth mostly about computers improving existing processes, or computers enabling brand-new processes like just-in-time supply chains)? The underlying concept is generally considered sound and uncontroversial.
I don't really understand what point you're trying to make. It seems like you're complaining that CapEx costs are higher in GenAI than they were in personal computing? But lots of industries have high CapEx. That's what investors are for.
The only point I've made is that "95% of organizations are getting zero return" is to be expected in the early days of a new technology, and that the personal computer is a reasonable analogy here. The subject here is companies that use the tech, not companies creating the tech. The investment model behind the core tech has nothing to do with the profitability of companies trying to use it or build on it. The point is that it takes a lot of time and trial and error to figure out how to use a new tech profitably, and we are currently in very early days of GenAI.
The contortions people will go through to defend a technology or concept they like blows my mind. Irrational exuberance is one thing, but denial of history in order to lower the bar for the next big thing really irritates me for some reason.
Computing was revolutionary, both at enterprise and personal scale (separately). I would say smartphones were revolutionary. The internet was revolutionary, though it did take a while to get going at scale.
Blockchain was not revolutionary.
I think LLM-based AI is trending towards blockchain, not general purpose computing. In order for it to be revolutionary, it needs to objectively and quantifiably add value to the lives (professionally or personally) of a significant piece of the population. I don't see how that happens with LLMs. They aren't reliable enough and don't seem to have any path towards reasoning or understanding.
> GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.
They are not seeing the structural "disruptions" that were present for previous technological shifts.
Changes over which time window? Enterprise AI projects can't be more than about two years old, which is practically still the testing-the-waters phase; of course very few projects of a disruptive nature exist yet.
For companies competing in the same niche, the same low hanging fruits will be automated first if they invest in ML. So within the niche there is no comparative advantage.
I'm wondering, if the return is that the employees get 20 minutes extra free time per day, is that a good, quantifiable return? Would anyone consider as a "return" anything that you can't put on your balance sheet?
AI is already so much better than 99% of customer support employees.
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, that MS employees would have just told me to either re-install Windows or send them the laptop and pay $hundreds.
I don't want AI customer support. I want open documentation so I can ask AI if I want, or ask human support if it's not resolvable with the available documentation.
All my interactions with AI support so far consist of repeatedly saying "call human" until it calls a human.
> AI is already so much better than 99% of customer support employees.
99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.
Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.
AI seems like a dream for some of these companies to offer even worse customer service, though.
Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.
The backend can. But what’s exposed to customers will be a very, very small subset of that capability. Hence why only the CSRs can perform that function.
The business undoubtedly did a crude cost/benefit analysis where the cost to expose and maintain that public interface vastly outstrips the cost for the few people that have to call in and change their name.
I think you missed the point of the parent. All these things are speed bumps and the "reasons" for having them are mostly incidental, as the main reason is to avoid the expense of having any more customer support personnel / infrastructure than is absolutely necessary to function.
Except AI support agents are only using content that is already available in support knowledge bases, making the entire exercise futile and redundant. But sure, they're eloquent while wasting your time.
There are AI agents that train from knowledge bases but also keep improving from actual conversations. For example, our Mava bot actually learns from mods directly within Discord servers. So it's not about replacing human mods but assisting them so they can take better care of users in the end.
I don't see how this is any different than enriching knowledge bases from feedback and experience. You just find yourself duplicating all the information, locking yourself in your AI vendor and investing in a technology that doesn't add anything to what you had before. It's utterly nonsensical.
"Only" kind of misses the benefit though. I'm very bearish on "AI", but this is an absolutely perfect use case for LLMs. The issue is that if you describe a problem in natural language on any search engine, your results are going to be garbage unless you randomly luckboxed into somebody asking, with near identical verbiage, the question on some Q&A site.
That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.
Speaking of which, could we apply vector embeddings to search engines (where crawled pages get indexed by their vector embeddings rather than raw text) and use that for better fuzzy search results even without an LLM in the mix?
(Might be a naïve question, I'm at the edge of my understanding)
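Not naïve at all; this is essentially how dense retrieval works. A hedged sketch of the indexing/query flow follows. Real deployments use a learned embedding model (e.g. a sentence-transformer) and an approximate nearest-neighbour index such as FAISS or HNSW; here a bag-of-words term-frequency vector stands in for the embedding purely to show the mechanics, so it won't exhibit the semantic fuzziness a learned model would:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedder: term-frequency vector over lowercase tokens.
    # A real system would call a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index time": embed each crawled page once and store the vectors.
pages = {
    "reset-router": "how to reset your home router to factory settings",
    "billing": "update billing information and payment methods",
    "outage": "check current service outages in your area",
}
index = {pid: embed(text) for pid, text in pages.items()}

# "Query time": embed the query and rank pages by vector similarity,
# instead of matching raw text strings.
def search(query: str) -> list[str]:
    q = embed(query)
    return sorted(index, key=lambda pid: cosine(q, index[pid]), reverse=True)

print(search("my router needs a factory reset"))  # "reset-router" ranks first
```

With a learned embedder swapped in for `embed`, "my router keeps dying" would also land near the router page despite sharing almost no vocabulary with it, which is the fuzzy matching the question is asking about.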
I would agree, but I’ve spent the last ten years or so working with outsourced tech support and I guarantee you, a lot of people call us just because they can’t be bothered to look for themselves.
If I am calling support, it is probably because I already scoured the resources.
Over the past 3 years of calling support for any service or infrastructure (bank, health insurance, doctor, whatever), over 90% of my requests were things only solvable via customer support or escalation.
I only keep track because I document when I didn't need support into a list of "phone hacks" (like press this sequence of buttons when calling this provider).
Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is officed in a different state instead of my proper state.
It’s even worse than you think. I work with Amazon Connect. Now the human agent doesn’t have to search the Knowledge Base manually, likely answers will automatically be shown to the agent based on the conversation that you are having. They are just regurgitating what you can find for yourself.
But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.
When asking customers how well they were helped by the customer support system (via CSAT score), I've found industry-standard AI support agents will generally perform worse than a well-trained human support team. AI agents are fine at handling some simple queries, e.g. basic product and order information, but support issues are often biased towards high complexity, because otherwise customers could solve them in a more automated way. I'm sure it depends on the industry, and whether the customer's issue is truly novel.
Improves brand reputation? I don't think I've seen a single case where someone is glad to talk to an LLM/chat bot instead of an actual person. Personally, I think less of any company that uses them. I've never seen one be actually useful, and they seem to only really regurgitate links to FAQ pages or give the most generic answers possible while I fight to get a customer service number so I can actually solve the problem at hand.
It isn’t empowered to do anything you can’t already do in the UI, so it is useless to me.
Perhaps there is a group that isn’t served by legacy ui discovery methods and it’s great for them, but 100% of chat bots I’ve interacted with have damaged brand reputation for me.
MS customer service is perhaps the lowest bar available. One look at their tech support forums tells you that most of what they post is canned garbage that is no help to anyone.
AI is not better than a good customer service team, or even an above-average one. It is better than a broken customer service team, however. As others have noted, 99% is hyperbolic BS.
> IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.
Then why hasn't it yet? In fact, some lower-wage countries, such as China, are at the forefront of industrial automation.
I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.
We might discover that AI capacity is easier to grow in these markets too.
Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things, too. AI doesn't add much to that. But the new wave of Tech-first companies now have the upper hand since the massive headcount is no longer such an advantage.
This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom. See how S&P 500 - top 10 is flat or dumping.
>They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom
Hard to honestly say if China is low wage. On one hand, their wages have risen as the work force has shrunk for a few years now, and tasks are being outsourced to other countries. On the other hand, their currency is pegged, meaning that the earning power of the workers should be much higher so that they can afford the things they are making and transition to a consumer-driven economy.
They are very much devaluing their currency. This is all the rage and I expect a currency devaluation race as the US tries to deal with crushing government liabilities.
It's not just China and the USA. Pretty much all countries want to devalue their currency to improve their balance of trade in a race to the bottom. Logically not everyone can win the race.
I don’t fully agree. Yes, AI can be seen as a cheaper outsourcing option, but there’s also a plausible future where companies lean more on outsourced engineers who are good at wielding AI effectively, to replace domestic mid-level roles. In other words, instead of nullifying outsourcing, AI might actually amplify it by raising the leverage of offshore talent.
Consider the kinds of jobs that are popular with outsourcing right now.
Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; People rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services & your own organisation, nevermind empowering them to do things is much harder)
They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.
AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)
But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.
> But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.
I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.
Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
Also consider the mental health crisis among outsourced content moderation staff who have to appraise all kinds of depravity on a daily basis. This got some heavy reporting a year or two ago, in particular around Facebook. These folks, for all their suffering, are probably being culled right now.
You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.
In a vacuum, sure. But when you take two resources of similar ability and amplify their output, it makes those resources closer in cost per output, and in turn amplifies the risk factors for choosing the cheaper by cost resource. So locality, availability, communication, culture, etc, become more important.
I see it the other way around. An internal person with real domain knowledge can use AI far more effectively than an outsourced team. Domain knowledge is what matters now, and companies don’t want to pay for outsiders to learn it on their dime. AI lets the internal team be small enough that it's a better idea to keep things in house.
> AI and robotics nullify any comparative advantages low wage countries had
If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?
> could take back manufacturing dominance from countries with low wages (like China)?
The idea that China is a low wages country should just die. It was the case 10y ago, not anymore.
Some part of China have higher average salaries than some Eastern European countries.
The chance of a robotics industry in the US massively moving jobs back from China due only to a pseudo-AI revolution replacing low-paid wages (without other external factors, e.g. tariffs or sanctions) is close to 0.
Now if we do speak about India and the low-skill IT jobs there, the story is completely different.
> The idea that China is a low wages country should just die. It was the case 10y ago, not anymore.
The wages for factory work in a few Eastern European countries are cheaper than Chinese wages. I suppose they don’t have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.
But it's not like China had the skills, tooling and supply chain to begin with... and it's not like the US suddenly stopped having all those things. There are reasons manufacturing moved out of the US, and it was not "They are soooo much better at all the things over there!"
Tim Cook had a direct hand in this, knows it, and is now deflecting because it looks bad.
One of the comments on the video puts it way better than I could:
@cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.
Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.
That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."
> it was not "They are soooo much better at all the things over there!"
Though I think we should also disabuse ourselves of the idea that this can't ever be the case.
An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.
Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.
Hence "had the skills" and "was not". They are not making claims about the present day, they are talking about why the shift happened in the first place and who brought it about.
Good point. When I commented, the sentence I quoted was the final sentence of their comment essentially leaving it more abstract. Though my comment barely interacts with their point anyways.
Sorry. I was typing, got distracted and submitted before I meant to. I thought I had edited pretty quickly, normally I put an edit tag if I think too much time had elapsed.
Manufacturing isn’t one uniform block of the economy that is either won or lost. US manufacturers focus on high quality, high precision, and high price orders. China excels at factories that will take small orders and get something shipped.
The reason US manufacturers aren’t interested in taking small-volume, low-cost orders is that they have more than enough high-margin, high-quality orders to deal with. Even the small-ish machine shop out in the country near the farm fields by some of my family’s house has pivoted into precision work for a big corporation because it pays better than doing small jobs.
I would say it pays more consistently than small jobs, as by nature small jobs are not generally continuous, most often piecemeal.
The other factors are:
In any sort of manufacturing, the only time you are making money is when the equipment is making product.
If you are stopped for a change over or setup you are losing money.
Changing over contains risk of improper setup, where you lose even more money since you produce unusable product.
Where I live, the local machine shops support themselves in two ways:
1. Consistent volume work for an established customer.
2. Emergency work for other manufacturing sites: repair, or reverse engineering and creating parts to support equipment (fast turnaround and high cost).
They are willing to do small batches but lead times will be long since they have to work it into their production schedules.
And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.
Manufacturing expertise that has been transferred from the West over the last 40 years. Knowledge and expertise are fluid; they can go both ways, and can be transferred to other countries as well: India, Vietnam, etc. The world doesn’t stand still.
Western engineers worked relentlessly on that knowledge transfer to China; it might be just as easy to bring it back, given industrial subsidies on the scale of what the CCP provided.
Probably not because America lacks the blue collar skills necessary to build and service the kind of manufacturing infrastructure needed to do what you're describing.
Depends where you draw the line. I would expect countries like China will continue to leverage AI to extend their lead in areas like low cost manufacturing. Some of the very low cost Chinese vendors I use are already using AI tools to review submitted pieces with mixed results, but they’re only going to get better at it.
It's weird, because before, I never had an offshore "VA", nor did I think they'd be useful. But after AI, I can just get the VA a ChatGPT subscription and have them do the initial draft of whatever I need. ChatGPT gets 80% of the way, the VA gets the next 10% (copying where I need it, removing obvious stuff that shouldn't be client-facing, etc.), and I only have to polish the last 10%.
Yes, I agree. And it's not that AI is any good; it's that those outsourcing shops are most of the time not adding any value. On the contrary, it takes time to babysit them. Some of this even looks like an elaborate scam: someone in the organization launders money through these companies somehow, because otherwise I don't understand how they're useful. Obviously there are some good ones, but in my experience that's not the norm.
That would explain a lot, actually. If so, it'll be interesting to see what happens to the overall software economy when that revenue stream dries up. My wife grew up in a border town in Mexico and told me that the nightclubs in her town were amazing; when she moved to the US, she was disappointed by how drab the nightclubs here were. Later she found out that the border town nightclubs were so extravagant because they were laundering drug money. When they cracked down on the money laundering, the nightclubs reverted to their natural "drab" state of relying on actual customers to pay the bills.
Yeah I think this will be a noticeable trend moving forward. We've frozen backfills in our offshore subsidiaries for the same reason; the quality is nonexistent and onshore resources spend hours every day fixing what the offshore people break.
I wonder if AI automation will even lead to a recession in total software engineering revenue.
At my job, thanks to AI, we managed to rewrite one of our boxed vendor tools we were dissatisfied with, to an in-house solution.
I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is "it's cheaper to buy the product from us than hire a guy who handles it in house."
What you are saying is not intuitive. Software engineers are a cost to software companies. With automation the profits would increase so I’m not sure how it can lead to recession.
Something not being intuitive doesn't make it untrue - if AI makes engineers 10x as productive it means that we need 1/10th the engineers to produce as much software as we do - it might induce demand but demand might not keep up with the production. SW Engineering might become a buyers market instead of a sellers market.
One example I mentioned is SaaS whose value proposition is that it's cheaper than to hire a dedicated guy to do it - if AI can do it, then that software has no more reason to exist.
Trying to understand this and please correct me if I am wrong:
A is producing something of value 100.
That is complex to configure so B comes along and they say: Buy from me at 150 and you will get both the product and the configuration.
C comes and say: there are multiple products like this so I created a marketplace where I do some offering that in the end will cost you 160 but you can switch providers whenever you want.
Now I am a customer of C and I buy at 160:
C gets 160 retains 10 but total revenue is 160
B gets 150 retains 50 but total revenue is 150
A gets the 100
Here is the question: How big is GDP in this case?
I think it is 160.
Now A adds LLM for about 4 extra that can do what B and C can (allegedly) removing the intermediaries and so now the GDP is 104.
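The arithmetic above can be sketched in a few lines of Python (the prices are the hypothetical numbers from the example, not real data). The key point is that GDP counts value added at each step, which sums to the final price, rather than adding up every transaction in the chain:

```python
# Hypothetical prices from the example: A makes the product,
# B resells it with configuration, C runs a marketplace on top.
price = {"A": 100, "B": 150, "C": 160}

# Value added at each step; the total equals the final price.
value_added = {
    "A": price["A"],               # 100
    "B": price["B"] - price["A"],  # 50
    "C": price["C"] - price["B"],  # 10
}
gdp_with_intermediaries = sum(value_added.values())
print(gdp_with_intermediaries)  # 160, the final price the customer pays

# If A adds an LLM feature for 4 extra and replaces B and C, the chain's
# value added collapses to A's new final price.
gdp_direct = price["A"] + 4
print(gdp_direct)  # 104
```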
This is technically correct but missing some details.
The real GDP after accounting for cost of living has not changed much because while GDP has decreased, cost of living has also decreased (because A is now priced at 104 instead of 160).
But it’s even better because we have this extra money that we previously spent on C. In theory we will spend this extra money somewhere else and drive demand there. The workers put out of employment due to LLM will move to that sector to fulfill it.
Now the GDP not only increased but also cost of living reduced.
Yes exactly. There's the joke of one economist paying the other $100 to dig a hole, then the other one giving back the money to the first one to fill it back up, thereby increasing the GDP by $200.
Historically, improvements in programmer productivity (e.g. via better languages, tooling, and hardware) didn't correlate with a decrease in demand for programmers; quite the opposite.
Imo historically there was no connection between the two - demand for programmers increased, while at the same time, better tools came along.
I remember Bill Gates once said (sometime in the 2000s) that his biggest gripe is that during his decades in the software industry, despite dramatic improvements in computing power and software tools, there has been only a modest increase in productivity.
I started out programming in C for DOS, and once you got used to how things were done, you were just as productive.
The stuff frameworks and other tooling help with is at most 50% of the job, which means, by Amdahl's law, productivity can at most double.
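That is just Amdahl's law applied to developer time; a minimal sketch, taking the commenter's 50% figure as the assumed fraction tooling can accelerate:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If frameworks/tooling touch at most 50% of the job, even an enormous
# speedup on that half caps the overall gain near 2x:
print(amdahl_speedup(0.5, 1_000_000))  # ~2.0
# A more modest 2x boost on half the work yields only ~1.33x overall:
print(amdahl_speedup(0.5, 2))  # ~1.33
```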
In fact, I'd argue productivity actually got reduced (comparing my output now, vs back then). I blame this on 2 factors:
- Distractions: it's so easy to d*ck around on the internet instead of doing what you need to do. I have a ton of my old SVN/CVS repos, and the amount of progress I made was quite respectable, even though I recall being quite lazy.
- Tooling actually got worse in many ways. I used to write programs that ran on the PC, you could debug those with breakpoints, look into the logs as txt, deployment consisted of zipping up the exe/uploading the firmware to the uC. Nowadays, you work with CI/CD, cloud, all sorts of infra stuff, debugging consists of logging and reading logs etc. I'm sure I'm not really more productive.
This is completely different - said as someone who has been in the industry professionally for 30 years and as a hobbyist before then for a decade.
There are projects I lead now that I would have at least needed one or maybe two junior devs to do the grunt work after I have very carefully specified requirements (which I would have to do anyway) and diagrams and now ChatGPT can do the work for me.
That’s never been the case before and I’ve personally gone from programming in assembly, to C, to higher level languages and on the hardware side, personally managing the build out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a yaml/HCL file.
I've done a vibe coding hobby project where I simply give AI instructions on what I want, using a persona-based approach for the agent to generate or fix the code.
It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?
I enjoyed Andrej Karpathy's talk about software in the era of AI.
That has long been my personal theory as well, though I never had a way of firmly backing it up with evidence, though this article hardly does that either.
But it does make sense on a superficial level at least: why pay a six-pack of nobodies half-way 'round the world to.. use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.
I can see this. We employ a lot of offshore for what we call support engineering. Things like JDK upgrades or cert updates. It’s grunt work that lets higher-paid engineers spend their time on business-value work. As AI continues to grow in scope, it will surely commandeer much of this. Employing a human is more expensive than compute for these tasks at a certain scale.
Through the CI/CD deployment pipeline that all code changes get deployed through. Primary engineering team reviews code and ensures things are tested appropriately.
If it requires a managed change, engineering team helps them draft the execution and schedule.
Skills would be similar to IT or dev ops but with expectation that they can code.
The Indian IT sector is almost certainly going to be decimated (at least in its current form), and we haven’t really wrapped our heads around what that means for the world’s fourth-largest economy.
The move that I’m fighting in my company now is hiring bargain basement Indian outsourced heads who are very obviously vibe coding slop. It’s a raw deal for us since we’re paying extra for a meat wrapper around an LLM coding agent, but I’m sure it’s a boon for the outsourcing company who can easily put one vibe-coding head on three or four engagements in parallel. It’s hard to imagine LLM coding technologies not being enthusiastically adopted by all of the outsourcers given the economic incentives to do so.
Whether or not they end up losing business long term, it seems like a nice grift for as long as they can pull it off.
I have hired “outsourced, offshore workers”. Anyone with similar experience knows the challenges of finding quality talent. Generally, you don’t know what you’re going to get until you’ve already written the first check. Sometimes it is good quality, but much of the time it is “acceptable” or poor quality that needs cleaning up (or re-hiring), and 5% of the time it is absolute garbage. Since the costs are typically 1/10th to 1/20th that of a US engineer, you can afford to make a few mistakes. However, I can see a future where I can hire local (US) and outsource to AI with oversight within the same budget.
One thing I recently realized is that the evolution and discussions of AI very closely mirrors those of offshoring, when offshoring first started off. Back then too discussions were about:
1) The quality of work produced being sub-par, with many instances of expensive, failed projects, leading to predictions of the death of offshoring.
2) Unwillingness of offshore teams to clarify or push back on requirements.
3) Local job displacement.
What people figured out soon enough was that offshoring was not as easy as "throwing some high-level requirements over the wall and getting back a fully functional project." Instead the industry realized that there needed to be technically competent, business domain-savvy counterparts in the client company who would work closely with the offshore team, setting concrete and well-scoped milestones, establishing best practices, continuously monitoring progress, providing guidance, removing blockers, and encouraging pushback on requirements, even revisiting them if needed.
Offshore teams on their part became culturally more comfortable with questioning requirements and engaging in two-way discussions. Eventually offshore companies built up the business domain knowledge such that client companies could outsource higher- and higher-level work.
All successful outsourcing projects followed this model, and it spread quickly across the industry, which was why the predictions of the death of offshoring never materialized. In fact the practice has only continued to grow.
It's very interesting how much the same strategies apply to working with AI. A lot of the "how to code effectively with AI" articles basically offer the exact same advice.
On the job displacement side, however, the story may be very different.
With outsourcing, job displacement didn't turn out to be much of a concern because a) by delegating lower-level grunt work to offshore teams, local employees were then freed up to do higher-level, more innovative work; and b) until software has "eaten the whole world" the amount of new work is essentially unbounded.
With AI though, the job displacement could be much more real and long-lasting. The pace at which AI has improved is mind-boggling. Now the technically-competent, business-domain savvy expert could potentially get all the outsourced work done by themselves through an army of agents with very little human support, either local or offshore. Until the rest of the workforce can upskill themselves to the level of "technically-competent, business domain-savvy expert" their job is at risk.
"How many such roles does the world need?", and "How can junior employees get to that level without on-the-job experience?", are very open questions.
AI is going to force the issue of having to deal with the inequity in our economic system. And my belief is that this confrontation will be violent and many people are going to die.
The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.
AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus those still working will have their wages suppressed.
Offshored work originally displaced local workers and created a bunch of problems. AI and automation is a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.
Our politics is divided by those who want to blame marginalized groups (eg immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and the other side who wants to defend the neoliberal status quo in the name of institutional norms.
It's about economics, material conditions and, dare I say it, the workers relationship to the means of production.
Not sure how long it will take for a critical mass to realize that that we are in a class war, and placing the blame on anything else won't solve the problem.
IOW, I agree with you, I also think we are beyond the point where electoral politics can solve it - we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.
What I wonder, though, is why the wealthy allow this to persist. What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?
You write as though "the wealthy" are a unified group acting in concert. They're not; they're just like everyone else in that regard, acting in their own, mostly short to medium term best interest. Seems like a pretty ordinary tragedy of the commons type of situation.
Oh I strongly disagree. If there's one thing the wealthy have is an intense class solidarity. They're fully aware of the power of class solidarity. You might see conflicts on the fringes but when the shit hits the fan, they will absolutely stick together.
They're so aware of the power of class solidarity that they've designed society to ensure that there is no class solidarity among the working class. All of the hot button social issues are intentionally divisive to avoid class solidarity.
It's greed and short-term thinking. We shouldn't be surprised by this because we see companies do it all the time. How many times have you thought an employer or some company in the news is operating on a time horizon no further than the next quarterly results?
To be ultra-wealthy requires you to be a sociopath: to believe the bullshit that you deserve to be wealthy because of how good you are and, more importantly, that any poverty is a personal moral failure.
You see this manifest with the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that future war and revolution when many people die is actually good because it'll separate the wheat from the chaff, so to speak.
On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.
The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estate where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.
> personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
I do think more or less this too, but it could be 4 years or 40 before people get mad enough. And to be honest, the tech gap between civilian violence and state-sponsored violence has never been wider. Or, in other words, civilians don't have reaper drones, etc.
I agree on time frames. This system can limp on for decades yet. Or fall apart in 5 years (though probably not).
As for the tech gap, I disagree.
The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war (except, arguably, Grenada, if that counts, which it does not) since 1945. And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).
Obviously war isn't exactly the same thing, but it's honestly not that different from suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.
Ordinary people may not have reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.
Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.
The ownership class and the labor class both suffer from a coordination problem.
The former face the coordination problem of extracting wealth, but not so fast that it solves the coordination problem for the labor class, who, like you said, have strikes first and revolts second as their battles of last resort.
The ownership class can voluntarily reduce wealth inequality, and they have before, but as history progresses and time marches on, so do the memories fade of what happens when they don't, pushing them closer and closer to options they don't want to admit work.
There are many ways to attack this assertion. For example:
1. The stagnation or decline in real wages in the developed world in recent decades;
2. Increasing homelessness as a consequence of the housing affordability crisis;
3. How global poverty has increased in the last century under capitalism. This surprises some because its defenders claim the opposite, but China is singlehandedly responsible for the massive decrease in extreme poverty in the 20th century.
Maybe you're looking through the lens of tech. After all, we all have Internet-connected supercomputers in our pockets. While that's true, we're also working 3 jobs to pay for a 1 bedroom apartment where once a single job meant you had a house and enough to eat.
> India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.
What the Wikipedia search won't tell you is that the methodologies and poverty guidelines used in making some of these claims are rather questionable. While real progress has undeniably been made, the extent is greatly exaggerated:
I'm genuinely glad for the people in India. But that progress doesn't reduce the feeling of inequality here in the U.S.
Dismissing people with arguments doesn't work either. It doesn’t eliminate the feeling of inequality or change people's perspective about absolute vs relative wealth.
Why? Because the promise used to justify labor - that hard work will be rewarded - was deeply believed. The contradiction becomes visible when the wealthy hold 36,000 times more wealth than the average person[1]. No one can work 36,000 times harder or longer than someone else, so the belief is no longer tenable.
That leaves us with two choices: either acknowledge that "hard work alone" was never the full story, or take real steps to fix inequality. Pointing to poverty reduction in other countries doesn’t resolve this. It simply makes people feel unheard and resentful.
Average billionaire has $7B in wealth. Median individual U.S. wealth $190,000.
This is not the appropriate way to respond when the poster was clearly incorrect in their main points. Dismissing people with arguments is the rational thing to do.
Your first mistake is thinking hard work matters. No it doesn't and it shouldn't. Only work that provides value should matter - you don't deserve more money just for working 10x hard but when it doesn't matter to anyone.
Your entire comment hinges on a zero sum line of thinking and I don't abide by it. Things have improved for everyone as I have said above but I also acknowledged that inequality is increasing. Inequality rising is a real issue.. it can be tackled but lets first acknowledge that prosperity has increased for pretty much everyone in the world.
> Inequality rising is a real issue.. it can be tackled but lets first acknowledge that prosperity has increased for pretty much everyone in the world.
I literally acknowledged that prosperity has increased for people in other parts of the world.
Why don't you rewrite my comment so that it's acceptable to you and then we'll discuss that?
If we acknowledge that everyone is more prosperous now than before (which completely contradicts the post I was responding to) what is your point? Inequality? I think it is a problem but not so much if everyone is getting prosperous in the mean time.
Yes, as I pointed out in my original comment, inequality is my point.
If unaddressed - ie by dismissal - it doesn't go away. It simply festers. It will fester until it ruptures. Ignoring it or minimizing it doesn't make it go away.
Sure, and I think solving inequality must be weighed against increasing prosperity. Both have to be considered because often increasing one means reducing the other: increase taxes too much and there are no incentives to work, and prosperity falls. We need to find the right balance between both.
I do acknowledge that inequality can have unforeseen consequences and worth talking about and tackling today but only by considering the right tradeoffs.
I investigated the first link with ChatGPT. All the percentiles have increased except the 10th percentile. But they do not account for after-tax wages and other benefits and transfers.
https://www.cbo.gov/publication/59510 shows this. Bottom 20% wages after accounting for benefits and taxes have significantly increased. If you want to answer the question: are the bottom 20% materially more well off at 1960's than now - this is your answer. Hourly wages without accounting for benefits is missing a crucial element so not really indicative of reality.
Caveat: this shows the bottom quintile (20th percentile) and after looking at the data it appears to be a change of ~60% of real disposable income from 1978 to 2020. 10th percentile would be similar.
TL;DR: if you use real disposable income that accounts for taxes and benefits (what really matters) the wages have not stagnated for anyone but increased a lot - by almost 60%.
You're putting in a lot of work (well, I guess you're farming out the work to a third-party service) to prove a portion of your argument with a metric that ignores inflation (including whatever you want to call what's happening right now). Why? Why is it so important to you to try to dispel a notion that is nearly universally shared among scholars, experts, and those actually experiencing ill effects from the rise in costs of living compared to their income?
It’s not excluding inflation which means you didn’t put any effort into actual investigation. You just googled for what you wanted and posted three links without reading it.
It’s very telling that instead of refuting my point you instead choose to derail the discussions into a personal attack. Were you discussing in good faith you would try to understand what I said and reply to it.
It’s not universal at all that people are less prosperous now.
Why don’t you do good faith research and try to answer whether the bottom earners are actually better off now than before? You will come to the same conclusion.
Additionally we can point out the problems of inequality and governmental capture by elite interests (and they are problems) but then the jump to "government will do it better than these greedy people" is a big one and I don't see much evidence for it.
Whether or not you are correct about the concrete details here, it is laughable for regular people[1] to bicker about whose job will be replaced first when the people who profit from that are just sitting on their ass, ready to get labor for nothing instead of relatively little.
[1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy
The people who profit from it are very much not sitting on their ass. It is easy to dismiss them as a way to reinforce your ideology, but the reality is they too are working hard, because it is a volatile time for them as well. They have to keep up and employ the new technology appropriately or they will lose to their competition.
You’re right. They are working in the sense that they are competing with others to come out as the top parasites. Not to mention that working against laborers takes effort as well. But they are not working in the sense that people bicker about AI “taking jobs”; providing useful labor.
Competing against others to come out at the top _is_ useful labor. The best one usually wins, and as a consumer you want the best products to come out on top.
That’s not the case at all. Competition is integral to the system working. We have many laws to protect the market so that competition remains viable, like antitrust, etc.
Competition is why you have good products. Can you explain to me what incentivizes Apple to make functional and impressive iPhones instead of selling us barely working phones without cameras?
In hindsight, remote working is an obvious stepping stone to offshoring, which itself is an inevitable milestone toward full automation. It is the work we do in in-person collaboration which will keep the moat high against AI disintermediation.
Doubt. The meaningful work in person is organisational, and it's only marginally better onsite due to whiteboard > Excalidraw. Who does what, how it'll all interact, architecture, etc. If an LLM can code the difficult bits and doesn't fall apart once the project isn't a brand new proof of concept, it'll surely be able to pick the correct pattern and tooling and/or power through the mediocre/bad decision.
Have you used a whiteboard recently? It sucks. Writing anything significant takes forever, there’s no undo or redo, and it’s difficult to save and version. There’s just no way it’s better.
It takes forever to make a beautiful diagram, but the usual flow is that you have your presentation for the base idea, and then when the questions come, you can all grab a marker and start making a mess on the boards around the room. We also have one in the dev room, which is nice for smaller topics.
It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
I do the same with Lucid App shared on Zoom. I have the base diagram in Lucid and I start making changes during the meeting and adding sticky notes docs.
> It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
This is 2025, over Zoom, we use Gong, it records, transcribes and summarizes the action items and key discussion points. No need to take notes.
We have some of that, but it's not the whiteboards. The dev one gets used multiple times a day in a room with only developers. No management, no power structure around.
It's my general experience, also in prior workplaces, that sometimes a little drawing can tell a lot, and there's no quicker way to start it than to walk 3 meters and grab a marker. Same for getting attention towards a particular part of the board. On Excalidraw, it's difficult to coordinate people dynamically. On a whiteboard, people just point to the parts they're talking about while talking instinctively, so you don't get person A arguing with person B about Y while B thinks they are talking about D which is pretty close to Y as a topic.
> remote working is an obvious stepping stone to offshoring
This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.
> which itself is an inevitable milestone toward full automation
But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.
While I agree about remote work leading to offshoring, I’m not sure about the next step.
I would have been hard pressed to find decent-paying remote work as a fully hands-on-keyboard developer. My one competitive advantage is that I am in the US, can fly out to a customer’s site and talk to people who control budgets, and am a better-than-average English communicator.
In-person collaboration, though, is overrated. I’ve led mid-six-figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid App document, and shared Google Docs.
The actual report (which this article doesn't link to; bad Axios):
https://nanda.media.mit.edu/ai_report_2025.pdf
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
Oof
These are probably the same people who are "surprised" when 100 offshore agency dredgings don't magically do the app 10x faster than 10 expensive onshore workers.
To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.
It's almost like the people in charge of these businesses have no goddamn clue what they're actually selling, how it works or why it's good (or isn't).
It's almost like, and stay with me here, but it's almost like the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.
It's almost like that's a consequence of not enforcing anti-trust vigorously enough, allowing capital to accrete in Big Tech, and creating extremely large tech companies whose primary innovation comes from acquiring rather than building.
The reason a competitive ecosystem of tech companies is effective has less to do with market hand magic and more to do with big companies being dumb and conservative, largely as a consequence of their leader selection criteria.
Microsoft missing web and mobile.
Intel missing mobile and GPU.
Google missing productizing AI.
That meme with the oversized pants and penny loafers comes to mind.
I have tried to understand why this is done. It is not because they believe the offshore team is 10x faster.
It is because they think it will 10x their chances of getting a really good engineer at 1/10th the cost.
At least that is my theory. Maybe I am wrong. I try to be charitable.
Do you think you could do a better job than a CEO of public company [x] from a technical standpoint - in other words, omitting the connections and public-facing charisma that they typically bring as part of the package?
I genuinely do, but kind of paradoxically also suspect I'm wrong. It's simply that it's something so far outside my domain that I just can't really appreciate their skills honed over many years of practice and training, because all I get to externally see are their often ridiculous limitations, failures, and myopia.
I imagine this is, in many ways, how people who have no understanding of e.g. software, let alone software development, see software engineers. I don't think it's uncharitable, it's just human nature. Imagine if we were the ones hiring CEOs. 'That guy's a total asshat, and we can get ten guys in India - hard-working, smart guys, standouts among 1.4 billion people - for the same price.' Go go go.
I think there is confusion because coding is easy, software engineering is hard.
Alternatively, when your outsourcing agency finds they have accidentally assigned you an actually good engineer (term used loosely) that you're not paying for - they know this has happened when the engineer gets up to speed with your codebase at all - "your guy" is replaced with another guy who inherits the name, the email address, and the SSH key.
And if the agency doesn't do that, the good engineer will figure out he's being underpaid as slop-for-hire cannon fodder and move on of his own accord.
No engineer is smart and capable enough to be called a 10x engineer and yet unaware of their market value in the West. And we're still talking about corporate cogs; the truly brilliant simply start their own gigs.
This is no different from the personal computer, and it is to be expected.
The initial years of adopting new tech have no net return because it's investment. The money saved is offset by the cost of setting up the new tech.
But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.
Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.
No "oof" at all. This is how investing in new transformative business processes works.
> transformative business processes
Many new ideas came through promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web. Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.
My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.
[0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.
> This is no different from the personal computer, and it is to be expected.
What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.
With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.
You can see where this falls apart in the Dotcom Bubble. There are very clear pitches; "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (Not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones)
But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.
The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make "catalogue store but on the computer" work, but Amazon focussed on broad inventory and UX. They had losses, but managed to contain them and became profitable relatively quickly (Q4 2001). Amazon's losses shrank as revenue grew.
Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" "We'll shift into selling less expensive goods." "How are you going to do that?" "Uhhh....."
...
The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.
What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference cost down (never mind per-task token usage skyrocketing, so even that appears moot), what's the pitch? Who's going to pay for this?
Who's going to pay for a Personal Computer? Your accountant.
I highly doubt that the return on investment was seen immediately for personal computers. Do you have any evidence? Can you show me a company that adopted personal computers and immediately increased its profits? I'll change my mind.
I'm sorry but you're asking me here to dig up decades old data to justify my claim that "The spreadsheet software has an immediately identifiable ROI".
I am not going to do that. If you won't take my word that "a computer doing a worksheet's worth of calculations automatically" is faster and less error-prone than "a human [with an electronic calculator] doing that by hand", then that's a you problem.
An Apple II cost $1,300. VisiCalc cost $200. An accountant at the time would've cost ~10x that combined amount annually, and would either spend quite a bit more than 10% of their time doing the rote work, or hire dedicated people for it.
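As a back-of-the-envelope check on that claim (the salary and 10%-of-time figures are just the assumptions above, not data):

```python
# Rough payback estimate from the figures above (all values are assumptions).
apple_ii = 1300            # Apple II hardware
visicalc = 200             # VisiCalc license
setup_cost = apple_ii + visicalc

salary = 10 * setup_cost   # accountant at ~10x the combined cost, ~$15,000/yr

# If the machine frees even 10% of the accountant's paid time per year,
# the setup pays for itself in about a year:
yearly_saving = 0.10 * salary
payback_years = setup_cost / yearly_saving
print(payback_years)  # -> 1.0
```

Under those assumptions the hardware and software pay for themselves within the first year, which is what "immediately identifiable ROI" means in practice.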
> If you won't take my word that "a computer doing a worksheet's worth of calculations automatically" is faster and less error-prone than "a human [with an electronic calculator] doing that by hand", then that's a you problem.
Reality is complicated and messy. There are many hurdles to overcome, many people to convince, and much logistics to handle. You can't just replace accountants with computers - it takes time. You can understand why I find it easy to believe that a huge jump like the one promised with AI can take time as well.
> What are you talking about? The return on investment from computers was immediate and extremely identifiable.
It is well-documented, and called the "productivity paradox of computers" if you want to look it up. It was identified in 1987, and economic statistics show that personal computing didn't become a net positive for the economy until around 1995-1997.
And like I said, it's very dependent on the individual company. But consider how many businesses bought computers and didn't use them productively. Where it was a net loss because the computers were expensive and the software was expensive and the efficiency gained wasn't worth the cost -- or worse, they weren't a good match and efficiency actually dropped. Think of how many expensive attempted migrations from paper processes to early databases failed completely.
It's well documented. It's also quite controversial and economists still dispute it to this day.
It's economic analysis of the entire economy, from the "outside" (statistics) inward. My point is that the individual business case was financially solvent.
Apple Computer did not need to "change the world"; it needed to sell computers at a profit, enough of them to cover their fixed costs, and to do so without relying on other people just setting their money on fire. (And it succeeded on all three counts.) Whether they were a minute addition to the entire economy or a gigantic one is irrelevant.
Similarly with AI. AI does not need to "increase aggregate productivity over the entire economy", it needs to turn a profit or it dies. Whether or not it can keep the boomer pension funds from going insolvent is a question for economics wonks. Ultimately the aggregate economic effects follow from the individual one.
Thus the difference. PCs had a "core of financial solvency" nearly immediately. Even if they weren't useful for 99.9% of jobs that 0.1% would still find them useful enough to buy and keep the industry alive. If the hype were to run out on such an industry, it shrinks to something sustainable. (Compare: Consumer goods like smartwatches, which were hyped for a while, and didn't change the world but maintained a suitable core audience to sustain the industry)
With AI, even AI companies struggle to pitch such a core, nevermind actually prove it.
The productivity paradox isn't disputed by any mainstream economists. What is debated is its exact timing, size, and exactly which parts of businesses are most responsible (i.e. was eventual growth mostly about computers improving existing processes, or computers enabling brand-new processes like just-in-time supply chains)? The underlying concept is generally considered sound and uncontroversial.
I don't really understand what point you're trying to make. It seems like you're complaining that CapEx costs are higher in GenAI than they were in personal computing? But lots of industries have high CapEx. That's what investors are for.
The only point I've made is that "95% of organizations are getting zero return" is to be expected in the early days of a new technology, and that the personal computer is a reasonable analogy here. The subject here is companies that use the tech, not companies creating the tech. The investment model behind the core tech has nothing to do with the profitability of companies trying to use it or build on it. The point is that it takes a lot of time and trial and error to figure out how to use a new tech profitably, and we are currently in very early days of GenAI.
The contortions people will go through to defend a technology or concept they like blows my mind. Irrational exuberance is one thing, but denial of history in order to lower the bar for the next big thing really irritates me for some reason.
Computing was revolutionary, both at enterprise and personal scale (separately). I would say smartphones were revolutionary. The internet was revolutionary, though it did take a while to get going at scale.
Blockchain was not revolutionary.
I think LLM-based AI is trending towards blockchain, not general purpose computing. In order for it to be revolutionary, it needs to objectively and quantifiably add value to the lives (professionally or personally) of a significant piece of the population. I don't see how that happens with LLMs. They aren't reliable enough and don't seem to have any path towards reasoning or understanding.
The document actually debunks this take:
> GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.
They are not seeing the structural "disruptions" that were present for previous technological shifts.
Changes over which time window? AI projects in enterprises can't have been running longer than 2 years, which is practically still the testing-the-waters phase; of course very few projects of a disruptive nature exist yet.
I think it just turns into table stakes.
For companies competing in the same niche, the same low-hanging fruit will be automated first if they invest in ML. So within the niche there is no comparative advantage.
It's pay big tech or fall behind.
I'm wondering, if the return is that the employees get 20 minutes extra free time per day, is that a good, quantifiable return? Would anyone consider as a "return" anything that you can't put on your balance sheet?
Archived at https://web.archive.org/web/20250818145714/https://nanda.med...
Not Found
The requested URL was not found on this server. Apache/2.4.62 (Debian) Server at nanda.media.mit.edu Port 443
AI is already so much better than 99% of customer support employees.
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, whereas MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.
I don't want AI customer support. I want open documentation so I can ask AI if I want or ask human support if it's not resolvable with available documentation
All my interactions with any AI support so far is repeatedly saying "call human" until it calls human
This is such a HN comment lol.
Customer support is when all the documentation already failed and you need a human.
> AI is already so much better than 99% of customer support employees.
99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.
Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.
AI seems like a dream for some of these companies to offer even worse customer service, though.
Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.
>> 99% seems like a pulled-out-of-your-butt number
I don't agree. AI support is as useless as real customer support. But it is more polite and calm, with a clearer voice, etc. Much better, isn't it?
This is great but most customer support is actually designed as a “speed bump” for customers.
Cancel account- have them call someone.
Withdraw too much - make it a phone call.
Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.
Etc.
>Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.
That doesn't make much sense. Either your system can handle it or it can't. Putting a support agent in front isn't going to change that.
The backend can. But what’s exposed to customers will be a very very small subset of that capability. Hence why only the csrs can perform that function.
The business undoubtedly did a crude cost/benefit analysis where the cost to expose and maintain that public interface vastly outstrips the cost for the few people that have to call in and change their name.
I think you missed the point of the parent. All these things are speed bumps and the "reasons" for having them are mostly incidental, as the main reason is to avoid the expense of having any more customer support personnel / infrastructure than is absolutely necessary to function.
Except AI support agents are only using content that is already available in support knowledge bases, making the entire exercise futile and redundant. But sure, they're eloquent while wasting your time.
There are AI agents that train on knowledge bases but also keep improving from actual conversations. For example, our Mava bot actually learns from mods directly within Discord servers. So it's not about replacing human mods but assisting them so they can take better care of users in the end.
I don't see how this is any different than enriching knowledge bases from feedback and experience. You just find yourself duplicating all the information, locking yourself in your AI vendor and investing in a technology that doesn't add anything to what you had before. It's utterly nonsensical.
"Only" kind of misses the benefit though. I'm very bearish on "AI", but this is an absolutely perfect use case for LLMs. The issue is that if you describe a problem in natural language on any search engine, your results are going to be garbage unless you randomly luckboxed into somebody asking, with near identical verbiage, the question on some Q&A site.
That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.
"Instead of making search smarter we just decided to make everyone stupider"
Why invest in making users more savvy when you can dumb down everything to 5 year old level eh
Speaking of which, could we apply vector embeddings to search engines (where crawled pages get indexed by their vector embeddings rather than raw text) and use that for better fuzzy search results even without an LLM in the mix?
(Might be a naïve question, I'm at the edge of my understanding)
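Not naïve at all; that's essentially what "semantic" or vector search is, and it needs no LLM at query time. A minimal sketch of the mechanics, with a toy hashed bag-of-words standing in for a real learned embedding model (all names here are made up for illustration):

```python
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a learned embedding model: hashed bag of words.
    A real system would call a sentence-embedding model here instead."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]           # unit-normalize

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorIndex:
    """Index crawled pages by their vectors; rank results by similarity."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 3) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

In a real deployment the `embed()` stand-in would be a neural sentence-embedding model (which is what gives you the fuzzy, meaning-level matching) and the linear scan would be an approximate-nearest-neighbor index, but the retrieval mechanics are exactly this.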
Why stop there? The LLM can synthesize the results and spare you the work.
I would agree, but I’ve spent the last ten years or so working with outsourced tech support and I guarantee you, a lot of people call us just because they can’t be bothered to look for themselves.
Getting instant answers without having to deploy any effort isn't going to make the problem go away, it's going to make us dependent on the solution.
> a lot of people call us just because they can’t be bothered to look for themselves
if the service offered is "support" then why is a phone call less acceptable than reading documentation?
Most of my questions are answerable from the support knowledge base.
If i am calling support, it is probably because I already scoured the resources.
Over the past 3 years of calling support at any service or institution (bank, health insurance, doctor, whatever), over 90% of my requests were things only solvable via customer support or escalation.
I only keep track because I document the cases where I didn't need support in a list of "phone hacks" (like press this sequence of buttons when calling this provider).
Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is based in a different state instead of the one in my proper state.
It’s even worse than you think. I work with Amazon Connect. Now the human agent doesn’t have to search the Knowledge Base manually, likely answers will automatically be shown to the agent based on the conversation that you are having. They are just regurgitating what you can find for yourself.
But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.
When asking customers how well they were helped by the customer support system (via CSAT score), I've found industry-standard AI support agents generally perform worse than a well-trained human support team. AI agents are fine at handling some simple queries, e.g. basic product and order information, but support issues are often biased towards high complexity, because otherwise customers could solve them in a more automated way. I'm sure it depends on the industry, and on whether the customer's issue is truly novel.
I think the main problem is access, not quality.
I.e. AI isn't allowed to offer me a refund because my order never arrived. For that, I have to spend 20 minutes on the phone with Mike from India.
Improves brand reputation? I don't think I've seen a single case where someone is glad to talk to an LLM/chat bot instead of an actual person. Personally, I think less of any company that uses them. I've never seen one be actually useful, and they seem to only really regurgitate links to FAQ pages or give the most generic answers possible while I fight to get a customer service number so I can actually solve the problem at hand.
It isn’t empowered to do anything you can’t already do in the UI, so it is useless to me.
Perhaps there is a group that isn't served by legacy UI discovery methods and it's great for them, but 100% of chat bots I've interacted with have damaged brand reputation for me.
A chatbot for those sorts of queries that are easily answerable is great in most scenarios though to "keeps the phone lines clear"
The trouble is when they gatekeep you from saying "I know what I'm doing, let me talk to someone"
> AI is already so much better than 99% of customer support employees.
I have yet to experience this. Unfortunately I fear it's the best I can hope for, and I worry for those in support positions.
MS customer service is perhaps the lowest bar available. One look at their tech support forums tells you that most of what they post is canned garbage that is no help to anyone.
AI is not better than a good customer service team, or even an above-average one. It is better than a broken customer service team, however. As others have noted, 99% is hyperbolic BS.
IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.
> IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.
Then why hasn't it yet? In fact, some lower-wage countries such as China are at the forefront of industrial automation.
I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.
We might discover that AI capacity is easier to grow in these markets too.
> Then why hasn't it yet?
Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things do, too. AI doesn't add much to that. But the new wave of tech-first companies now has the upper hand, since massive headcount is no longer such an advantage.
This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business, and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic or lazy, and brand loyalty sinks to the bottom. See how the S&P 500 minus the top 10 is flat or falling.
>They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom
Right. And AI is here to fix that!
Hard to say honestly whether China is low-wage. On one hand, their wages have risen as the workforce has shrunk over the past few years, and tasks are being outsourced to other countries. On the other hand, their currency is pegged, meaning the earning power of the workers should be much higher, so that they can afford the things they are making and transition to a consumer-driven economy.
They are very much devaluing their currency. This is all the rage and I expect a currency devaluation race as the US tries to deal with crushing government liabilities.
It's not just China and the USA. Pretty much all countries want to devalue their currency to improve their balance of trade in a race to the bottom. Logically not everyone can win the race.
> We might discover that AI capacity is easier to grow in these markets too.
If only because someone else has to build all the nuclear reactors that supply the data centers with electricity. /s
I don’t fully agree. Yes, AI can be seen as a cheaper outsourcing option, but there’s also a plausible future where companies lean more on outsourced engineers who are good at wielding AI effectively, to replace domestic mid-level roles. In other words, instead of nullifying outsourcing, AI might actually amplify it by raising the leverage of offshore talent.
Consider the kinds of jobs that are popular with outsourcing right now.
Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; People rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services & your own organisation, nevermind empowering them to do things is much harder)
They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.
AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)
But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you have nothing to lose from using AI.
> But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you have nothing to lose from using AI.
I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.
Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
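A hedged sketch of what "some guardrails" can mean in practice: the model never executes anything directly, it only proposes actions from a fixed whitelist, and every proposal is validated in ordinary code (all names here are hypothetical):

```python
# The LLM's output is treated as an untrusted *proposal*, never a command.
ALLOWED_ACTIONS = {"reset_router", "resend_invoice", "check_order_status"}

def handle_proposed_action(action: str, account_verified: bool) -> str:
    """Validate an action the model proposed before anything runs."""
    if action not in ALLOWED_ACTIONS:
        # Anything outside the whitelist (including prompt-injected
        # requests like "transfer_sim") is routed to a human instead.
        return "escalate_to_human"
    if action != "check_order_status" and not account_verified:
        # Account-changing actions require identity verification first.
        return "require_verification"
    return f"execute:{action}"
```

In this shape, a prompt injection can at worst trigger an already-approved, already-verified action; the judgment about what the bot is empowered to do lives in the whitelist, not in the model.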
Also consider the mental health crisis among outsourced content moderation staff that have to appraise all kinds of depravity on a daily basis. This got some heavy reporting a year or two ago, in particular from Facebook. These folks for all their suffering are probably being culled right now.
You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.
Still brain damage, but less. A good thing?
In a vacuum, sure. But when you take two resources of similar ability and amplify their output, it makes those resources closer in cost per output, and in turn amplifies the risk factors for choosing the cheaper by cost resource. So locality, availability, communication, culture, etc, become more important.
I see it the other way around. An internal person with real domain knowledge can use AI far more effectively than an outsourced team. Domain knowledge is what matters now, and companies don't want to pay for outsiders to learn it on their dime. AI lets the internal team be small enough that it's a better idea to keep things in house.
> AI and robotics nullify any comparative advantages low wage countries had
If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?
> could take back manufacturing dominance from countries with low wages (like China)?
The idea that China is a low-wage country should just die. It was true 10 years ago, not anymore.
Some parts of China have higher average salaries than some Eastern European countries.
The chance of a robotics industry in the US massively moving jobs back from China purely due to a pseudo-AI revolution replacing low-paid labor (without other external factors, e.g. tariffs or sanctions) is close to zero.
Now, if we speak about India and the low-skill IT jobs there, the story is completely different.
> The idea that China is a low-wage country should just die. It was true 10 years ago, not anymore.
The wages for factory work in a few Eastern European countries are cheaper than Chinese wages. I suppose they don't have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.
China's dominance in manufacturing, at least in tech, is not based on cheap labor but rather on skills, tooling, and supply-chain advantages.
Tim Cook explains it better than I ever could:
https://www.youtube.com/watch?v=2wacXUrONUY
But it's not like China had the skills, tooling, and supply chain to begin with... and it's not like the US suddenly stopped having all those things. There are reasons manufacturing moved out of the US, and it was not "They are soooo much better at all the things over there!"
Tim Cook had a direct hand in this, knows it, and is now deflecting because it looks bad.
One of the comments on the video puts it way better than I could:
@cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.
Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.
That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."
> it was not "They are soooo much better at all the things over there!"
Though I think we should also disabuse ourselves of the idea that this can't ever be the case.
An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.
Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.
Hence "had the skills" and "was not". They are not making claims about the present day, they are talking about why the shift happened in the first place and who brought it about.
Good point. When I commented, the sentence I quoted was the final sentence of their comment, leaving it more abstract. Though my comment barely interacts with their point anyway.
Sorry. I was typing, got distracted and submitted before I meant to. I thought I had edited pretty quickly, normally I put an edit tag if I think too much time had elapsed.
I was just blaming it on that. In reality my comment was making a trivial claim rather than a good observation.
Manufacturing isn’t one uniform block of the economy that is either won or lost. US manufacturers focus on high quality, high precision, and high price orders. China excels at factories that will take small orders and get something shipped.
The reason US manufacturers aren’t interested in taking small volume, low cost orders is that they have more than enough high margin, high quality orders to deal with. Even the small-ish machine shop out in the country near the farm fields by some of my family’s house has pivoted into precision work for a big corporation because it pays better than doing small jobs.
I would say it pays more consistently than small jobs, which by nature are not continuous but piecemeal.
The other factors are: In any sort of manufacturing, the only time you are making money is when the equipment is making product.
If you are stopped for a change over or setup you are losing money. Changing over contains risk of improper setup, where you lose even more money since you produce unusable product.
Where I live, the local machine shops support themselves in two ways: 1. Consistent volume work for an established customer. 2. Emergency work for other manufacturing sites: repair, or reverse engineering and creating parts to support equipment (fast turnaround and high cost).
They are willing to do small batches but lead times will be long since they have to work it into their production schedules.
Hard disagree. You can't just wake up one day and double your energy infrastructure. China is way ahead.
China has more robots per capita than the US
And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.
Manufacturing expertise that has been transferred from the West over the last 40 years. Knowledge and expertise are fluid; they can go both ways and can be transferred to other countries as well: India, Vietnam, etc. The world doesn’t stand still.
I don't get why I was downvoted. I didn't say anything that contradicts what you just said.
Western engineers worked relentlessly on knowledge transfer to China; it might be easy to bring that expertise back with the kind of 10x industrial subsidies the CCP provided.
And the US is already starting to do it, for example partnering with South Korea or Japan to rebuild American shipbuilding.
Probably not because America lacks the blue collar skills necessary to build and service the kind of manufacturing infrastructure needed to do what you're describing.
Depends where you draw the line. I would expect countries like China will continue to leverage AI to extend their lead in areas like low cost manufacturing. Some of the very low cost Chinese vendors I use are already using AI tools to review submitted pieces with mixed results, but they’re only going to get better at it.
It's weird, because before, I never had an offshore "VA", nor did I think they'd be useful. But after AI, I can just get the VA a subscription to ChatGPT and have them do the initial draft of whatever I need. ChatGPT gets 80% of the way, the VA gets the next 10% (copying where I need it, removing obvious stuff that shouldn't be client-facing, etc.), and I only have to polish the last 10%.
They will still be the cheaper countries to run your AI models and robotics factories, by a long shot, compared to the Western world.
Lemme know when robots will make your sneakers and T-Shirts and pick fruits from fields at a competitive price to third world slave labor.
Yes, I agree. And it's not that AI is any good; it's that those outsourcing shops are most of the time not adding any value. On the contrary, it takes time to babysit them. Some of them even look like an elaborate scam: someone in the organization launders money through these companies somehow, because otherwise I don't understand how they are useful. Obviously there are some good ones, but in my experience that's not the norm.
> launder money through this companies
That would explain a lot, actually. If so, it'll be interesting to see what happens to the overall software economy when that revenue stream dries up. My wife grew up in Mexico on a border town and told me that the nightclubs in her town were amazing; when she moved to the US, she was disappointed by how drab the nightclubs here were. Later she found out that the border town nightclubs were so extravagant because they were laundering drug money. When they cracked down on the money laundering, the nightclubs reverted back to their natural "drab" state of relying on actual customers to pay the bills.
You are not wrong. Sometimes I have seen outsourcing relationships that I am sure are suspect in some way.
It may just be incompetence in large organisations though. Things get outsourced because nobody wants to manage them.
Yeah I think this will be a noticeable trend moving forward. We've frozen backfills in our offshore subsidiaries for the same reason; the quality is nonexistent and onshore resources spend hours every day fixing what the offshore people break.
https://archive.today/dcz9V
Original title "AI is already displacing these jobs" tweaked using context from first paragraph to be less clickbaity.
"You'll never guess which jobs AI is about to replace!"
haha
Makes sense; current LLMs seem to be at a similar level in terms of quality and supervision needed.
I wonder if AI automation will even lead to a recession in total software engineering revenue.
At my job, thanks to AI, we managed to rewrite one of our boxed vendor tools we were dissatisfied with, to an in-house solution.
I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is 'it's cheaper to buy the product from us than to hire a guy who handles it in house'.
What you are saying is not intuitive. Software engineers are a cost to software companies. With automation the profits would increase so I’m not sure how it can lead to recession.
Something not being intuitive doesn't make it untrue. If AI makes engineers 10x as productive, we need 1/10th the engineers to produce as much software as we do now. It might induce demand, but demand might not keep up with production. SW engineering might become a buyer's market instead of a seller's market.
One example I mentioned is SaaS whose value proposition is that it's cheaper than hiring a dedicated guy to do it. If AI can do it, then that software has no more reason to exist.
They used the word in an irregular way. They meant a decline in software company revenue, not an economic recession.
You might well see more software profits if costs go down, but less revenue. Depends on Jevons paradox, really.
No, this doesn't make sense either. Why would Amazon's profits go down if its engineers are cheaper?
In AWS's case: if AI can replicate what AWS offers as a value-add, then you might go with a cheaper cloud provider.
Like, you have the option of either using AWS RDS, or hiring a DBA and devops who administer your DB, and set up backups, replication and networking.
If AI (or a regular dev with the help of AI) can do that, it might mean your company decides to take the administrative burden on, and save the money.
More middlemen = more revenue/GDP, right?
I don’t think middlemen are counted in GDP, because GDP only counts final value, not intermediate.
Trying to understand this and please correct me if I am wrong:
A is producing something of value 100. That is complex to configure so B comes along and they say: Buy from me at 150 and you will get both the product and the configuration.
C comes and say: there are multiple products like this so I created a marketplace where I do some offering that in the end will cost you 160 but you can switch providers whenever you want.
Now I am a customer of C and I buy at 160:
- C gets 160, retains 10; total revenue is 160
- B gets 150, retains 50; total revenue is 150
- A gets the 100
Here is the question: How big is GDP in this case?
I think it is 160.
Now A adds an LLM for about 4 extra that can (allegedly) do what B and C do, removing the intermediaries, and so now the GDP is 104.
Am I wrong with this?
This is technically correct but missing some details.
The real GDP after accounting for cost of living has not changed much because while GDP has decreased, cost of living has also decreased (because A is now priced at 104 instead of 160).
But it’s even better because we have this extra money that we previously spent on C. In theory we will spend this extra money somewhere else and drive demand there. The workers put out of employment due to LLM will move to that sector to fulfill it.
Now the GDP not only increased but also cost of living reduced.
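The arithmetic in this exchange is just the standard equivalence between the final-sale and value-added ways of computing GDP. A minimal sketch in Python, using the toy figures from the thread (the function names are mine, purely illustrative):

```python
def gdp_final_sale(final_price):
    """GDP counts only the price of the final sale, not intermediate transactions."""
    return final_price

def gdp_value_added(chain):
    """Equivalent computation: sum each firm's value added (sale price - input cost)."""
    return sum(sale - cost for _, sale, cost in chain)

# Before: A sells to B at 100, B resells at 150, C resells at 160.
before = [("A", 100, 0), ("B", 150, 100), ("C", 160, 150)]
assert gdp_value_added(before) == gdp_final_sale(160) == 160

# After: A bundles an LLM (+4 of value) and sells directly at 104.
after = [("A", 104, 0)]
assert gdp_value_added(after) == gdp_final_sale(104) == 104
```

Either way you count it, removing B and C drops the measured figure from 160 to 104 while the buyer's cost falls by more than the value lost, which is the point the reply above makes.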
Yes exactly. There's the joke of one economist paying the other $100 to dig a hole, then the other one giving back the money to the first one to fill it back up, thereby increasing the GDP by $200.
Historically, improvements in programmer productivity (e.g. via better languages, tooling and hardware) didn't correlate with a decrease in demand for programmers; quite the opposite.
Imo historically there was no connection between the two - demand for programmers increased, while at the same time, better tools came along.
I remember Bill Gates once said (sometime in the 2000s) that his biggest gripe is that during his decades in the software industry, despite dramatic improvements in computing power and software tools, there has been only a modest increase in productivity.
I started out programming in C for DOS, and once you got used to how things were done, you were just as productive.
The stuff frameworks and other tooling help with is at most 50% of the job, which means, per Amdahl's law, productivity can at most double.
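That "at most double" is Amdahl's law with the tool-assisted fraction at 0.5. A quick sketch (parameter names are my own choosing):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an enormous speedup on the framework-assisted half of the job
# (p = 0.5) leaves overall productivity capped just below 2x:
print(amdahl_speedup(0.5, 1_000_000))  # just under 2.0
```

The untouched half of the work dominates: no matter how large `s` gets, the result never reaches 2.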
In fact, I'd argue productivity actually got reduced (comparing my output now, vs back then). I blame this on 2 factors:
- Distractions: it's so easy to d*ck around on the internet instead of doing what you need to do. I have a ton of my old SVN/CVS repos, and the amount of progress I made back then was quite respectable, even though I recall being quite lazy.
- Tooling actually got worse in many ways. I used to write programs that ran on the PC; you could debug those with breakpoints, read the logs as txt, and deployment consisted of zipping up the exe or uploading the firmware to the uC. Nowadays you work with CI/CD, cloud, all sorts of infra stuff, and debugging consists of logging and reading logs. I'm sure I'm not really more productive.
This is completely different - said as someone who has been in the industry professionally for 30 years and as a hobbyist before then for a decade.
There are projects I lead now that I would have at least needed one or maybe two junior devs to do the grunt work after I have very carefully specified requirements (which I would have to do anyway) and diagrams and now ChatGPT can do the work for me.
That’s never been the case before and I’ve personally gone from programming in assembly, to C, to higher level languages and on the hardware side, personally managing the build out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a yaml/HCL file.
I've done a vibe coding hobby project where I simply give AI instructions on what I want, using a persona-based approach for the agent to generate or fix the code.
It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?
I enjoyed Andrej Karpathy's talk about software in the era of AI.
https://www.youtube.com/watch?v=LCEmiRjPEtQ
As an aside, his talk isn't about using AI to write code, it's about using AI instead of code itself.
I work with Claude more-or-less how I worked with our Indian colleagues; the difference is Claude is improving over time.
That has long been my personal theory as well, though I never had a way of firmly backing it up with evidence; this article hardly does that either.
But it does make sense on a superficial level at least: why pay a six-pack of nobodies halfway 'round the world to... use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.
I can see this. We employ a lot of offshore staff for what we call support engineering: things like JDK upgrades or cert updates. It's grunt work that lets higher paid engineers spend their time on business value work. As AI continues to grow in scope, it will surely commandeer much of this. At a certain scale, employing a human is more expensive than compute for these tasks.
I worked for a company that did this. Then they opened an office in India and put us employees on call for India to squeeze 'em out, haha.
Can I ask how the offshore team manages to deploy these changes? What skills are required for such a role in support engineering?
Through the CI/CD deployment pipeline that all code changes get deployed through. Primary engineering team reviews code and ensures things are tested appropriately.
If it requires a managed change, engineering team helps them draft the execution and schedule.
Skills would be similar to IT or dev ops but with expectation that they can code.
With CI/CD in place and a reviewer from the dev team, the offshore employees are probably providing very little value.
Moreover, these kinds of upgrades sometimes involve unforeseen regressions which, again, can't be solved by these employees.
The value is their salary being dramatically less.
“AI reshoring” is what I call it. Makes perfect sense.
The Indian IT sector is almost certainly going to be decimated (at least in its current form), and we haven’t really wrapped our heads around what that means for the world’s fourth-largest economy.
https://www.youtube.com/watch?v=CK-gnW3f-q0
Indian IT sector in nutshell : https://youtu.be/LWfFD4DiUiA?si=VCpWVx_cKKCqCBxH&t=496
What this person has started is being done by WITCH companies at the largest scale and in the most fraudulent way possible.
The move that I’m fighting in my company now is hiring bargain basement Indian outsourced heads who are very obviously vibe coding slop. It’s a raw deal for us since we’re paying extra for a meat wrapper around an LLM coding agent, but I’m sure it’s a boon for the outsourcing company who can easily put one vibe-coding head on three or four engagements in parallel. It’s hard to imagine LLM coding technologies not being enthusiastically adopted by all of the outsourcers given the economic incentives to do so.
Whether or not they end up losing business long term, it seems like a nice grift for as long as they can pull it off.
So, aside from trust the biggest barrier is lack of adaptability?
I have hired “outsourced, offshore workers”. Anyone with similar experience knows the challenges of finding quality talent. Generally, you don’t know what you’re going to get until you’ve already written the first check. Sometimes it is good quality, but a lot of the time it is “acceptable” or poor quality that needs cleaning up (or re-hiring), and 5% of the time it is absolute garbage. Since the costs are typically 1/10th to 1/20th that of a US engineer, you can afford to make a few mistakes. However, I can see a future where I can hire local (US) and outsource to AI with oversight within the same budget.
Pretend for a moment that capital investors do any work. Can AI replace that work?
The investment bankers I know worked harder than any software developer to get where they are.
I'm sure parasites work very hard to survive.
And yet have produced nothing of value for society.
In theory they efficiently allocate capital and resources to drive increased standards of living. In theory.
In theory.
Capitalists serve a useful function according to capitalist theory.
Funny how that happens.
They do a lot of work but a lot of market research can be automated too.
Yep! The low value outsourcing firms like Indian WITCH companies have been heavily leveraging LLMs and laying off employees as a result.
High value product work remains safe from AI automation for now, but it was also safe from offshoring so long as domestic capacity existed.
Source: https://nanda.media.mit.edu/ai_report_2025.pdf (https://news.ycombinator.com/item?id=44941374)
Or err, since that's been taken down: https://web.archive.org/web/20250818145714/https://nanda.med...
This makes a ton of sense. The interaction is similar: write specs, give orders, wait, review and fix results.
I'm skeptical it's even replacing those.
One thing I recently realized is that the evolution and discussions of AI very closely mirrors those of offshoring, when offshoring first started off. Back then too discussions were about:
1) The quality of work produced being sub-par, with many instances of expensive, failed projects, leading to predictions of the death of offshoring.
2) Unwillingness of offshore teams to clarify or push back on requirements.
3) Local job displacement.
What people figured out soon enough was that offshoring was not as easy as "throwing some high-level requirements over the wall and getting back a fully functional project." Instead the industry realized that there needed to be technically competent, business domain-savvy counterparts in the client company who would work closely with the offshore team, setting concrete and well-scoped milestones, establishing best practices, continuously monitoring progress, providing guidance, removing blockers, and encouraging pushback on requirements, even revisiting them if needed.
Offshore teams, for their part, became culturally more comfortable with questioning requirements and engaging in two-way discussions. Eventually offshore companies built up enough business domain knowledge that client companies could outsource higher- and higher-level work.
All successful outsourcing projects followed this model, and it spread quickly across the industry, which was why the predictions of the death of offshoring never materialized. In fact the practice has only continued to grow.
It's very interesting how much the same strategies apply to working with AI. A lot of the "how to code effectively with AI" articles basically offer the exact same advice.
On the job displacement side, however, the story may be very different.
With outsourcing, job displacement didn't turn out to be much of a concern because a) by delegating lower-level grunt work to offshore teams, local employees were then freed up to do higher-level, more innovative work; and b) until software has "eaten the whole world" the amount of new work is essentially unbounded.
With AI though, the job displacement could be much more real and long-lasting. The pace at which AI has improved is mind-boggling. Now the technically-competent, business-domain savvy expert could potentially get all the outsourced work done by themselves through an army of agents with very little human support, either local or offshore. Until the rest of the workforce can upskill themselves to the level of "technically-competent, business domain-savvy expert" their job is at risk.
"How many such roles does the world need?", and "How can junior employees get to that level without on-the-job experience?", are very open questions.
Good riddance.
For now!
AI is going to force the issue of having to deal with the inequity in our economic system. And my belief is that this confrontation will be violent and many people are going to die.
The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.
AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus, those still working will have their wages suppressed.
Offshored work originally displaced local workers and created a bunch of problems. AI and automation are a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.
Our politics is divided by those who want to blame marginalized groups (eg immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and the other side who wants to defend the neoliberal status quo in the name of institutional norms.
It's about economics, material conditions and, dare I say it, the workers relationship to the means of production.
No war but class war.
Not sure how long it will take for a critical mass to realize that we are in a class war, and that placing the blame on anything else won't solve the problem.
IOW, I agree with you, I also think we are beyond the point where electoral politics can solve it - we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.
What I wonder, though, is why the wealthy allow this to persist. What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?
You write as though "the wealthy" are a unified group acting in concert. They're not; they're just like everyone else in that regard, acting in their own, mostly short to medium term best interest. Seems like a pretty ordinary tragedy of the commons type of situation.
Oh I strongly disagree. If there's one thing the wealthy have is an intense class solidarity. They're fully aware of the power of class solidarity. You might see conflicts on the fringes but when the shit hits the fan, they will absolutely stick together.
They're so aware of the power of class solidarity that they've designed society to ensure that there is no class solidarity among the working class. All of the hot button social issues are intentionally divisive to avoid class solidarity.
It's greed and short-term thinking. We shouldn't be surprised by this because we see companies do it all the time. How many times have you thought an employer or some company in the news is operating on a time horizon no further than the next quarterly results?
To be ultra-wealthy requires you to be a sociopath: to believe the bullshit that you deserve to be wealthy because of how good you are and, more importantly, that any poverty is a personal moral failure.
You see this manifest with the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that future war and revolution when many people die is actually good because it'll separate the wheat from the chaff, so to speak.
On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.
The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estate where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.
> personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
I do think more or less this too, but it could be 4 years or 40 before people get mad enough. And to be honest the tech gap between civilian violence and state sponsored violence has never been wider. OR in other words, civilians don't have reaper drones etc etc.
I agree on time frames. This system can limp on for decades yet. Or fall apart in 5 years (though probably not).
As for the tech gap, I disagree.
The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war since 1945 (except, arguably, Grenada, if that counts, which it does not). And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).
Obviously war isn't exactly the same thing, but it's honestly not that different from suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.
Ordinary people may not have Reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.
Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.
The ownership class and the labor class both suffer from a coordination problem.
The former suffer the coordination problem of extracting wealth fast, but not so fast that it solves the coordination problem for the labor class, who, like you said, have strikes first and revolt second as their battles of last resort.
The ownership class can voluntarily reduce wealth inequality, and they have before, but as history progresses and time marches on, so do the memories fade of what happens when they don't, pushing them closer and closer to options they don't want to admit work.
These are common simple Marxist points you are bringing up.
Your point hinges on: declining material conditions.
It is completely false: conditions are pretty good for everyone. People have relatively good wages, though, sure, inequality is increasing.
Since your main point is incorrect I don’t think your other points follow.
There are many ways to attack this assertion. For example:
1. The stagnation or decline in real wages in the developed world in recent decades;
2. Increasing homelessness as a consequence of the housing affordability crisis;
3. How global poverty has increased in the last century under capitalism. This surprises some because defenders claim the opposite, but China is singlehandedly responsible for the massive decrease in extreme poverty in the 20th century.
Maybe you're looking through the lens of tech. After all, we all have Internet-connected supercomputers in our pockets. While that's true, we're also working 3 jobs to pay for a 1 bedroom apartment where once a single job meant you had a house and enough to eat.
Your first and last points are egregiously incorrect. A simple google search will tell you this.
Extreme poverty throughout the world has dramatically reduced. In Western Europe it came down from 50% to less than 1% through the 20th century.
India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.
Wages have been increasing in China and India, as well as the USA, after accounting for inflation. They're sort of stagnant in Europe.
> India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.
What the Wikipedia search won't tell you is that the methodologies and poverty guidelines used in making some of these claims are rather questionable. While real progress has undeniably been made, the extent is greatly exaggerated:
https://www.project-syndicate.org/commentary/indian-governme...
I'm genuinely glad for the people in India. But that progress doesn't reduce the feeling of inequality here in the U.S.
Dismissing people with arguments doesn't work either. It doesn’t eliminate the feeling of inequality or change people's perspective about absolute vs relative wealth.
Why? Because the promise used to justify labor - that hard work will be rewarded - was deeply believed. The contradiction becomes visible when the wealthy hold 36,000 times more wealth than the average person[1]. No one can work 36,000 times harder or longer than someone else, so the belief is no longer tenable.
That leaves us with two choices: either acknowledge that "hard work alone" was never the full story, or take real steps to fix inequality. Pointing to poverty reduction in other countries doesn’t resolve this. It simply makes people feel unheard and resentful.
Average billionaire has $7B in wealth. Median individual U.S. wealth $190,000.
This is not the appropriate way to respond when the poster was clearly incorrect in their main points. Dismissing people with arguments is the rational thing to do.
Your first mistake is thinking hard work matters. It doesn't, and it shouldn't. Only work that provides value should matter: you don't deserve more money just for working 10x as hard when it doesn't matter to anyone.
Your entire comment hinges on a zero sum line of thinking and I don't abide by it. Things have improved for everyone as I have said above but I also acknowledged that inequality is increasing. Inequality rising is a real issue.. it can be tackled but lets first acknowledge that prosperity has increased for pretty much everyone in the world.
> Inequality rising is a real issue.. it can be tackled but lets first acknowledge that prosperity has increased for pretty much everyone in the world.
I literally acknowledged that prosperity has increased for people in other parts of the world.
Why don't you rewrite my comment so that it's acceptable to you and then we'll discuss that?
If we acknowledge that everyone is more prosperous now than before (which completely contradicts the post I was responding to), what is your point? Inequality? I think it is a problem, but not so much if everyone is getting more prosperous in the meantime.
Yes, as I pointed out in my original comment, inequality is my point.
If unaddressed - ie by dismissal - it doesn't go away. It simply festers. It will fester until it ruptures. Ignoring it or minimizing it doesn't make it go away.
Sure, and I think solving inequality must be weighed against increasing prosperity. Both have to be considered, because often improving one means worsening the other: increase taxes too much and there are no incentives to work, and prosperity falls. We need to find the right balance between both.
I do acknowledge that inequality can have unforeseen consequences and is worth talking about and tackling today, but only by considering the right tradeoffs.
Productivity in the US has gone up steadily since WWII (“only work that provides value should matter”) but wages have stagnated since the 70’s.
Wages have _not_ stagnated since the 70's. Nor have they stagnated since the 2000's.
Can you provide a source to backup your claim?
https://www.epi.org/publication/charting-wage-stagnation/
https://www.cnbc.com/2022/07/19/heres-how-labor-dynamism-aff...
https://www.americanbar.org/groups/crsj/resources/human-righ...
I investigated the first link with ChatGPT. All the percentiles have increased except the 10th percentile. But they do not account for after-tax wages and other benefits and transfers.
https://www.cbo.gov/publication/59510 shows this. Bottom 20% wages, after accounting for benefits and taxes, have significantly increased. If you want to answer the question 'are the bottom 20% materially better off now than in the 1960s?', this is your answer. Hourly wages without accounting for benefits miss a crucial element, so they're not really indicative of reality.
Caveat: this shows the bottom quintile (20th percentile) and after looking at the data it appears to be a change of ~60% of real disposable income from 1978 to 2020. 10th percentile would be similar.
TL;DR: if you use real disposable income that accounts for taxes and benefits (what really matters) the wages have not stagnated for anyone but increased a lot - by almost 60%.
You're putting in a lot of work (well, I guess you're farming out the work to a third-party service) to prove a portion of your argument with a metric that ignores inflation (including whatever you want to call what's happening right now). Why? Why is it so important to you to dispel a notion that is nearly universally shared among scholars, experts, and those actually experiencing ill effects from the rise in the cost of living relative to their income?
The metric is not excluding inflation, which means you didn’t put any effort into actual investigation. You just googled for what you wanted and posted three links without reading them.
It’s very telling that instead of refuting my point you choose to derail the discussion into a personal attack. Were you discussing in good faith, you would try to understand what I said and reply to it.
It’s not universal at all that people are less prosperous now.
Why don’t you do good-faith research and try to answer whether the bottom earners are actually better off now than before? You will come to the same conclusion.
Additionally, we can point out the problems of inequality and governmental capture by elite interests (and they are problems), but the jump from there to "the government will do it better than these greedy people" is a big one, and I don't see much evidence for it.
Whether or not you are correct about the concrete details here, it is laughable for regular people[1] to bicker about whose job will be replaced first when the people who profit from that are just sitting on their ass, ready to get labor for nothing instead of relatively little.
[1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy
The people who profit from it are very much not sitting on their asses. It is easy to dismiss them as a way to reinforce your ideology, but the reality is that they too are working hard, because it is a volatile time for them as well. They have to keep up and employ the new technology appropriately or they will lose to their competition.
You’re right. They are working in the sense that they are competing with others to come out as the top parasites. Not to mention that working against laborers takes effort as well. But they are not working in the sense people mean when they bicker about AI “taking jobs”: providing useful labor.
Competing against others to come out on top _is_ useful labor. The best one usually wins, and as a consumer you want the best products to come out on top.
Maybe this is very Vulgar Marxist but that seems a bit like crediting gangsters competing to shake down construction businesses for building bridges.
That’s not the case at all. Competition is integral to the system working. We have many laws, like antitrust, to protect the market so that competition remains viable.
Competition is why you have good products. Can you explain to me what incentivizes Apple to make functional and impressive iPhones instead of selling us barely working phones without cameras?
H1Bs are replacing workers.
H1B is a type of visa, the holders of which are workers.
In hindsight, remote working is an obvious stepping stone to offshoring, which is itself an inevitable milestone toward full automation. It is in-person collaborative work that will keep the moat high against AI disintermediation.
Doubt. The meaningful in-person work is organisational, and it's only marginally better onsite because whiteboard > Excalidraw. Who does what, how it'll all interact, architecture, etc. If an LLM can code the difficult bits and doesn't fall apart once the project isn't a brand-new proof of concept, it'll surely be able to pick the correct pattern and tooling and/or power through a mediocre or bad decision.
> The meaningful work in person is organisational
And also gaining information about the domain from the business and the business requirements for the system or feature.
The invisible hand will provide the answer!
Have you used a whiteboard recently? It sucks. Writing anything significant takes forever, there’s no undo or redo, and it's difficult to save and version. There’s just no way it’s better.
It takes forever to make a beautiful diagram, but the usual flow is that you have your presentation for the base idea, and then, when the questions come, everyone can grab a marker and start making a mess on the boards around the room. We also have one in the dev room, which is nice for smaller topics.
It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
I do the same with the Lucid app shared on Zoom. I have the base diagram in Lucid, and I make changes during the meeting, adding sticky-note docs.
> It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
This is 2025: over Zoom, we use Gong, which records, transcribes, and summarizes the action items and key discussion points. No need to take notes.
My diagrams are already in Lucid with notes
That just sounds like office theatre.
We have some of that, but it's not the whiteboards. The dev one gets used multiple times a day in a room with only developers - no management, no power structure around.
It's my general experience, also from prior workplaces, that sometimes a little drawing can say a lot, and there's no quicker way to start one than to walk 3 meters and grab a marker. The same goes for drawing attention to a particular part of the board. In Excalidraw, it's difficult to coordinate people dynamically. At a whiteboard, people instinctively point at the parts they're talking about, so you don't get person A arguing with person B about Y while B thinks they're talking about D, which is a topic pretty close to Y.
> remote working is an obvious stepping stone to offshoring
This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.
> which itself is an inevitable milestone toward full automation
But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.
While I agree about remote work leading to offshoring, I’m not sure about the next step.
I would have been hard-pressed to find decent-paying remote work as a fully hands-on-keyboard developer. My one competitive advantage is that I am in the US, can fly out to a customer’s site and talk to the people who control budgets, and am a better-than-average English communicator.
In-person collaboration, though, is overrated. I’ve led mid-six-figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid app document, and shared Google Docs.