I struggle with the "good guys vs bad guys" framing here.
Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).
By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.
Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big-tech C-levels who don't understand why AI can't replace engineers. But this post, pleasant as its tone is, doesn't frame the problem correctly in my mind.
> I struggle with the "good guys vs bad guys" framing here.
It's because generative AI has become part of the "culture wars" and is therefore black and white to lots of people.
I'm seeing a lot of this: any use of LLMs is immediately condemned.
I think it's self-defeating, but virtue signallers gonna virtue signal.
That’s because there is no nuance to how AI was built and trained and how it operates. As someone else put it: it is a rude technology that deserves a rude response.
Exhibit A.
It really doesn't matter though.
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
No amount of framing (unless written into law) would stop small indie devs from doing this. AI is just too efficient; it makes too much sense economically. People who are willing to starve for their ideology are always the minority.
Even artisans who build hand-made wooden furniture use power tools today. The tools that make economic sense will prevail one way or another.
I think most people don't have an issue with the models themselves, just the big service providers who are up to some very shady and possibly illegal stuff.
Personally I'd rather a future where everyone used local models.
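And the barrier to that future is already low. Here's a minimal sketch, assuming a local Ollama server on its default port (11434) and a model you've already pulled; the model name is a placeholder, and nothing leaves your machine:

    # Minimal sketch: query a locally hosted model via Ollama's REST API
    # (POST /api/generate on the default port 11434). "llama3" is a
    # placeholder for whatever model you have pulled locally.
    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        """Send a prompt to the local server; no cloud provider involved."""
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt,
                             "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model("Explain C# yield return in two sentences."))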
I think that presumes that an LLM is capable of "unblocking a tricky C# problem" that a group of humans cannot solve with research. LLMs don't understand the code they output; they just regurgitate code that's already in their training set (the proof is in the pudding; see the many posts on HN about these LLM coding assistants outputting copyrighted code token for token).
So if the tricky C# problem isn't already in their data set, the output of the LLM is, at best, random crap. Even the worst human effort would exceed the output of the LLM, and that is the average case for any "tricky" problem. LLMs are fundamentally only useful on the most common types of problems, which can better be addressed by using frameworks, plugins, or APIs.
(And on that note: every programmer I've met who says that LLM coding agents 10x'd their output is the type of programmer that would have been PIP'd or fired 10 years ago for incompetence. We used to call them "code monkeys" for obvious reasons. Junior programmers think that LLM coding agents are awesome because they don't have the experience or skill to understand just how bad the output of LLM coding agents is, and the few that survive in the industry long enough to become senior programmers will laugh at their younger selves at how much of an unmaintainable mess they made vibe coding.)
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
What about learning the tools you use every day by yourself?
That's very hard, if not impossible, to do by yourself in the current dev environment. You simply can't learn every part of every tool, and if something is only used rarely there is little reason to learn too much about it. You also want to build and ship in some reasonable timeframe.
> a solo dev using GenAI for texture variants
While not as bad as firing 500 people, using AI to generate slop (and it is inherently slop, being generated quickly by AI) is still bad.
I think the only people they're calling "dodgy" are the ones offering these AI tools, and not the people using them.
Here is some text from the post:
> You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.
> Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.
One could probably think of dozens of reasonable arguments for avoiding LLM use, but this one is awful. If LLMs actually are able to get more work done with fewer people, aka "firing people", that would be wonderful for humankind. If you disagree and like getting less work done with more people, you are welcome to forego tractors, dishwashers, the steam engine, and all the rest.
Yeah. This has been an interesting cultural shift, especially with “the kids”.

I’ve had at least a few people passionately tell me that using (non-generative, non-LLM) AI to assist with social network content moderation is unethical, because it takes away jobs from people. Mind you, these are jobs in which people are exposed to CSAM, gore, etc. A fact that does not dissuade people of this view.

There are certainly some sensible arguments against using “AI” for content moderation. This is not one of them.
It’s really intriguing how an increasingly popular view of what’s “ethical” is anything that doesn’t stand in the way of the ‘proletariat’ getting their bag, and anything that protects content creators’ intellectual property rights, with no real interest in the greater good.
Such a dramatic shift from the music piracy generation a mere decade or two ago.
It’s especially intriguing as a non-American.
Again, as you say, many sensible arguments against AI, but for some people it really takes a backseat to “they took our jerbs!”
I can't speak to outside the US, but here companies have gotten much more worker-hostile in the last 30 years and generally the economy has not delivered a bunch of wonderful new jobs to replace the ones that the information age already eliminated (let alone the ones that people are trying to eliminate now). A lot of new job growth is in relatively lower-paying and lower-stability roles.
Forty years ago I would've had a personal secretary for my engineering job, and most likely a private office. Now I get to manage more things myself in addition to being expected to be online 24x7 - so I'm not even convinced that eliminating those jobs improves things for the people who now get to self-serve instead of being more directly assisted.
I can speak as a non-American: it's the same in every country where no new jobs are created. It's maybe less visible where the law is more protective, but in the end a social safety net is still a net, and it works only as long as the rest is sturdy.
Last time I checked, most people needed a "jerb" to buy food, have shelter, or provide for their children. So when the promise is that they'll lose this "jerb", they are fully within their rights to be scared.
"They took our jerbs" is a perfectly valid argument for people who face ruin without a jerb.
Capitalism is neither prepared nor willing to retrain people, drastically shorten the workweek, or bring about a UBI sourced from the value of the commons. So indeed, if the promises of AI hold true, a catastrophe is incoming. Fortunately for us, the promises of AI CEOs are unlikely to be true.
This is the bit I get frustrated by - the need for jerbs at all.
If we manage to replace all the workers with AI - that's awesome! We will obviously have to work out a system for everyone to get shelter, and food, and so on. But that post-scarcity utopia of everyone being able to do whatever they want with their time and not have to work, that's the goal, right? That's where we want to be.
Jerbs are an interim nightmare we have had to endure to get from subsistence agriculture to post-scarcity abundance; they're not some intrinsic part of human existence.
That's the optimistic take. The pessimistic one is that we, people who need to work jobs to survive, are not an intrinsic part of human existence and will be obsolete and/or left to die once we no longer have an economic purpose.
I can't see that being a realistic outcome. We're a long, long way from that, if it is possible. Billionaires are only billionaires because people buy their company shares. If no-one has any money and we're consigned to scrape in the dust for food, what will billionaires do? Who will buy their products, their shares?
Somehow there is always this huge leap between "Strong AI" -> stuff happens -> "about 10k people live in cloud cities and everyone else lives in the dirt".
I find it completely implausible.
Sorry, but you misunderstand how things work.
Money is a tool that has no value by itself. Billionaires are billionaires because they get a much bigger part of the work their group is producing (the group can be one company, a region, a country, or the whole world, depending on how you see things). If AI does the work instead of people, it will change nothing for them.
You can be optimistic (it will self-regulate and everyone will benefit from AI) or pessimistic (only the billionaire class will benefit from AI). But in any case, there will be no need to sell products or shares if there is a class of artificial slaves that can replace workers.
But right now, there is no way in hell we're going to get any kind of support for people who lost their jobs to AI. Not in the US, at least.
Look at the current administration. Do you think they would even consider providing anything like UBI?
They actively want to take us down the cyberpunk dystopia route (or even the Christofascist regressive dystopia route...). They want us to become serfs to technofeudal overlords. Or just die, and decrease the surplus population.
Agree, but this is how revolutions happen, and everyone knows it, so they're going to have to do something.
I think you (and many others) are overestimating the degree to which everyone does, in fact, know that (everyone should, but not everyone does...), while simultaneously underestimating the degree to which the people in charge right now think they're the absolute most specialest people. Or, in some cases, literally God's chosen.
Furthermore, they really, really want to be absolute rulers being treated like (the popular conception of) medieval lords by all of us, the peasants. They deeply believe that we are beneath them; that we do not deserve to have the means to thrive or even survive if they do not explicitly grant it to us; that our natural state is that of supplication, and theirs is that of power and control.
UBI would give that up. It would give us the unconditional means to live, regardless of their approval. And that they cannot abide.
Yup. The problem was never with the technology replacing work; it was always with the social aspect of deploying it, which ends up pulling the rug out from under people whose livelihoods depend on exchanging labor for money.
The Luddites didn't destroy automatic looms because they hated technology; they did it because losing their jobs and seeing their whole occupation disappear ruined their lives and the lives of their families.
The problem to fix isn't automation, but preventing it from destroying people's lives at scale.
You don't ban new technology; you regulate it.
That is what happened with the 19th-century factories.
I personally wish for the time when AI can replace everything I can do (at least in my current field). I'm not sure how exactly I'll feel about it then, but it'd be a technological advancement I'd enjoy seeing in my lifetime.
One question perhaps is, even if AI can do everything I can do (i.e., has the skills for it), will it do everything I do? I'm sure there are many people in the world with the superset of my skills, yet I bet there are some things only I'm doing, and I don't think a really smart AI will change that.
I would like for an AI to do my work, unfortunately I have to buy food and pay my rent.
The Industrial Revolution caused a great deal of damage. It was a net positive in the long term because new jobs were created to replace those that were lost, but it took decades and enormous violence. Now, the promise of AI is that it will be more efficient than any human being. If this becomes a reality, there will be, by definition, no new jobs created for the people replaced by AI.
"Would" being the operative word here. In a capitalist economy with wage labour, it is a catastrophe.
Does this argument still work if LLMs end up increasing unemployment and making it a lot harder for graduates to find good jobs? Who is it good for in that case, the shareholders? It's nice if humans can always create more jobs, but that's not what the tech bros are promising investors. They're making claims about how AI is going to seriously reduce the need for human labor. Programming, writing and art are just the starting ground for what's coming, if their predictions are anywhere close to being correct.
Because consumer demand is infinite, the only way to majorly and permanently increase unemployment is to completely automate all labor, or maybe almost all. We have been automating away jobs for hundreds of years and unemployment is still ~4%.
Unemployment figures are a joke. What matters is how many able-bodied people who could work actually have a job. And that number tells a completely different story from feel-good libertarian narratives such as this one.
> This comment pops up a few times, often from programmers. Unfortunately, because of how messy the term AI now is, the same concerns still apply. Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.
What other people and companies do because I happen to use something correctly (as an assistive technology) is not my responsibility. If someone happens to misuse it or enforce its use in a dysfunctional work environment, that is their doing and not mine.
If a workplace is this dysfunctional, there are likely many other issues already making people miserable. AI isn't the root cause; the workplace culture that existed before AI is.
Strongly agree. It's like saying using a knife to prepare dinner is immoral because some people stab other people. I'm highly skeptical of AI, but that particular argument makes the whole article fall pretty flat to me.
Unfortunately that mindset exists amongst a worrying number of people.
When IDEs entered the market, they were the talk of the town. Improve your programming speed. Blazingly fast refactoring. A graphical debugger and logging information. Intellisense, inline documentation and the whole nine yards.
Many developers never bothered with IDEs. We were happy using Vim, Emacs and many of us continue to do so today.
It's not surprising the first "innovation" was agentic programming with a modified IDE.
I'm sure many people will enjoy their new IDEs. I don't enjoy it. I enjoy doing things a different way.
> AI is now a tool for firing people
In essence we have an ownership problem. If I own the AI, I can do my work in a couple of hours and then some, and then have the rest of the day off to enjoy things I like. If the company owns the AI - I'm out of work. The difference between a world of plenty and beauty and a world of misery for many of us is who owns the AI.
> then have the rest of the day off to enjoy things I like
But that's not what companies expect from you, even if you own the AI. They expect you to output more, and when you do, someone else is probably out of work.
Well, maybe Marx was onto something with all that talk about ownership of the means of production...
It's like saying we won't use compilers because it puts all the people who would manually create punch cards out of a job
There is a difference here, though. Compilers and the languages they enable aren't out-and-out ripping off previous creations.
Lots of folks are mad that the power of these tools comes from training on things they put out in the open but never intended to be used to enrich or exclude others in the way this technology enables.
Interesting times ahead... it's so powerful that people who ignore it are going to get left behind to some degree. (I say this as someone who actively avoids Kubernetes, and it does give off the vibe that I've been left behind compared to my peers who do resume-driven development.)
You can apply this same logic to books and all learning and every piece of media and code you ever absorb. It's not theft to observe and incorporate public data. If it is... lol.
How often does your compiler introduce bugs into your code?
No it's not. AI is a lossy snapshot of existing internet content. It's not going to innovate by virtue of what it is. It's a Chinese room using human language to pretend to be one of us. It's disgusting.
I wish.
I have just witnessed an engineer on our (small) team push a 4k-line change to prod in the middle of the night.
His message was: "let's merge and check it after".
AI can help a good team become better, but for sure it will make bad teams worse.
I'm sorry friends, I think imma quit to farming :$
I don’t really see how this is an AI issue. We use AI all the time for code generation but if you put this on my desk with specific instructions to be light on review and it’s not a joke, I’m probably checking to see if you’re still on probation because that’s an attitude that’s incompatible with making good software.
People with this kind of attitude existed long before AI and will continue to exist.
Totally, and I'm not saying otherwise.
I'm saying that it takes the same amount of work to become a good engineering team even with AI.
But it takes exponentially less work to become a worse team. They say C++ makes it much easier to shoot yourself in the foot; in a similar manner, LLMs are hard to aim. If your team can aim properly, you are going to hit more targets more quickly, but if and when you miss, the entire team is in wheelchairs.
Making good software isn’t what matters in most workplaces - making software that works (even if you have taped over the cracks) is.
It’s always been this way in toxic workplaces - LLMs amplify this.
Try complying with an infosec standard. Typically one of the many compliance controls is "every change must be reviewed and approved by another person", so no one can push on their own.
I know folks tend to frown on security compliance, but if you honestly implement and maintain most of the controls in there, not just to get a certificate -- it really makes a lot of sense and improves security, clarity, and risk posture.
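To make that control concrete, here's a minimal sketch (mine, not from any particular standard) of a deploy gate that refuses to proceed unless the pull request was approved by someone other than its author. It assumes GitHub's pull-request review API; the owner and repo names are placeholders:

    # Minimal sketch of a "second pair of eyes" gate, assuming GitHub's
    # REST API (GET /repos/{owner}/{repo}/pulls/{number}/reviews).
    # OWNER/REPO are placeholders; usage: python gate.py <pr_number> <author>
    import os
    import sys

    import requests  # third-party: pip install requests

    OWNER, REPO = "example-org", "example-repo"  # hypothetical names

    def approved_by_someone_else(pr_number: int, author: str, token: str) -> bool:
        """True if at least one APPROVED review came from a non-author."""
        resp = requests.get(
            f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}/reviews",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return any(
            r["state"] == "APPROVED" and r["user"]["login"] != author
            for r in resp.json()
        )

    if __name__ == "__main__":
        pr_number, author = int(sys.argv[1]), sys.argv[2]
        if not approved_by_someone_else(pr_number, author, os.environ["GITHUB_TOKEN"]):
            sys.exit("Blocked: no independent approval on this change.")

In practice you'd just turn on required reviews / branch protection in your forge of choice; the point is that the control is mechanical, not onerous.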
If I could at all help it, I would simply not work somewhere with that sort of engineering culture. Huge red flag.
One should not be able to push to prod on their own, especially in the middle of the night. Unless it's a critical fix?
> Unless it's a critical fix
The bar for human approval and testing should be even higher for critical fixes.
Exactly. Wake someone up to review.
Who cares, AI has lowered the bar. If AI can produce rubbish 20+% of the time, so can we.
There’s a weird thing going on - I can see value in using LLMs to put something together so you can see it, rather than investing time to do it properly initially.
But to just copy, paste, and move on… terrible.
That's the gist of it.
I've been trying to tell the founders that if we invest 2x more time in proper planning we will get 20x the outcomes in return.
It's as simple as that. It's not about just writing stuff and pushing; it's about understanding the boundaries of what you make, how it talks to other stuff, and what compromises you are willing to accept in return for faster speed.
I wonder if the title of this post will someday be a certification?
I'm pretty certain it will be. It's a society thing. Much in the same way you have "artisan" items or "organic" food, a similar label seems like an obvious development as AI spreads through society. It's very easy to apply a value and morality system to something like the implications of AI.
I'd be more concerned when AI companies decide it's time to make a profit. The more effective it's supposed to be, the more they can justify charging for it.
Programming is just turning calories/energy into text. Some of you are just not that efficient at it; some of you produce garbage when you do. It's only been three years; there is still low-hanging fruit on the new branches.
I’d love to work for a company like this, but when you said, “by the time we finished our doctorates,” I knew you were way out of my league.
The reflexive and moralistic anti-AI takes are starting to get more annoying than the actual AI slop
I suspect a subpopulation of software development is going to become a bit religious, for a short while, split into "morally pure anti AI" and those who are busy using software as a means to an end to solve some real world problem. I think the tools will eventually be embraced, out of necessity, as they become more practically useful (being somewhere around "somewhat useful" right now).
As a result, I think we'll eventually see a mean shift from rewarding those that are "technically competent" more towards those that are "practically creative" (I assume the high end technical competence will always be safe).
Had he been able to, should Mozart have used an LLM to help him compose Don Giovanni? There are things more important than being productive.
If you think your code is art like Mozart's music, then you're probably part of the first group, rather than the group simply trying to get something practical done with software as the means to do it.
Should Mozart have constructed the instruments himself? Or plucked the strings himself? No, he had someone else take care of all that so he could compose music. AI can be used the same way: take care of boring stuff so I can compose a solution to a real world problem. No, that doesn't mean AI has to do everything for you, which outright bans don't seem to be able to comprehend.
No they aren't.
I find one easier to avoid or ignore than the other.
Getting annoying, yes; not more annoying than the actual slop, though. That's ridiculous.
I didn't either, but I have now realised that yes, it uses a lot of energy, and yes, it can be a total waste of energy, but if you are doing good with it, then it is worth the cost.
I have decided I can only use AI for things that have some benefit to society - say, lower-energy-use apps for e-ink devices.
“You’ll get left behind if you don’t learn AI”
Left behind where? We all live in the same world, anyone can pick up AI at any moment, it’s not hard, an idiot can do it (and they do).
If you’re not willing to risk being “left behind”, you won’t be able to spot the next rising trend quickly enough and jump on it, you’ll be too distracted by the current shiny thing.
I read it as "compared to others, in the current context".
If you take some percent longer to finish some code because you want that code to maintain some level of "purity", you'll finish slower than others. If this is a creative context, you'll spend more time on boilerplate than on interesting stuff. If this is a profit-driven context, you'll make less money, and there'll be less money for staff. Etc.
> If you’re not willing to risk being “left behind”...
I think this is orthogonal. Some tools increase productivity. Using a tool doesn't blind a competent person... they just have another tool under their belt to use if they personally find it valuable.
Didn't know Yarn Spinner was used to make so many cool games. Norco is a favorite from the last few years.
The anti-AI stance just makes 'em even cooler.
If the people developing and supporting chat- and c0rn-bots are anxious about the future, that is the correct emotion to be feeling.
This article is silly. Your employees are using AI to get shit done whether you like it or not. They are just being sneaky about it.
Yet another “genuinely nobody cares” take from yet another service or product I’ve never heard of before this post.
I’m not sure what the authors are looking for, a pat on the back? Good boy points? Reddit updoots? To feel like a real 1337 h4xx0r dev?
Nobody cares about this stance and I feel like I see it daily now. People do care about the quality and usefulness of your product and what you’re doing to continue to improve it.
It has the same energy as when a dude orders “black coffee” despite hating the taste to look more badass.
Here’s a paragraph summary of the story, written by Sonnet 4.5:
The Yarn Spinner team explains they don’t use AI in their game development tool despite having academic and professional backgrounds in machine learning—they’ve written books on it and gave talks about ML in games. Their position shifted around 2020 when they observed AI companies pivoting from interesting technical applications toward generative tools explicitly designed to replace workers or extract more output without additional hiring. They argue that firing people has become AI’s primary value proposition, with any other benefits being incidental. Rather than adopt technology for its own sake (“tool-driven development”), they focus on whether features genuinely help developers make better games. While they acknowledge numerous other AI problems exist and may revisit ML techniques if the industry changes, they currently refuse to use, integrate, or normalize AI tools because doing so would financially and socially support companies whose business model centers on eliminating jobs during a period when unemployment can be life-threatening.