I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.
It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.
I talk to AI basically all day, yet I am genuinely made uneasy by this.
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
It's a major bummer. When I first read the story (a few days ago, maybe?) I thought it was an interesting metaphor that didn't quite line up with the observed details of software development with AI. I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.
For me, “interestingly wrong” becomes just “wrong” without human thinking behind it. I wasn’t bowled over by the prose; I just thought it was an uncommon take and didn’t twig the signs that it was a Claude product.
A djungelskog is not a threat. AI threatens my livelihood and my humanity. The worst part is I have to use it regardless because I would be uncompetitive without it.
What is it about it that makes the story less interesting to you? It's the same story, down to the same delicate details. When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
I find it interesting to ponder. We look at the luddite movement as futile and somewhat fatalistic in a way. I feel like the current attitude towards AI generated art will suffer the same fate—but I'm really not quite sure.
What is your understanding of the luddite movement? I ask because I don't believe many are aware that luddites were not anti-technology. It was a labor movement which was targeted at exploitation by factory owners. Their issue was with factories forcing the use of machines to produce inferior products so owners could use cheaper, low skill labor.
I'd have been ok if things fell more in their direction... I'm not saying "clear win", but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
> but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
By arguing for letting humans work, particularly quality work, you're not especially finding a middle ground, more adopting the 1811 position of the OG Luddites who were opposed to being put out of work.
Stories are particularly troubling because we have the concept of "suspending disbelief" and readers tend to take a leap of faith with longwinded narratives because we assume the author is going somewhere with the story and has written purposefully.
When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.
Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.
You can get some good guesses from the comment itself.
> I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.
This reminds me of having the reverse experience with the 2017 New Yorker viral "Cat Person" story [0] which a (usually trustworthy) friend forwarded and enthusiastically told me to read: waste of time shaggy-dog story, intentional engagement-trolling aimed at the intersection of the hot-button topics of its target readership. But why are we culturally expected to allow more slack to a human author, even a meretricious one? Both are comparably bad. The LLM-authored one needs a disclaimer at the top to set its readers' expectations right.
("Cat Person" honestly felt like the literary equivalent of Rickrolling; I would have stopped reading it after the first page if not for my friend's glowing endorsement.)
Yes, this is a thing. Bad writing with an interesting idea underneath it all is still interesting if it comes from a human because we have the expectation that the human will improve in how they share their ideas in the future. In other words, we see potential.
But LLMs don't have potential. You can make an LLM write a thousand articles in the next hour and it will not get one iota better at writing because of it. A person would massively improve merely from the act of writing a dozen, but 100x that effort and the LLM is no better off than when it started.
Despite every model release every 6 months being hailed as a "game changer", we can see from the fact that LLMs are just as empty and dumb as they were when GPT-2 was new half a decade ago that there really is no long term potential here. Despite more and more power, larger and hotter and more expensive data centers, it's an asymptotic return where we've already broken over the diminishing returns point.
And you know, I wouldn't care all that much--hell, might even be enthusiastically involved--if folks could just be honest with themselves that this turd sandwich of a product is not going to bring about AGI.
You cannot even get angry or upset if you disagree with anything in the story, say the author’s despicable worldview permeating through the characters... because there's no author’s worldview, because there's no author. It's a window into nothing, except perhaps the myriad stories in the model's training set.
I want to at least have the option of getting upset at the author.
i don't find the luddite comparison accurate. they were against looms and anti-ai people or ai skeptical people are against the wholesale strip mining of intellectual property as it exists... both public domain and non-public domain. it's used to enrich the capital class at the expense of the workers. sure it's similar but it certainly didn't have the copyright and wholesale theft of all of the human ideas behind it. it just feels quite different.
People had a revulsion to eating refrigerated foods. The developed world got over it. We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.
As a couple sibling comments said, I took it for an insight into the way an optimistic writer might see AI software development becoming a new form of "end-user programming" or "citizen developer" tooling. I'm personally too deep in the weeds to ever see it becoming empowering in that way (if nothing else, this will be an incredibly centralizing technology and whoever wins the "arms race" [assuming we're not in a bubble destined to pop soon] will absolutely have the possible Toms and Megans of such a future by the short hairs). But I love end-user programming, or whatever we're calling it now! (I was partial to "shadow IT" - made it sound really cool.) So I enjoyed the idea that somebody saw AI as a "bicycle for the mind" in that sense, even if I feared they'd end up disappointed.
But there was nobody there, and I'm only disappointed in myself for not noticing.
> When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
For me, the answer to this riddle is very easy: I want to engage with other human minds. A robot (or AI) doesn't have a human mind, so I'm not interested in its "artistic" output.
It was never about how good it was. Of course AI slop adds insult to injury by being also bad. Currently. But it'll get better. My position was never that AI art (shorts, pictures, music, text) is to be frowned up because it's bad. I don't like it because it's not the expression of a human mind.
It's a bit like how an AI boy/girlfriend is not the real deal, no matter how realistic -- and I'm sure they'll get uncannily realistic in the future. They aren't the real deal because there's no real human behind the facade of companionship.
Humans build friendships and relationships on shared experiences. There is an element of relationship-through-experiencing-a-thing. Whether it's going for a walk together or the classic first date template of dinner and a movie. The shared experience is the thing.
With stories that shared experience is between author and reader. Book clubs etc will try to extend that "shared experience" but primarily it is author <-> reader relationship.
Remove that "shared feeling with the author" and what meaning does it have?
It means, "Wow. Cool. I'm a member of a species that taught rocks to think. Holy fuck. That's pretty insanely fucking awesome. Wow. Wow, wow, wow. Fuck."
That's about all it means. Nothing was removed from your life, but something optional was added.
birth rates have already tanked everywhere that isnt religious. youd think people would move back to religion and save their culture, but the sex doll argument has already pervaded. we werent designed to have our senses constantly hyperstimulated; resultantly, people increasingly dont care about reality. only sociopaths and the well disciplined thrive in this environment, everyone else becomes lost in hyperreality. id love to send it and join the masses ... after contemplating eternal damnation, a few years of sensory pleasure just arent worth it.
I think it's more, "I'm a member of a species chasing our own extinction by worshipping an idiot machine god for the purposes of profit. That's so insanely depressing. Fuck fuck fuck fuck fuck."
There is an interesting dichotomy where we express an uncanny-valley revulsion to AI-generated text, art, video and music; yet we seemingly go with the AI-generated code.
Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).
Are we conditioned to use other people's code unthinkingly, or is it something else?
It's because code isn't a way to communicate ideas, it's a way to specify behavior. Text, drawings, video, and music are means for brains to connect with each other. When you read or view or listen to something generated you're not connecting with any other brain. No idea has been transmitted to you. The feeling is analogous to speaking on the phone and only realizing several minutes later that the call was dropped. It's a feeling that combines betrayal, being made to waste time, and alienation.
I tend to disagree that code can't be a way to communicate an idea. Sure, I might struggle to elicit an emotion in the reader (excluding confusion or frustration) but I feel it is a way to describe ideas, model constructs and processes, etc.
With AI-generated text there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic where the English translation is rarely literal but uses different English plays on words?
>I tend to disagree that code can't be a way to communicate an idea.
I wouldn't go as far as can't, but in general it won't be, and if any ideas are indeed communicated, they will be impersonal.
>With AI-generated text there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic where the English translation is rarely literal but uses different English plays on words?
I can think of a better example. In comic circles there's the rewrite, which is when an editor isn't fluent in the original language, and so instead of actually translating, they just rewrite all the dialogue to something that matches the action. People (generally) hate rewrites. Unknowingly reading a rewrite provokes a similar feeling of betrayal that unknowingly reading LLM output provokes.
Did you read past the first sentence? The kind of information that a piece of code transmits is fundamentally different from that which is transmitted by a sentence or a song.
I had a similar experience a few days ago with some music on Spotify. It was an Irish Pub song, rendering some political satire that seemed pretty consistent with what I figure is a predominant Irish viewpoint. Since I holidayed in Ireland a while ago and adored the public there, I really liked it. I reveled in the fact that somewhere in Ireland, there was a band singing messages in pubs that resonated strongly with me. And then it was pointed out that it was AI. I was crushed. I went from feeling connected to some people across the pond, to feeling lonely.
And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.
My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it turns out to be synthetic.
The connection is often with other people experiencing the same thing even if the thing is AI generated. You can see this clearly on YouTube with comments which just quote a line from the video. They get lots of upvotes, probably from other people who felt that line was special too and enjoy seeing others sharing the same feeling. Of course if all those comments are AI too, you would lose that connection.
Well, FWIW, LLMs are trained to infer and fill in the blanks of books. It makes the headlines now and again that publishers put AI companies on the hook for unauthorized use, The New Yorker included.
Whether people know it or not, when they engage with art they are assuming a person not just made it but experienced it. I'm going to blow past the discussion of "what is art" here, but where something came from and how it was made has always mattered to me (you could draw parallels to food here if you wanted).

One thing that has been on my mind a lot is a particular photograph I saw in the past few years (and I'm sure it's easy to find online): it's a POV shot taken by a person sitting atop a skyscraper with their feet dangling over the edge. There is just no way that anyone could in good faith claim that the same photo produced by "AI" could possibly have the same emotional impact as knowing someone actually went and did that.

I think a lot of people may not even realize that when they see a painting, or even a photo as innocuous as a tree, their mind goes to the fact that the person who produced it went to the place where the tree was, had an experience, and chose to document that particular perspective. If they see a painting or drawing of something that is clearly "fantasy," they know that a person made this up in their crazy mind and they experience their feelings on it (good or bad). "AI" (heavy quotes) is trying to trick us and rob us of this basic knowledge. Some see this as progress. I personally think it's fucking disgusting, but I've been wrong before.
Of course this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager that listed a real drummer as the performer was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions since it was so perfect, but I experienced exactly what you are describing. I also never got over it.
I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand who made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
> You can never be sure if what you're feeling is a genuine human experience or not.
This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.
The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.
I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks: I've generated a lot of fiction with Claude and it has a chronic issue of overusing any literary device one might associate with good writing once it appears in the context window.
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails.
Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".
It's really interesting to hear about others that have been exploring generating fiction with Claude. I clearly need some more work based on some of the comments, but it has been really interesting discovering and coming up with different techniques both LLM-assisted and manual to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
I also did not twig to the fact that it was AI, but I did have the distinct feeling that I was reading something not that great. It bothered me because the message was something I could appreciate but the delivery felt anathema to the message.
It felt like it was written by someone trying to quit an addiction to Corporate Memphis content spam. Like it came from some weird timeline where qntm was a LinkedIn influencer. It straddles an uncanny valley of being a criticism of the domination of The Corporation over human culture while at the same time wallowing in The Corporate Eunuch Voice, not because it's a subversion of form, but because it knows no other way.
I then came to the comments section and found the piece that brought the picture into focus.
It's just... hard to explain the specific kind of disappointment. Perhaps there is a German phrase-with-all-the-spaces-removed kind of word that describes it succinctly. I feel like I exist in this Truman Show kind of world where everyone is trying to gaslight me into thinking LLMs are important, but they aren't very good at it and whenever I try to find out how or why, it all evaporates away. I was very reluctant to say that because I'm sure it's going to come with a heaping side of Extremely Earnest Walruses ready to Have A Debate about it and I just don't have the energy for it anymore. That's the baseline existence right now. It's like a really boring version of Gamergate.
And then this thing comes along. And yeah, it's a thing. You got me. Ha. Ha. Joke's on me. I lost the shitty, fake version of the Turing Test that I didn't even ask to be a part of. And it reminds me of the Microsoft Hololens: a massively impressive technological achievement that was ultimately a terrible consumer experience. Like if you figured out Fusion Power but it could only power Guy Fieri restaurants.
Ever since the pandemic I've been keenly aware of the complete destruction of every enjoyable social structure around me. The meetups that evaporated. The offices we essentially squatted in that suddenly turned Extremely Concerned about what people were doing. The complete lack of any social interaction at work because we're all so busy because we're running at half-workforce and pretty sure the executive suite is salivating at the bit to lay the rest of us off. The lack of care about how this is impacting open source software. The lack of concern for people.
I feel like my entire adult life was this slow, agonizing, but at least constant push forward into recognizing the humanity in others and creating a kind and diverse world, and then overnight it's all been destroyed and half the people I see online are cheering it on like it's Technojesus coming to absolve them of their sins of never learning to invert a binary tree. Where the blogs and books and startups of the early 2000s were about finding the hidden potential in people--the college dropout working as a barista who just needs someone to give them a chance to be a programmer or a graphic designer or an artist or whatever--the modern era seems to all be about the useless middle management guy who never had any creative bone in his body no longer having to write status reports to his equally mendacious boss on his own anymore.
We might be restarting old coal plants, but at least Kevin in middle management gets to enjoy "programming" again.
I guess I'm an expert on LLM-isms somehow; I thought they were still plentiful. They're plentiful at the start but get significantly worse near the end, so I'm guessing you spent more time polishing up the first 2/3rds or so.
But I was able to get through the text, it's pretty good, you did great work cleaning it up. There's just a bit more to do to my taste.
Thanks! Yeah there were a couple I decided to leave in rather than try to rework as I wasn't trying to hide that it was written with AI, more trying to add more variety to the storytelling. I'm sure as I do more of these I'll be able to recognize them a lot easier. I have been toying with the idea of working them more into character's dialogue in the future, as I've already noticed some people I know speaking in LLMisms.
I'm particularly allergic to LLM-isms, if you look at my comment history I'm constantly complaining about LLM-written text. I am genuinely quite surprised to have read that much LLM-generated text and been happy to do so.
I am also extremely interested in thinking about where software development is going, so I really appreciated the ideas that went into this.
Since you seem open to feedback, I want to add that I felt the generated images were a negative addition. Maybe they wouldn't be if they also got a little polish - the labels in them were particularly bad.
Ahh cool, I'll dig through your comment history tonight :) I will say, I suspect we're only in the early stages of the LLM's writing equivalent of "autotune" while we all collectively figure out what's tasteful use, what isn't, what it might be like to use autotune as an instrument itself, and then what gets overused. So it'll probably get a lot worse before it gets better.
And thanks for the note about the images, I'll take that into account! I only really just started this project and am going to keep iterating as I learn to use the tools better and I find the right visual language for it.
Since you seem in the mood to give feedback ;) If you take a quick glance at the previous story, do you feel the same way about the images in that one or was it just this one's that you found particularly unpolished?
I think in this you are the autotune, trying to make the raw LLM writing in tune and palatable.
I did read your previous story (not as polished but still interesting) and noticed that in the image linked to "beautiful but the Mandarin module has a tone recognition bug that makes it nearly impossible for non-native speakers", the characters shown for the tone bug were Hebrew rather than Chinese. Interesting... I might have a look again and translate.
Just wanted to say that I've felt the same about the images. To me it was likely the text in them that for some reason had an AI feel to it. Great story though; I was in awe learning it was AI generated.
that's funny, i know where this story is set (i grew up there) - or at least, the place Claude was basing things off of
some inconsistencies that stuck out/i found interesting:
- HWY 29 doesn't run through Marshfield, it's about 15 miles north.
- not a lot of people grow cabbage in central wisconsin ;)
- no corrugated sheet metal buildings like in the first image around there
- i don't think there's a county road K near Marshfield - not in Marathon county at least
fwiw i think this story is neat, but wrong about farmers and their outlooks - agriculture is probably one of the most data-driven industries out there, there are not many family farmers left (the kinds of farmers depicted in this story), it is largely industrial scale at this point.
All that said, as a fictional experiment it's pretty cool!
I think it serves even better as a metaphor for software engineering's future than as a forecast for the future of farming. As you suggest, farmers already had to make the "transition" over the course of the 20th century. A farmer from 1926 wouldn't recognize his counterpart today. They would have nothing to talk about. Software people, though, are still twentieth-century programmers at heart, who are just starting to feel their way through the Kubler-Ross process.
Really a great story, and to the extent it was AI-written, well... even greater.
> As you suggest, farmers already had to make the "transition" over the course of the 20th century. A farmer from 1926 wouldn't recognize his counterpart today. They would have nothing to talk about.
Automation and technology in general have made it possible to do more farming with fewer people: https://www.gilderlehrman.org/history-resources/teacher-reso... . In the US job market, agriculture accounted for 51% of workers in 1880 and less than 3% in 1980. It now appears to be closer to 1% depending on which source you reference.
Hard to imagine many occupations that have undergone more radical change in the recent past than farming. The profession is now utterly technology-dependent, and a few companies like John Deere have hastened to take unfair advantage of that. Hence the growing advocacy of right-to-repair laws.
> The milk pricing tool consumed the feed tool’s output as one of its cost inputs. The format change hadn’t broken the connection — the data still flowed — but it had caused the pricing tool to misparse one field, reading a per-head cost as a per-hundredweight cost, which made the feed expenses look much higher than they were, which made the margin calculations come out lower, which made the recommended prices drop.
> “You changed your feed tool,” Tom said.
> “Yeah, I updated the silage ratios. What does that have to do with milk prices?”
> “Everything.”
> He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.
> Ethan looked ill.
--
I've re-read this a few times now, and can't work out how the interpreted price of feed going up and the interpreted margins going down results in a program setting lower prices on the resulting milk? I feel like this must have gotten reversed in the author's mind, since it's not like it's a typo, there are multiple references in the story for this cause and effect. Am I missing something?
You're not missing something — the chain is internally inconsistent as written.
The per-head vs. per-hundredweight swap is actually plausible for inflating apparent costs: a dairy cow weighs 12-15 hundredweights, so a $5/head daily feed cost misread as $5/hundredweight would balloon to $60-75/head. So "feed expenses look much higher" checks out.
But then the pricing logic goes the wrong direction. Higher perceived costs -> lower calculated margin -> the rational response is to raise prices to restore margin, or at minimum flag the squeeze. Dropping prices when you think you're losing money on every unit is only coherent if the tool is running some kind of volume/elasticity model where it reasons "margins are tight, compete on price" — which is a legitimately dangerous default for spot milk contracts.
Most likely it's just a logic inversion in the story. Either the misparse inflated costs and the tool correctly raised prices (locking in above-market rates Ethan didn't notice because he was happy), or the misparse deflated costs and the tool undercut on price thinking it had headroom. Both are realistic failure modes. The version in the story mixes the two.
Fittingly, a specification error in a story about specification errors.
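To make the cost inflation concrete, here's a minimal sketch of the misparse with made-up numbers (the $5/head feed cost, $70/head daily revenue, and 13 cwt cow weight are all assumptions chosen only to illustrate the mechanism, not figures from the story):

```python
# Illustration of the per-head vs. per-hundredweight misparse described
# above. All numbers are hypothetical.

CWT_PER_COW = 13            # a dairy cow weighs roughly 12-15 hundredweight
feed_cost_per_head = 5.00   # feed tool's actual output: dollars per head per day
revenue_per_head = 70.00    # assumed daily milk revenue per cow

# The pricing tool misreads the per-head figure as per-hundredweight,
# so the cost gets scaled up by the animal's weight:
misread_cost = feed_cost_per_head * CWT_PER_COW   # 13x inflation

true_margin = revenue_per_head - feed_cost_per_head
apparent_margin = revenue_per_head - misread_cost

print(f"true margin: ${true_margin:.2f}/head/day")        # $65.00
print(f"apparent margin: ${apparent_margin:.2f}/head/day")  # $5.00
```

A margin squeezed from $65 down to $5 is exactly the "feed expenses look much higher than they were" step; whether the tool then raises or lowers prices in response is the logic the parent comment flags as inverted.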
Around the part where Margaret explains the problem to Tom, I started to feel annoyed. I could tell it was an LLM trying to fit a sci-fi novella style of writing. And it was doing a good job; it was certainly better than 90% of posts I've read in the last 6 months.
Don't know why that makes me annoyed, maybe because of the depressing seriousness of being a 'prompter' and the americana framing of it.
I will say this is one of the few pieces of AI-generated prose I've read that didn't immediately jump out as such (a couple of inconsistencies eventually grabbed me enough to come to the comments and see your post details which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story.
I think this passes the sniff test only if you're not too familiar with this neighborhood of the training set. Not that the writing is bad but it's just derivative. I listen to stuff like "Lost Scifi" podcast almost daily for example, but there are many similar ones which are focused on reading classic stuff from the golden-age zines because it's all public domain.
The premise/structure/flavor of TFA is an almost pitch-perfect imitation of that kind of voice, to the point that I immediately flagged it as probably generated. I actually think a modern person would have some difficulty even in consciously mimicking it. There's an "aw shucks" yokel-thrown-into-the-future aspect to it. Plot-wise you have rural bicycle repair shop that expands operations to support nuclear reactors and that sort of thing. Substitute any of the more atomic-age stuff for AI stuff and you're mostly there. If you have some Amazing Stories from the 1920s on your shelf then you kind of know what I mean.
Can't speak for them but FWIW it does not sound like OP is necessarily aware of the genre at all. They asked Claude to explain something via fiction, and then perhaps Claude made the "creative decision" based simply on the availability of the material.
The only thing I noticed is that the melody of the words was not equal to the quality of the writing and story arc.
It was the text equivalent of hearing a singer whom you know has perfect pitch sing atonal playground songs.
Take this sentence:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible.
Perfectly fine, a nice set up for a next sentence, but then you get hit with this:
He’d worked for a John Deere dealership in Marshfield for eleven years.
Bad. The rhythm is all off. Minor improvement:
For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield.
Minor change, really, but the fluidity of the language matters a lot and just that one sentence written that one way breaks the flow.
It's almost as if a second person interjected and wrote that sentence, like a friend's annoying girlfriend who won't let him finish a story without adding in her parts.
But two notes do not a melody make, so let's compare that one minor change with a before-and-after of all three opening sentences:
Original:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. He’d worked for a John Deere dealership in Marshfield for eleven years. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
Modified:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
It was pretty obvious to me, but the train of thought was something like this:
* this is a good attempt at a work of art, but written in a generic style that detracts from it
* nobody making genuinely good attempts at art like this would also write so generically
* and if they were making it generic on purpose, they wouldn't be able to do it so flawlessly
* oh, it must be AI
I guess I can discern the presence of a human artist, but only in the idea, which just means it was a good prompt.
Because of a bad habit of reading comments before the link, I knew it was AI. I read it regardless, and... I still enjoyed it!
I'm very much not a writer or a critic, so my definition of good writing is likely very low. Yet I can't shake off this weird feeling that I truly enjoyed the writing and felt the emotions, _while_ knowing it's LLM.
I'm guessing that the human touch afterwards is what made it pleasant to read. I'd love to see the commit history of the process. Fun times we live in!
Nanoclaw is the first hint I've seen of a new type of software: user-customizable code. It's not spec-to-software like in the story, but it is rather interesting. You fork it, and then when you add features it self-modifies. I haven't looked deeply, but I'm not sure how you get updates after that; I guess you can probably have it pull and merge itself for a while, but if you ever get to where you can't merge anymore, I'm not sure what you do.
As for spec-to-software - I am still pretty unsure about this. Right now of course we are not really that close, it takes too much iteration from a prompt to a usable piece of software, and even then you need to have a good prompt. I'm also not sure about re-generating due to variations on what the result might be. The space of acceptable solutions isn't just one program, it's lots, and if you get a random acceptable solution that might be fine for original generation, but it may be extremely annoying to randomly get a different acceptable solution when regenerating, as you need to re-learn how to use it (thinking about UI specifically here.) Maybe these are the same problem, once you can one-shot the software from a spec maybe you will not have much variation on the solution since you aren't doing a somewhat random walk there iterating on the result.
I also don't know if many users really want to generate their own solutions. That's putting a lot of work on the user to even know what a good idea is. Figuring out what the good ideas are is already a huge part of making software, probably harder than implementing it. Maybe small-(ish) businesses will, like the farmers in the story, but end-users, maybe not, at least not in general.
I do think there is SOMETHING to all this, but it's really hard to predict what it's gonna look like, which is why I appreciate this piece so much.
When I saw this the other day -- and it just went on and on, like a good human author who was going to write this kind of story probably wouldn't -- I looked for a note that it was AI-generated, and I didn't find it.
All I found was a human name given as the author.
We might generously say that the AI was a ghostwriter, or an unattributed collaboration with a ghostwriter, which IIUC is sometimes considered OK within the field of writing. But LLMs carry additional ethical baggage in the minds of writers. I think you won't find a sympathetic ear from professional writers on this.
I understand enthusiasm about tweaking AI, and/or enthusiasm about the commercial potential of that right now. But I'm disappointed to find an AI-generated article pushed on HN under the false pretense of being human-written. Especially an article that requires considerable investment of time even to skim.
I continue to resonate with the Oxide take when I hear this kind of sentiment expressed about AI prose:
"... LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?"
I sadly agree with this sentiment. But to add my own thoughts, I wonder if our “human generation” (all consciously existing today) are just plainly dinosaurs. Like in three decades we’ll have a society that knew LLMs from birth.
As such, we can't comprehend the world they live in. A world in which you ask your device to give you any story and it gives you an entire book to read. I'd like to think that as humans we inevitably want whatever is next. So I'd like to think this future generation will learn to not only control it, but be more creative than current people can even imagine.
Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets? Did people riding horses imagine electric cars? Did people living in caves imagine ocean crossing ships?
I kindly can’t tell if you missed my point. As much as past writers and readers could imagine a version of our present, I also imagine that if they got transported here they would still be in awe of what they saw
I agree. I imagine that a writer who predicted modern technology would still be in awe to see smartphone videoconf halfway around the globe finally realized.
And also be surprised by some of the uses to which it's put. And horrified by some of the societal backsliding despite what should be utopian technology.
"This was the mechanic’s paradox: the cheaper you were relative to the cost of failure, the more your clients needed you; and the more they needed you, the more they resisted the implication that they’d need you again."
This is a common issue for me from building websites for SMEs. It's not until Google updates their algorithm - killing their ranking and slowing their sales leads - that you hear from them.
There is wisdom in constantly up-selling to your customers (we offer management services, SEO, and are cautiously moving into AIO); they may say no, but you have a fallback: you offered things that would have mitigated their current crisis.
I really enjoyed the fantasy of many small farmers. It felt rustic. However, based on my understanding, the modern world is moving towards megacorps and economies of scale.
Your polishing work made a difference! The prose is like every other work of science fiction I've read.
It's written like this is a dystopia, but billing $180/45 minutes in a rural, low-cost-of-living area sounds awesome. And the choreographer billing "more than a truck" for three weeks? The dream!
> The prose is like every other work of science fiction I've read.
Well, then, you gotta move on to reading better science fiction. Because this is pretty damn bland. I gave up after 2 minutes because of it. Kinda feel vindicated after coming to the comments.
I can see it working for casual readers, which is why it's already an editorial problem. Imagine having to sift through a growing number of faux writers sending publishers AI generated prose.
This sort of article really needs at least a vague clue as to what it is about.
It's a long article, and from skimming I see talk of farming, software, GPS. I can't tell whether this is worth investing time to read if I can't even tell what it may be about.
Having read most of it, I don't agree that it's worth reading. A bunch of made-up technical jargon and situations that never happened to frame specific problems that are part of the made-up situations using more jargon, in a farmer-centric area. It was a waste of time and a waste of concentration to try to make sense of it. There was no learning, nor was it worth quoting, nor comparing to anything else.
I don't oppose reading AI generated content in principle, but because it's free to generate, I always am less likely to read super long prose that is AI generated. So the question is whether someone has taken the time to keep it as long as necessary but not longer. Or if there are ways to make it easier for me to commit to the experience, with a sort of TLDR
A few months ago, I asked Grok for a piece of fiction set in the Cyberpunk 2077 universe. It created an incredible story about a braindance that was actually stealthily programming the watcher through a back door in the watcher's own implants to transmit an AI from beyond the black wall, allowing the AI to escape into the physical universe through the braindance's audience. Excellent.
I'm disappointed, as the Google result showed "warranty void if regenerated" in the description and I thought HN had started serving witticisms for the description.
Yes, which is why some of the comments are from a day ago but the post is only a couple of hours old. We originally downranked it due to being AI-generated.
But on reflection and discussion with the author, we decided that enough HN users may find that it gratifies intellectual curiosity, because it's interesting to see how a human and an AI bot can collaborate to create writing like this.
We just asked the author to write an introduction to make it clear it's AI-generated and explain their process.
Please don’t post snarky, shallow dismissals or use internet tropes on HN. I explained the thought process we went through. Nothing on HN is to everyone’s taste. Plenty of people are finding this post interesting and having a good discussion about it.
I'm trying to sort out my own emotions on this.
I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.
It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.
I talked to AI basically all day, yet I am genuinely made uneasy by this.
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
It's a major bummer. When I first read the story (a few days ago, maybe?) I thought it was an interesting metaphor that didn't quite line up with the observed details of software development with AI. I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.
Surely you see it's somewhat unreasonable? As if it was written by the author you disliked, and until you knew of the fact, you quite enjoyed it.
Quite honestly, I do that sometimes too -- but I _know_ that it's unreasonable.
For me, “interestingly wrong” becomes just “wrong” without human thinking behind it. I wasn’t bowled over by the prose, I just thought it was an uncommon take and didn’t twig the signs it was Claude product.
hard to form an emotional connection with the emotionless
Says parent post, while thinking a stack of rocks that looks a little like a fat raccoon is kind of cute.
Humans are designed to form emotional connections with non-emotional things. It's sort of our whole deal.
Eh, People form emotional connections with inanimate objects, so I'm unsure if that's a good enough argument tbf.
A djungelskog is not a threat. AI threatens my livelihood and my humanity. The worst part is I have to use it regardless because I would be uncompetitive without it.
What is it about it that makes the story less interesting to you? It's the same story, down to the same delicate details. When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
I find it interesting to ponder. We look at the luddite movement as futile and somewhat fatalistic in a way. I feel like the current attitude towards AI generated art will suffer the same fate—but I'm really not quite sure.
What is your understanding of the luddite movement? I ask because I don't believe many are aware that luddites were not anti-technology. It was a labor movement which was targeted at exploitation by factory owners. Their issue was with factories forcing the use of machines to produce inferior products so owners could use cheaper, low skill labor.
https://www.vice.com/en/article/luddites-definition-wrong-la...
Right, wrong, whatever. The one thing every sane person can agree on is that it's a good thing the Luddites didn't prevail.
How much did you pay for the shirt you're wearing now?
I'd have been ok if things fell more in their direction... I'm not saying "clear win", but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
> but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
By arguing for letting humans work, particularly quality work, you're not especially finding a middle ground, more adopting the 1811 position of the OG Luddites who were opposed to being put out of work.
Yeah, that's a fine sentiment in the general, but let's hear some specifics.
Once again showing how little you actually understand about the movement you decry.
Specifically what is the user's misunderstanding? Be constructive.
Stories are particularly troubling because we have the concept of "suspending disbelief" and readers tend to take a leap of faith with longwinded narratives because we assume the author is going somewhere with the story and has written purposefully.
When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.
Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.
You can get some good guesses from the comment itself.
> I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.
This reminds me of having the reverse experience with the 2017 New Yorker viral "Cat Person" story [0] which a (usually trustworthy) friend forwarded and enthusiastically told me to read: waste of time shaggy-dog story, intentional engagement-trolling aimed at the intersection of the hot-button topics of its target readership. But why are we culturally expected to allow more slack to a human author, even a meretricious one? Both are comparably bad. The LLM-authored one needs a disclaimer at the top to set its readers' expectations right.
("Cat Person" honestly felt like the literary equivalent of Rickrolling; I would have stopped reading it after the first page if not for my friend's glowing endorsement.)
https://news.ycombinator.com/item?id=27778689
the story is bad in itself and doesn't add anything for the reader
but if you knew it came from a human it would be interesting as a window to learning what the writer was thinking
since there is no writer such window doesn't exist either
Yes, this is a thing. Bad writing with an interesting idea underneath it all is still interesting if it comes from a human because we have the expectation that the human will improve in how they share their ideas in the future. In other words, we see potential.
But LLMs don't have potential. You can make an LLM write a thousand articles in the next hour and it will not get one iota better at writing because of it. A person would massively improve merely from the act of writing a dozen, but 100x that effort and the LLM is no better off than when it started.
Despite every model release every 6 months being hailed as a "game changer", we can see from the fact that LLMs are just as empty and dumb as they were when GPT-2 was new half a decade ago that there really is no long term potential here. Despite more and more power, larger and hotter and more expensive data centers, it's an asymptotic return where we've already broken over the diminishing returns point.
And you know, I wouldn't care all that much--hell, might even be enthusiastically involved--if folks could just be honest with themselves that this turd sandwich of a product is not going to bring about AGI.
Very well said.
You cannot even get angry or upset if you disagree with anything in the story, maybe the author’s despicable worldview permeating through the characters... because there's no author’s worldview, because there's no author. It's a window into nothing, except perhaps the myriad of stories in the model's training set.
I want to at least have to option of getting upset at the author.
i don't find the luddite comparison accurate. they were against looms, and anti-AI or AI-skeptical people are against the wholesale strip-mining of intellectual property as it exists, both public domain and non-public domain. it's used to enrich the capital class at the expense of the workers. sure it's similar, but it certainly didn't have the copyright and wholesale theft of all human ideas behind it. it just feels quite different.
c'mon, were they really just against the looms...?
People had a revulsion to eating refrigerated foods. The developed world got over it. We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.
> We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.
And if you've read literally any science fiction you will know the myriad ways that could be absolutely terrible for us
As a couple sibling comments said, I took it for an insight into the way an optimistic writer might see AI software development becoming a new form of "end-user programming" or "citizen developer" tooling. I'm personally too deep in the weeds to ever see it becoming empowering in that way (if nothing else, this will be an incredibly centralizing technology and whoever wins the "arms race" [assuming we we're not in a bubble destined to pop soon] will absolutely have the possible Toms and Megans of such a future by the short hairs). But I love end-user programming, or whatever we're calling it now! (I was partial to "shadow IT" - made it sound really cool.) So I enjoyed the idea that somebody saw AI as a "bicycle for the mind" in that sense, even if I feared they'd end up disappointed.
But there was nobody there, and I'm only disappointed in myself for not noticing.
>What is it about it that makes the story less interesting to you?
Read my comment below for a perspective.
> When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
For me, the answer to this riddle is very easy: I want to engage with other human minds. A robot (or AI) doesn't have a human mind, so I'm not interested in its "artistic" output.
It was never about how good it was. Of course AI slop adds insult to injury by being also bad. Currently. But it'll get better. My position was never that AI art (shorts, pictures, music, text) is to be frowned up because it's bad. I don't like it because it's not the expression of a human mind.
It's a bit like how an AI boy/girlfriend is not the real deal, no matter how realistic -- and I'm sure they'll get uncannily realistic in the future. They aren't the real deal because there's no real human behind the facade of companionship.
Humans build friendships and relationships on shared experiences. There is an element of relationship-through-experiencing-a-thing. Whether it's going for a walk together or the classic first date template of dinner and a movie. The shared experience is the thing.
With stories that shared experience is between author and reader. Book clubs etc will try to extend that "shared experience" but primarily it is author <-> reader relationship.
Remove that "shared feeling with the author" and what meaning does it have?
You can look at a tree and feels things by yourself. Also there's the shared readership.
...and what meaning does it have?
It means, "Wow. Cool. I'm a member of a species that taught rocks to think. Holy fuck. That's pretty insanely fucking awesome. Wow. Wow, wow, wow. Fuck."
That's about all it means. Nothing was removed from your life, but something optional was added.
snark filter off, "wow wow wow this sex doll feels so real why would i ever bother with an actual girl"
Agreed, that will indeed be a problem. We may be building the proverbial Fermi filter.
birth rates have already tanked everywhere that isn't religious. you'd think people would move back to religion and save their culture, but the sex doll argument has already pervaded. we weren't designed to have our senses constantly hyperstimulated; as a result, people increasingly don't care about reality. only sociopaths and the well-disciplined thrive in this environment; everyone else becomes lost in hyperreality. i'd love to send it and join the masses ... after contemplating eternal damnation, a few years of sensory pleasure just aren't worth it.
I think it's more like, "I'm a member of a species chasing our own extinction by worshipping an idiot machine god for the purposes of profit. That's so insanely depressing. Fuck fuck fuck fuck fuck"
It has absolutely made my life worse not better
There is an interesting dichotomy where we express an uncanny-valley revulsion to AI-generated text, art, video and music; yet we seemingly go with the AI-generated code.
Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using it became obligatory (for lack of a better word).
Are we conditioned to use other people's code unthinkingly, or is it something else?
It's because code isn't a way to communicate ideas, it's a way to specify behavior. Text, drawings, video, and music are means for brains to connect with each other. When you read or view or listen to something generated you're not connecting with any other brain. No idea has been transmitted to you. The feeling is analogous to speaking on the phone and only realizing several minutes later that the call was dropped. It's a feeling that combines betrayal, being made to waste time, and alienation.
I tend to disagree that code can't be a way to communicate an idea. Sure, I might struggle to edict an emotion in the reader (excluding confusion or frustration) but I feel it is a way to describe ideas, model constructs and processes, etc.
With AI-generated text there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar, because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic, where the English translation is rarely literal but uses different English plays on words?
>I tend to disagree that code can't be a way to communicate an idea.
I wouldn't go as far as can't, but in general it won't be, and if any ideas are indeed communicated, they will be impersonal.
>With AI-generated text where there is this disconnect between the audience and the prompter who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic where the English translation is rarely literal but uses different English plays on words?
I can think of a better example. In comic circles there's the rewrite, which is when an editor isn't fluent in the original language, and so instead of actually translating, they just rewrite all the dialogue to something that matches the action. People (generally) hate rewrites. Unknowingly reading a rewrite provokes a similar feeling of betrayal that unknowingly reading LLM output provokes.
No, code is a way of communicating ideas, or more correctly information. All languages convey information. All languages convey ideas.
Did you read past the first sentence? The kind of information that a piece of code transmits is fundamentally different from that which is transmitted by a sentence or a song.
I had a similar experience a few days ago with some music on Spotify. It was an Irish Pub song, rendering some political satire that seemed pretty consistent with what I figure is a predominant Irish viewpoint. Since I holidayed in Ireland a while ago and adored the public there, I really liked it. I reveled in the fact that somewhere in Ireland, there was a band singing messages in pubs that resonated strongly with me. And then it was pointed out that it was AI. I was crushed. I went from feeling connected to some people across the pond, to feeling lonely.
And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.
My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it turns out to be synthetic.
The connection is often with other people experiencing the same thing even if they thing is AI generated. You can see this clearly on Youtube with comments which just quote a line from the video. They get lots of upvotes, probably from other people who felt that line was special too and enjoy seeing others sharing the same feeling. Of course if all those comments are AI too, you would lose that connection.
Well, FWIW, LLMs are specified to infer and fill in the blanks of books. It makes the headlines now and again that publishers put AI companies on the hook for unauthorized use, The New Yorker included.
The duality of generated content.
It feels great to use.
It feels terrible to have it used on you.
It's full of AI generated imagery. Why would it not be AI generated?
Good rule of thumb is if it was posted on HN, it's almost certainly AI slop.
Whether people know it or not, when they engage with art they are assuming a person not just made it but experienced it. I'm going to blow past the discussion of "what is art" here, but where something came from and how it was made has always mattered to me (you could draw parallels to food here if you wanted).

One thing that has been on my mind a lot is a particular photograph I saw in the past few years (and I'm sure it's easy to find online): it's a POV shot taken by a person sitting atop a skyscraper with their feet dangling over the edge. There is just no way that anyone could in good faith claim that the same photo produced by "AI" could possibly have the same emotional impact as knowing someone actually went and did that.

I think a lot of people may not even realize that when they see a painting or even a photo of something as innocuous as a tree, their mind goes to the fact that the person who produced it went to the place the tree was in, had an experience, and chose to document that particular perspective. If they were to see a painting or drawing of something that is clearly "fantasy," they know that a person made this up in their crazy mind, and they experience their feelings on it (good or bad). "AI" (heavy quotes) is trying to trick us and rob us of this basic knowledge. Some see this as progress. I personally think it's fucking disgusting, but I've been wrong before.
Of course this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager that listed a real drummer as the performer was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions since it was so perfect, but I experienced exactly what you are describing. I also never got over it.
I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
You can never be sure if what you're feeling is a genuine human experience or not.
This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.
The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.
I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks: I've generated a lot of fiction with Claude and it has a chronic issue of overusing any literary device one might associate with good writing once it appears in the context window.
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails
Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".
It's really interesting to hear about others that have been exploring generating fiction with Claude. I clearly need some more work based on some of the comments, but it has been really interesting discovering and coming up with different techniques both LLM-assisted and manual to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
I also did not twig to the fact that it was AI, but I did have the distinct feeling that I was reading something not that great. It bothered me because the message was something I could appreciate, but the delivery felt antithetical to the message.
It felt like it was written by someone trying to quit an addiction to Corporate Memphis content spam. Like it came from some weird timeline where qntm was a LinkedIn influencer. It straddles an uncanny valley of being a criticism of the domination of The Corporation over human culture while at the same time wallowing in The Corporate Eunuch Voice, not because it's a subversion of form, but because it knows no other way.
I then came to the comments section and found the piece that brought the picture into focus.
It's just... hard to explain the specific kind of disappointment. Perhaps there is a German phrase-with-all-the-spaces-removed kind of word that describes it succinctly. I feel like I exist in this Truman Show kind of world where everyone is trying to gaslight me into thinking LLMs are important, but they aren't very good at it and whenever I try to find out how or why, it all evaporates away. I was very reluctant to say that because I'm sure it's going to come with a heaping side of Extremely Earnest Walruses ready to Have A Debate about it and I just don't have the energy for it anymore. That's the baseline existence right now. It's like a really boring version of Gamergate.
And then this thing comes along. And yeah, it's a thing. You got me. Ha. Ha. Joke's on me. I lost the shitty, fake version of the Turing Test that I didn't even ask to be a part of. And it reminds me of the Microsoft Hololens: a massively impressive technological achievement that was ultimately a terrible consumer experience. Like if you figured out Fusion Power but it could only power Guy Fieri restaurants.
Ever since the pandemic I've been keenly aware of the complete destruction of every enjoyable social structure around me. The meetups that evaporated. The offices we essentially squatted in that suddenly turned Extremely Concerned about what people were doing. The complete lack of any social interaction at work because we're all so busy because we're running at half-workforce and pretty sure the executive suite is salivating at the bit to lay the rest of us off. The lack of care about how this is impacting open source software. The lack of concern for people.
I feel like my entire adult life was this slow, agonizing, but at least constant push forward into recognizing the humanity in others and creating a kind and diverse world, and then overnight it's all been destroyed and half the people I see online are cheering it on like it's Technojesus coming to absolve them of their sins of never learning to invert a binary tree. Where the blogs and books and startups of the early 2000s were about finding the hidden potential in people--the college dropout working as a barista who just needs someone to give them a chance to be a programmer or a graphic designer or an artist or whatever--the modern era seems to all be about the useless middle management guy who never had any creative bone in his body no longer having to write status reports to his equally mendacious boss on his own anymore.
We might be restarting old coal plants, but at least Kevin in middle management gets to enjoy "programming" again.
you're saying qntm is NOT an influencer? what a miscalculation i have made
I'm very impressed that was written by an LLM.
Does that make the OP an "authoring mechanic"? Or an "AI editor"?
Douglas Adams had it right, the problem is not that the answer was useless, it was that people didn't know what the right question was.
I guess I'm an expert on LLM-isms somehow; I thought they were still plentiful. They're plentiful at the start but get significantly worse near the end, so I'm guessing you spent more time polishing up the first 2/3rds or so.
But I was able to get through the text, it's pretty good, you did great work cleaning it up. There's just a bit more to do to my taste.
The story is good.
Thanks! Yeah there were a couple I decided to leave in rather than try to rework as I wasn't trying to hide that it was written with AI, more trying to add more variety to the storytelling. I'm sure as I do more of these I'll be able to recognize them a lot easier. I have been toying with the idea of working them more into character's dialogue in the future, as I've already noticed some people I know speaking in LLMisms.
I'm particularly allergic to LLM-isms, if you look at my comment history I'm constantly complaining about LLM-written text. I am genuinely quite surprised to have read that much LLM-generated text and been happy to do so.
I am also extremely interested in thinking about where software development is going, so I really appreciated the ideas that went into this.
Since you seem open to feedback, I want to add that I felt the generated images were a negative addition. Maybe they wouldn't be if they also got a little polish - the labels in them were particularly bad.
Ahh cool, I'll dig through your comment history tonight :) I will say, I suspect we're only in the early stages of the LLM's writing equivalent of "autotune" while we all collectively figure out what's tasteful use, what isn't, what it might be like to use autotune as an instrument itself, and then what gets overused. So it'll probably get a lot worse before it gets better.
And thanks for the note about the images, I'll take that into account! I only really just started this project and am going to keep iterating as I learn to use the tools better and I find the right visual language for it.
Since you seem in the mood to give feedback ;) If you take a quick glance at the previous story, do you feel the same way about the images in that one or was it just this one's that you found particularly unpolished?
I think in this you are the autotune, trying to make the raw LLM writing in tune and palatable.
I did read your previous story (not as polished but still interesting) and noticed that in the image linked to "beautiful but the Mandarin module has a tone recognition bug that makes it nearly impossible for non-native speakers", the tone-bug text was in Hebrew rather than Chinese characters. Interesting... I might have a look again and translate.
Just wanted to say that I've felt the same about the images. To me it's likely was the text that for some reason had AI-feel to it. Great story though, I was in awe learning it was AI generated.
that's funny, i know where this story is set (i grew up there) - or at least, the place Claude was basing things off of
some inconsistencies that stuck out/i found interesting:
- HWY 29 doesnt run through marshfield, its about 15 miles north.
- not a lot of people grow cabbage in central wisconsin ;)
- no corrugated sheet metal buildings like in the first image around there
- i dont think theres a county road K near Marshfield - not in Marathon county at least
fwiw i think this story is neat, but wrong about farmers and their outlooks - agriculture is probably one of the most data-driven industries out there, there are not many family farmers left (the kinds of farmers depicted in this story), it is largely industrial scale at this point.
All that said, as a fictional experiment its pretty cool!
I think it serves even better as a metaphor for software engineering's future than as a forecast for the future of farming. As you suggest, farmers already had to make the "transition" over the course of the 20th century. A farmer from 1926 wouldn't recognize his counterpart today. They would have nothing to talk about. Software people, though, are still twentieth-century programmers at heart, who are just starting to feel their way through the Kubler-Ross process.
Really a great story, and to the extent it was AI-written, well... even greater.
Kubler-Ross process -> "A model outlining emotional responses to terminal diagnosis or loss: Denial, Anger, Bargaining, Depression, and Acceptance"
Exactly. The stages don't always occur in order, or at all, but you can see the general progression play out any day, all day on here.
I'm happily surprised (frankly amazed TBH) that the submitter didn't get bawled out by people flagging the post and accusing him of posting slop.
> As you suggest, farmers already had to make the "transition" over the course of the 20th century. A farmer from 1926 wouldn't recognize his counterpart today. They would have nothing to talk about.
Can you elaborate on this?
Automation and technology in general have made it possible to do more farming with fewer people: https://www.gilderlehrman.org/history-resources/teacher-reso... . In the US job market, agriculture accounted for 51% of workers in 1880 and less than 3% in 1980. It now appears to be closer to 1% depending on which source you reference.
Hard to imagine many occupations that have undergone more radical change in the recent past than farming. The profession is now utterly technology-dependent, and a few companies like John Deere have hastened to take unfair advantage of that. Hence the growing advocacy of right-to-repair laws.
> The milk pricing tool consumed the feed tool’s output as one of its cost inputs. The format change hadn’t broken the connection — the data still flowed — but it had caused the pricing tool to misparse one field, reading a per-head cost as a per-hundredweight cost, which made the feed expenses look much higher than they were, which made the margin calculations come out lower, which made the recommended prices drop. “You changed your feed tool,” Tom said.
“Yeah, I updated the silage ratios. What does that have to do with milk prices?”
“Everything.”
He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.
Ethan looked ill.
--
I've re-read this a few times now, and can't work out how the interpreted price of feed going up and the interpreted margins going down result in a program setting lower prices on the resulting milk? I feel like this must have gotten reversed in the author's mind, since it's not like it's a typo; there are multiple references in the story to this cause and effect. Am I missing something?
[Edited for clarity]
The entire story is AI slop. Tasty and enjoyable slop, but slop nonetheless.
You're not missing something — the chain is internally inconsistent as written.
The per-head vs. per-hundredweight swap is actually plausible for inflating apparent costs: a dairy cow weighs 12-15 hundredweights, so a $5/head daily feed cost misread as $5/hundredweight would balloon to $60-75/head. So "feed expenses look much higher" checks out.
But then the pricing logic goes the wrong direction. Higher perceived costs -> lower calculated margin -> the rational response is to raise prices to restore margin, or at minimum flag the squeeze. Dropping prices when you think you're losing money on every unit is only coherent if the tool is running some kind of volume/elasticity model where it reasons "margins are tight, compete on price" — which is a legitimately dangerous default for spot milk contracts.
Most likely it's just a logic inversion in the story. Either the misparse inflated costs and the tool correctly raised prices (locking in above-market rates Ethan didn't notice because he was happy), or the misparse deflated costs and the tool undercut on price thinking it had headroom. Both are realistic failure modes. The version in the story mixes the two.
Fittingly, a specification error in a story about specification errors.
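The unit confusion the comments above describe can be sketched in a few lines. This is purely illustrative: the function name and the dollar figures are assumptions drawn from the commenter's own numbers ($5/head, a cow of roughly 13 hundredweight), not anything specified in the story.

```python
# Sketch of the per-head vs. per-hundredweight misparse described above.
# All numbers are the commenter's illustrative assumptions, not story canon.

CWT_PER_COW = 13  # a dairy cow weighs roughly 12-15 hundredweight (cwt)

def feed_cost_per_head(value, unit):
    """Interpret a feed-cost figure under a given unit assumption."""
    if unit == "per_head":
        return value
    if unit == "per_cwt":
        # A per-head figure wrongly read as per-cwt gets scaled by the
        # animal's weight in hundredweight, inflating apparent costs ~13x.
        return value * CWT_PER_COW
    raise ValueError(f"unknown unit: {unit}")

true_cost = feed_cost_per_head(5.00, "per_head")  # what the feed tool meant
misparsed = feed_cost_per_head(5.00, "per_cwt")   # what the pricing tool read

assert misparsed > 10 * true_cost  # apparent feed expenses balloon
print(true_cost, misparsed)  # 5.0 vs 65.0
```

From there the story's chain only stays coherent if lower perceived margins trigger price *increases* (or at least a flag); the tool dropping prices instead is the inversion the parent comments point out.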
Around the part where Margaret explains the problem to Tom, I started to feel annoyed. I could tell it was an LLM trying to fit a sci-fi novella style of writing. And it was doing a good job; it was certainly better than 90% of posts I've read in the last 6 months.
Don't know why that makes me annoyed, maybe because it's the depressing seriousness of being a 'prompter' and the Americana framing of it.
I will say this is one of the few pieces of prose I've read that was AI generated that didn't immediately jump out as it (a couple of inconsistencies eventually grabbed me enough to come to the comments and see your post details which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story
I think this passes the sniff test only if you're not too familiar with this neighborhood of the training set. Not that the writing is bad but it's just derivative. I listen to stuff like "Lost Scifi" podcast almost daily for example, but there are many similar ones which are focused on reading classic stuff from the golden-age zines because it's all public domain.
The premise/structure/flavor of TFA is an almost pitch-perfect imitation of that kind of voice, to the point that I immediately flagged it as probably generated. I actually think a modern person would have some difficulty even in consciously mimicking it. There's an "aw shucks" yokel-thrown-into-the-future aspect to it. Plot-wise you have rural bicycle repair shop that expands operations to support nuclear reactors and that sort of thing. Substitute any of the more atomic-age stuff for AI stuff and you're mostly there. If you have some Amazing Stories from the 1920s on your shelf then you kind of know what I mean.
It is a pitch perfect interpretation and I assumed that's what OP was going for. Manna (2010) read very similarly.
Can't speak for them but FWIW it does not sound like OP is necessarily aware of the genre at all. They asked Claude to explain something via fiction, and then perhaps Claude made the "creative decision" based simply on the availability of the material.
> I think this passes the sniff test only if you're not too familiar with this neighborhood of the training set
Which is totally fair, I'm honestly not! I haven't read much of that myself
The only thing I noticed is that the melody of the words was not equal to the quality of the writing and story arc.
It was the text equivalent of hearing a singer whom you know has perfect pitch sing atonal playground songs.
Take this sentence:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible.
Perfectly fine, a nice set up for a next sentence, but then you get hit with this:
He’d worked for a John Deere dealership in Marshfield for eleven years.
Bad. The rhythm is all off. Minor improvement:
For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield.
Minor change, really, but the fluidity of the language matters a lot and just that one sentence written that one way breaks the flow.
It's almost as if a second person interjected and wrote that sentence, like a friend's annoying girlfriend who won't let him finish a story without adding in her parts.
But two notes does not a music make, so let's compare that 1 minor change with a before and after of all three opening sentences:
Original:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. He’d worked for a John Deere dealership in Marshfield for eleven years. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
Modified:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
It was pretty obvious to me, but the train of thought was something like this:
* this is a good attempt at a work of art, but written in a generic style that detracts from it
* nobody making genuinely good attempts at art like this would also write so generically
* and if they were making it generic on purpose, they wouldn't be able to do it so flawlessly
* oh, it must be AI
I guess I can discern the presence of a human artist, but only in the idea, which just means it was a good prompt.
Reading this was a roller coaster for me.
Because of a bad habit reading comments before the link I knew it was AI. I read it regardless, and... I still enjoyed it!
I'm very much not a writer or a critic, so my definition of good writing is likely very low. Yet I can't shake off this weird feeling that I truly enjoyed the writing and felt the emotions, _while_ knowing it's LLM.
I'm guessing that human after touch is what made it pleasant to read. I'd love to see the commit history of the process. Fun times we live in!
Nanoclaw is the first hint I've seen of a new type of software, user-customizable code. It's not spec-to-software like in the story, but it is rather interesting. You fork it, and then when you add features it self-modifies. I haven't looked deeply, but I'm not sure how you get updates after that; I guess you can probably have it pull and merge itself for a while, but if you ever get to where you can't merge anymore, I'm not sure what you do.
As for spec-to-software - I am still pretty unsure about this. Right now of course we are not really that close, it takes too much iteration from a prompt to a usable piece of software, and even then you need to have a good prompt. I'm also not sure about re-generating due to variations on what the result might be. The space of acceptable solutions isn't just one program, it's lots, and if you get a random acceptable solution that might be fine for original generation, but it may be extremely annoying to randomly get a different acceptable solution when regenerating, as you need to re-learn how to use it (thinking about UI specifically here.) Maybe these are the same problem, once you can one-shot the software from a spec maybe you will not have much variation on the solution since you aren't doing a somewhat random walk there iterating on the result.
I also don't know if many users really want to generate their own solutions. That's putting a lot of work on the user to even know what a good idea is. Figuring out what the good ideas are is already a huge part of making software, probably harder than implementing it. Maybe small-(ish) businesses will, like the farmers in the story, but end-users, maybe not, at least not in general.
I do think there is SOMETHING to all this, but it's really hard to predict what it's gonna look like, which is why I appreciate this piece so much.
When I noticed the article header image was generated with AI my interest in reading the article itself dropped to zero.
Thanks for sharing. This was an amazing read. I’d love to see novels with similar style stories about speculative near future tech and world.
When I saw this the other day -- and it just went on and on, like a good human author who was going to write this kind of story probably wouldn't -- I looked for a note that it was AI-generated, and I didn't find it.
All I found was a human name given as the author.
We might generously say that the AI was a ghostwriter, or an unattributed collaboration with a ghostwriter, which IIUC is sometimes considered OK within the field of writing. But LLMs carry additional ethical baggage in the minds of writers. I think you won't find a sympathetic ear from professional writers on this.
I understand enthusiasm about tweaking AI, and/or enthusiasm about the commercial potential of that right now. But I'm disappointed to find an AI-generated article pushed on HN under the false pretense of being human-written. Especially an article that requires considerable investment of time even to skim.
I continue to resonate with the Oxide take when I hear this kind of sentiment expressed about AI prose
"... LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?"
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers
I sadly agree with this sentiment. But to add my own thoughts, I wonder if our “human generation” (all consciously existing today) are just plainly dinosaurs. Like in three decades we’ll have a society that knew LLMs from birth.
As such, we can’t comprehend the world they live in. A world in which you ask your device to give you any story and it gives you an entire book to read. I’d like to think that as humans we inevitably want whatever is next. So I’d like to think this future generation will learn to not only control, but be beyond more creative than current people can even imagine.
Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets? Did people riding horses imagine electric cars? Did people living in caves imagine ocean crossing ships?
> Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets?
Yes, science fiction writers and readers have, since before any of us were born.
I kindly can’t tell if you missed my point. As much as past writers and readers could imagine a version of our present, I also imagine that if they got transported here they would still be in awe of what they saw
I agree. I imagine that a writer who predicted modern technology would still be in awe to see smartphone videoconf halfway around the globe finally realized.
And also be surprised by some of the uses to which it's put. And horrified by some of the societal backsliding despite what should be utopian technology.
"This was the mechanic’s paradox: the cheaper you were relative to the cost of failure, the more your clients needed you; and the more they needed you, the more they resisted the implication that they’d need you again."
This is a common issue for me from building websites for SMEs. It's not until Google updates their algorithm, killing their ranking and slowing their sales leads, that you hear from them.
There is wisdom in constantly up-selling to your customers (we offer management services and SEO, and are cautiously moving into AIO). They may say no, but you have a fallback: you offered things that would have mitigated their current crisis.
I liked it. It has a similar feel to an Andy Weir "The martian" type of novel.
I really enjoyed the fantasy of many small farmers. It felt rustic. However, based on my understanding, the modern world is moving towards megacorps and economies of scale.
Your polishing work made a difference! The prose is like every other work of science fiction I've read.
It's written like this is a dystopia but billing $180/45 minutes in rural low cost of living area sounds awesome. And the choreographer billing "more than a truck" for three weeks? The dream!
> The prose is like every other work of science fiction I've read.
Well, then, you gotta move on to reading better science fiction. Because this is pretty damn bland. I gave up after 2 minutes because of it. Kinda feel vindicated after coming to the comments.
I can see it working for casual readers, which is why it's already an editorial problem. Imagine having to sift through a growing number of faux writers sending publishers AI generated prose.
The story didn't mention what had happened to inflation in the meantime. A dozen eggs costs $32.
Huh, I got cottage core, not dystopia!
That it was largely/mostly generated by Claude adds a certain poignancy to it.
As an allegory it reminds me a lot of one I read as a teen: Joshua by Joseph Girzone. Not a literary masterpiece, but a clever thought-raising story.
I started reading this and it gave a strong whiff of Richard Stallman’s “the right to read” - once dystopian and now a commonplace.
Then I started scrolling and thought the author was just verbose like RMS.
When it just kept going I was just mad to have fallen into the AI tarpit.
Fun idea. 5x too long. I need to calibrate my ai spidey sense better.
So, in the past, your stories were warrant-eed? But no longer?
This sort of article really needs at least a vague clue as to what it is about.
It's a long article and from skimming I see chat of farming, software, GPS. I can't tell whether this is worth investing time to read if I can't even tell what it may be about
It's speculative fiction.
It's worth reading. It's about AI.
Having read most of it, I don't agree that it's worth reading. A bunch of made-up technical jargon and situations that never happened to frame specific problems that are part of the made-up situations using more jargon, in a farmer-centric area. It was a waste of time and a waste of concentration to try to make sense of it. There was no learning, nor was it worth quoting, nor comparing to anything else.
My favorite part was the illustration from inside the car. The rear-view mirror clearly shows un-mirrored store signs.
Prompts in, garbage out.
I can see this future happening!
this was a ridiculously pointless story, I stopped after the second paragraph and came here to ask politely what was the point of it
what was my surprise when I read it was AI-generated
I don't oppose reading AI generated content in principle, but because it's free to generate, I always am less likely to read super long prose that is AI generated. So the question is whether someone has taken the time to keep it as long as necessary but not longer. Or if there are ways to make it easier for me to commit to the experience, with a sort of TLDR
A few months ago, I asked Grok for a piece of fiction set in the Cyberpunk 2077 universe. It created an incredible story about a braindance that was actually stealthily programming the watcher, through a back door in the watcher's own implants, to transmit an AI from beyond the blackwall, allowing the AI to escape into the physical universe through the braindance's audience. Excellent.
with the speed of llm/ai improvement, this too may be steamrolled
There are always bugs in software. The question is do you have enough eyes on the data to spot them or do they linger for years.
it's crap. you all need to go outside
I'm disappointed, as the Google result showed "warranty void if regenerated" in the description and I thought HN had started serving witticisms for the description
Did this story disappear then re-appear?
Yes, which is why some of the comments are from a day ago but the post is only a couple of hours old. We originally downranked it due to being AI-generated.
But on reflection and discussion with the author, we decided that enough HN users may find that it gratifies intellectual curiosity, because it's interesting to see how a human and an AI bot can collaborate to create writing like this.
We just asked the author to write an introduction to make it clear it's AI-generated and explain their process.
"it's boring to see how a human and an AI bot can collaborate to create writing like this."
FTFY
I wanted to agree, but this story is really good.
Please don’t post snarky, shallow dismissals or use internet tropes on HN. I explained the thought process we went through. Nothing on HN is to everyone’s taste. Plenty of people are finding this post interesting and having a good discussion about it.