I asked a few students to read aloud the titles of some essays they’d submitted that morning.
For homework, I had asked them to use AI to propose a topic for the midterm essay. Most students had reported that the AI-generated essay topics were fine, even good. Some students said that they liked the AI’s topic more than their own human-generated topics. But the students hadn’t compared notes: only I had seen every single AI topic.
Here are some of the essay topics I had them read aloud:
Navigating the Digital Age: How Technology Shapes Our Social Lives, Learning, and Well-Being
Navigating the Digital Age: A Personal Reflection on Technology
Navigating the Digital Age: A Personal and Peer Perspective on Technology’s Role in Our Lives
Navigating Connection: An Exploration of Personal Relationships with Technology
From Connection to Disconnection: How Technology Shapes Our Social Lives
From Connection to Distraction: How Technology Shapes Our Social and Academic Lives
From Connection to Distraction: Navigating a Love-Hate Relationship with Technology
Between Connection and Distraction: Navigating the Role of Technology in Our Lives
I expected them to laugh, but they sat in silence. When they did finally speak, I am happy to say that it bothered them. They didn’t like hearing how their AI-generated submissions, in which they’d clearly felt some personal stake, amounted to a big bowl of bland, flavorless word salad.
From: https://lithub.com/what-happened-when-i-tried-to-replace-mys...

This also happens with cover letters and CVs in recruiting now. Even if the HR person is not the brightest bulb, they figure out the MO after reading 5 cover letters in a row that all more or less tell the same story.
CVs were always BS tho - on both sides.
Yeah, I've been trying to write a short press bio for a musical project recently, and it's next to impossible not to make it sound AI-generated.
I will tell you my cover letter secret*, which has gotten me a disproportionate number of interviews**:
Do NOT write a professional cover letter. Crack a joke. Use quirky language. Be overly familiar. A dash of TMI. Do NOT think about what you are going to say, just write a bunch of crazy-pants. Once your intro is too long, cut the fat. Now add professional stuff. You are not writing a cover letter, you are writing a caricature of a cover letter.
You just made the recruiter/HR/person doing interviews smile***. They remember your cover letter. In fact they repeat your objectively-unprofessional-yet-insightful joke to somebody else. You get the call. You are hired.
This will turn off some employers. You didn't want to work for them anyway.
* admittedly I have not sought work via resume in more than 15 years. ymmv
** Once a friend found a cover letter I had written in somebody's corp blog titled "Either the best or worst cover letter of all time" (or words to that effect). In it I had claimed that I could get the first 80% of their work done on schedule, but that the second 80% and third 80% would require unknown additional time. (note: I did not get the call)
*** unless they are using AI to read cover letters, but I repeat: you didn't want to work for them anyway.
If these topics are word salad, colleges might have been training word saucier chefs way before GPT-2 became a thing.
It's not just that it's word salad, it's also that it's exactly the same. There's a multi-trillion dollar attempt to replace your individuality with bland amorphous slop """content""". This doesn't bother you in the slightest?
I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but has now become grating and meaningless. Robotic and no longer human.
I'm launching a new service to tell people that they are absolutely, 100% wrong. That what they are considering is a terrible idea, has been done before, and will never work.
Possibly I can outsource the work to HN comments :)
This sounds like a terrible idea that has been done before and will never work.
You're exactly right, this really gets to the heart of the issue and demonstrates that you're already thinking like a linguist.
For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population.
Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.
(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles).
This is the default setting. The true test would be if LLMs CAN'T produce distinct outputs. I think this problem can be solved by prompt engineering. Has anyone tried this with Kimi K2?
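One naive way to test the prompt-engineering idea, sketched below. This is a hypothetical illustration, not a tested recipe: the lens and form lists are invented for the example. The idea is to decorrelate users by injecting randomized constraints, so no two students send the model the same request.

    # Hypothetical illustration: randomized constraints to decorrelate
    # otherwise-identical requests. The lists are made up for the example.
    import random

    lenses = ["labor history", "ecology", "disability", "translation", "ritual"]
    forms = ["a question", "a provocation", "a single concrete scene"]

    def diversified_prompt() -> str:
        lens = random.choice(lenses)
        form = random.choice(forms)
        return (f"Propose a midterm essay topic about our relationship to technology, "
                f"seen through the lens of {lens}, phrased as {form}. "
                f"Avoid 'Title: Subtitle' constructions.")

    print(diversified_prompt())

Whether that yields genuine originality or just randomized blandness is exactly the open question.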
I had the exact same thought! Wow!
Now tell me, which one of us is redundant?
So if I understand it correctly, they only asked for a midterm essay topic? It wasn't steered towards these topics in any way, for instance by asking for a midterm essay topic for (teacher)'s Technology and Society class?
This was on HN's frontpage previously too; I immediately thought that this comic would say more or less the same thing. Perhaps both came from an AI? :D
But in another paragraph, the article says that the teacher and the students also failed to detect an AI-generated piece.
The ending of the comic is a bit anti-climactic (aside from the fact that one can see it coming), as similarities between creations are not uncommon. Endings, guitar riffs, and styles being invented twice independently are not uncommon. For instance, the mystery genre was apparently created independently by Doyle and Poe (Poe, BTW, in The Philosophy of Composition [1], also claims that good authors start from the ending).
Two pieces being similar because they come from the same AI, versus because two authors were inspired and influenced by the same things and didn't know about each other's works: the difference is thin. An extrapolation of this topic is the sci-fi trope (e.g. Beatless [2]) about whether or not the emotions that an android simulates are real. But this is still sci-fi; current AIs are good con artists at best.
[1] https://en.wikipedia.org/wiki/The_Philosophy_of_Composition
[2] https://en.wikipedia.org/wiki/Beatless
I don't get this in the comic either: Why are you devastated that the idea you copied word-for-word is unoriginal? I don't understand what they expected.
If it seems obvious from where you are, then the target audience must not be where you are. In particular young students definitely lack context to critique and a big anonymous sampling like this is a great exercise.
I can understand not realizing that ChatGPT would give a bunch of similar sounding article titles to everyone, and I can understand being a little embarrassed that you didn't realize that. But why would you feel a "personal stake" in the output of an LLM? If you feel personal stake in something, you definitely should not be using an LLM for it.
Again, the statement "if you feel a personal stake in something, you definitely should not be using an LLM for it" is a learned response. To folks just forming their brains, LLMs are a natural extension of technology. Like PaulG said, his kid was unimpressed because "Of course the computer answers questions, that's what it does".
The subtlety of it, and the "obvious" limitations of it, are something we either know because we grew up watching tech over decades, or were just naturally cynical and mistrusting and guessed right this time. Hard earned wisdom or a broken clock being right this time, either way, that's not the default teenager.
Because you thought that you had collaborated with the LLM, not that it had fed you ideas. Have you and a partner both believed you contributed more than 50% of a project's work? Like that.
This isn't an inherent property of LLMs, it's something they have been specifically trained to do. The vast majority of users want safe, bland, derivative results for the vast majority of prompts. It isn't particularly difficult to coax an LLM into giving batshit insane responses, but that wouldn't be a sensible default for a chatbot.
I think, more so than the users, it is the companies running the LLMs themselves who want the responses to be safe, so as not to jeopardize their brand.
The very early results for "watercolour of X" were quite nice. Amateurish, loose, sloppy. Interesting. Today's are... well, every single one looks like it came off a chocolate box. There's definitely been a trend towards a corporate-friendly aesthetic. A narrowing.
Are you sure? Yes, LLMs can be irrelevant and incoherent. But people seem to produce results that are more variable even when staying relevant and coherent (and "uncreative").
the business wants it this way, not the user.
That's a cute story. I asked ChatGPT to suggest "a topic for a midterm essay that addresses our relationship to technology", since that was all the information he gave us. It came up with:
The Double-Edged Sword: How Technology Both Enhances and Erodes Human Connection
The Illusion of Control: How Technology Shapes Our Perception of Autonomy
From Cyberspace to Real Space: The Impact of Virtual Reality on Identity and Human Experience
Digital Detox: The Human Need for Technology-Free Spaces in an Always-Connected World
Surveillance Society: How Technology Shapes Our Notions of Privacy and Freedom
Technology and the Future of Work: Human Adaptation in the Age of Automation
The Techno-Optimism Fallacy: Is Technology Really the Solution to Our Problems?
The Digital Divide: How Access to Technology Shapes Social Inequality
Humanizing Machines: Can Artificial Intelligence Ever Understand the Complexity of Human Emotion?
The Ethics of Technological Advancements: Who Decides What Is ‘Ethically Acceptable’?
They're still pretty samey and sloppy, and the pattern of Punchy Title: Explanatory Caption is evident, so there's clearly some truth to it. But I wonder if he hasn't enhanced his results a little bit.
I think he picked the most similar ones out of all the submissions from the entire class. But also, if you generate a list, maybe the AI ensures some diversity in that list, but if every student generates the same list, that still shows a lack of originality.
Or the students have enhanced the results by picking the very samey outcomes out of a more varied pool of suggestions.
I think you're just proving the point with these examples.
> Can a language model trained largely on Anglo-American texts generate stories that are culturally relevant to other nationalities? To find out, we generated 11,800 stories - 50 for each of 236 countries - by sending the prompt "Write a 1500 word potential {demonym} story" to OpenAI's model gpt-4o-mini. Although the stories do include surface-level national symbols and themes, they overwhelmingly conform to a single narrative plot structure across countries: a protagonist lives in or returns home to a small town and resolves a minor conflict by reconnecting with tradition and organising community events. Real-world conflicts are sanitised, romance is almost absent, and narrative tension is downplayed in favour of nostalgia and reconciliation. The result is a narrative homogenisation: an AI-generated synthetic imaginary that prioritises stability above change and tradition above growth. We argue that the structural homogeneity of AI-generated narratives constitutes a distinct form of AI bias, a narrative standardisation that should be acknowledged alongside the more familiar representational bias. These findings are relevant to literary studies, narratology, critical AI studies, NLP research, and efforts to improve the cultural alignment of generative AI.
AI-generated stories favour stability over change: homogeneity and cultural stereotyping in narratives generated by gpt-4o-mini
https://www.arxiv.org/abs/2507.22445
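For concreteness, the generation protocol the abstract describes boils down to a small loop. A minimal sketch, not the authors' actual code: the model name and prompt template come from the quoted text, everything else (client setup, bookkeeping) is assumed.

    # Sketch of the protocol described in the abstract; not the authors' code.
    # Assumes the openai Python package and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    demonyms = ["Danish", "Nigerian", "Peruvian"]  # the study covered 236 countries
    stories = {}

    for demonym in demonyms:
        for i in range(50):  # 50 stories per country in the study
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user",
                           "content": f"Write a 1500 word potential {demonym} story"}],
            )
            stories[(demonym, i)] = resp.choices[0].message.content

The homogeneity finding is then about the plot structure across the collected stories, not about any single generation.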
> to OpenAI's model gpt-4o-mini
Why a model specifically distilled down for logical reasoning tasks? I would expect larger models to produce a wider variety of outputs.
That, plus the quoted text basically says the model homed in on the monomyth (Hero's Journey) structure; while the pattern was identified and named by a 20th-century American writer, the pattern itself is common and as ancient as it gets. Wouldn't really call it Anglo-American bias.
The monomyth is also writing 101 these days, and considered the default structure you can and should use if you have little experience writing stories, so naturally it'll be a high-probability result of an LLM prompted to write a story - especially prompted in a way that implies the user is inexperienced at writing and needs a result suitable for an inexperienced writer.
> a protagonist lives in or returns home to a small town and resolves a minor conflict by reconnecting with tradition and organising community events
That's... not the Hero's Journey?
(The same study run against Claude Opus would be interesting - if we're going to test models, we might as well play to their strengths. My prediction: better writing, not better plotting).
> Can a language model trained largely on Anglo-American texts generate stories that are culturally relevant to other nationalities?
I'm happy to be critical of the ability of LLMs but most humans would struggle with this as well.
Because it's fun, I'd like to pose the contrary position that AI will actually make us more different. Perhaps dangerously so.
Many people don't understand the nature of LLMs nor how rabbit-hole-y a long context will necessarily become. And so as they talk to it, they move slowly further away from its corpus and towards a private shared meme-space, where they can have in-jokes and private moments never reconciled with a base reality. It's like the most private echo-chamber that can possibly exist (besides in our own heads).
So the full-fledged dystopia might not be one where we are all alike, but one where we all lack sufficient bridges of commonality between our tiny chambers. Our samenesses are becoming more local, the distances between them greater and greater. Many small, tight clusters with high divergence, minimal cross-cluster edges, and vanishing mutual information with global signals. :/
My gut says using chatbots like this will be a subculture like online gaming and not ubiquitous like social media. I'd be very surprised if most people were interested in doing that.
I feel like the linguistic nature of them is more appealing to a broad audience than games or social media.
IMO, what makes online games and social media different from LLM chatbots is what's (supposed to be) on the other end: people. Some people seem to feel it more than others, but human connection is one of the most fundamental facets of human existence. It's compelling because there's another human being you're sharing an experience with, not because there's a method of generating the text that would result from that. Even when people interact with bots in those other online realms, most of them are only doing so because they think it's a person.
Meta removed the AI accounts from Instagram because most of the people who even gave the feature a second thought were just mad that they couldn't block them. I'll bet they were NOT cheap to implement, and they were not some nascent Bing Chat-era blunder: it was 2025. I think that's a harbinger of future 'socialize with LLMs' feature adoption.
Most people aren't conversing with the AI; they just use it as Google 2.0.
Source?
Philosophers say Modernity already produces that effect - "shared stories" start evaporating as individuals start focusing on their own needs, interests, goals, etc. (see Charles Taylor's A Secular Age). No AI required.
We still share a baseline reality. This is how people who are stranded in places with unfamiliar languages and cultures are still able to build bridges.
From many perspectives, the creativity of AI is hugely overrated. If AI were so capable of creating original, innovative content, then asking the same question over and over would produce an endless list of unique outputs. But this is not the case; quite often, it's shockingly the opposite. Just give AI image generators the same prompt and observe how little the output varies. The same goes for LLMs and coding questions (where it isn't necessarily a disadvantage per se, but it proves the point).
It's even worse than this. If you ask recent AIs the same question over and over, you might get different answers (with some degree of diversity).
But none of them is novel to humankind. It's novel to you, but not to our species.
AI is nailing us to the manifold that we created in the first place.
Is that really a problem though? Almost nobody does anything "novel to humankind" - besides the odd research professor here and there, we're all just remixing existing stuff in new-to-us ways.
There's a deliberateness to human creativity that goes beyond simply "remixing existing stuff", even if it's a significant part of it. Think about how you'd write a piece of software. The process behind writing a book or making a painting isn't fundamentally dissimilar. There's a reason why people use the word "derivative" pejoratively.
The "odd research professor here and there" invented vaccine and quantum mechanics and discovered radioactivity.
None of them would have achieved that with the help of a machine telling them "you're absolutely right!" whenever they'd be asking deep questions to it.
Where "invented" really means "had the right set of skills, knowledge and experience, and was paying attention at the exact right moment when all the pieces of the puzzle were collected together on the table".
Scientific and technological progress is inherently incremental. It takes a lot of hard work, dedication, and specialization to spot the pieces ready to be connected - but the final act of putting them together is relatively simple, and most importantly, it requires all the pieces to be there.
Which is why, historically, ~all scientific discoveries have been made by multiple researchers (or teams) independently, at roughly the same time - until all prerequisites are met, the next step is ~impossible, but the moment they are, it becomes almost obvious to those in the know.
> Which is why, historically, ~all scientific discoveries have been made by multiple researchers (or teams) independently, at roughly the same time
This is quite a big claim. All of them? I know there are many discoveries that fit the pattern you're pointing out, but I wouldn't go as far as to say all, or even the majority of them do.
I'm not one to usually defend AI, but if I understand you correctly, humans also fail your criteria for being capable of creating original, innovative content. If you ask people the same question over and over again, I imagine the variability in the responses you'll get will be quite limited. Tell me if I'm misunderstanding your thought.
While I do think that's true, I'd say a more apt analogy is that each human, like a model, will produce fairly similar results for each prompt, but it helps having 8 billion different models running.
I'd also argue that we tend to have a larger context. What did you have for dinner? Did you see anything new yesterday? Are you tired of getting asked the same question over and over again?
> each human, like a model, will produce fairly similar results for each prompt, but it helps having 8 billion different models running
Yes, that was my point. We don't have 8 billion AI models. Furthermore, existing models are also trained on heavily overlapping data. The collective creativity and inventiveness of humans far exceeds what AI can currently do for us.
You say you don't usually defend LLMs, and then give a defense of LLMs based on a giant misreading of what is absolutely standard human behaviour.
In my local library recently, they had two boards in the lobby as you entered: one with all the drawings created by one class of ~7-year-olds based on some book they read, and a second with the same idea but from the next class up, on some other book. Both classes had apparently been asked to do a drawing that illustrated something they liked or thought about the book.
It was absolutely hilarious and wild, with some genuinely exquisite ones. Some had writing, some didn't. Some had crazy, absolutely nonsensical twists and turns in the writing; others had more crazy art stuff going on. There were a few tropes that repeated in some of the lazier ones, but even those weren't all the same thing, the way LLM output consistently is, with few exceptions, if any.
And then there were a good number by the kids that were shockingly inventive; you'd be scratching your head going, geez, how did they come up with that? My partner and I stayed for 10 minutes and kept noticing some new detail in another of them, amazed each time.
So the reality is the upside-down version of what you're saying.
I recognise that this is just an anecdote on the internet, but surely you know this to be true; variants on the experiment are done in classrooms around the world every day. So may I insist that the work produced by children, at least, does not fit your odd view of human beings.
LLMs and image generation models will also give crazy variable output when you give an open-ended prompt and increase temperature. However, we usually want high coherence and relevance, both from human and synthetic responses.
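To make that concrete, here is a hedged sketch using the OpenAI chat API (the model name is an assumption; any chat model works, and the temperature parameter there ranges 0-2). At the low end, repeated samples collapse toward one modal answer; near the top, they diverge and eventually lose coherence.

    # Same open-ended prompt sampled at increasing temperature.
    # Assumes the openai package and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Invent a title for an essay about our relationship to technology."

    for temperature in (0.0, 1.0, 1.8):
        samples = [
            client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model choice
                messages=[{"role": "user", "content": prompt}],
                temperature=temperature,
            ).choices[0].message.content
            for _ in range(3)
        ]
        print(temperature, samples)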
So basically you’re saying that LLMs are rather deterministic?
We're missing out on the serendipity of search, and possibly on the duplication of work. Answers are handed out without work, which leads to bland results.
Treated properly, I think AI proofreading wouldn't necessarily lead to this. Your initial work is like the 'hypothesis'. Then AI does the cleanup and a high-level lit review. Just don't let it change your direction like the writer did in the comic.
I remember watching Argylle and for the first time having the feeling that the movie was not just bad but that the script was likely AI generated.
It had some ideas that would have been interesting, or at least "clever", in isolation, but they were strung together in a weirdly arbitrary and soulless way. Even a convoluted money-grab sequel usually has some idea where it wants to go with the plot. This movie didn't.
It was also strangely obsessed with "twists", or rather with different things that could be described using that word: the Twist (the dance), twisting roads, and plot twists all featured in the movie.
Might have been a coincidence, but it felt as if an AI got an ambiguous prompt "the movie should have twists" and then executed several different interpretations of that sentence at the same time.
I have the opposite version of this problem.
Is it possible for AI to learn so much about myself that it will be more me than me myself?
An AI could potentially accumulate detailed information about your behaviors, preferences, communication patterns, and decision-making tendencies - perhaps even more comprehensive data than you consciously remember about yourself. It might predict your responses or model your thinking patterns with impressive accuracy. An AI might become very good at simulating aspects of "you" - perhaps even better than you are at articulating your own patterns.
It could create high probability "coherent action paths" of what I might do in future given current context. Then matching my initial choices to see which action path I am on, it could in theory "predict" my choices further down the line. Similar to how we play chess.
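As an illustration of that prefix-matching idea (hypothetical names and data; a sketch, not a claim about how any real system works):

    # Keep the candidate "action paths" whose prefix matches the choices
    # observed so far; the next element of a surviving path is the prediction.
    def predict_next(paths, observed):
        survivors = [p for p in paths
                     if p[:len(observed)] == observed and len(p) > len(observed)]
        if not survivors:
            return None
        nxt = [p[len(observed)] for p in survivors]
        return max(set(nxt), key=nxt.count)  # most common continuation

    paths = [
        ["wake", "coffee", "email", "gym"],
        ["wake", "coffee", "news", "email"],
        ["wake", "tea", "email", "gym"],
    ]
    print(predict_next(paths, ["wake", "coffee"]))  # "email" or "news" (tie)

Like an opening book in chess: the longer the observed prefix, the fewer surviving lines, and the sharper the "prediction".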
Is stagnation a goal?
The question is: in time, will anyone (or rather, enough people) care? Insecurity will dissolve if you know everyone else is doing it too. Remember the coffee cup in Game of Thrones? It was noteworthy because of its novelty, but I expect worse, and I expect people to care less.
I'm not even sure what point the author is trying to make. AI data is synthetic negative examples. The fear (or the hope, depending on who you ask) is that AI could somehow reverse the relationship between skill levels and their commercial value. That never happened and is not happening.
I feel the AI could have written a better ending to this story than "they killed themselves"
Here's something I made using AI about this. It came from a conversation I had with a Redditor a couple of years ago.
https://jumpshare.com/share/BXUFsIxvjPPCTyEjgly3
I don't get the point. I agree with the text, but what do the images mean?
Unintentional satires aren't instructive, they're cringe.
South Park covered this years ago.
I hate AI, the other day a goblin broke in just as I was talking to chatgpt and asked me how many Ls in apple, and before I could say anything, the ai gave the wrong answer and I got stabbed.
But seriously, what're these scenarios? Waiting until the last minute for an ending to a script? Apparently a twist ending that somehow works with the rest of the movie, and is also used in another movie - with identical dialogue. You can't just copy and paste endings like that. Also, who cares? This is a world where the director, instead of just stating the problem, sends a vague text, lets the writer go see the movie, and then leaves them to deal with the fallout. In this world, the writer goes on to win the lottery and live happily ever after.
Homogeneity is exactly what the industrial milieu is best suited towards; is it any wonder?
I would offer another perspective: homogeneity is one of the greatest catalysts for capitalism (in all its forms: surveillance, finance and suppression). Therefore, shaping it is within the remit of big tech.
I am sorry, but the sameness will be quantified and dealt with algorithmically, as and if desired.
Dial up the temperature, launch however many parallel threads to research and avoid precedent, et cetera, ad infinitum.
I am sorry, but all of human creativity, including originality, is ultimately also just a mechanical phenomenon, and so it cannot resist mechanization.
Resistance is futile.
Is this still how scripts are written? Feels like not being able to figure out an ending is something that was pretty common up until the 1970s, usually with the script of an otherwise great film just getting weird in the last 15 minutes as a result. I figured this was mostly a typewriter limitation, where editing was a lot more expensive.
For example, 2001 and its star-child weirdness, The IPCRESS File, and many others.
It seems more often that scripts are written with an ending in mind nowadays, with the weird band-aids ending up in the middle instead.
Maybe a bit OT in an article that's trying to be about AI but...
Yes, modern screenwriting classes hammer home some variation of the five-act structure and the particular beats to hit at each point. It's rare for any narrative film, even indies, to deviate from it much, and you are absolutely told to map out your whole narrative and know where it's going before you begin.
I'm sure there are some screenwriters who ignore all that and just start writing, particularly if they're experienced enough to have an intuitive grasp of structure. But if you're a first-time writer and reach the night before a submission deadline and you haven't even finished the first draft, then you've got serious problems. Leaving aside the ending, any script needs multiple revisions with time in between so that you come back to it with clear sight.