I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
It's not just LLMs; it's how the algorithms promote engagement: rage bait, videos with obvious inaccuracies, and so on. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that promote cohorts' and individuals' preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
> It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
I suspect it got worse with the advent of algorithm-driven social networks. When rage inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithms-free platforms.
I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator because that's what people want to hear, but it's most likely going to end up a moderator's assistant.
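The moderator's-assistant idea boils down to a triage step between "publish" and "hold for a human". A minimal sketch, with a keyword heuristic standing in for the LLM call (the marker list, threshold, and function names are all made up for illustration):

```python
# Sketch of a moderator's assistant: score each post and hold anything
# suspicious for human review instead of auto-removing it. The keyword
# heuristic below is a toy stand-in for a real LLM call.

RAGE_BAIT_MARKERS = ["outrage", "you won't believe", "destroyed", "wake up"]

def score_post(text: str) -> float:
    """Toy stand-in for an LLM rating rage bait from 0.0 to 1.0."""
    lowered = text.lower()
    hits = sum(marker in lowered for marker in RAGE_BAIT_MARKERS)
    return min(1.0, hits / 2)

def triage(posts: list[str], threshold: float = 0.5):
    """Split posts into (published, held_for_human_review)."""
    published, held = [], []
    for post in posts:
        (held if score_post(post) >= threshold else published).append(post)
    return published, held

published, held = triage([
    "Interesting write-up on SoC bring-up.",
    "You won't believe how they DESTROYED the economy. Wake up!",
])
```

The key design choice is that the model only queues posts; the removal decision stays with the human moderator.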
> I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
I know that some folks dislike it, but Bluesky and atproto in particular have provided the perfect tools to achieve this. There are some people, largely those who migrated from Twitter, who mostly treat Bluesky like an all-liberal version of Twitter, which results in a predictably toxic experience, like bizarro-world Twitter. But the future of a less toxic social media is in there, if we want it. I've created my own feeds that allow topics I'm interested in and blacklist those I'm not -- I'm in complete control. For what it's worth, I've also had similarly pleasant experiences using Mastodon, although I don't have the same tools that I do on Bluesky.
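The allow/blacklist feeds described above reduce to a simple filter. A toy sketch (the topic lists and whole-word matching are invented for illustration; this is not atproto's actual feed-generator API):

```python
# Toy version of a custom feed: drop posts touching blocked topics,
# keep posts touching allowed ones, ignore everything else.
# ALLOW/BLOCK contents and the word-level matching are illustrative only.

ALLOW = {"retrocomputing", "typography", "gardening"}
BLOCK = {"brexit", "outrage"}

def keep(post: str) -> bool:
    """True if the post matches an allowed topic and no blocked one."""
    words = set(post.lower().split())
    if words & BLOCK:           # any blocked topic wins
        return False
    return bool(words & ALLOW)  # otherwise require an allowed topic

feed = [p for p in [
    "new typography specimen just dropped",
    "more brexit outrage today",
    "gardening update: the tomatoes survived",
] if keep(p)]
```

A real feed generator would classify posts far more robustly than whole-word matching, but the control structure -- user-owned allow and deny lists -- is the point.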
I personally don't feel like an ultra-filtered social media feed which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people. To me, only seeing things I know I'm already interested in is no better than another company curating it for me.
I tried Bluesky and wanted to like it. My account got flagged as spam, still no idea why. Ironically, it could be another way of losing one's voice to an LLM :)
> My account got flagged as spam, still no idea why.
This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?
> should be heavily regulated.

By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like “banning algorithms” or something. Most commenters haven’t stopped to realize that Hacker News is an algorithmic social media site (are we not socializing here, with the order of posts and comments determined by a black-box algorithm?).
My view is that they are just exposing issues with the people in said societies, and now it's harder to ignore them. Much of the hate, fear, and envy that I see on social networks has other causes, but people are having difficulty addressing those.
With or without social networks, this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it will be something transformative in the constructive direction, not the world-ending one.
I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.
Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.
You can have a LinkedIn profile without reading the feed.
This is literally how most of the world uses LinkedIn
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.
Better suggestion: Ignore the feed if you don’t like it.
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
I got out of Twitter for a few reasons; part of what made it unpleasant was that it wasn't just my own activity that adjusted my feed: it also seemed affected by what the other people I connected to did.
One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I actually think we're overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same Medium-style blog posts, and the same corporate tone.
Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
What's more, the suspicion that something was written by AI causes you to view any writing in a less charitable fashion. And once it's been approached from that angle, it's hard to move the mental frame back to being open to the writing. Even untainted writing is infected by the smell of LLMs.
If the writer’s entire process is giving a language model a few bullet points… I’d rather they skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
Maybe. Nature abhors a vacuum. I personally suspect that something new will emerge. For better or worse, some humans work best when weird restrictions are imposed. That said, yes, the wild '90s net is dead. It probably has been for a while, but we're all mourning.
Not quite dead yet. For me the rise of LLMs and BigTech has helped me turn more away from it. The more I find Ads or AI injected into my life, the more accounts I close, or sites I ignore. I've now removed most of my BigTech 'fixes', and find myself with time to explore the fun side of hacking again.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
There are still small pockets with actual humans to be found. The small web exists. Some forums keep on going; I'm still shitposting on Something Awful after twenty years, and it's still quite active. Bluesky has its faults, but it also has, for example, an active community of scholars you can follow and interact with.
100%. I miss trackers and napster. I miss newgrounds. This mobile AI bullshit is not the same. I don't know why, but I hate AI. I consider myself just as good as the best at using it. I can make it do my programming. It does a great job. It's just not enjoyable anymore.
I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.
LLMs standardize communication the same way standardization came with expanding empires (culture), book printing (language), and the industrial revolution (power looms, factories, assembly procedures, etc.).
In that process, culture, dialects, languages, craftsmanship, and ideas that were interesting but not as "scale-able" (or simply not used by the people in power) were often lost, replaced by easier-to-produce but often lower-quality products, through the power of "affordable economics", not active conflict.
We already have the concise, buzzword-heavy English business register trained into ChatGPT for formal messaging (or, for informal messaging, the casual, overexcited American one), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.
This hits close to home; I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real, and it's like scrolling through a corporate newsletter disguised as personal takes.
What if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas, then rewrite everything by hand to sharpen that personal blade. Is the atrophy risk still huge?
It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.
Don't look at social media. Blogging is kinda resurging. I just found out Dave Barry has a Substack: https://davebarry.substack.com/ That made me happy :) (Side note: did he play "Squirrel with a Gun"??!!!)
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
Not sure if it's an endemic problem, just yet, but I expect it to be, soon.
For myself, I have been writing, all my life. I tend to write longform posts, from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
I just want to chime in and say I enjoy reading your takes across HN; it's also inspiring how informative and insightful they are. Glazing aside, please never stop writing.
Tbh I prefer to read/skim the comments first and only occasionally read the original articles if the comments make me curious enough. So far I've never ended up checking something that seemed AI-generated.
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
I continually resist the urge to deploy my various personas onto hn, because I want to maintain my original hn persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some tell tale signs.
There are already many AI-generated submissions on HN every day. Comments maybe less so, but I've already seen some, and the amount is only going to increase with time.
Every time I see AI videos in my YouTube recommendations I say “don’t recommend this channel”, but the algorithm doesn’t seem to get the hint. Why don’t they offer a preference option of “don’t show me AI content”?
I've seen AI generated comments on HN recently, though not many. Users who post them usually only revert back to human when challenged (to reply angrily), which hilariously makes the change in style very obvious.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
Humans evolved to spend fewer calories and avoid cognitively demanding tasks.
People will spend time on things that serve utility AND are calorically cheap. Doomscrolling is a more popular pastime than, say, completing Coursera courses.
They get drowned out by bots and misinformation and rage bait and 'easiness'.
Economy is shit? Let's throw out the immigrants because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Just because you became an adult doesn't mean your education should stop mattering (and I do not mean having one degree = expert).
Let's clarify: maybe the best ideas would win out in a "level marketplace", where the consumer actually is well informed about the products, the products' true costs have to be priced in, and there were no ad agencies.
Instead, we have misinformation (PR), lobbying, bad regulation written by big companies to entrench their products, and corruption.
So, maybe, like communism, in a perfect environment the market would produce what's best for the consumers/population, but as always, there are minority power-seeking subgroups that will have no moral barriers against manipulating the environment to push their product/company.
The global alignment also happens through media like TV shows and movies, and the internet overall.
I agree; I think we should try to do both.
In Germany, for example, we have very few typically German brands left; our brands became very global. If you go to Japan, for example, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them are slightly different, from different small producers.
If you go to a motorway/highway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German cola, Fritz-kola, is a niche/hipster thing even today.
In one of the WhatsApp communities I belong to, I noticed that some people use ChatGPT to express their thoughts (probably asking it to make their messages more eloquent or polite or whatever).
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
A friend of mine does this; English is his second language and his tone was always being misconstrued. I'd bug him about his slop, but he'll take that over getting his tone misconstrued. I get it.
Also, these models are being used to promote fake news, create controversy, or interact with real humans for unknown purposes.
Talking to some friends, they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because it might just be a bot.
In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's Straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
> In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
Are they your ideas if they go through a heavy-handed editor? If you've had lots of conversations with others to refine them?
I dunno. There's ways to use LLMs that produces writing that is substantially not-your-ideas. But there's also definitely ways to use it to express things that the model would not have otherwise outputted without your unique input.
I hate when people hijack progressive language - like in your case the language of accessibility - for cheap marketing and hype.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
I don't disagree with a lot of what you're saying but I also have a different frame.
Even if we take your claim that LLMs don't make people better writers as true (which I think there's plenty to argue with), that's not the point at all.
What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel. And people now have tools to communicate better than they otherwise would have.
Most people aren't trying to become good writers. That's true before, and true now.
On the other hand, this argument probably isn't worth having. If your frame is that LLMs are expensive toys that ruin everything -- well, that's quite an aggressive posture to start with, and is unlikely to yield either a useful conversation or a particularly delightful future for you.
It basically boils down to "I want the external validation of being seen as a good writer, without any of the internal growth and struggle needed to get there."
I mean, kinda, but also: not only are someone’s meandering ramblings a part of a process that leads to less meandering ramblings, they’re also infinitely more interesting than LLM slop.
I seriously doubt people didn't write blog posts or articles before LLMs because they didn't know how to write.
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ). It helps that chatgpt has access to such a wide audience to allow that level of language penetration. I am not saying don't have voice. I am saying: take what works.
> I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ).
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
There are deterministic solutions for grammar and spellcheck. I wouldn't rely on LLMs for this. Not only is it wasteful, we're turning to LLMs for every single problem which is quite sad.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I think there's a difference between using an LLM as an editor and asking the LLM to write something for you. I find the output of the former still has a far clearer tonal fingerprint than the latter.
And who's to say your idiosyncratic expression wouldn't find an audience as it changes over time? Just you saying that makes me curious to read something you wrote.
Here's my guess: your post reflects your honest opinion on the matter, with some LLM help. It elaborated on your smartphone analogy, and may have tightened up the text overall.
Transformation seems reasonable for that purpose. And if we were friends, I'd rather read your idiosyncratic raw output.
At some point, generation breaks a social contract that I'm using my energy and attention consuming something that another human spent their energy and attention creating.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
I should probably do that too. I once wrote an email that to me was just filled with impersonal information. The receiver was somebody I did not personally know. I later learned I made that person cry. Which I obviously did not intend. I did not swear or call anyone names. I basically described what I believe they did, what is wrong about that and what they should do instead.
I call it the enshittification fixed point. Not only are we losing our voice; we'll soon enough start thinking and talking like LLMs. After a generation of kids grows up reading and talking to LLMs, that will be the only way they know how to communicate. You'll talk to a person and not be able to tell the difference between them and an LLM, not because LLMs became amazing, but because our writing and thinking styles became more LLM-like.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar does seem to be empty. Here is a bullet-point list of why consuming the cookies was the right thing to do..."
Social media is a reminder that we were losing our voice to mass media consumption way before LLMs were a thing.
Even before LLMs, do you want to be a big content creator on YouTube, Instagram, tiktok...? You better fall in line and produce content with the target aesthetic. Otherwise good luck.
I’ve realized that if you say that pro-AI commenters are actually bot accounts, there’s not really much that can be done to prove otherwise.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
Process before product, unless the product promises to deliver a 1000% return on your investment. Only the disciplined artist can escape that grim formula.
"Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager."
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.
> I would be intrigued by using an LLM to detect content like this and hold it for moderation.

It would just become part of the shitshow, cf. Grok.
> it's how the algorithms promote engagement.
They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.
> should be heavily regulated.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like “banning algorithms” or something. Most commenters haven’t stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).
My view is that they are just exposing issues with the people in said societies, and now it's harder to ignore them. Much of the hate, fear, and envy that I see on social networks has other causes, but people are having difficulty addressing those.
With or without social networks, this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it will be something transformative, not in the world-ending direction but in the constructive direction.
I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.
Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.
N.B. Still employed btw.
You can have a LinkedIn profile without reading the feed.
This is literally how most of the world uses LinkedIn
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.
Sorting the feed by "recent" at least gives you a randomized assortment of self aggrandizement, instead of algorithmically enhanced ragebait
Better suggestion: Ignore the feed if you don’t like it.
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
I have a special, deep, loathing for linkedin. I honestly can't believe how horrible it is and I don't understand why people engage with it.
This. Linkedin is garbage, yet I still use it because there are no competitors. This is what happens in a monoculture.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
I got out of Twitter for a few reasons; part of what made it unpleasant was that it didn't seem to be just what I did that adjusted my feed, but that it was also affected by what the other people I connected to did.
>it’s not just X — it’s Y
One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.
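A minimal sketch of what a preference-first ranker could look like, as opposed to an engagement-maximizing one. The field names and interest weights are invented for illustration:

```python
# Rank posts purely by overlap with the user's declared interests,
# ignoring engagement signals (likes, outrage, watch time) entirely.

def rank_for_user(posts, interests):
    def score(post):
        return sum(interests.get(topic, 0.0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["brexit", "politics"]},
    {"id": 2, "topics": ["rust", "osdev"]},
]
interests = {"rust": 1.0, "osdev": 0.8}  # this user never asked for politics
print([p["id"] for p in rank_for_user(posts, interests)])  # → [2, 1]
```

Nothing here is technically hard; the thread's point is that an engagement-driven objective, not difficulty, is why feeds don't work this way.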
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I actually think we’re overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same Medium-style blog posts and the same corporate tone.
Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
And what's more is the suspicion of it being written by AI causes you to view any writing in a less charitable fashion. And because it's been approached from that angle, it's hard to move the mental frame to being open of the writing. Even untinged writings are infected by smell of LLMs.
If the writer’s entire process is giving a language model a few bullet points… I’d rather them skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
The Internet will become truly dead with the rise of LLMs. The whole hacking culture within 90s and 00s will always be the golden age. RIP
Maybe. Nature abhors a vacuum. I personally suspect that something new will emerge. For better or worse, some humans work best when weird restrictions are imposed. That said, yes, the wild 90s net is dead. It probably was for a while, but we're all mourning.
I hacked in the 90s and 00s, wasn’t that great/golden if you took your profession seriously…
Not quite dead yet. For me the rise of LLMs and BigTech has helped me turn more away from it. The more I find Ads or AI injected into my life, the more accounts I close, or sites I ignore. I've now removed most of my BigTech 'fixes', and find myself with time to explore the fun side of hacking again.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
There are still small pockets with actual humans to be found. The small web exists. Some forums keep on going; I'm still shitposting on Something Awful after twenty years and it’s still quite active. Bluesky has its faults but it also has, for example, an active community of scholars you can follow and interact with.
100%. I miss trackers and napster. I miss newgrounds. This mobile AI bullshit is not the same. I don't know why, but I hate AI. I consider myself just as good as the best at using it. I can make it do my programming. It does a great job. It's just not enjoyable anymore.
I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.
How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
In that process interesting but not as "scale-able" (or simply not used by the people in power) culture, dialects, languages, craftsmanship, ideas were often lost - and replaced by easier to produce, but often lesser quality products - through the power of "affordable economics" - not active conflict.
We already have the English 'business-concise, buzzword-heavy' formal messaging trained into ChatGPT (or, for informal use, the casual overexcited American), which I'm afraid might take hold of global communication the same way with advanced LLM usage.
Hits close to home after I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real and it's like scrolling through a corporate newsletter disguised as personal takes.
what if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. atrophy risk still huge?
Nudge to post more of my own mess this week...
It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
Ironically I find it hard to tell whether this writing is LLM or merely a bit hollow and vapid.
Not sure if it's an endemic problem, just yet, but I expect it to be, soon.
For myself, I have been writing, all my life. I tend to write longform posts, from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
[0] https://littlegreenviper.com/miscellany/
I just want to chime in and say I enjoy reading your takes across HN, it's also inspiring how informative and insightful they are. Glazing over, please never stop writing.
Thanks so much!
Where are these places where everything is written by a LLM? I guess just don’t go there. Most of the comments on HN still seem human.
i think the frontpage of hn has had at least one llm-generated blog post or large github readme on it almost every day for several months now
Tbh I prefer to read/skim the comments first and only occasionally read the original articles if comments make me curious enough. For now I never ended checking something that would seem AI generated.
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
I continually resist the urge to deploy my various personas onto hn, because I want to maintain my original hn persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some tell tale signs.
There are already many AI-generated submissions on HN every day. Comments maybe less so, but I've already seen some, and the amount is only going to increase with time.
Every time I see AI videos in my YouTube recommendations I say “don’t recommend this channel”, but the algorithm doesn’t seem to get the hint. Why don’t they make a “don’t show me AI content” preference option?
I've seen AI generated comments on HN recently, though not many. Users who post them usually only revert back to human when challenged (to reply angrily), which hilariously makes the change in style very obvious.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
I see them regularly on several subreddits I frequent.
It’s ok. Most of our opinions suck and are unoriginal anyway.
The few who have something important to say will say it, and we will listen regardless of the medium.
Humans are evolved to spend fewer calories and avoid cognitively demanding tasks.
People will spend time on things that serve utility AND are calorifically cheap. Doomscrolling is a more popular past time than say - completing Coursera courses.
They get drowned out by bots and misinformation and rage bait and 'easiness'.
Economy is shit? Let's throw out the immigrants because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Just because you became an adult, your education should still matter (and I do not mean having one degree = expert).
The liberal idea that the best ideas win out in the marketplace turned out to be laughably wrong.
I'd argue that they do win out, it's just not the ideas that we thought were best.
"Best idea", but it's "best" by memetic reproduction score, not by "how well does this solve a real problem?"
Same thing with evolution: "survival of the fittest" doesn't mean "survival of the muscle", just whatever's best at passing on DNA.
Wouldn’t say it’s a liberal idea. It was a foundational argument in jurisprudence, from Holme’s dissent in the Abram’s case.
Let's clarify: maybe the best ideas would win out in a "level marketplace", where consumers were actually well informed about the products, the products' true costs had to be priced in, and there were no ad agencies.
Instead, we have misinformation (PR), lobbying, bad regulation made by big companies to trench their products, and corruption.
So, maybe, like communism, in a perfect environment the market would produce what's best for the consumers/population, but as always, there are minority power-seeking subgroups that will have no moral barriers against manipulating the environment to push their product/company.
The global alignment also happens through media like TV shows and movies, and the internet overall.
I agree I think we should try to do both.
In Germany, for example, we have very few typically German brands. Our brands became very global. If you go to Japan, for example, you will find the same product, like ramen or cookies or cakes, a lot, but all of them are slightly different, from different small producers.
If you go to an autobahn motorway/highway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German coke, fritz-kola, is a niche/hipster thing even today.
In one of the WhatsApp communities I belong to, I noticed that some people use ChatGPT to express their thoughts (probably asking it to make their messages more eloquent or polite or whatever).
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
A friend of mine does this since English is his second language and his tone was always misconstrued. I'd bug him about his slop, but he'll take that over getting his tone misconstrued. I get it.
Also, these models are being used to promote fake news, create controversy, or interact with real humans for unknown purposes.
Talking to some friends, they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because it might just be a bot.
In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
> In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
> In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
They aren't your ideas if its coming out of an LLM
Are they your ideas if they go through a heavy-handed editor? If you've had lots of conversations with others to refine them?
I dunno. There's ways to use LLMs that produces writing that is substantially not-your-ideas. But there's also definitely ways to use it to express things that the model would not have otherwise outputted without your unique input.
I hate when people hijack progressive language - like in your case the language of accessibility - for cheap marketing and hype.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
I don't disagree with a lot of what you're saying but I also have a different frame.
Even if we take your claim that LLMs don't make people better writers as true (which I think there's plenty to argue with), that's not the point at all.
What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel for ideas. And people now have tools to communicate better than they would have been.
Most people aren't trying to become good writers. That's true before, and true now.
On the other hand, this argument probably isn't worth having. If your frame is that LLMs are expensive toys that ruin everything -- well, that's quite an aggressive posture to start with and is both unlikely to bear a useful conversation or a particularly delightful future for you.
> I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
Oh I know. I called it hijacking because the result is as progressive as a national socialist is a socialist.
> What I'm saying is people are communicating better.
Actually they’re no longer communicating at all.
It basically boils down to "I want the external validation of being seen as a good writer, without any of the internal growth and struggle needed to get there."
I mean, kinda, but also: not only are someone’s meandering ramblings a part of a process that leads to less meandering ramblings, they’re also infinitely more interesting than LLM slop.
It's probably true that it reduces the barrier to entry, you don't refute that point in your post. You just call it cheap marketing and hype.
Barriers to entry can be a good thing. It’s a filter for low effort content.
It doesn’t. You’re not entering anything with an LLM.
I seriously doubt people didn't write blog posts or articles before LLMs because they didn't know how to write.
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
> Write in your voice.
I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts (drift, scaffolding, and so on). It helps that ChatGPT has access to such a wide audience to allow that level of language penetration. I am not saying don't have voice. I am saying: take what works.
> I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ).
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
FWIW this prompt works very well for me:
Your mileage may vary.
There are deterministic solutions for grammar and spellcheck. I wouldn't rely on LLMs for this. Not only is it wasteful; we're turning to LLMs for every single problem, which is quite sad.
The LLM v human debate here reminds me of the now dormant "Are you living in a simulation?" discussions of previous decades.
It is not a zero sum game.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I promise to tell the truth.
I think there's a difference between using an LLM as an editor and asking the LLM to write something for you. The output in the former I find to still have a far clearer tonal fingerprint than the latter.
And who's to say your idiosyncratic expressions wouldn't find an audience as they change over time? Just you saying that makes me curious to read something you wrote.
Here's my guess- your post reflects your honest opinion on the matter, with some LLM help. It elaborated on your smartphone analogy, and may have tightened up the text overall.
Transformation seems reasonable for that purpose. And if we were friends, I'd rather read your idiosyncratic raw output.
At some point, generation breaks a social contract that I'm using my energy and attention consuming something that another human spent their energy and attention creating.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
I should probably do that too. I once wrote an email that to me was just filled with impersonal information. The receiver was somebody I did not personally know. I later learned I made that person cry. Which I obviously did not intend. I did not swear or call anyone names. I basically described what I believe they did, what is wrong about that and what they should do instead.
If someone cries about an email you sent, the problem isn’t with you.
LLMs have now robbed you of the opportunity to make your communication clearer
Please share what you told the LLM! I can't be the only curious one.
I don't see signs of LLM writing in your comment so I'll have to guess no.
If you didn't intentionally try and trick us, then yes, you used an LLM.
Soon, we'll be nostalgic for social media. The irony.
I call it the enshittification fixpoint. Not only are we losing our voice, we'll soon enough start thinking and talking like LLMs. After a generation of kids grows up reading and talking to LLMs, that will be the only way they'll know how to communicate. You'll talk to a person and you won't be able to tell the difference between them and an LLM, not because LLMs became amazing, but because our writing and thinking styles became more LLM-like.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."
I think that this is imbalanced in favour of wannabe influencers, who want to be consistent and popular.
If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.
But yes, it's sad to see that your original stuff is lost in the sea of slop.
Sadly, as long as there will be money in publishing, this will keep happening.
For those of us not constantly online, we're doing just fine.
I suppose when your existence is in the cloud, the fall back to earth can look scary. But it's really only a few inches down. You'll be ok.
We lose our voice based on how we use our voice.
We improve our use of words when we work to improve our use of words.
We improve how we understand by how we ask.
Social media is a reminder we are losing our voice to mass media consumption way before LLMs were a thing.
Even before LLMs, do you want to be a big content creator on YouTube, Instagram, tiktok...? You better fall in line and produce content with the target aesthetic. Otherwise good luck.
I’ve realized that if you say that pro AI commenters are actually bot accounts, theres not really much that can be done to prove otherwise.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
Process before product, unless the product promises to deliver a 1000% return on your investment. Only the disciplined artist can escape that grim formula.
Let's not forget to mention the rise of AI-generated video. You can't really trust any video as real anymore.
"Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager."
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.
whatever bro