The Internet and other technologies have already come for and destroyed much of what I'd call culture, particularly in the West but also all over the world.
Tech replaces almost every human-to-human dependency with a human-to-machine dependency, many voluntary human-to-human interactions with human-to-machine interactions, and most of the remaining human-to-human interactions with human-through-machine-to-human interactions.
AI, as an advancement in tech, is accelerating this.
I fight for culture by making music and food with other people at every opportunity.
I thought this piece was absolutely stellar because it's neither "AI is bad and we should stop all development" nor "You're only allowed to love AI because it's the next big thing."
I genuinely wonder what happens when media consumption is disassembled from its previous forms. Part of that has already happened: podcasts, shows, and movies are now being created with the TikTok clips in mind.
The few times I've watched TikTok, I realized that recommendation algorithms working with human-generated content are already impossible to ignore and best excised. They absorb attention because that's what they're designed to do.
I don't want to know what happens when the algorithms experiment on our attention by themselves because they're hooked up to the actual generation of the content they'll push. When they form hypotheses about what we'll pay attention to, autonomously generate the content, and push it out, where does that take us?
My hope is that these things will become like smoking: available to adults, generally accepted to be a bad thing, and stigmatized to do around other people.
Who defines our culture today?
Large media corporations. It's a bit frustrating at times. Sometimes when I read the news or try to find something on Google, I can sense an editorial bias or narrative. A lot of the time it's subtle, like something being omitted or a tangential issue being addressed instead.
A prime example of this is modern "fact checks". For instance, if you want to explore whether people are arrested for mean tweets in the UK, you can Google "fact check arrested for mean tweets", which brings you to a fact check built on a cherry-picked case of an 11-year-old arrested on "suspicion of violent disorder" and listed as "Misleading". But there are no follow-ups, like what the definition of "violent disorder" is, or whether there are other cases where people were arrested for mean tweets. And since it's deemed "Misleading", you're left to believe that people are not being arrested for mean tweets, which is just not true.
I feel LLMs let you ask follow-ups and reason from first principles much more easily than the canned narrative from some giant corporation does. Even when you do run up against a narrative, it's pretty easy to poke holes in it with follow-up questions.
Of course, the reverse is that you can probably find some evidence that feeds into your existing world view. But I think that, on the whole, AI will have a positive impact on culture in this regard.
https://mleverything.substack.com/p/anatomy-of-a-fact-check
I try to sample well-investigated news articles from both sides of the aisle, and it is unfortunate how often I notice the "omission" of key topics from headline reporting on one side or the other, likely because the conclusion would undermine their leanings in other areas. And rather than exploring such exceptional topics deeply from their side, they just avoid them altogether.
If we really had a convergence of culture on tools that keep ownership with the contributors, the world would be very different. But most people accept the cost of last-generation patterns of control for whatever small benefits they can squeeze out. We are recreating mass media, poorly, as if the future were not already here.
There's something deeply ironic about this piece using an AI-generated video at the top of the page.
Like, of course AI is coming for culture if even The New Yorker, a very well-known American cultural magazine, is willing to leverage it to avoid paying an artist to create a short video.
Nitpick, but I'd assume The New Yorker paid the artist who generated it with AI. And for a piece about AI, artwork generated using AI is fitting. I don't find that a massive problem.
The article doesn't even argue that AI is exclusively horrible and must be banished.
I think that was the point, as explained in the caption to the visual:
> "If A.I. continues to automate creative work, the total volume of cultural “stuff” will increase. New forms, or new uses for existing forms, will pull us in directions we don’t anticipate. Visual by David Szauder; Generated using A.I."
I definitely understand the intention; it still just feels a bit intellectually lazy to me.
Maybe it’s just my own aversion to using AI for creative work coming through… I can’t get myself to even add an AI-generated image to a personal blog post.