Several European governments have jailed people for social media posts. Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
As for falsehoods: some people will be mistaken, some people will lie, and sometimes sarcasm will be misunderstood. Why should anyone be liable? It is on each individual to inform themselves, and to decide what to believe and what to disregard.
Article 19 of the Universal Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
It doesn't say "if your opinion is approved by the government". It doesn't say "if your opinion is correct". It makes no exceptions whatsoever, and that is what we need to strive for.
Telegram founder Pavel Durov on Sunday accused France of asking him to remove some Moldovan channels from the social media platform ahead of the country’s presidential election last year.
In a statement from the Dubai-headquartered company, Durov claimed that French intelligence services asked him “through an intermediary” to help the Moldovan government censor “certain Telegram channels” before the vote on Oct. 20, in which incumbent President Maia Sandu secured a second term in office following a runoff held on Nov. 3.
He said that, after reviews of the channels concerned, a few were found to have violated Telegram’s rules and were subsequently removed.
“The intermediary then informed me that, in exchange for this cooperation, French intelligence would ‘say good things’ about me to the judge who had ordered my arrest in August last year,” Durov said, describing this as “unacceptable on several levels.”
“If the agency did in fact approach the judge — it constituted an attempt to interfere in the judicial process. If it did not, and merely claimed to have done so, then it was exploiting my legal situation in France to influence political developments in Eastern Europe — a pattern we have also observed in Romania,” he further said.
Durov also said that Telegram later received a second list of "Moldovan channels," which he noted were “legitimate and fully compliant with our rules,” unlike the initial list.
CONTINUED...
https://www.aa.com.tr/en/europe/telegram-head-accuses-france...
>Several European governments have jailed people for social media posts. Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
I think quite a few Europeans have lasting and direct experience with totalitarian, oppressive regimes. Which might also explain why they have stricter (or simply more precise) laws governing expression – not as an oppressive tool, but as a safety valve for society.
Silencing speech IS the oppressive regime.
Most societies have decided that some speech should be illegal. The classic example is yelling "FIRE" in a crowded theatre in the absence of a fire.
I think it is good and healthy to have conversations as to what should and should not be protected speech, but I think that there is this rote reaction that kinda boils down to free speech absolutism. But of course, all the free speech absolutists find at some point or another there is some speech they want made illegal.
A great example of this is in the US, where Republicans often outwardly took such a stand when they weren't in power, but recently tried to use the FCC to take a comedian who made light criticism of the regime off the air.
So, silencing speech might not always be the oppressive regime, but it sometimes is.
EDIT: OK, I get the fire/theatre example is a bad one. Instead, consider incitement more broadly. For example incitement to discrimination, as prohibited by Article 20 of the International Covenant on Civil and Political Rights.
Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.
Imagine a situation where a person reasonably believes there was a fire (they saw smoke, but it turned out to be a vape or some sort of smoke machine that was part of the show). They're arrested for the false claim. Others learn that it's too risky. Now, when there's a fire, people are reluctant to speak up just in case. It's the chilling effect that can affect speech. We have whistleblowers who are afraid to speak up. And when the means of speaking out or alerting others (i.e. via email or social media or smartphone) is controlled by a large corporation that may feel threatened...
>They're arrested for the false claim.
This person would be able to provide evidence as to why they thought there was a fire, showing why their belief is genuine and not a lie.
> The classic example is yelling "FIRE" in a crowded theatre in the absence of a fire.
This is from an overturned US Supreme Court opinion, has no basis in anyone's jurisprudence, yet keeps coming up as an example of speech that's permissible to suppress for some reason.
Oliver Wendell Holmes created that example to support jailing a socialist for speaking out against the World War I draft.
> This is from a dissenting opinion of a US Supreme Court justice
No, it's dicta which neither was part of the substantive ruling nor an accurate description of pre-existing law, from the Court’s opinion (which was unanimous, so there was no “dissenting opinion”) in a case that has since been overruled and is notorious for having allowed an egregious restriction on core political speech.
Thank you, I wasn't aware of that.
Regardless, incitement remains an exception to free speech the world over to some degree. Article 20 of the International Covenant on Civil and Political Rights holds that incitement to discrimination is prohibited, for example [0].
My point stands: people of most societies globally believe certain speech should not be protected.
[0] https://www.ohchr.org/en/instruments-mechanisms/instruments/...
Maybe people keep using it because it makes a lot of sense to them, whether or not it's accepted (US) case law.
> yet keeps coming up as an example of speech that's permissible to suppress for some reason
Because they don't actually have an example of non-fraudulent speech that doesn't imminently cause violence and whose ban SCOTUS has upheld. And then when you call them out they'll say "but wait, it's metaphorical". If they had a better example they'd be using it.
Child pornography
Child sexual abuse material is evidence of violence, and an act of violence, all at once.
A regime attempting to kill a large group of people is also oppressive and much worse. If the regime is able to do this because of speech then people are choosing the least worst option.
> If the regime is able to do this because of speech
Okay but that's a big "IF". I suspect a regime attempting to do that might be promulgating a significant amount of propaganda, but I doubt that they're able to be oppressive "because of speech".
What about loss of upward mobility for the middle class, or loss of living wages, mismanaged public institutions, corruption, bribery, collapse of democratic process?
All of this enables or sustains oppressive regimes and doesn't require any kind of speech from citizens. And without these kinds of serious problems, citizens barking nonsense won't result in much. Hindering free speech only makes it easier for a regime to continue to exacerbate these serious problems and continue oppression without being called out.
A regime is an organization.
It can be. But there can be speech where most reasonable people would agree that it should be regulated. E.g. if some dude walks up to your 5 year old child and starts to tell them in intricate detail about his violent sexual fantasy, pretty much everybody notices that the kid's right not to have to hear this outweighs the adult's edgy itch to do this to a child.
And a lot of speech is like this; nearly no speech is consequence-free. I am not saying we should ban any speech that has negative consequences. What I am saying is that, as with other rights, we have to weigh the active freedoms of one person ("the freedom to do a thing") against the passive freedoms of all the others ("the freedom to not have a thing done to you").
With other rights it is the same: you may have a right to carry a firearm and even shoot it. But if you shoot it, for example, in church, other people's right not to have to deal with you shooting that gun in that church outweighs your right to do that.
In the German-speaking part of the EU we decided that the right of literal Nazis to carry their insignia doesn't outweigh the right of others not to have to see the insignia that have brought so much pain and suffering in these lands. To some degree this is symbolic, because it only bans symbols and not ideologies, but hey, I like my government to protect my state from a fascist takeover, because those are kind of hard to reverse without violence.
> not as an oppressive tool, but as a safety valve for the society.
This strikes me as just incorrect. What example from history shows totalitarianism being successfully avoided because of controls on speech?
The first item in the totalitarian playbook is controlling speech, and there are historical examples of that in every single totalitarian regime that I'm aware of.
> It doesn't say "if your opinion is correct".
Opinions cannot be right or wrong.
> It makes no exceptions whatsoever, and that is what we need to strive for.
It certainly does. See libel / defamation / perjury / false representation / fraud / false advertising / trademark infringement.
> Opinions cannot be right or wrong.
I’d say if you can be jailed for a particular opinion, someone has certainly made a judgement call that your opinion is wrong!
People can make judgment calls. Those are opinions. That still doesn't make yours, nor theirs, wrong.
Immoral, unethical, impractical, or contrary to human rights, perhaps.
> People can make judgment calls. Those are opinions.
I'm not sure that's a helpful distinction. In some sense, everything we classify as a 'fact' is a judgement call: is the sun a giant ball of fusing hydrogen? I mean, probably, but maybe we're all living in some sort of simulation and it doesn't really exist at all; or maybe you are living in your own personal "Truman Show", being fed lies by everyone who shows you scientific "evidence" about the sun's nature.
But "the sun is a giant ball of fusing hydrogen" is a different type of statement than "chocolate ice cream is better than vanilla", or "Mozart is better than Beethoven".
I think the most useful distinction is between “opinions” and “beliefs” rather than opinions and facts. A belief represents your confidence in the truth or falsehood of a statement, while an opinion has no underlying objective reality. “Apples are better than peaches” is an opinion. “More people ate apples than bananas in 2024” is a belief; it may be a true belief or a false belief, but there is an answer.
An opinion can't be falsified.
"The sun is a giant ball of fusing hydrogen" has the possibility of being proven false. This means it's either a true or false fact.
If I said "NYC is the capital of the United States"* I'm either lying or mistaken
What makes it a lie vs mistaken? Whether it's a genuine belief, that I have a reason to have the belief. For example if I made the assumption it's the capital because it's the biggest city then I'm mistaken.
It's a lie if I know it's not true, if I ignore information that falsifies the fact.
*To avoid semantics I mean the official capital of the country not like "it's important"
@gwd: absolutely true; all the "facts" I know are either a long series of supporting ideas ("this is a chair and I can sit on it") or something I was told by an authority I trust ("Africa exists").
I still say there is a difference between "Africa exists" and "gwd's statement about the lack of 'facts' is heretical and they should be imprisoned".
> I’d say if you can be jailed for a particular opinion
Can you give an example of someone in a modern democracy jailed for their "opinion"?
To wit, are the examples you're thinking of "statement of opinion", "statement of fact", "pejorative insult", or "incitement"?
Saying "I think <public figure> is an idiot" is an opinion. "The earth is flat" or "The holocaust never happened" are not opinions; neither is, "Kick out all the <insert pejorative here>."
And yeah, in North Korea you'll absolutely be jailed for expressing some opinions. That may make them illegal, but it doesn't make them no longer opinions.
https://www.counterterrorism.police.uk/man-from-harlow-jaile...
https://www.bbc.com/news/articles/c5yl7p4l11po
https://en.wikipedia.org/wiki/Detention_of_R%C3%BCmeysa_%C3%...
https://en.wikipedia.org/wiki/Detention_of_Mahmoud_Khalil
How is "the earth is flat" not an opinion?
People form opinions based on the information they have (or are willing to accept). For their worldview, their opinion is valid. If they don't accept certain voices of reason, they have that right. We saw people not allowed to ask about the origins of the COVID-19 virus because it went against a public narrative. At the beginning of the pandemic, people who expressed that masks should be worn were dismissed even by government officials.
People might not have gone to jail, but they did have voices and access to society limited or removed because of their opinions.
> "The holocaust never happened"
Not really an opinion but it can be a belief. I'm not sure why we are okay with people believing that Earth is ~6000 years old, but not with someone believing that we are in a simulation and everything before e.g. year 1999 is just a collective memory fabrication.
I am not "okay" with either belief, but it's not my place to police other people's beliefs - so long as they don't hurt others.
If you want to visit that idiotic Noah's Ark museum, go.
If you want to prohibit teaching about evolution in schools, go to hell.
>Not really an opinion but it can be a belief.
Yes. You can believe this fact to be false, but you might also be lying. How do you show this? By showing why you believe it to be false.
> "The holocaust never happened" are not opinions
But would you dare state out loud in Germany that, in your opinion, the official number of Holocaust victims is actually much less than what's been widely reported? Even if you had what you believed was solid evidence supporting your argument? I bet you wouldn't.
> neither is, "Kick out all the <insert pejorative here>."
How about voicing your opinion that <people from some country> should be barred from immigrating to <European country> because <crime statistics>? Bet you wouldn't try that either, because your opinion is in "hate speech" territory now.
>Even if you had what you believed was solid evidence supporting your argument? I bet you wouldn't.
Ugh, what?
I doubt that they are jailed for opinions; rather, for lies or threats or defamation etc.
I guarantee you that, if only they could, there are governments that would jail people for opinions. The US would likely have done something to homosexuals, with or without sexual activity. Ditto on supporting Communism 'in your heart'. NK and other countries would do the same (OK, not NK for supporters of communism...).
This is all true, with a few exceptions. For example, incitement to violence or false allegations that do serious reputational damage. It's not sustainable to allow exactly all speech.
Although I mostly agree, I just wanted to make that nuance explicit.
> For example, incitement to violence
GP brought up people being jailed for social media posts, but didn't reference any specifically. In the handful of cases I found via a web search, the charges were related to inciting violence.
GP also brought up the Universal Declaration of Human Rights. Article 30 reads:
Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein.
When one exercising a freedom restricts another's ability to exercise theirs, it is reasonable to expect courts to get involved to sort it out.
> Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
There are only a few European countries that jail people for wrongspeak, and I can't think of a single one of those countries whose population in general is in favor of such laws.
> Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
This argument can be made for government in general, although granted technology does make it easier for a smaller group to overreach. I'm a European and do hear your concern, but I feel comfortable supporting restrictions on speech _as long as_ there is also a functioning and just legal system that those restrictions operate within. Though there does seem to be a worrying trend towards technology bypassing the legal system and just giving enforcement agencies blanket access of late.
We all also have our own cultural biases and blind spots. I offer this not as whataboutism but as a different perspective: I'm _way_ more frightened by the authoritarian police culture in the US (I base this on interactions with the police during a period when I lived there) than I am of the UK government's internet censorship. The internet censorship could do a lot of harm, but I think not as much potential harm as a large militarised police force willing to bust down doors on command from above.
There has never been a functioning and just legal system in the history of mankind. Not to mention that what is "just" is very much up for debate.
Well, sure, it's all relative and no system is perfect. Not every mother is perfect, doesn't mean I escort mine around the house at gunpoint whenever she visits.
> It is on each individual to inform themselves, and to decide what to believe and what to disregard.
That's where the conundrum lies: relying on individual responsibility to protect a whole society from bad actors who use this freedom to break society apart.
How is it solved? No one knows. What we know is that relying on individuals to each act on their own won't work; it never works. We also see the effects on society of the loss of any social cohesion around what "truth" is. Even though there were vehicles to spread lies and manipulate people before the age of the Internet and social media, this has been supercharged in every way: speed of spread, number of influential voices, size of followings, etc.
Anything that worked before probably doesn't work now. We don't know how to proceed, but using platitudes from before these times is also a way to cover our eyes to what is actually happening: fractures in society becoming larger rifts, supercharged by new technologies, being wielded as a weapon.
I don't think government censorship is the answer, nor do I think the answer is just letting it be and requiring every single person to be responsible in how they critically analyse the insurmountable amount of information we are exposed to every day.
>"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
You do realize that this includes the freedom of people who get harassed online by others.
German journalist Dunja Hayali’s rights were violated by hate comments after social media and news sites misquoted her reporting on Charlie Kirk‘s funeral.
Worth mentioning exactly here is Popper's paradox of tolerance: https://en.wikipedia.org/wiki/Paradox_of_tolerance sooo... what should we do about it, here and now? Because we are getting close to needing a decision. And let me remind you that people do sometimes get fined or jailed for making mistakes or lying.
I think this might be a misinterpretation of the word "responsible".
If a platform lies or spreads malicious content, it seems people want the platform to have the liability and consequences for the malfeasance. That is what most people mean by "responsible".
Government sets the rules, and if someone fails to comply, there are consequences for those responsible. Government isn't responsible; it is holding them responsible.
> If a platform lies or spreads malicious content, it seems people want the platform to have the liability and consequences for the malfeasance. That is what most people mean by "responsible".
The platform isn't lying, any more than the mail system is lying if I write "1 equals 2" in a letter and send it via the mail system to someone else.
Social media platforms do not operate like the mail or telecommunication infrastructure. Suppose that a clique of high-follower verified users on X formed a private discord channel in which they coördinate false but plausible moral panic news events in order to foment spontaneous violent acts against minorities (for added effect, perhaps by pooling their resources into generative AI tools), and that both platforms refused to address this by shutting down the channel, banning the users, or even reducing their reach on the timeline. While there remain reasonable arguments against governing this bad behavior through legislation, it is plain that the social media platforms would be implicated in the negative outcomes of the behavior to a greater degree than a mail carrier.
What if the platform decides that people will send more mail if it pisses them off by copying your letter and sending it to everyone in its service area? Is that still just you lying? At what point does the platform become responsible for amplifying what you said? Are you responsible when it's amplified to everyone, if all you ever intended was sending it out into the void?
Yes, the analogy to the postal service falls apart when discussing one-to-one correspondence. That's more like a DM than social media.
If however some of your junk mail included mass mailings of brochures to join the KKK or some neo-nazi group, I could see why people would want the postal service to crack down on that. That is a fair analogy.
Yeah, the US in particular functions on liability. If changes are made that make companies liable for the externalities from their platforms, they will almost instantly find ways to address the issues.
A “ministry of truth” would (I assume) be a part of the executive branch of government.
Whereas the creation of laws and the interpretation of laws are powers that the executive branch does not have, and are held separately by the legislative and judicial branches.
In a, well, y’know “functioning” democracy. Apparently.
Are you implying that there are no posts on social media platforms that are plainly and verifiably wrong, and that any such decision needs to be made by a government-created ministry of truth? There’s no middle ground? Maybe something like a court?
If I state here plain and as a fact that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see if I have sufficient proof?
"If I state here plain and as a fact that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see if I have sufficient proof?"
Nah, he would just (shadow)ban you.
But in general, we had that debate long and broad with Covid: what does "truth" mean, and who decides what the scientific consensus is, for instance. (I don't remember a crystal-clear outcome, though.) But in case of doubt, we still have courts to decide.
There are a lot of grey areas - statement of fact vs. opinion, open scientific consensus, statements about public figures vs. private individuals, … But the post I’m responding to basically says “there is no truth, let’s give up”, and that’s just as false.
Judges. The question is mainly whether there should be rules the content must follow that are independent of the companies' own, with people who feel badly treated enforcing their rights under those rules in civil lawsuits, or whether more should be allowed at first, until a civil or penal lawsuit stops it. (It is already a mixture of both, so it's a matter of degree.)
I personally prefer an emphasis on the first solution, because it is better at combating the widespread lack of civility in social media, which I believe harms society substantially, but I also understand the people who prefer the second model.
In most common law countries, juries fill that role.
Currently, in the US, internet companies get a special exemption via Section 230 from the laws that apply to other media companies. If traditional media companies publish libelous material, they get sued. Facebook and Google get a "Case Dismissed" pass. Most people look at the internet and conclude that hasn't worked out very well.
It's unclear to me what a good solution would look like. If platforms didn't have those protections, they probably wouldn't be able to exist at all. Any moderation would have to be perfect, or they would be open to lawsuits. And no platform could afford that. It's a tough one.
But if the platform didn't go verify all content placed by users on it, does it count as "spreading" it?
I mean, there's nothing stopping anyone from publishing a book which spreads lies and malicious content - book banning is looked down upon these days. Why are the publishers not asked to be held to this same standard? What makes a web platform different?
This survey is too vague to be worthwhile. Sure, it's scary how many people say "yes" to some of these, but people say yes all of the time to vague-sounding pleasantries. When you say "responsible", name specific actions. Should people be arrested for what they post on social media when it's an opinion? Should platforms automatically analyze all messages and automatically remove messages they deem not truthful? Should the platform be liable in court for falsehoods? People will answer very differently to specifics than to vagueness.
The survey could say, “given that the existence of corporate monopolies demonstrates weak and non-functional governments, should governments a) cede more power to the monopolies, or b) pretend to claw power back from the monopolies?”
Moderation is a strength of the fediverse, because it is decentralized, with many moderators making possibly conflicting rules about relatively smaller amounts of content.
Moderators can block individual posts, accounts, or entire instances if they have objectionable alternate rules.
Don't like the moderation on some instance? Move to another.
Assuming that we are talking about a platform of user-generated content: should the users be punished for what they post? The kind of punishment a government can apply is different from what a platform can do, and users want to feel free to express themselves. These are factors users take into account when making decisions.
On the other hand, what the platform does (through algorithms, weights, etc.) in selecting, prioritizing, and making content visible is something happening at the platform level. There the government may have something to do. And here we are talking about the platform's decisions.
There is a middle ground: coordinating with or gaming the algorithms to make your content visible, done by users or by groups that in one way or another control many user accounts. There might be some government and platform involvement in this case.
If governments punish users for content, assuming the content isn't clearly illegal, that's government censorship and may be a free speech violation depending on jurisdiction.
Platforms censoring or prioritizing content is a private entity enforcing rules for what they're willing to host and distribute on the platform. I'm not sure that's punishment at all, people don't have to post there and there's no use of force or detention being threatened.
> a private entity enforcing rules for what they're willing to host and distribute on the platform
What becomes of the public square under this doctrine, though? There cannot be more than one public square, as it's a natural monopoly, so a platform that has turned itself into the public square (and extracts rents from doing so) gets to control the narrative displayed in public (presumably one suitable to its agenda). And there's no secondary public square, due to the network effect.
> What becomes of the public square under this doctrine, though? There cannot be more than one public square, as it's a natural monopoly
Which one platform for user generated content do you think holds the natural monopoly? I agree that network effects limit competition, absent government intervention, but among my acquaintances people are using:
- Discord
- Snapchat
- Instagram
- TikTok
- Bluesky
- YouTube
- Twitter/X
- Reddit
I think that enforcing anti-trust would be enough to keep any one platform or corporate owner from monopolizing public discourse.
the conundrum is that both sides have a bad track record in moderating content. governments in the past have used their power to silence political opponents, and businesses are silencing critical voices and undesirable content because it hurts their bottom line.
neither is acceptable.
the US made a good start by disallowing government censorship completely. europe could do the same, perhaps with a carve out for outright hate speech, and obvious falsehoods like holocaust denial. but these exceptions need to be very clearly defined, which currently is not the case.
what is missing is a restriction on private businesses, to only allow them to moderate content that is obviously illegal or age restricted, or, for topical forums, off topic, for the latter they must make clear which content is on topic.
Here in Canada, it's sad to watch what's happening in the UK.
In the UK (like OP), they are arresting people for thought crimes. An unexpected consequence of Brexit was the loss of the free speech protection of Article 10.
Opinion polling has Labour in steady, steep decline. Given the unprecedented attack on freedom, decimation in the next election looks guaranteed at this point. There's no future for the Labour party beyond 2029; it would be absurd for them to do this to their party unless they had a plan.
You obviously don't lay your cards down like they have if you're intending to have a fair election in 2029, or one at all.
This is a false dichotomy presented to people. By framing this as "Giant government, or giant business?" you are going to get crap answers.
None of these are one-size-fits-all solutions, and there should be a mix. We have a working patchwork of laws in physical space for a reason. It allows flexibility and adjustments as we go, as the world changes. We should extend that to virtual space as well.
Age/content labeling and opt-in/opt-out for some content. Outright bans on other kinds of content. A similar "I sue when you abuse my content" regime for copyright, impersonation, etc.
One size does not fit all - that is not how the real world works. Online shouldn't work much differently.
In the US, this all stems from Section 230 (of the Communications Decency Act, part of the Telecommunications Act of 1996), which provides a safe harbor to companies for user-generated content. There are some requirements for this, like a process for legal takedowns. Section 230 is generally a good thing, as it was (and is) prohibitively expensive if not outright impossible to monitor every post and every comment.
But what changed in the last two decades or so is the newsfeed as well as other forms of recommendation (eg suggested videos on Youtube). Colloquially we tend to lump all of these together as "the algorithm".
Tech companies have very successfully spread the propaganda that even with "the algorithm" they're still somehow "content neutral". If certain topics are pushed to more users because ragebait = engagement, then that's just "the algorithm". But who programmed the algorithm? Why? What were the explicit goals? What did and didn't ship to arrive at that behavior?
The truth is that "the algorithm" reflects the wishes of the leaders and shareholders of the company. As such, for purposes of Section 230, it's arguable that such platforms are no longer content neutral.
So what we have in the US is really the worst of both worlds. Private companies are responsible for moderation but they kowtow to the administration to reflect the content the administration wants to push or suppress.
Make no mistake, the only reason Tiktok was banned and is now being sold is because the government doesn't have the same control they have over FB, IG or Twitter.
So a survey of what people want here is kind of meaningless because people just don't understand the question.
> So a survey of what people want here is kind of meaningless because people just don't understand the question.
I think they understand perfectly well. They look at an internet where internet companies aren't held responsible, conclude it's largely corrosive, and prefer a different approach. I'm not sure it's important that they don't understand the elements of a libel claim, or that internet companies get a special get-out-of-jail-free card that traditional media doesn't.
Two takeaways: 1. As per headline, rejection of state censorship. 2. Platforms should be responsible for user posts.
"Q17D. In your opinion, should each of the following platforms be held responsible or not responsible for showing potentially false
information that users post? Base: Total sample in each country ≈ 2000."
Around the world, approx. 70% said yes. The rub, of course, is coming up with a framework. The poll suggests that the Section 230 approach of no duty is widely unpopular. However, strict liability would ensure that these industries go away, and even a reasonableness standard seems like a headache.
Responsibility is nice, accountability is nicer. Having to testify before Congress or pay a sub percentage point of annual profits in fines is not accountability.
What if someone posts slanderous and libelous statements on a platform that you do not frequent? What choice do you have? What recourse do you have if someone posts something along the lines of this proud OpenAI demo with your face on truth.social? https://bsky.brid.gy/r/https://bsky.app/profile/did:plc:j5fb...
There are legal avenues in many jurisdictions for slander or libel. Ultimately, though, who cares what someone posts about you and how could we expect some arbiter to decide exactly what is okay to post, especially when the concern is potential libel?
Yep, libel laws are a rule that gets enforced by the government, and platforms should be required to follow those laws. But the post I’m responding to asserts that people should change platforms - which is not a solution that addresses the issue at hand.
I really struggle to understand why these companies think they need to function like governments when it comes to removing content from their platforms. Does it really all boil down to protecting any engagement at all costs?
The men running these companies live like wannabe kings but have absolutely no backbone when it comes to having a stance about moderating for basic decency, and are profiting greatly off of festering trash heaps. Additionally, they're complete cowards in the face of their nihilistic shareholders.
>> The men running these companies live like wannabe kings but have absolutely no backbone when it comes to having a stance about moderating for basic decency, and are profiting greatly off of festering trash heaps.
Part of that is they don't want responsibility. The phone companies in the US are classified as "common carriers", which in part means they are not held responsible for misuse (drug deals, terrorist plots, whatever, discussed on their system). The flip side is that they are not allowed to discriminate - their job is to complete calls.
Online "platforms" want no responsibility for the content people post. They want to be common carriers, but that would also prevent them from discriminating (even algorithmically) in what people see. Since they aren't properly classified/regulated yet, they're playing a game: trying to moderate content while also claiming no responsibility for content. It does not make sense to let them have it both ways.
> The men running these companies live like wannabe kings but have absolutely no backbone when it comes to having a stance about moderating for basic decency, and are profiting greatly off of festering trash heaps.
The "but" is really throwing me for a loop, because to me it feels like one follows from the other.
You are posting on a platform that has a moderation policy and takes active measures to moderate away problematic content. The guidelines are here: https://news.ycombinator.com/newsguidelines.html and they clearly state what content is off limits.
How is this a "gotcha" when I'm on the internet I'm being forced to use? I pick the most agreeable platforms. I like that HN filters out politics, sports, and celebrities, it's more like a themed forum than an open board. HN doesn't deem these topics as "problematic", simply as off-topic.
Anyway my point was "problematic content" is often used as a buzzword by censorship happy people, and ends up being synonymous with "something I disagree with." We have just literally seen the real life consequences of pushing for censorship — that it will eventually be used against speech that one agrees with — and nobody quite seems to care.
If you read the guidelines, there’s quite a bit about being polite and not inciting flame wars, not only about off-topic items. It regularly happens that posts or even users get removed for that reason.
There are platforms that have much less strict standards with regards to that - yet you consider this the more agreeable platform. Maybe the reason is that it’s actually nicer to have a conversation in a place where you don’t need to deal with an asshole that starts yelling and insulting everyone at the table.
Think about this in real life: would you want to frequent a place where the loudest asshole gets to insult everyone present, or would you rather go to a bar where at some point the barkeeper steps in and sorts things out?
You're speaking of some idealized moderation system where tone and politeness is enforced, and human corruptibility does not entice moderators into censoring speech they disagree with? Yeah sure, I'd love to be there. It's just a rare thing to find on the modern web, and it's not guaranteed to last for any length of time.
The modern web example of your bar scenario is more like this: the bartender doesn't want to hear [opposing political/societal issue opinion] at the bar and starts kicking out everyone he disagrees with. The kicked out people go start their own bar. Now there's two neighboring bars, MAGABar and LibBar; customers are automatically filtered into attending either bar by an algorithm. If you say anything that the bartender disagrees with, you're permanently banned. The fun part is that you can be permanently banned from BOTH bars if your viewpoints don't fall in line 100% with what the bartender wants to hear.
Oh and you can't go to TechBar anymore either, the bartender heard you said something critical of furries at another bar, so now you're banned and not allowed to talk about computers.
Unmoderated platforms are pretty much useless. Having said that, I'd prefer governments to stay away from either moderating or forcing platforms to moderate content.
Perhaps there's simply no middle ground between something like Voat which turned into a gathering place of bonafide racists, and something like Reddit which is essentially just a confirmation bias hugbox. If there is, it's certainly not a profitable product.
Voat turned into a gathering place for racists because reddit at first only kicked those out - and became more fundamentalist only after voat was the well-known 'bad place for bad people' where 'respectable, but misunderstood people' did not linger. The frog cooks slowly.
> If there is, it's certainly not a profitable product.
I think this is the main issue: that we walled up our discussion plazas to make them 'profitable products'.
I know I am a bit of an idealist here, but I miss the old-timey Usenet: basically an agora where you could filter for yourself (with the appropriately-named killfile), and which was not controlled by one institution alone. I had some hope for federated systems - but these are often built with censorship mechanisms written right into them, and again give operators too much influence over what their users may or may not see.
Ultimately, authors should be held responsible for content - the government's role here is setting the laws and funding the enforcement mechanisms (police and courts etc.), and the platform's role is to enable enforcement (doing takedowns or enabling tracing of perpetrators).
Obviously one of the challenges here is that the platforms are transnational and the laws are national - but that's just a cost of doing business.
However this doesn't absolve the platforms from responsibility for content if they are involved in promoting content. If a platform actively promotes content - then in my view they shift from a common carrier to a publisher and thus all the normal publisher responsibilities apply.
Pretending that it's not possible to be technically responsible for platform amplification is not the answer. You can't create something that you are responsible for, that creates harm, and then claim it's not your problem because you can't fix it.
Several European governments have jailed people for social media posts. Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
As for falsehoods: some people will be mistaken, some people will lie, and sometimes sarcasm will be misunderstood. Why should anyone be liable? It is on each individual to inform themselves, and to decide what to believe and what to disregard.
Article 19 of the Universal Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
It doesn't say "if your opinion is approved by the government". It doesn't say "if your opinion is correct". It makes no exceptions whatsoever, and that is what we need to strive for.
Telegram founder Pavel Durov on Sunday accused France of asking him to remove some Moldovan channels from the social media platform ahead of the country’s presidential election last year.
In a statement on the Dubai-headquartered company, Durov claimed that the French intelligence services asked him “through an intermediary” to help the Moldovan government to censor “certain Telegram channels” before the vote on Oct. 20, in which incumbent President Maia Sandu secured a second term in office following a runoff held on Nov. 3.
He said a few channels were identified to have violated Telegram’s rules following reviews of the channels concerned and were subsequently removed.
“The intermediary then informed me that, in exchange for this cooperation, French intelligence would ‘say good things’ about me to the judge who had ordered my arrest in August last year,” Durov said, describing this as “unacceptable on several levels.”
“If the agency did in fact approach the judge — it constituted an attempt to interfere in the judicial process. If it did not, and merely claimed to have done so, then it was exploiting my legal situation in France to influence political developments in Eastern Europe — a pattern we have also observed in Romania,” he further said.
Durov also said that Telegram later received a second list of "Moldovan channels," which he noted were “legitimate and fully compliant with our rules,” unlike the initial list.
CONTINUED...
https://www.aa.com.tr/en/europe/telegram-head-accuses-france...
>Several European governments have jailed people for social media posts. Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
I think quite a few Europeans have lasting and direct experience with totalitarian, oppressive regimes. Which might also explain why they have stricter (or simply more precise) laws governing expression – not as an oppressive tool, but as a safety valve for the society.
Silencing speech IS the oppressive regime.
Most societies have decided that some speech should be illegal. The classic example is yelling "FIRE" in a crowded theatre in the absence of a fire.
I think it is good and healthy to have conversations as to what should and should not be protected speech, but I think that there is this rote reaction that kinda boils down to free speech absolutism. But of course, all the free speech absolutists find at some point or another there is some speech they want made illegal.
A great example of this is in the US where Republicans often outwardly took such as stand when they weren't in power, but recently tried to use the FCC to take a comedian who made light criticism of the regime off the air.
So, silencing speech might not always be the oppressive regime, but it sometimes is.
EDIT: OK, I get the fire/theatre example is a bad one. Instead, consider incitement more broadly. For example incitement to discrimination, as prohibited by Article 20 of the International Covenant on Civil and Political Rights.
Imagine a situation where a person reasonably believes there was a fire (they saw smoke, but it turned out to be a vape or some sort of smoke machine part of the show). They're arrested for the false claim. Others know that it's too risky. Now, when there's a fire, people are reluctant to speak up just in case. It's the chilling effect that can affect speech. We have whistleblowers who are afraid to speak up. And when the means of speaking out or alerting others (ie via email or social media or smartphone) is controlled by a large corporation that may feel threatened...
>They're arrested for the false claim.
This person would be able to provide evidence as to why they thought there was a fire, show why their belief is genuine and not a lie.
> The classic example is yelling "FIRE" in a crowded theatre in the absence of a fire.
This is from an overturned US Supreme Court opinion, has no basis in anyone's jurisprudence, yet keeps coming up as an example of speech that's permissible to suppress for some reason.
Oliver Wendell Holmes created that example to support jailing a socialist for speaking out against the World War I draft.
> This is from a dissenting opinion of a US Supreme Court justice
No, its dicta which neither was part of the substantive ruling nor an accurate description of pre-existing law from the Court’s opinion (which was unanimous, so there was no “dissenting opinion”) in a case that has since been overruled and is notorious for having allowed an egregious restriction on core political speech.
Thank you, I wasn't aware of that.
Regardless, incitement remains an exception to free speech the world over to some degree. Article 20 of the International Covenant on Civil and Political Rights holds that incitement to discrimination is prohibited, for example [0].
My point stands, people of most societies globally believe certain speech should not be protected.
[0] https://www.ohchr.org/en/instruments-mechanisms/instruments/...
Maybe people keep using it because it makes a lot of sense to them, whether or not it's accepted (US) case law or not.
> yet keeps coming up as an example of speech that's permissible to suppress for some reason
Because they don't actually have an example of not imminently violence causing non-fraudulent speech that SCOTUS has upheld a ban of. And then when you call them out they'll say "but wait, it's metaphorical". If they had a better example they'd be using it.
Child pornography
Child sexual abuse material is evidence of violence, and act of violence, all at once.
A regime attempting to kill a large group of people is also oppressive and much worse. If the regime is able to do this because of speech then people are choosing the least worst option.
> If the regime is able to do this because of speech
Okay but that's a big "IF". I suspect a regime attempting to do that might be promulgating a significant amount of propaganda, but I doubt that they're able to be oppressive "because of speech".
What about loss of upward mobility for the middle class, or loss of living wages, mismanaged public institutions, corruption, bribery, collapse of democratic process?
All of this enables or sustains oppressive regimes and doesn't require any kind of speech from citizens. And without these kinds of serious problems, citizens barking nonsense won't result in much. Hindering free speech only makes it easier for a regime to continue to exacerbate these serious problems and continue oppression without being called out.
A regime is an organization.
It can be. But there can be speech where most reasonable people would agree that it should be regulated. E.g. if some dude walks up to your 5 year old child and starts to tell them in intricate detail about his violent sexual fantasy, pretty much everybody notices that the kids right not to have to hear this outweighs the adults edgy itch to do this to a child.
And a lot of speech is like this, nearly no speech is consequence free. I am not saying we should ban any speech that has negative consequences. What I am saying is that with other rights we also have to way the active freedoms of one person ("the freedom to do a thing") against the passive freedoms of all the others ("the freedom to not have a thing done to you").
With other rights it is the same, you may have a right to carry a firearm and even shoot it. But if you shoot it for example in church, other peoples right not to have to deal with you shooting that gun in that church outweighs your right to do that.
In the German speaking part of the EU we decided that the right of literal Nazis to carry their insignia doesn't outweigh the right of the others to not have to see the insignia that have brought so much pain and suffering in these lands. To some degree this is symbolic, because it only bans symbols and not ideologies, but hey, I like my government to protect my state from a fascist takeover, because they are kind of hard to reverse without violence.
> not as an oppressive tool, but as a safety valve for the society.
This strikes me as just incorrect. What example from history shows totalitarianism being successfully avoided because of controls on speech?
The first item in the totalitarian playbook is controlling speech, and there are historical examples of that in every single totalitarian regime that I'm aware of.
> It doesn't say "if your opinion is correct".
Opinions cannot be right or wrong.
> It makes no exceptions whatsoever, and that is what we need to strive for.
It certainly does. See libel / defamation / perjury / false representation / fraud / false advertising / trademark infringement.
> Opinions cannot be right or wrong.
I’d say if you can be jailed for a particular opinion, someone has certainly made a judgement call that your opinion is wrong!
People can make judgment calls. Those are opinions. That still doesn't make yours, nor theirs, wrong.
Immoral, unethical, impractical, or contrary to human rights, perhaps.
> People can make judgment calls. Those are opinions.
I'm not sure that's a helpful distinction. In some sense, everything we classify as a 'fact' is a judgement call: is the sun a giant ball of fusing hydrogen? I mean, probably, but maybe we're all living in some sort simulation and it doesn't really exist at all; Or maybe you are living in your own personal "Truman Show", being fed lies by everyone who shows you scientific "evidence" about the sun's nature.
But "the sun is a giant ball of fusing hydrogen" is a different type of statement than "chocolate ice cream is better than vanilla", or "Mozart is better than Beethoven".
I think the most useful distinction is between “opinions” and “beliefs” rather than opinions and facts. A belief represents your confidence in the truth or falsehood of a statement. While an opinion has no underlying objective reality. “Apples are better than peaches” is an opinion. “More people ate apples than bananas in 2024” is a belief; it may be a true belief or a false belief but there is an answer.
An opinion can't be falsified.
"The sun is a giant ball of fusing hydrogen" has the possibility of being proven false. This means it's either a true or false fact.
If I said "NYC is the capital of the United States"* I'm either lying or mistaken
What makes it a lie vs mistaken? Whether it's a genuine belief, that I have a reason to have the belief. For example if I made the assumption it's the capital because it's the biggest city then I'm mistaken.
It's a lie if I know it's not true, if I ignore information that falsifies the fact.
*To avoid semantics I mean the official capital of the country not like "it's important"
@gwd: absolutely true; all the "facts" I know are either a long series of supporting ideas ("this is a chair and I can sit on it") or something I was told by an authority I trust ("Africa exists").
I still say there is a difference between "Africa exists" and "gwd's statement about the lack of 'facts' is heretical and they should be imprisoned".
> I’d say if you can be jailed for a particular opinion
Can you give an example of someone in a modern democracy jailed for their "opinion"?
To wit, are the examples you're thinking of "statement of opinion", "statement of fact", "pejorative insult", or "incitement"?
Saying "I think <public figure> is an idiot" is an opinion. "The earth is flat" or "The holocaust never happened" are not opinions; neither is, "Kick out all the <insert pejorative here>."
And yeah, in North Korea you'll absolutely be jailed for expressing some opinions. That may make them illegal, but it doesn't make them no longer opinions.
https://www.counterterrorism.police.uk/man-from-harlow-jaile...
https://www.bbc.com/news/articles/c5yl7p4l11po
https://en.wikipedia.org/wiki/Detention_of_R%C3%BCmeysa_%C3%...
https://en.wikipedia.org/wiki/Detention_of_Mahmoud_Khalil
How is "the earth is flat" not an opinion? People form opinions based on the information they have (or are willing to accept). For their worldview, their opinion is valid. If they don't accept certain voices of reason, they have that right. We saw people not allowed to ask about the origins of the covid19 virus because it went against a public narrative. At the beginning of the pandemic, people who expressed that masks should be worn were rejected by even government officials.
People might not have gone to jail, but they did have voices and access to society limited or removed because of their opinions.
> "The holocaust never happened"
Not really an opinion but it can be a belief. I'm not sure why we are okay with people believing that Earth is ~6000 years old, but not with someone believing that we are in a simulation and everything before e.g. year 1999 is just a collective memory fabrication.
I am not "okay" with either belief, but it's not my place to police other people's beliefs - so long as they don't hurt others.
If you want to visit that idiotic Noah's Ark museum, go.
If you want to prohibit teaching about evolution in schools, go to hell.
>Not really an opinion but it can be a belief.
Yes. You can believe this fact to be false but you might also be lying. How do you show this? By showing why you believe it to be false
> "The holocaust never happened" are not opinions
But would you dare state out loud in Germany that, in your opinion, the official number of Holocaust victims is actually much less than what's been widely reported? Even if you had what you believed was solid evidence supporting your argument? I bet you wouldn't.
> neither is, "Kick out all the <insert pejorative here>."
How about voicing your opinion that <people from some country> should be barred from emigrating to <European country> because <crime statistics>? Bet you wouldn't try that either, because your opinion is in "hate speech" territory now.
>Even if you had what you believed was solid evidence supporting your argument? I bet you wouldn't.
Ugh, what?
I doubt that they are jailed for opinions but for lies or threats or defamation etc.
I guarantee you that, if only they could, there are governments that would jail people for opinions. The US would have likely have done something to homosexuals, with or without sexual activity. Ditto on supporting Communism 'in your heart'. NK and other countries would do the same (OK, not NK for supporters of communism...).
This is all true, with a few exceptions. For example, incitement to violence or false allegations that do serious reputational damage. It's not sustainable to allow exactly all speech.
Although I mostly agree, I just wanted to make explicit that nuance.
For example, incitement to violence
GP brought up people being jailed for social media posts, but didn't reference any specifically. In the handful of cases I found via a web search, the charges were related to inciting violence.
GP also brought up the Universal Declaration of Human Rights. Article 30 reads:
Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein.
When one exercising a freedom restricts another's ability to exercise theirs, it is reasonable to expect courts to get involved to sort it out.
> Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
There are only a few European countries that jail people for wrongspeak, and I can't think of a single one of those countries whose population in general is in favor of such laws.
> Many Europeans support this - they don't understand how government censorship can quickly get out of hand.
This argument can be made for government in general, although granted technology does make it easier for a smaller group to overreach. I'm a European and do hear your concern, but I feel comfortable supporting restrictions on speech _as long as_ there is also a functioning and just legal system that those restrictions operate within. Though there does seem to be a worrying trend towards technology bypassing the legal system and just giving enforcement agencies blanket access of late.
We all also have our own cultural biases and blind spots. I offer this not as whataboutism but as a different perspective: I'm _way_ more frightened by the authoritarian police culture (I base this on interactions with the police in a period I lived in the US) in the US than I am of the UK governments internet censorship. The internet censorship could do a lot of harm, but I think not as much potential harm as a large militarised police force willing to bust down doors on command from above.
There has never been a functioning and just legal system in the history of mankind. Not to mention that what is "just" is very much up to debate.
Well, sure, it's all relative and no system is perfect. Not every mother is perfect, doesn't mean I escort mine around the house at gunpoint whenever she visits.
> It is on each individual to inform themselves, and to decide what to believe and what to disregard.
That's where the conundrum lies: relying on individual responsibility to protect a whole society from bad actors who use this freedom to break it apart.
How is it solved? No one knows. What we do know is that relying on individuals to each act on their own won't work; it never does. We also see the effects on society of losing any social cohesion around what "truth" is. There were vehicles for spreading lies and manipulating people before the age of the Internet and social media, but this has been supercharged in every way: speed of spread, number of influential voices, size of followings, etc.
Anything that worked before probably doesn't work now. We don't know how to proceed, but reaching for platitudes from before these times is also a way of covering our eyes to what is actually happening: fractures in society becoming larger rifts, supercharged by new technologies and wielded as a weapon.
I don't think government censorship is the answer, nor do I think that just letting it be, and requiring every single person to be responsible for critically analysing the insurmountable amount of information we are exposed to every day, is the answer either.
>"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
You do realize that this includes the freedom of the people who get harassed online by others?
German journalist Dunja Hayali's rights were violated by hate comments after social media and news sites misquoted her reporting on Charlie Kirk's funeral.
Worth mentioning here is Popper's paradox of tolerance: https://en.wikipedia.org/wiki/Paradox_of_tolerance Sooo... what should we do about it, here and now? Because we are getting close to needing a decision. And let me remind you that people do sometimes get fined or jailed for making mistakes or lying.
I think this might be a misinterpretation of the word "responsible".
If a platform lies or spreads malicious content, it seems people want the platform to bear the liability and consequences for the malfeasance. That is what most people mean by "responsible".
Government sets the rules, and if someone fails to comply, there are consequences for those responsible. The government isn't responsible; it is holding them responsible.
> If a platform lies or spreads malicious content, it seems people want the platform to bear the liability and consequences for the malfeasance. That is what most people mean by "responsible".
The platform isn't lying, any more than the mail system is lying if I write "1 equals 2" in a letter and send it to someone else.
Social media platforms do not operate like the mail or telecommunication infrastructure. Suppose that a clique of high-follower verified users on X formed a private discord channel in which they coördinate false but plausible moral panic news events in order to foment spontaneous violent acts against minorities (for added effect, perhaps by pooling their resources into generative AI tools), and that both platforms refused to address this by shutting down the channel, banning the users, or even reducing their reach on the timeline. While there remain reasonable arguments against governing this bad behavior through legislation, it is plain that the social media platforms would be implicated in the negative outcomes of the behavior to a greater degree than a mail carrier.
What if the platform decides that people will send more mail if they piss them off by copying your letter and sending it to everyone in their service area? Is that still just you lying? At what point does the platform become responsible for amplifying what you said? Are you responsible when it's amplified to everyone if all you ever intended was sending it out into the void?
> Are you responsible when it's amplified to everyone if all you ever intended was sending it out into the void?
You didn't send it out into the void.
Yes, the analogy to the postal service falls apart here: mail is one-to-one correspondence, which is more like a DM than like social media.
If, however, some of your junk mail included mass mailings of brochures to join the KKK or some neo-Nazi group, I could see why people would want the postal service to crack down on that. That is a fair analogy.
Yeah, the US in particular functions on liability. If changes are made that make companies liable for the externalities from their platforms, they will almost instantly find ways to address the issues.
> If a platform lies or spreads malicious content, it seems people want the platform to bear the liability and consequences for the malfeasance.
Does the government then set up a ministry of truth? Who gets to decide that?
A “ministry of truth” would (I assume) be a part of the executive branch of government.
Whereas the creation of laws and the interpretation of laws are powers that the executive branch does not have, and are held separately by the legislative and judicial branches.
In a, well, y’know “functioning” democracy. Apparently.
Are you implying that there are no posts on social media platforms that are plainly and verifiably wrong, and that any such decision needs to be made by a government-created ministry of truth? Is there no middle ground? Maybe something like a court?
If I state here, plainly and as a fact, that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at a statement from you and one from me and see whether I have sufficient proof?
"If I state here plain and as a fact that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see if I have sufficient proof?"
Nah, he would just (shadow)ban you.
But in general, we had that long, broad debate about what truth means with Covid. Who decides what the scientific consensus is, for instance? (I don't remember a crystal-clear outcome, though.) But in case of doubt, we still have courts to decide.
> Nah, he would just (shadow)ban you
Censorship!
There are a lot of grey areas - statements of fact vs. opinion, open scientific consensus, statements about public figures vs. private individuals, ... But the post I'm responding to basically says "there is no truth, let's give up", and that's just as false.
At a minimum, keep in mind that perjury, libel, and slander get litigated in courts of law. No ministry of truth is required in those cases.
Judges. The question is mainly whether there should be some rules, independent of the companies' own, that content must follow, with people who feel badly treated asserting their rights under those rules in a civil lawsuit; or whether more should be allowed at first, until a civil or criminal lawsuit might stop it. (It is already a mixture of both, so it's a matter of degree.)
I personally prefer an emphasis on the first solution, because it is better at combating the widespread lack of civility on social media, which I believe harms society substantially; but I also understand the people who prefer the second model.
In most common law countries, juries fill that role.
Currently, in the US, internet companies get a special exemption, via Section 230, from the laws that apply to other media companies. If traditional media companies publish libelous material, they get sued. Facebook and Google get a "Case Dismissed" pass. Most people look at the internet and conclude that hasn't worked out very well.
It's unclear to me what a good solution would look like. If platforms didn't have those protections, they probably wouldn't be able to exist at all. Any moderation would have to be perfect, or they would be open to lawsuits. And no platform could afford that. It's a tough one.
> Most people look at the internet and conclude that hasn't worked out very well.
Cite?
> If a platform lies or spread malicious content
But if the platform didn't verify all the content users placed on it, does it count as "spreading" it?
I mean, there's nothing stopping anyone from publishing a book that spreads lies and malicious content - and book banning is looked down upon these days. Why is no one asking that publishers be held to this same standard? What makes a web platform different?
This survey is too vague to be worthwhile. Sure, it's scary how many people say "yes" to some of these, but people say yes all the time to vague-sounding pleasantries. When the survey says "responsible", it should name specific actions. Should people be arrested for what they post on social media when it's an opinion? Should platforms automatically analyze all messages and remove those they deem untruthful? Should the platform be liable in court for falsehoods? People will answer very differently to specifics than to vagueness.
Absolutely.
They hand-wave tremendously complex questions, in such a way that the respondent is free to interpret them any way they wish.
A similar survey question might be:
> I am thinking of a number. Is it a) too high or b) too low?
I’d like governments to instead enforce monopoly laws and the FTC to sue for crappy business practices. I don’t want them playing speech police.
Yes, absolutely.
The survey could say, “given that the existence of corporate monopolies demonstrates weak and non-functional governments, should governments a) cede more power to the monopolies, or b) pretend to claw power back from the monopolies?”
Moderation is a strength of the fediverse, because it is decentralized, with many moderators making possibly conflicting rules over relatively small amounts of content.
Moderators can block individual posts, accounts, or entire instances whose rules they find objectionable.
Don't like the moderation on some instance? Move to another.
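To make that concrete, here is a minimal sketch in Python (all names and instances hypothetical) of how per-instance moderation composes: each instance applies only its own blocklists, so the same post can be visible on one instance and blocked on another.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: str
        author: str          # e.g. "troll@spam.example" (invented)
        home_instance: str   # instance the post originated from
        body: str

    @dataclass
    class Instance:
        name: str
        blocked_posts: set = field(default_factory=set)
        blocked_accounts: set = field(default_factory=set)
        blocked_instances: set = field(default_factory=set)  # "defederation"

        def allows(self, post: Post) -> bool:
            # Only this instance's own rules apply; other instances may differ.
            if post.home_instance in self.blocked_instances:
                return False
            if post.author in self.blocked_accounts:
                return False
            return post.post_id not in self.blocked_posts

    post = Post("42", "troll@spam.example", "spam.example", "objectionable stuff")
    strict = Instance("strict.example", blocked_instances={"spam.example"})
    lax = Instance("lax.example")
    print(strict.allows(post))  # False: spam.example is defederated here
    print(lax.allows(post))     # True: this instance chose different rules

No single operator's rules bind the whole network; disagreements get resolved by users moving between instances rather than by one central policy.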
Exactly. And moderators need to be responsible and liable for the content they allow through on their instances.
There is a problem of agency here.
Assuming that we are talking about a platform for user-generated content, should the users be punished for what they post? The kinds of punishment a government can impose are different from what a platform can do, and users want to feel free to express themselves. These are factors users take into account when making decisions.
On the other hand, what the platform does (through algorithms, weights, etc.) in selecting, prioritizing, and surfacing both users' content and the users themselves happens at the platform level. There the government may have a role. And here we are talking about the platform's decisions.
There is a middle ground: users, or groups that in one way or another control many user accounts, coordinating with or gaming the algorithms to make their content visible. There might be some government and platform involvement in this case.
> should the users be punished by what they post?
What kind of punishment exactly?
If governments punish users for content, assuming the content isn't clearly illegal, that's government censorship and may be a free speech violation depending on jurisdiction.
Platforms censoring or prioritizing content is a private entity enforcing rules for what they're willing to host and distribute on the platform. I'm not sure that's punishment at all, people don't have to post there and there's no use of force or detention being threatened.
> a private entity enforcing rules for what they're willing to host and distribute on the platform
What becomes of the public square under this doctrine, though? There cannot be more than one public square, as it's a natural monopoly, so a platform that turns itself into a public square (extracting rents from doing so) gets to control the narrative displayed in public (presumably one suitable to its agenda). And there's no secondary public square, due to the network effect.
> What becomes of the public square under this doctrine, though? There cannot be more than one public square, as it's a natural monopoly
Which one platform for user-generated content do you think holds the natural monopoly? I agree that network effects limit competition, absent government intervention, but among my acquaintances people are using:
- Discord
- Snapchat
- Instagram
- TikTok
- Bluesky
- YouTube
- Twitter/X
- Reddit
I think that enforcing antitrust would be enough to keep any one platform or corporate owner from monopolizing public discourse.
The conundrum is that both sides have a bad track record in moderating content. Governments in the past have used their power to silence political opponents, and businesses silence critical voices and undesirable content because it hurts their bottom line.
Neither is acceptable.
The US made a good start by disallowing government censorship completely. Europe could do the same, perhaps with a carve-out for outright hate speech and obvious falsehoods like Holocaust denial. But these exceptions need to be very clearly defined, which is currently not the case.
What is missing is a restriction on private businesses, allowing them to moderate only content that is obviously illegal or age-restricted - or, for topical forums, off topic; for the latter they must make clear which content is on topic.
Here in Canada, it's sad to watch what's happening in the UK.
In the UK (like OP), they are arresting people for thought crimes. An unexpected consequence of Brexit was the loss of the free-expression protection of the EU Charter's Article 11 (the ECHR's Article 10 still applies).
Opinion polling has Labour in a steady, steep decline. Given the unprecedented attack on freedom, decimation in the next election is all but guaranteed at this point. There's no future for the Labour party beyond 2029; it would be absurd for them to do this to their party unless they had a plan.
You obviously don't lay your cards down like they have if you're intending to have a fair election in 2029 - or one at all.
We only tolerate totalitarianism in the private sector!
This is a false dichotomy presented to people. By framing this as "giant government or giant business?" you are going to get crap answers.
None of these are one-size-fits-all solutions, and there should be a mix. We have a working patchwork of laws in physical space for a reason: it allows flexibility and adjustment as we go, as the world changes. We should extend that to virtual space as well.
Age/content labeling and opt-in/opt-out for some content. An outright ban on other kinds of content. A similar "I sue when you abuse my content" mechanism for copyright, impersonation, etc.
One size does not fit all, and that is not how the real world works. Online shouldn't work much differently.
In the US, this all stems from Section 230 (of the Telecommunications Act of 1996), which provided a safe harbor for companies hosting user-generated content. There are some requirements for this, like a process for legal takedowns. Section 230 is generally a good thing, as it was (and is) prohibitively expensive, if not outright impossible, to monitor every post and every comment.
But what has changed in the last two decades or so is the newsfeed, as well as other forms of recommendation (e.g. suggested videos on YouTube). Colloquially we tend to lump all of these together as "the algorithm".
Tech companies have very successfully spread the propaganda that even with "the algorithm" they're still somehow "content neutral". If certain topics are pushed to more users because ragebait = engagement, then that's just "the algorithm". But who programmed the algorithm? Why? What were the explicit goals? What did and didn't ship to arrive at that behavior?
The truth is that "the algorithm" reflects the wishes of the leaders and shareholders of the company. As such, for the purposes of Section 230, it's arguable that such platforms are no longer content neutral.
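To illustrate with a deliberately toy, hypothetical ranker (not any real platform's code; the signals and weights are invented): even a few lines of feed-ranking code encode explicit editorial goals.

    # Hypothetical engagement signals; the weights are invented for
    # illustration. Someone had to choose them, and that choice is editorial.
    def score(post):
        return (1.0 * post["likes"]
                + 3.0 * post["replies"]    # arguments generate replies
                + 5.0 * post["reshares"])  # outrage travels farthest

    def rank_feed(posts):
        # The feed order is whatever goals score() encodes - nothing more neutral.
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"id": "calm",     "likes": 120, "replies": 4,  "reshares": 2},
        {"id": "ragebait", "likes": 40,  "replies": 90, "reshares": 60},
    ]
    print([p["id"] for p in rank_feed(posts)])  # ['ragebait', 'calm']

If reply- and reshare-heavy posts rise to the top, ragebait wins whether or not anyone "intended" that outcome; the weights are the intent, written down.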
So what we have in the US is really the worst of both worlds. Private companies are responsible for moderation but they kowtow to the administration to reflect the content the administration wants to push or suppress.
Make no mistake, the only reason TikTok was banned and is now being sold is that the government doesn't have the same control over it that it has over FB, IG, or Twitter.
So a survey of what people want here is kind of meaningless because people just don't understand the question.
> So a survey of what people want here is kind of meaningless because people just don't understand the question.
I think they understand perfectly well. They look at an internet where internet companies aren't held responsible, conclude it's largely corrosive, and prefer a different approach. I'm not sure it's important that they don't understand the elements of a libel claim, or that internet companies get a special get-out-of-jail-free card that traditional media doesn't.
Two takeaways: 1. As per headline, rejection of state censorship. 2. Platforms should be responsible for user posts.
"Q17D. In your opinion, should each of the following platforms be held responsible or not responsible for showing potentially false information that users post? Base: Total sample in each country ≈ 2000."
Around the world, approx. 70% said yes. The rub, of course, is coming up with a framework. The poll suggests that the Section 230 approach of no duty is widely unpopular. However, strict liability would ensure that these industries go away, and even a reasonableness standard seems like a headache.
Responsibility is nice; accountability is nicer. Having to testify before Congress, or pay fines amounting to a fraction of a percent of annual profits, is not accountability.
It is accountability, but not sufficient to be a significant deterrent.
Unpopular opinion, but what about our own responsibility to choose better platforms?
What if someone posts slanderous and libelous statements on a platform that you do not frequent? What choice do you have then? What recourse do you have if someone posts something along the lines of this proud OpenAI demo, with your face, on truth.social? https://bsky.brid.gy/r/https://bsky.app/profile/did:plc:j5fb...
There are legal avenues in many jurisdictions for slander or libel. Ultimately, though, who cares what someone posts about you? And how could we expect some arbiter to decide exactly what is okay to post, especially when the concern is potential libel?
Yep, libel laws are rules enforced by the government, and platforms should be liable under those laws. But the post I'm responding to asserts that people should change platforms, which is not a solution that addresses the issue at hand.
Second this. The article reads as a false dichotomy of choosing a "lesser evil" master while remaining a slave. Vote with your engagement (?)
I really struggle to understand why these companies think they need to function like governments when it comes to removing content from their platforms. Does it really all boil down to protecting engagement at all costs?
The men running these companies live like wannabe kings but have absolutely no backbone when it comes to taking a stance on moderating for basic decency, and they are profiting greatly off of festering trash heaps. Additionally, they're complete cowards in the face of their nihilistic shareholders.
>> The men running these companies live like wannabe kings but have absolutely no backbone when it comes to taking a stance on moderating for basic decency, and they are profiting greatly off of festering trash heaps.
Part of that is that they don't want responsibility. Phone companies in the US are classified as "common carriers", which in part means they are not held responsible for misuse (drug deals, terrorist plots, whatever, discussed on their systems). The flip side is that they are not allowed to discriminate - their job is to complete calls.
Online "platforms" want no responsibility for the content people post. They want to be common carriers, but that would also prevent them from discriminating (even algorithmically) what people see. Since they aren't properly classified/regulated yet they're playing a game trying to moderate content while also claiming no responsibility for content. It does not make sense to let them have it both ways.
Turns out that echo chambers with manufactured consensus are quite profitable.
> The men running these companies live like wannabe kings but have absolutely no backbone when it comes to taking a stance on moderating for basic decency, and they are profiting greatly off of festering trash heaps.
The "but" is really throwing me for a loop, because to me it feels like one follows from the other.
“Think tank”.
If disinformation were treated as copyrighted content, the world would be a better place.
Facts are public domain and can't be copyrighted (you can only argue that your creative expression of them, e.g. in a textbook, is yours).
But I suppose that wouldn't apply to disinformation. So your made-up bullshit is yours to keep!
Interesting point.
It's a kind of reframing of (or consequence of) the bullshit asymmetry principle.
Actually, no, reasonable people do not want either platforms or governments to moderate content.
Who defines what "problematic content" is?
You are posting on a platform that has a moderation policy and takes active measures to moderate away problematic content. The guidelines are here: https://news.ycombinator.com/newsguidelines.html and they clearly state what content is off limits.
We're posting on a platform that has a moderation policy defined by the private entity hosting the service, not the government.
YC is free to censor on their own platform, the only issue is when the government is involved in censoring speech.
The post I’m responding to explicitly states that platforms should not moderate content.
How is this a "gotcha" when this is the internet I'm forced to use? I pick the most agreeable platforms. I like that HN filters out politics, sports, and celebrities; it's more like a themed forum than an open board. HN doesn't deem these topics "problematic", simply off-topic.
Anyway, my point was that "problematic content" is often used as a buzzword by censorship-happy people, and ends up being synonymous with "something I disagree with". We have just seen the real-life consequences of pushing for censorship - that it will eventually be used against speech one agrees with - and nobody quite seems to care.
If you read the guidelines, there's quite a bit about being polite and not inciting flame wars, not only about off-topic items. It regularly happens that posts, or even users, get removed for that reason.
There are platforms with much less strict standards in that regard - yet you consider this the more agreeable platform. Maybe the reason is that it's actually nicer to have a conversation in a place where you don't have to deal with an asshole who starts yelling and insulting everyone at the table.
Think about this in real life: would you want to frequent a place where the loudest asshole gets to insult everyone present, or would you rather go to a bar where, at some point, the barkeeper steps in and sorts things out?
You're speaking of some idealized moderation system where tone and politeness are enforced, and human corruptibility does not entice moderators into censoring speech they disagree with? Yeah, sure, I'd love to be there. It's just a rare thing to find on the modern web, and it's not guaranteed to last for any length of time.
The modern web example of your bar scenario is more like this: the bartender doesn't want to hear [opposing political/societal issue opinion] at the bar and starts kicking out everyone he disagrees with. The kicked out people go start their own bar. Now there's two neighboring bars, MAGABar and LibBar; customers are automatically filtered into attending either bar by an algorithm. If you say anything that the bartender disagrees with, you're permanently banned. The fun part is that you can be permanently banned from BOTH bars if your viewpoints don't fall in line 100% with what the bartender wants to hear.
Oh and you can't go to TechBar anymore either, the bartender heard you said something critical of furries at another bar, so now you're banned and not allowed to talk about computers.
Unmoderated platforms are pretty much useless. Having said that, I'd prefer governments to stay away from either moderating or forcing platforms to moderate content.
Perhaps there's simply no middle ground between something like Voat, which turned into a gathering place of bona fide racists, and something like Reddit, which is essentially just a confirmation-bias hugbox. If there is, it's certainly not a profitable product.
Of course, there is a middle ground. You just need better and stricter moderation, like on HN, for example.
Voat turned into a gathering place for racists because Reddit at first kicked out only those people - and became more fundamentalist only after Voat was the well-known 'bad place for bad people', where 'respectable but misunderstood people' did not linger. The frog cooks slowly.
> If there is, it's certainly not a profitable product.
I think this is the main issue: that we walled up our discussion plazas to make them 'profitable products'.
I know I am a bit of an idealist here, but I miss old-timey Usenet: basically an agora where you could do the filtering yourself (with the appropriately named killfile) and which was not controlled by any one institution. I had some hope for federated systems - but these are often built with censorship mechanisms written right in, and they again give operators too much influence over what their users may or may not see.
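For anyone who never used one: a killfile was filtering on the reader's side, under the reader's own rules. A minimal, hypothetical sketch of the idea in Python (the headers and patterns are invented for illustration):

    import re

    # My personal killfile: nobody else's copy has to match mine.
    killfile = [
        ("author", re.compile(r"spammer@.*")),
        ("subject", re.compile(r"(?i)make money fast")),
    ]

    def visible(article: dict) -> bool:
        # Drop an article only if one of *my* rules matches its headers.
        return not any(pattern.search(article.get(header, ""))
                       for header, pattern in killfile)

    articles = [
        {"author": "friend@example.net", "subject": "Re: agora nostalgia"},
        {"author": "spammer@junk.example", "subject": "MAKE MONEY FAST"},
    ]
    print([a["subject"] for a in articles if visible(a)])  # ['Re: agora nostalgia']

No operator is involved; each reader carries their own rules, which is exactly the property the walled-up plazas dropped.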
> reasonable people do not want
I'm glad you're both a reasonable person, and available to identify all others, so that the set of reasonable people want the same thing.
I think it's a bit more nuanced than this.
Ultimately, authors should be held responsible for content - the government's role here is setting the laws and funding the enforcement mechanisms (police, courts, etc.), and the platform's role is to enable enforcement (doing takedowns or enabling the tracing of perpetrators).
Obviously, one of the challenges here is that the platforms are transnational and the laws are national - but that's just a cost of doing business.
However, this doesn't absolve platforms of responsibility for content if they are involved in promoting it. If a platform actively promotes content, then in my view it shifts from a common carrier to a publisher, and all the normal publisher responsibilities apply.
Pretending that it's not technically possible to be responsible for platform amplification is not the answer. You can't create something you are responsible for, have it cause harm, and then claim it's not your problem because you can't fix it.