People want platforms, not governments, to be responsible for moderating content

reutersinstitute.politics.ox.ac.uk

63 points by giuliomagnifico 6 hours ago


bradley13 - 5 hours ago

Several European governments have jailed people for social media posts. Many Europeans support this - they don't understand how government censorship can quickly get out of hand.

As for falsehoods: some people will be mistaken, some people will lie, and sometimes sarcasm will be misunderstood. Why should anyone be liable? It is on each individual to inform themselves, and to decide what to believe and what to disregard.

Article 19 of the Universal Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."

It doesn't say "if your opinion is approved by the government". It doesn't say "if your opinion is correct". It makes no exceptions whatsoever, and that is what we need to strive for.

jgeada - 5 hours ago

I think this might be a misinterpretation of the word "responsible".

If a platform lies or spreads malicious content, it seems people want the platform to bear the liability and consequences for the malfeasance. That is what most people mean by "responsible".

Government sets the rules, and if someone fails to comply, there are consequences for those responsible. The government isn't responsible, it is holding them responsible.

ecshafer - 5 hours ago

This survey is too vague to be worthwhile. Sure, some of the numbers of people saying "yes" are scary, but people say yes all the time to vague-sounding pleasantries. When they say "responsible", name specific actions. Should people be arrested for what they post on social media when it's an opinion? Should platforms automatically analyze all messages and remove the ones they deem untruthful? Should the platform be liable in court for falsehoods? People will answer very differently to specifics than to vagueness.

exabrial - 5 hours ago

I’d like governments to instead enforce monopoly laws and the FTC to sue for crappy business practices. I don’t want them playing speech police.

markstos - 5 hours ago

Moderation is a strength of the fediverse, because it is decentralized, with many moderators making possibly conflicting rules about relatively small amounts of content.

Moderators can block individual posts, accounts, or entire instances whose rules they find objectionable.

Don't like the moderation on some instance? Move to another.
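A rough sketch of how that layered decision could look (a toy Python illustration only; the class and field names are made up, not any real Mastodon or ActivityPub API):

    # Toy sketch of fediverse-style layered moderation (hypothetical, not a real API).
    # Each server keeps its own lists; a different server can make the opposite choices.
    from dataclasses import dataclass, field

    @dataclass
    class ModerationPolicy:
        blocked_instances: set = field(default_factory=set)  # defederate whole servers
        blocked_accounts: set = field(default_factory=set)   # silence individual accounts
        removed_post_ids: set = field(default_factory=set)   # take down single posts

        def allows(self, post_id: str, author: str, instance: str) -> bool:
            """Show a post only if no rule at any level blocks it."""
            if instance in self.blocked_instances:
                return False
            if author in self.blocked_accounts:
                return False
            return post_id not in self.removed_post_ids

    # Two servers, two policies: users pick their moderation by picking a server.
    strict = ModerationPolicy(blocked_instances={"spam.example"})
    relaxed = ModerationPolicy()
    print(strict.allows("42", "alice@spam.example", "spam.example"))   # False
    print(relaxed.allows("42", "alice@spam.example", "spam.example"))  # True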

gmuslera - 5 hours ago

There is a problem of agency here.

Assuming that we are talking about platforms for user-generated content, should the users be punished for what they post? The kind of punishment a government can impose is different from what a platform can do, and users want to feel free to express themselves. These are factors users take into account when deciding what to post.

On the other hand, what the platform does (through algorithms, weights, etc.) in selecting, prioritizing, and surfacing content from users happens at the platform level. There the government may have a role to play. And here we are talking about the platform's decisions.

There is a middle ground: coordinating with or gaming the algorithms to make your content visible, whether by individual users or by groups that in one way or another control many user accounts. There might be some government and platform involvement in this case.

em-bee - an hour ago

the conundrum is that both sides have a bad track record in moderating content. governments in the past have used their power to silence political opponents, and businesses silence critical voices and content they consider undesirable because it hurts their bottom line.

neither is acceptable.

the US made a good start by disallowing government censorship completely. europe could do the same, perhaps with a carve out for outright hate speech, and obvious falsehoods like holocaust denial. but these exceptions need to be very clearly defined, which currently is not the case.

what is missing is a restriction on private businesses that only allows them to moderate content that is obviously illegal or age restricted, or, for topical forums, off topic; for the latter they must make clear which content is on topic.

tycho-newman - 5 hours ago

We only tolerate totalitarianism in the private sector!

incomingpain - an hour ago

Here in Canada, it's sad to watch what's happening in the UK.

In the UK (like OP), they are arresting people for thought crimes. An unexpected consequence of Brexit was the loss of the free speech protection of Article 10.

Opinion polling has Labour in a steady, steep decline. Given the unprecedented attack on freedom, a presumptive decimation in the next election is guaranteed at this point. There's no future for the Labour party beyond 2029; it's absurd that they would do this to their party unless they had a plan.

You obviously don't play your cards the way they have if you're intending to hold a fair election in 2029, or one at all.

FarMcKon - 2 hours ago

This is a false dichotomy presented to people. By framing this as "giant government or giant business?" you are going to get crap answers.

None of these are one-size-fits-all solutions, and there should be a mix. We have a working patchwork of laws in physical space for a reason. It allows flexibility and adjustment as we go, as the world changes. We should extend that to virtual space as well.

Age/content labeling and opt-in/opt-out for some content. Outright bans on other kinds of content. A similar "I sue when you abuse my content" approach for copyright, impersonation, etc.

One size does not fit all, and is not how the real world works. Online shouldn't work much differently.

jmyeet - 5 hours ago

In the US, this all stems from Section 230 (of the Communications Decency Act, part of the Telecommunications Act of 1996), which provides a safe harbor for companies over user-generated content. There are some requirements for this, like a process for legal takedowns. Section 230 is generally a good thing, as it was (and is) prohibitively expensive if not outright impossible to monitor every post and every comment.

But what changed in the last two decades or so is the newsfeed, as well as other forms of recommendation (e.g. suggested videos on YouTube). Colloquially we tend to lump all of these together as "the algorithm".

Tech companies have very successfully spread the propaganda that even with "the algorithm" they're still somehow "content neutral". If certain topics are pushed to more users because ragebait = engagement, then that's just "the algorithm". But who programmed the algorithm? Why? What were the explicit goals? What did and didn't ship to arrive at that behavior?

The truth is that "the algorithm" reflects the wishes of the leaders and shareholders of the company. As such, for purposes of Section 230, it's arguable that such platforms are no longer content neutral.
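To make that concrete, here is a deliberately toy ranking sketch (hypothetical weights and field names, not any platform's actual code or values), showing that "the goals" are explicit numbers someone chose and shipped:

    # Toy feed-ranking sketch (hypothetical; no real platform's code or weights).
    # The point: "the algorithm" is a set of explicit, chosen trade-offs like these.

    def score(post, weights):
        return (
            weights["engagement"] * post["predicted_clicks"]    # ragebait scores well here
            + weights["dwell_time"] * post["predicted_seconds"]
            - weights["reports"] * post["predicted_reports"]    # how heavily is this penalized?
        )

    # Someone picked these numbers to meet a business goal; changing them changes the feed.
    weights = {"engagement": 1.0, "dwell_time": 0.01, "reports": 0.1}

    posts = [
        {"id": "calm-news", "predicted_clicks": 0.2, "predicted_seconds": 40, "predicted_reports": 0.01},
        {"id": "ragebait",  "predicted_clicks": 0.9, "predicted_seconds": 25, "predicted_reports": 0.30},
    ]
    feed = sorted(posts, key=lambda p: score(p, weights), reverse=True)
    print([p["id"] for p in feed])  # ['ragebait', 'calm-news'] under these weights

Change the weights and the feed changes; "content neutral" is really a question of who gets to choose them.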

So what we have in the US is really the worst of both worlds. Private companies are responsible for moderation but they kowtow to the administration to reflect the content the administration wants to push or suppress.

Make no mistake, the only reason TikTok was banned and is now being sold is that the government doesn't have the same control over it that they have over FB, IG or Twitter.

So a survey of what people want here is kind of meaningless because people just don't understand the question.

tiahura - 5 hours ago

Two takeaways: 1. As per headline, rejection of state censorship. 2. Platforms should be responsible for user posts.

"Q17D. In your opinion, should each of the following platforms be held responsible or not responsible for showing potentially false information that users post? Base: Total sample in each country ≈ 2000."

Around the world, approximately 70% said yes. The rub, of course, is coming up with a framework. The poll suggests that the DMCA approach of no duty is widely unpopular. However, strict liability would ensure that these industries go away, and even a reasonableness standard seems like a headache.

cptskippy - 4 hours ago

Responsibility is nice, accountability is nicer. Having to testify before Congress or pay fines amounting to a fraction of a percentage point of annual profits is not accountability.

brap - 5 hours ago

Unpopular opinion, but what about our own responsibility to choose better platforms?

micromacrofoot - 5 hours ago

I really struggle to understand why these companies think they need to function like governments when it comes to removing content from their platforms. Does it really all boil down to protecting engagement at all costs?

The men running these companies live like wannabe kings but have absolutely no backbone when it comes to taking a stance on moderating for basic decency, and they are profiting greatly off of festering trash heaps. Additionally, they're complete cowards in the face of their nihilistic shareholders.

andrewstuart - 5 hours ago

“Think tank”.

ddmma - 5 hours ago

If disinformation were treated like copyrighted content, the world would be a better place.

floundy - 5 hours ago

Actually, no, reasonable people do not want either platforms or governments to moderate content.

Who defines what "problematic content" is?

DrScientist - 5 hours ago

I think it's a bit more nuanced than this.

Ultimately authors should be held responsible for content - the government's role here is setting the laws and funding the enforcement mechanisms (police, courts, etc.), and the platform's role is to enable enforcement (doing takedowns or enabling tracing of perpetrators).

Obviously one of the challenges here is that the platforms are transnational and the laws are national - but that's just a cost of doing business.

However this doesn't absolve the platforms from responsibility for content if they are involved in promoting it. If a platform actively promotes content, then in my view it shifts from a common carrier to a publisher, and all the normal publisher responsibilities apply.

Pretending that it's not technically possible to be responsible for platform amplification is not the answer. You can't create something that you are responsible for, that causes harm, and then claim it's not your problem because you can't fix it.