Facebook endorses Terra Nullius

mostlysignssomeportents:

The people who perpetrated genocidal settler colonialism needed a way to square their slaughter and theft with their conception of themselves as moral actors. They settled their consciences with the doctrine of “terra nullius.”

https://locusmag.com/2019/03/cory-doctorow-terra-nullius/

Terra nullius - empty land - is a variation on Locke’s labor theory of value, the idea that the only thing you can be said to truly own is your body and its labor, and when you blend your labor with natural resources, the finished product is yours.

Locke is a key grifter thinkfluencer, because at the core of Locke’s theory is the idea that there are natural things that no one else is using for you to come along and pick up and blend with your labor to turn into property.

Inevitably “stuff no one is using” turns out to be a fancy way of saying “stuff that is widely used by people I consider to be subhuman.”

So when settler colonialists arrived in Australia, they declared it to be empty land, and the people who’d lived there for as long as behaviorally modern humans have existed to be non-persons.

This relegation to subhuman status created the conditions for genocide, enslavement, torture, rape, and more.

The Australian establishment’s means of escaping this legacy is to simply pretend it doesn’t exist.

That’s why Australian PM (and noted piece of shit) Scott Morrison responded to the Black Lives Matter protests in Australia by flatly stating that there was “no slavery in Australia” and accusing protesters of not being “honest about our history.”

https://www.theguardian.com/australia-news/2020/jun/11/was-there-slavery-in-australia-yes-it-shouldnt-even-be-up-for-debate

Morrison’s claims are easily refuted. For example, the State Library of Western Australia has an 1896 image of enslaved Aboriginal people in neck-chains, outside Roebourne Gaol. This image was posted to Facebook as part of the discourse about Australia’s history of slavery.

But it was immediately removed by Facebook’s nudity filter, a fully automated machine learning system that enforces the system’s “community standards.” The user who posted it had his account restricted in punishment for violating these standards.

https://www.theguardian.com/technology/2020/jun/13/facebook-incorrectly-removes-picture-of-aboriginal-men-in-chains-because-of-nudity

Facebook removed 39.5m “nudity” images in the first quarter of this year. At that rate, you won’t be surprised to learn that more than 99% of these removals were fully automated and untouched by human hands.

It’s true that FB is largely free from nudity, and this fact is often cited by advocates of other kinds of filtering - say, copyright filtering - as evidence that FB COULD block the content they object to, but it chooses not to.

In a sense, those critics are correct. FB has demonstrated that it is willing to accept immense collateral damage from its filtering - whether that’s censoring survivors of terrorist atrocities in the name of filtering out “extremist content.”

Or blocking images that prove the genocidal history of a nation at a moment when its leadership is denying that history even exists.

FB doesn’t intend to block this speech, but it knows this “overblocking” is inevitable when it turns over moderation to automated filters.

By its actions, FB is telling us that this is an acceptable price to pay.

But that doesn’t mean that filtering on broader criteria - harassment, profanity, libel, copyright infringement - is just more of the same. These categories are FAR broader than “nudity.”

What’s more, the consequences of overblocking are far more damaging to the purpose these filters are supposed to serve. A copyright filter that is supposed to protect artists and then goes on to censor artistic work that’s mistaken for infringement HARMS artists.

An “extremist content” filter that is supposed to protect us from terrorist violence and then goes on to block the images and stories of survivors of that violence literally adds insult to injury.

An anti-harassment filter that blocks the discussions of harassment targets who describe the words used to harass them helps harassers, not their victims.

FB got 2.5 million takedown appeals in Q1/2020, and restored 613,000 pieces of content. Even if you accept the dubious claim that FB’s human reviewers got every one of those calls right, that’s 613,000 acts of illegitimate censorship.

As for the photo of the enslaved Aboriginal people, it was restored too – after The Guardian’s Josh Taylor asked FB embarrassing questions about the removal.

“Getting reporters to take up your cause” is not a scalable solution to errors in mass automated filtering.

FB can’t moderate at scale. No one can. Adding filters “works” in the sense that you can block most “bad content” if you don’t care how much good content gets blocked by mistake alongside it.

And the fallout from this overblocking is not evenly distributed. Not only are some disfavored minorities (sex workers, queer people, people of color) more likely to have their discussions censored.

They’re also less likely to have access to reporters who’ll embarrass FB into taking action.

The answer isn’t to lard FB with more censorship duties for it to fuck up even worse - it’s to cut FB down to size, to a scale where communities can set and enforce norms.

Because the problem with FB isn’t merely that Mark Zuckerberg is uniquely unsuited to making decisions about the social lives and political discourse of 2.6 billion people.

It’s that NO ONE is capable of doing that job. That job should not exist.

PS: Scott Morrison retracted his no-slavery claim:

https://www.theguardian.com/australia-news/2020/jun/12/scott-morrison-sorry-for-no-slavery-in-australia-claim-and-acknowledges-hideous-practices

PPS: if you try to post about this to FB, your post will be blocked for “nudity”.

https://twitter.com/AbyDarling/status/1271546216501768192