ROTTERDAM, March 11 -- Only a fool would cheer the banning of Tommy Robinson by Facebook and Instagram.
It doesn’t matter if you like or loathe him. It doesn’t matter if you think he’s a searing critic of the divisive logic in the politics of diversity or Luton’s very own Oswald Mosley in Jack Wills clobber. The point is that his expulsion from social media confirms that corporate censorship is out of control. It speaks to a new kind of tyranny: the tyranny of unaccountable capitalist oligarchs in Silicon Valley getting to decide who is allowed to speak in the new public square that is the internet.
Robinson, having already been thrown off Twitter and Patreon, was unceremoniously cast out from Facebook and Instagram yesterday. He had one million followers. So we are not talking about some bedroom-bound imbecile who says mad things to 27 fellow losers on Twitter, but about a public figure, someone who commands an audience and enjoys political influence. His crime, in the eyes of Facebook’s and Instagram’s community-standards cops, was to nurture ‘organised hate’ towards Muslims. He used ‘dehumanising language’ and made ‘calls for violence’, the social-media giants decreed. And therefore he had to go.
There are many disturbing things about this latest act of Silicon Valley silencing of an awkward public voice. The first is the apparent involvement of Mohammed Shafiq, CEO of the Ramadhan Foundation. Yesterday Shafiq boasted about having met with Facebook representatives to encourage them to ban Robinson over his ‘brainwashing’ of his followers into feeling ‘racism’ towards Muslims. This is the same Mohammed Shafiq who once attended an event with Hassan Haseeb ur Rehman, a Pakistani cleric who praised the murder in 2011 of the governor of Punjab, Salman Taseer, by a radical Islamist who despised Taseer for his opposition to Pakistan’s blasphemy laws and his calls for the Christian ‘blasphemer’, Asia Bibi, to be released from jail. Shafiq also called for the anti-extremist campaigner Maajid Nawaz to be dumped as a parliamentary candidate for the Liberal Democrats after Nawaz committed the speechcrime of tweeting a cartoon of Jesus & Mo. All of which raises a question: why is Facebook reportedly taking advice from someone like that? Does it also meet with campaigning Christians and jot down which critics of Christ they would like to see removed from its website?
The other disturbing thing is the growing power of internet corporations to police public speech. Not content with banning the likes of Alex Jones and Milo Yiannopoulos, and feminists like Meghan Murphy who criticise the politics of transgenderism, and various white nationalists, now the social-media giants are going after right-wing political activists whose views they don't like. Alongside Robinson, it has been reported that some leading UKIP activists have had their Facebook accounts suspended. This has a very strong whiff of political censorship.

It is perverse and dispiriting to see ostensible leftists and progressives whoop with delight as corporate tech giants suspend or censor political undesirables. First, because since when was the left in favour of the exercise of property rights against people's rights? In applauding Silicon Valley's censorious rampage, these left-wing anti-fascists sound an awful lot like right-wing libertarians. They are effectively saying, 'Hey, these are private companies, so they can ban whoever they want to'. Suddenly their traditional concern with reining in the unaccountable power of big business goes out the window and they find themselves standing with the bosses against the individual.
For example, a message asking people to "share and watch" a clip could be a spambot. But it could also be a protester -- maybe one like those in Ferguson, Mo. -- asking people to spread awareness about a crime.
"We don't want to gamble on potentially silencing that crucial speech by classifying it as spam and suspending it," she said. "That means we evaluate hundreds of parameters when looking at account behaviors, and even then, we can still get it wrong and have to reevaluate."
(The full talk is 10 minutes long, but lays out the basics of Twitter's philosophy pretty well.)
Even with graphic, disturbing and violent images, Harvey said, there are a lot of gray areas. And, as my Switch colleague Brian Fung pointed out, there is no industry standard. Each company makes its own rules based on how it wants to shape its community. Facebook and Instagram, for example, have rules against nudity in photos. Twitter has no such rules -- a decision that it's made as a company to live by its assertion that, in nearly all cases, the "tweets must flow."
While companies continue to debate what they can and should do at the administrative level to stop certain kinds of images, there are some things that individual users can do to take action on their own accounts. On Facebook, for example, you can block users, apps or pages if you don't want to see the content they publish.
Twitter also lets users change their media settings themselves. Users who want to see images that could be considered "sensitive" without first clicking through Twitter's warning message can opt to do so. They can likewise mark their own media as sensitive by default if they think their pictures could upset others.
When asked why it took the images down, Twitter said it does not share information on specific cases. But it did refer reporters to its policies regarding takedown requests from the family members of deceased users. That indicates that the firm took down the photos and videos of Foley's death not because of a request from the U.S. government -- which reached out to social media sites regarding the video on Tuesday -- but because the family had asked it to, shortly after the news broke.
The policy, recently changed, outlines the guidelines Twitter follows to remove imagery of deceased individuals at the request of immediate family members -- a policy enacted after some Twitter users bullied the daughter of comedian Robin Williams off the network by sending her altered images supposedly depicting her father's corpse.
How does Twitter scrub its network of the offensive images, which tend to spread online very quickly? The process is actually low-tech. When a request is submitted, employees on Twitter's safety and legal teams review each flagged image individually, weighing the public interest and newsworthiness of each message. That could explain, for example, why some images of Foley's death were taken down immediately, while others bore only a warning label.
It may seem incomprehensible to many users that, in an age of algorithms and high technology, Twitter would remove offensive images by hand rather than by automation. Twitter and other companies such as Facebook, Microsoft and Google all use automated technology that lets them identify and flag images based on certain criteria.
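The automated identification mentioned above can be sketched with exact cryptographic hashes. Production systems such as Microsoft's PhotoDNA instead use perceptual hashes that survive resizing and re-encoding, but the lookup logic is similar. The single blocklist entry below is illustrative only (it is the SHA-256 digest of an empty byte string):

```python
import hashlib

# Hypothetical blocklist of digests of known offending images.
# This entry is the SHA-256 of b"" -- purely for illustration.
BLOCKED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def image_fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def is_blocked(data: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad digest."""
    return image_fingerprint(data) in BLOCKED_HASHES

print(is_blocked(b""))                    # True  (matches the sample entry)
print(is_blocked(b"some other image"))    # False
```

The limitation of exact hashing is also why human review still matters: a one-pixel change produces a completely different digest, and no hash can judge newsworthiness or public interest.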