LibertyNews.com is no stranger to Facebook’s censorship of content related to radical Islam. I’ve personally been locked out of my Facebook account three times now for posting images that someone, somewhere deemed inappropriate for sharing on Facebook. The last ban lasted a full week, and Facebook gave me no way to communicate with them, no appeals process and no real explanation of how they came to block me. What kind of content did I post that Facebook censored? The last time, it was an image pointing out that a certain percentage of Islam is dangerous, radical and a threat to the world. It was a 100% accurate image and its meaning could not be disputed. Of course, Facebook always manages to find some sort of gray area in this type of scenario, allowing it to say a religion might be offended and therefore the content violates their terms. Never mind that I’ve NEVER seen something that offends Christians removed or censored. That’s a whole other story. Another instance involved a pro-life post I made. What was the headline?
A new example that just arrived takes the censorship to a whole new level. Check out the post below.
The above might be offensive to some, but it’s hard news. This is not some sort of satire or cartoon. These are news clips/links that point to real-world events that are actually occurring. So Facebook is now actively censoring real news when it relates to actual acts of terrorism committed by Muslims.
In previous stories I’ve written about this, I’ve pointed at what I believed to be the “automatic” process that most Facebook users assume exists. That is, an algorithm that, when a certain number of users (I call them Facebook user mobs) report a status update, triggers the automatic removal of the content and a temporary block of the user who posted it. It’s a system that, at the time, I thought allowed Facebook some wiggle room when it comes to the question of censorship.
That has all changed now, however, because we now know there is no automation of the censorship process. In fact, it’s all 100% manual, meaning that every single flagged piece of content is reviewed by a real employee of Facebook. Which also means that Facebook is manually censoring content.
Dublin is Facebook’s most important headquarters outside California. The Community Operations team based here does not just cover Europe, but also examines reports sent in by millions of users across the Middle East, Africa and large parts of Latin America. In the words of Sonia Flynn, the managing director of Facebook Ireland, they are “the front line between Facebook and the people who use Facebook”.
She adds that while the “vast majority” of reports received require no further action, when a serious concern is raised the team needs to act quickly and decisively. For this reason, a Community Operations person covering Spain cannot simply get away with speaking fluent Spanish – they must also have a good cultural knowledge of the country. Forty-four different nationalities are represented in the Dublin team alone.
“We put emphasis on hiring people from the different countries with the right language expertise and cultural understanding,” says Flynn. “When someone creates a piece of content – whether it’s a photo or a comment – there’s what’s said and what’s meant. That’s why it’s really important for us to have people who understand not just the language, but the culture of the country that they’re supporting.”
However, the company is keen to stress that every single report of abuse is read and acted upon by a human being, not a computer – a fact that might surprise most users. The system is constantly monitored by staff based across four time zones in California, Texas, Dublin and Hyderabad in India, so there is never a “night shift” with fewer staff on hand.
When a user clicks “report”, the report is graded for its severity and routed to the right team. “If there’s a risk of real-world harm – someone who is clearly cutting themselves, or bullying, anything touching child safety in general, any credible threat would be prioritised above everything else,” says Julie de Bailliencourt, Facebook’s safety policy manager for Europe, the Middle East and Africa.
In other words, Facebook content does not get censored without an employee approving the censorship, which confirms that Facebook is intentionally censoring content.
Facebook is a company, not a government. It can do as it pleases. But Facebook’s CEO has promised, time and again, that it does not censor content based on ideological disagreements. Clearly this is a lie. And clearly Facebook’s employees would prefer that a lot of beliefs and ideas just remain hidden from public view.
Which is very, very dangerous in what many like to consider a free world.