A person would have to be extremely naive not to realize that social media companies like Facebook have a vested interest in limiting the amount of politically incorrect material shared on their platforms. Limiting politically incorrect and offensive content means protecting the billions of dollars at stake from advertisers.
This is relevant to Muslims since many Islamic teachings are considered offensive, distasteful, and, ultimately, politically incorrect according to the degenerate Western hegemonic culture in which we find ourselves.
The question is: Can a Muslim even preach orthodox Islam on social media?
Given the high stakes, many have speculated that social media companies take secret measures to penalize, block, and throttle content and pages that do not meet the arbitrary “community guidelines” that platforms like Facebook, Twitter, and YouTube set. There seems to be good evidence of this.
Project Veritas has obtained and published documents and presentation materials from a former Facebook insider. This information describes how Facebook engineers plan and go about policing political speech. Screenshots from a Facebook workstation show the specific technical actions taken against political figures, as well as “[e]xisting strategies” taken to combat political speech.
Also included in the documents was a presentation, authored by Facebook engineers Seiji Yamamoto and Eduardo Arino de la Rubia, titled “Coordinating Trolling on FB.” Yamamoto is a Data Science Manager, and de la Rubia is a Chief Data Scientist at Facebook. The presentation appears to describe the current actions, as well as potential future actions, Facebook can take to combat alleged abusive behavior on the platform.
Yamamoto, who is responsible for “News Feed Reduction Strategy,” also authored a post in which he said Facebook should address “…quite a bit of content near the perimeter of hate speech.” According to the Facebook insider, the “perimeter of hate speech” means “things that aren’t actually hate speech but that might offend somebody. Anything that is perceived as hateful but no court would define it as hate speech.”
The insider believes Yamamoto’s plans appear to be political in nature rather than a response to abusive behavior: “[i]t was clearly kind of designed… aimed to be the right wing meme culture that’s become extremely prevalent in the past few years. And some of the words that appeared on there were, using words like SJW… MSM… the New York Times doesn’t talk about the MSM. The independent conservative outlets are using that language.”
Yamamoto and de la Rubia’s presentation says that “troll accounts” can have their internet bandwidth limited and experience forced glitches, such as frequent “auto-logout[s]” and failed comment uploads. These “special features” would be triggered “leading up to important elections.”
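To make concrete what this kind of invisible, probabilistic degradation would look like, here is a minimal sketch. Everything in it — the flag names, the rates, the account label, the election window — is hypothetical and invented for illustration; none of it comes from the leaked documents themselves.

```python
import random
from datetime import date

# Hypothetical flags a platform might attach to an account it has
# labeled a "troll account". These names are illustrative only.
THROTTLED_ACCOUNTS = {
    "user123": {"auto_logout_rate": 0.2, "comment_drop_rate": 0.3},
}

# Illustrative "important election" window during which the
# "special features" are switched on.
ELECTION_WINDOW = (date(2020, 10, 1), date(2020, 11, 3))

def in_election_window(today: date) -> bool:
    start, end = ELECTION_WINDOW
    return start <= today <= end

def handle_comment(user: str, today: date) -> str:
    """Silently degrade service for flagged accounts near an election."""
    flags = THROTTLED_ACCOUNTS.get(user)
    if flags and in_election_window(today):
        if random.random() < flags["auto_logout_rate"]:
            return "logged_out"      # forced "glitch": session dropped
        if random.random() < flags["comment_drop_rate"]:
            return "upload_failed"   # comment silently fails to post
    return "posted"
```

The key design point, from the user's perspective, is deniability: every failure looks like an ordinary bug, and unflagged accounts (or flagged accounts outside the window) never see it.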
Ok, so this is not a baseless conspiracy theory. It would be surprising if Facebook weren’t taking these measures.
Since Trump’s surprise win in the 2016 election, the consensus has been that social media played a significant role in influencing the outcome. As some have put it, Trump rode into the White House on a wave of memes, “fake news,” and Russian bots. Denying Trump a second term means cutting off, or seriously handicapping, that avenue of public influence.
Facebook’s Mark Zuckerberg even admitted last year — after initially denying it — that Facebook played an “unfortunate” role in allowing “misinformation” to change the outcome of the election:
After the election, I made a comment that I thought the idea that misinformation on Facebook changed the outcome of the election was a crazy idea. Calling that crazy was dismissive and I regret it.
Well, leading up to the 2020 elections, the social media platforms are certainly not going to make the same mistake.
But, of course, the million dollar question is, what is “misinformation”? What is “hate speech”? What is “fake news”? And who gets to decide the answers to these questions?
Is it the liberal, Ivy League-educated Silicon Valley executives and software professionals who decide?
When I was an undergrad at Harvard, I met and interacted with many students who went on to be influential executives at Facebook and these other tech giants. Despite their other talents, I would certainly not want them making portentous decisions on these ethical questions. For one, they often were not very ethical people themselves, even by liberal standards. And then, beyond that, their understanding of ethics on the theoretical level verged on the nihilistic. Many took a relativistic approach to right and wrong and viewed religion as nothing more than cultural identity, to be respected for “diversity reasons” but offering not much else. Their attitudes reflected their own socio-economic backgrounds as well as what they were getting from their Harvard classes. Harvard professors, after all, are not known for their respect for religious sensibilities, much less moral conservatism.
So, how would such people view Islam?
I don’t need to speculate because I saw their reactions when I talked to them in person. They not only viewed traditional Islamic principles as retrograde and hateful, they viewed them as representative of precisely what was wrong with the world.
- Sex outside marriage
- Religious pluralism
- Elimination of gender roles
- Progressive morality
On these issues and more, Islam is directly opposed to the ethical sensibilities of the liberal, post-religious elite, who have their fingers on the dials of these platforms. And these platforms unfortunately have become the primary ways most people online get their information.
But these companies are not going to admit to censoring Islam. They describe it as countering “extremism.” Only “extremist,” “hateful” Muslims say that man-on-man sex is an abomination. Only “intolerant” Muslims think that men and women have distinct gender roles. Etc.
Muslims who are going to contravene Islam and adopt the liberal orthodoxy have the green light. Muslims who are willing to avoid these topics altogether and preach cotton-candy “Islam” have the green light. They aren’t threatening liberal sensibilities and are, therefore, not “offensive” in the way those extremist Muslims are, so their “dawah” is 100% kosher. In fact, the kosher Muslims are a useful prop because the tech giants can point to them and say, “See! We value diversity!”
Google has been the most explicit in its “fight” against Muslim “extremism.” Since 2016, the company has been open about how it manipulates search results to bury “extremist” content. These algorithms even extend to YouTube.
In its continued fight against terrorist video content, YouTube announced it has rolled out a new search feature based on the Redirect Method technology designed by the Google tech incubator Jigsaw.
According to the announcement, YouTube will now display a playlist of videos aimed at debunking “violent extremist recruiting” content when people search for certain keywords.
The announcement did not include specifics on what the “certain keywords” are, but Jigsaw’s site covering its Redirect Method project listed the following statement explaining how it worked with Moonshot CVE (an initiative that uses data to counter violent extremism messaging) to determine relevant keywords:
For the English campaign, Moonshot CVE created 30 ad campaigns comprising 95 unique ads and over 1,000 keywords. The keyword generation was focused on terms suggesting positive sentiment towards ISIS.
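The mechanism described in the announcement can be sketched in a few lines: certain search queries trigger a curated counter-messaging playlist alongside the normal results. The keyword list and playlist ID below are placeholders of my own, not Google's or Moonshot CVE's actual data.

```python
# Placeholder keyword list standing in for the unpublished ~1,000-term
# list, and a placeholder ID for the "debunking" playlist.
REDIRECT_KEYWORDS = {"join isis", "isis recruitment"}
COUNTER_PLAYLIST = "playlist_debunking_extremism"

def search(query: str, normal_results: list) -> list:
    """Prepend the counter-messaging playlist when a query matches."""
    if query.lower().strip() in REDIRECT_KEYWORDS:
        return [COUNTER_PLAYLIST] + normal_results
    return normal_results
```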
What makes no sense is this: if the YouTube algorithm can identify ISIS videos that encourage terrorism, then it should be easy to ban the accounts associated with those videos. Why bother with redirects and video suggestions?
What is more likely is that these algorithms cast a massive net, picking up anything and everything that even borders on what government protocols consider “extremist.” This is suggested by the fact that over 1,000 terms are monitored. How are there 1,000 terms indicating “positive sentiment” towards ISIS? What would these terms be? What does “positive sentiment” even mean? If someone posts a video about the importance of the khilafa in Islam, does that count as “positive sentiment” towards ISIS? How about a video about the wisdom of the Sharia?
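The over-matching worry can be made concrete with a toy version of keyword-based flagging. The term list and matching logic here are my own invention — the real list has never been published — but they show why naive term matching cannot distinguish a scholarly lecture from recruitment propaganda.

```python
# Invented sample of monitored terms; the real ~1,000-term list
# referenced by Moonshot CVE has not been made public.
FLAGGED_TERMS = {"khilafa", "caliphate", "sharia", "jihad"}

def is_flagged(title: str) -> bool:
    """Flag any title containing at least one monitored term."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    return not words.isdisjoint(FLAGGED_TERMS)

# A neutral academic lecture trips exactly the same filter as
# recruitment media, because the filter only sees vocabulary,
# not intent or context.
```

Any account teaching classical fiqh or Islamic political history would share much of its vocabulary with the flagged set, which is precisely the over-breadth the questions above point to.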
It is easy to see how many aspects of orthodox Islam would get effectively banned because of this algorithmic manipulation.
The kosher Muslims, however, don’t have to worry. Well, at least as long as the tech giants don’t expand their definitions of hate and extremism. Who says that teaching about hell-fire is not hateful and offensive? Who says that encouraging Muslim women to dress modestly is not the most vile victim shaming? Who says that preaching about abstaining from fornication is not causing psychological distress and trauma?
When the standards change, the kosher Muslims will have to reduce the scope of their dawah to keep their precious kosher status.
So What Can Practically Be Done?
Nothing other than dua. As long as orthodox Muslims are not powerful enough to influence these companies or change public consciousness, there is nothing that can directly be done. The last thing we should do is make “free speech” arguments. Free speech is not our value because free speech is an incoherent concept. See here for an explanation:
Beyond sincere dua for success in this life and the next, Muslims should work to create their own platforms for communication and the mass dissemination of ideas. This is a tall order, but the first step is for Muslims to recognize that their days on mainstream platforms are numbered. If they don’t (continue to) evolve into liberal, perennialist, LGBT-loving feminists, these platforms are not going to tolerate them.
And we should always remember that Allah’s message does not need any of us nor any platform. The Truth can never be suppressed alhamdulillah.