On the Unreasonable Influence of Fringe Communities

Manoel Horta Ribeiro (@manoelribeiro)


So, one of my resolutions for 2019 was that I wanted to start blogging more. It may seem late for New Year’s resolutions, but if you are from Brazil, then you do know that the year only begins after Carnival ends.

Today I will be reviewing some recent papers and a book on the subject of fringe communities. The papers are “On the Origins of Memes by Means of Fringe Web Communities” (best paper at IMC 2018, woot) and “The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources”. The book goes by the provocative title of “Kill All Normies: Online Culture Wars From 4Chan And Tumblr To Trump And The Alt-Right”. While the book is very different from the papers, all of them analyze, in their own way, these new-age communities, which are sometimes defined by a website (e.g. 4chan or /r/The_Donald) and other times by tribes, such as incels, the alt-right, or MGTOWs.

Now is the moment where I confess that, since I did my research for my paper on hate speech (self-promotion one click away), I hate-watch a bunch of YouTube channels related to these communities. I won’t needlessly promote those channels here, but you may find (remarkably funny) commentary on them by watching ContraPoints, Three Arrows, or Shaun. I think my excitement to consume this kind of content comes from those frequent moments where you think to yourself: “how can these people go so far?”, which, considering the number of documentaries about serial killers and cults, seems rather common.

With this in mind, I bring you exhibit A: Roosh V, ex-pickup artist and overall cuckoo from the manosphere, who said, in an impossible-to-watch 15-minute video, and I quote:

I don’t think women should be educated beyond reading and writing. She only needs enough education to have children.

Given the absurdity of this quote (and I’ll assume throughout the post that these fringe communities routinely say outrageous stuff), the questions that arise are:

  1. How did we get here?
  2. What are the consequences of having such fringe communities?
  3. What does this mean for the research community?

I think the book gives a lot of insight on (1), while the papers show alarming evidence related to (2). Last, but not least, as someone interested in research on topics such as hate speech and content moderation, I ponder the implications of these things for current approaches to hate speech (and fake news) detection (3).

DISCLAIMER: Topics discussed in this article are somewhat controversial. In section 1 I (at least do my best to) represent Angela’s take on the issue. There was some controversy about the book (hit piece 1, hit piece 2, response 1). Having said that, I do have an opinion on many of these topics, and I try to make myself as clear as possible when stating my opinion by using italics. Section 3 is largely my own opinion, so I refrain from using them there.

1. How did we get here?

How did we get from the “earnest hopeful days”, where the interests of the average American were fairly aligned with mainstream artists and media, to the current disgust with anything mainstream by a large part of the population? Angela Nagle argues that three actors were involved: (1) a weird group of internauts with a particular love for transgression; (2) the rise of Tumblr-style PC identity politics among a share of users and news pieces; (3) users who followed anti-semitic, racial segregationist, and misogynistic ideologies.

Nagle’s thesis is that these three actors interacted to create what we currently have: the liberal side’s nuance-free, constant call-outs, often over things many considered absurd (e.g. “Ramen is Racist”), created a breeding ground for online mockery, often in the form of memes or YouTube videos. This produced content very critical of PC culture, but not necessarily close to fringe ideologies. In the book, Nagle gives Milo as an example of this, but in my opinion there could be many others, like Dave Rubin or Thunderf00t.

Amidst this wave of memes and jokes, the real wolves eventually arrived in the form of openly white nationalist alt-righters, such as Richard Spencer. This approximation of the two groups, united by their hatred of PC culture, was what popularized fringe ideas, such as the desire for a white ethnostate which non-whites would “peacefully” leave, or the belief that a group of globalist Jews runs the world and plots intricate plans related to Muslim immigration in Europe.

A comment to be made here is that, while Nagle often uses the umbrella term alt-right, there are many other umbrellas for groups with different levels of fringe beliefs that joined this anti-PC movement and whose views shifted increasingly toward the extreme right. Take Roosh V, for example. In 2001, he was a guy teaching other guys to pick up girls, part of a group of people named “pick-up artists”, often with manipulative and “this just sounds wrong” procedures, but something very far from someone who says women should be educated only to be fit partners. The more you look into the internet, the more cases like this you find. Stefan Molyneux, for instance, who once held weird libertarian beliefs, has started to reproduce increasingly alt-right-esque views. This transition is so common that it even received a name: the libertarian-to-alt-right pipeline.

It is hard to get a complete view of this subject in a mere blog post, but the core idea I want to get across here is that these anti-establishment web communities and information sources (e.g. Breitbart) gained power in these culture wars against PC culture. This distanced public opinion from mainstream opinion. Meanwhile, fringe communities that already existed on the internet (for example, white nationalists) flourished in these web communities (or at least in their vicinity), creating real “radicalization pipelines”, as illustrated below.

Why not make a diagram out of it?

Lastly, it is worth remembering that this process is probably best seen as a “piece in a jigsaw”, as several other phenomena likely had a lot of influence on the decline of the mainstream media and the radicalization of a large group of individuals. Things like the struggles of mainstream media to survive the online information era, the financial and health issues of white rural America, and algorithms prioritizing content that receives a lot of engagement definitely play a big role here. YouTube, for example, has been known to show videos with increasingly “extreme” opinions as one follows the recommendations.

2. Consequences of fringe communities

Now we leave aside the question of “how did we get these fringe communities” for the more practical question of “ok, these communities exist, so what are the consequences?” Before skipping to the academic papers, it is important to realize that two recent shootings can be traced back to these communities. The one which received more media coverage, in New Zealand, saw a shooter who released a 70+ page manifesto full of white nationalist references and vocabulary that is very common on such boards. The lesser known, a school shooting in Suzano, a city in the state of São Paulo, Brazil, was largely influenced by Brazilian chans, such as 55chan and dogola chan, known to carry out harassment campaigns similar to (if not worse than) those faced by female journalists and gamers in #GamerGate.

Knowing that these fringe communities can have influence in real life, let’s analyze two papers that measure their influence on online discourse. The first, The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources, attempts to measure how these communities spread alternative (and often fake) news into the information ecosystem. The researchers looked at four sources of information: Twitter, Reddit, 4chan, and news sites. News sites were labeled by the researchers as either alternative or mainstream. In this scenario, what the researchers found (and is made evident in the figure below) is that /pol/ and the selected subreddits exhibit a really high percentage of “alternative” news.

Normalized daily occurrence of URLs for alternative news. Image reproduced with permission from the authors.

Yet more interestingly, the authors go further and explore the question of who influences whom. This can be studied by tracking where a news piece appeared first, and using modelling techniques to quantify the influence of each community over another. Without getting into the gory details of Hawkes Processes, the authors find evidence that /r/The_Donald and /pol/ were responsible for around 6% of mainstream URLs and 4.5% of alternative URLs posted on Twitter. This is huuuge (pun intended) considering Twitter’s relative size. The full influence matrix can be seen below:

Mean estimated percentage of alternative URL events caused by alternative news URL events (A), mean estimated percentage of mainstream news URL events caused by mainstream news URL events (M), and the difference between alternative and mainstream news (also indicated by the coloration). Image reproduced with permission from the authors.
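To give a flavor of how such “percentage of events caused” numbers can come out of a Hawkes model, here is a toy sketch of the branching-process arithmetic (this is not the authors’ code; the community names and kernel values below are invented for illustration). Given a matrix of expected direct “offspring” events, summing its matrix powers gives the total direct plus indirect events one community causes in another:

```python
# Toy sketch of the influence computation behind a multivariate Hawkes
# model. branching[i][j] is the expected number of *direct* events in
# community j triggered by one event in community i; the real paper
# fits these kernels from timestamped URL data. Numbers are made up.
communities = ["/pol/", "/r/The_Donald", "Twitter"]
branching = [
    [0.10, 0.05, 0.04],   # offspring of an event on /pol/
    [0.06, 0.10, 0.05],   # offspring of an event on /r/The_Donald
    [0.01, 0.01, 0.10],   # offspring of an event on Twitter
]

def matmul(a, b):
    """Multiply two small square matrices (plain lists of lists)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def total_influence(a, terms=50):
    """Sum A + A^2 + ... (converges when the spectral radius of A < 1):
    direct plus indirect offspring caused per originating event."""
    total = [row[:] for row in a]
    power = [row[:] for row in a]
    for _ in range(terms - 1):
        power = matmul(power, a)
        for i in range(len(a)):
            for j in range(len(a)):
                total[i][j] += power[i][j]
    return total

infl = total_influence(branching)
for i, src in enumerate(communities):
    for j, dst in enumerate(communities):
        print(f"{src} -> {dst}: {infl[i][j]:.3f} events caused per event")
```

Dividing these caused-event counts by the total events observed in the destination community is what yields influence percentages like those in the matrix above.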

The second paper, On the Origins of Memes by Means of Fringe Web Communities (which has largely the same pool of authors), investigates a similar, yet in my opinion trickier, problem. While in the first paper they consider the impact of these fringe communities on the news information environment, in this one they consider their impact on the meme information environment. This is more important than it seems, as the power of memes was crucial in the 2016 election, and memes have recently been shown to be able to radicalize even Facebook moderators. Another very cool thing about this paper is the framework they develop to process memes (for example, calculating the similarity between two images using pHashing). The code is available here, and the pipeline is shown below.

High-level overview of their processing pipeline. Image reproduced with permission from the authors.

After collecting a bunch of memes from the (cool) Know Your Meme website, they use their pipeline to cluster memes (using pHashing and pairwise distances) and annotate them (as, for example, racist or political). With this, and using data from web communities such as Reddit, 4chan, Twitter, and Gab, they are able to get a good idea of the meme ecosystem of each of the different social networks. The meme clustering they did resulted in the beautiful image you can see below.

Meme clustering at its finest. Image reproduced with permission from the authors.
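For intuition on the hashing-and-clustering step, here is a stripped-down sketch. The paper’s pipeline uses pHash (a DCT-based perceptual hash) and pairwise Hamming distances; below I substitute the simpler average-hash cousin and a greedy grouping pass on tiny synthetic “images”, just so the mechanics (hash, distance, cluster) are visible in one place. None of this is the authors’ code.

```python
# Simplified stand-in for perceptual-hash meme clustering. Real
# pipelines hash downscaled grayscale images; here an "image" is just
# a small list of pixel rows with invented values.

def average_hash(image):
    """Bit-string hash: 1 where a pixel is above the mean brightness."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def cluster(hashes, threshold):
    """Greedy clustering: put each hash in the first cluster whose
    representative is within `threshold` bits, else start a new one."""
    clusters = []
    for name, h in hashes.items():
        for c in clusters:
            if hamming(h, c["rep"]) <= threshold:
                c["members"].append(name)
                break
        else:
            clusters.append({"rep": h, "members": [name]})
    return [c["members"] for c in clusters]

# Three 4x4 "memes": two near-duplicates and one very different image.
memes = {
    "meme_a":  [[200, 200, 10, 10]] * 4,
    "meme_a2": [[198, 201, 12, 9]] * 4,   # slight re-encode of meme_a
    "meme_b":  [[10, 10, 200, 200]] * 4,
}
hashes = {name: average_hash(img) for name, img in memes.items()}
print(cluster(hashes, threshold=2))  # meme_a and meme_a2 end up together
```

The key property, shared with pHash, is that small perturbations (re-encoding, slight edits, added captions) change only a few bits, so near-duplicate memes land within a small Hamming distance of each other.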

Lastly, in a similar fashion to the previous paper, they use Hawkes Processes to model inter-community influence in terms of memes. They find that /pol/, 4chan’s infamous “politically incorrect” imageboard, is really strong at disseminating racist memes across other communities (more so than non-racist ones, which is kind of an exception when compared to other communities). But they also find that /pol/ is not very efficient, as hundreds of memes get created and posted there and never leave. This resonates with the existing theory that /pol/ creates a survival-of-the-fittest meme engineering environment.

3. What does this mean for us as researchers?

The papers discussed give evidence that these fringe communities have a significant impact on the online news and meme ecosystems. This is a big deal for anyone studying hate speech, polarization, and misinformation online (which is a lot of people). I quickly discuss two issues that I see when considering these fringe communities alongside these important and timely research areas. Notice that here I consider the problems of detecting and fighting hate speech and fake news online, rather than characterizing them.

Adversarial Nature of Hate Speech and Fake News. The agendas which exist in these fringe communities turn the moderation of hateful and fake content into adversarial problems. There have been some interesting approaches tangentially related to this (for example, Magu’s paper trying to find code words for hate speech), but it seems to me that a more reasonable approach would try to create models that prevent this toxic content from “bleeding” from those fringe communities into the mainstream (YouTube and Twitter, for example). This would need (or at least I believe it would) constant monitoring of these fringe communities and constant updates of hate/fake detection models (as well as moderation instructions). This may sound like too much, but it is totally feasible with the amount of resources big tech companies are pouring into the fight against fake news and hate speech.
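As a hypothetical sketch of what that monitoring could look like (my own toy illustration, not a method from any of the papers), one could periodically compare word frequencies in a fringe community against a mainstream baseline and surface the most over-represented terms as candidate code words for human review:

```python
# Toy "emerging code word" monitor: score each term by its smoothed
# log-odds of appearing in the fringe corpus vs a mainstream baseline.
# Both corpora below are invented toy data, not real posts.
import math
from collections import Counter

fringe = "based kek kek honk clown world honk honk normie kek".split()
mainstream = "world news sports clown movie music world news".split()

def log_odds(target, background, smoothing=0.5):
    """Smoothed log-odds ratio of each term in `target` vs `background`.
    Higher score = more characteristic of the target corpus."""
    t, b = Counter(target), Counter(background)
    nt, nb = len(target), len(background)
    vocab = set(t) | set(b)
    scores = {}
    for w in vocab:
        pt = (t[w] + smoothing) / (nt + smoothing * len(vocab))
        pb = (b[w] + smoothing) / (nb + smoothing * len(vocab))
        scores[w] = math.log(pt / pb)
    return scores

scores = log_odds(fringe, mainstream)
for word, s in sorted(scores.items(), key=lambda x: -x[1])[:3]:
    print(f"{word}: {s:.2f}")   # top candidates for moderator review
```

A real deployment would obviously need far more than this (time windows, multiword terms, significance tests, and humans in the loop), but the point is that the signal lives in the fringe communities themselves, which is why monitoring them matters.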

From KYM: “Happy Merchant is a cartoon portraying a male Jew based on anti-Semitic views, giving it characterizations such as greed, manipulative, and the need for world domination. Mainly posted on political image-boards such as 4chan’s /pol/ and 4chan’s /new/, it is used both ironically and seriously.”

Context vs. Content. Content on social media is often racist or leads to fake or super-polarized narratives, but (and here is the catch) not to the average user. Memes are a great example of this. Take, for example, the Happy Merchant meme shown above, the 3rd most popular meme on /pol/. This simple image is used for hate speech against Jews and supports the fringe belief of a secret Jewish circle that runs the world. Yet many would argue it does not breach Twitter’s hate speech guidelines, as it is not hateful and fake by its content, but hateful and fake by its context. This problem of content vs. context has been captured, for example, by Davidson’s paper “Automated Hate Speech Detection and the Problem of Offensive Language”, where the authors show that a huge problem for textual detection of hate speech is distinguishing between hateful and offensive speech (and it is also a motivator of my paper, which considers moderating users rather than content). I don’t think there is a silver bullet for this problem, but again, I think that these fringe communities may be a great place for researchers and policy-makers to get a better grasp on this “context” and make better models and moderation instructions/pipelines.

Overall, I think my main takeaway is that research dealing with hate speech and fake news can be greatly enhanced by incorporating knowledge of these fringe communities, as they are often the source propagating fringe narratives and abusive behavior.

Written on March 20, 2019