4
Manufacturing Consensus
How Algorithms Create the Illusion of Popularity
and How Media Can Break the Vicious Circle
Why is it so difficult to ‘crack’ the social media algorithms, and how do propagandists manage to do it? How does information laundering work on the internet? What is the danger of the illusion of popularity? Why do social media platforms fail to identify bots? Why is passive fact-checking not very effective? Can an ‘OS’ be ethical? Why are WhatsApp and Telegram groups becoming some of the most dangerous channels for spreading disinformation?
Samuel C. Woolley, Photo Credit: Town Hall Seattle
The answers to these questions are based on an interview with Samuel C. Woolley, a professor focusing on emerging media technologies and propaganda in the School of Journalism and the School of Information, both at the University of Texas (UT) at Austin, and program director of propaganda research at the Center for Media Engagement at UT.

His latest book, “The Reality Game: How the Next Wave of Technology Will Break the Truth,” explores the ways in which emergent technologies — from deep fakes to virtual reality — are leveraged to manipulate public opinion, and how they are likely to be used in the future.
Social media systems have been dressed up as somehow objective. Yet, Woolley said, even as the companies behind them insist that they are not the arbiters of truth, they hide behind their algorithms, and these algorithms dictate the information that people see online.
It is a mathematical process, and it is somehow objective. However, we have quickly learned and know that these algorithms are curatorial processes and make decisions based on human behaviour

— Samuel Woolley.

Social media algorithms are also created with the particular values and biases of the people who built them. They have proven to be highly subjective and sometimes produce racist or politically biased results. That is a crucial part of computational propaganda, the term Woolley uses for the use of automation and algorithms to manipulate public opinion.
Nano Influencers
and Information Laundering
There are many ways to manipulate the algorithms and push information or news that is made to look popular but, in reality, is not. Woolley calls this strategy manufacturing consensus. Various tools can be used to create the illusion of popularity for a story, an idea, or a politician.

We generally understand how so-called nano influencers operate. These are small-scale influencers with under five thousand followers; they are paid some money or given access to a campaign or candidate, and then spread messaging about a particular issue in a coordinated fashion.

This has led to legitimate users with followings, sometimes quite local ones, working on behalf of a politician or political campaign and building connections with the people they talk to.
They know the people in their community geographically, but also in highly niche demographic terms, and that is beneficial: you are able to target demographics based on the influencers you hire. More than that, they have a relational impact, what we call relational organizing in marketing. But the issue is that oftentimes they do not disclose that they are being paid. So it is another way of manipulating public opinion through this manufacturing of consensus, making things look more popular than they are. They are not actually popular; they are just being paid to look popular.
In this case, we are talking about real users who act unethically.

Bots are different. If you are one person on social media, you have one voice. But if you are one person able to coordinate five thousand other accounts — be they bots, sock puppet accounts, or otherwise — you have much more influence. You are able to spread your content much more widely. It is a network effect.

With bots, this amplification can reach the tens of thousands; Woolley said he has seen fifty thousand fake followers used in the past.

According to him, there is often a misconception that these bots are built to converse with people in order to change their political perspectives, that they are AI bots.

In fact, the bots are mostly built to talk directly to the algorithm, to manipulate these curatorial processes and convince the algorithm that something is more popular than it actually is.
Suddenly, you have fifty thousand bots tweeting about a particular hashtag or sharing a particular story. To the algorithm, that gives this story the illusion of popularity through various mechanisms. In many, many circumstances across the world, we have seen situations in which the social media companies then re-curate that content as a trend on their website, as if it were organically, or truly, popular content amongst the people. In fact, it is not

— explains Woolley.
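To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of a volume-based "trending" signal. It is not the algorithm of any real platform; the scoring formula, hashtags, and account names are illustrative assumptions. It only shows why a curatorial process that rewards raw posting volume cannot tell five thousand coordinated accounts apart from five thousand genuinely interested people.

```python
# Toy "trending" score: NOT any platform's real algorithm, only an illustration
# of how raw volume rewards coordination.
from collections import Counter

def trending_scores(posts):
    """posts: (account_id, hashtag) pairs seen in one time window.
    Ranks hashtags by post volume weighted by distinct posting accounts."""
    volume = Counter(tag for _, tag in posts)
    accounts = {}
    for account, tag in posts:
        accounts.setdefault(tag, set()).add(account)
    return sorted(
        ((tag, volume[tag] * len(accounts[tag])) for tag in volume),
        key=lambda pair: pair[1],
        reverse=True,
    )

# 300 real users post once about #localnews (organic interest)...
organic = [(f"user{i}", "#localnews") for i in range(300)]
# ...while 5,000 coordinated accounts push #paidtrend (manufactured consensus).
coordinated = [(f"bot{i}", "#paidtrend") for i in range(5000)]

print(trending_scores(organic + coordinated)[0])  # ('#paidtrend', 25000000)
```

Run as written, the manufactured hashtag outranks the organically popular one, which is exactly the "illusion of popularity" that a platform may then re-curate as a trend.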

What happens there is what some refer to as information laundering, Woolley said. Information passes through different circles and circuits, people, and processes until it suddenly looks more popular to regular people on the ground across the U.S., the UK, or Ukraine, who in turn come to believe it is more popular than it actually is.

The intention all along has been to create a so-called bandwagon effect: to get people to say “oh, my friends are doing this,” or “this looks legitimate,” or “this somehow is popular, so I should probably do this.” Psychologically, this is a fairly effective strategy, said Woolley, and that is how the process works in computational propaganda. It is a “game of cat and mouse.”

Woolley noted that Ukraine has also had a problem with both bot- and human-organized manipulation for a very long time, a problem that spread from Russia, he said. “A few years ago, there was an analysis of Russia saying that over half of the accounts on Russian Twitter are actually bots.”
We have spoken to well over a hundred people who work in this industry, building these kinds of tools across the world. They are constantly testing and inventing new strategies for getting their content out there. It might be a change as simple as figuring out how the algorithms on Facebook detect automation

— Woolley.

For example, according to Woolley, one cannot post on Twitter or send a message on Facebook more frequently than every minute. When the people behind the bots find out that this is the case, they make the bots post or tweet every one minute and one second. They keep up to date with the changes happening on the social media platforms, because the platforms and their policies are, across the board, “fragmented and different.” This means that certain strategies are more or less applicable to different platforms.
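The “one minute and one second” trick also suggests what detection can look for. The sketch below uses entirely made-up thresholds, not any platform’s real policy or detection system: it flags accounts whose gaps between posts are both suspiciously uniform and hugging an assumed 60-second rate limit, a rhythm human posting rarely has.

```python
# Made-up thresholds for illustration only, not any platform's actual policy.
from statistics import mean, pstdev

RATE_LIMIT_S = 60      # assumed minimum gap between posts on the platform
MARGIN_S = 5           # "just above the limit" band
MAX_JITTER_S = 2.0     # humans rarely post with near-constant spacing

def looks_automated(timestamps):
    """timestamps: sorted posting times (seconds) for one account."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 10:                       # too little data to judge
        return False
    near_limit = RATE_LIMIT_S <= mean(gaps) <= RATE_LIMIT_S + MARGIN_S
    too_regular = pstdev(gaps) < MAX_JITTER_S
    return near_limit and too_regular

bot_times = [i * 61 for i in range(100)]     # posts every 61 seconds, nonstop
human_times = [0, 40, 400, 1300, 5000, 5200, 9000, 14000, 20000, 30000, 31000]
print(looks_automated(bot_times), looks_automated(human_times))  # True False
```

Of course, as soon as a signal like this is deployed, operators add jitter to their schedules, which is the cat-and-mouse dynamic Woolley describes.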

Those who use bots to game the algorithms have an advantage over regular users and the media. Media outlets also try to understand the algorithms, but they do not have the resources to test ‘the keys’.

Woolley insists that a more sophisticated use of bots is becoming increasingly apparent. The bots are built to spread messaging on a more flexible time schedule and do less regurgitation of the same messages. With advances in AI and the ability to train these bots to speak more effectively, they actually sound more human. Such bot accounts are increasingly harder for the detection algorithms on the company side to catch.

There are also clever ways for journalists to use bots as social scaffolding, or in an infrastructural role, to connect people to one another. But journalists and journalistic organizations do not want to become part of computational propaganda in the sense of just spreading spam, Woolley said.
‘Hate Office’ In WhatsApp
Woolley’s primary project focuses on closed networks and encrypted messaging applications — WhatsApp, Viber, Line, Signal, Telegram — and looks into the way mis- and disinformation and propaganda spread in those systems.

Woolley said that discussions need to happen about how this content spreads over encrypted channels, too. There cannot just be a focus on Facebook and Twitter because, although they make up a massive portion of the information ecosystem, they do not make it up entirely.

Propaganda campaigns waged across Facebook and Twitter often start on encrypted chat applications such as WhatsApp or Telegram. These apps can be very useful spaces for incubating hate and propaganda.

Yet so far researchers do not have special tools to gather data at scale from encrypted messaging apps, because the apps are built to be private. Therefore, Woolley’s team has begun a thorough study of encrypted media spaces like WhatsApp through qualitative investigative methods, field research, and journalistic methods like open-source intelligence.

They pay particular attention to countries such as Brazil — a few years ago hailed as one of the most exciting emerging democracies, much like India and the other BRICS countries.
Activists use those systems to communicate democratically, without fear of a regime getting hold of their content. But simultaneously — particularly in places like India and Brazil, but also around the world — leaders and governments, as well as other groups, have in the last several years started to run coordinated disinformation, manipulation, and trolling campaigns via encrypted spaces.
A lot of political strongmen are emerging in places that looked to be promising democracies, and they feel a need to suppress ideas that are democratic or oriented towards human rights. These people in power are illiberal or are toying with authoritarianism, and so they do not mind using these tools, said Woolley.
Brazil also has a massive problem with Jair Bolsonaro. They have built this internal, inter-state and state-government structure; Brazilians call it the office of hate. It runs the sort of organized trolling campaigns, led by humans and bots, that try to attack opponents. The Philippines, under Duterte, is another example

— adds Woolley.

These campaigns mostly do two things: they either amplify or suppress particular information, with the aim of pushing a particular idea.

This was apparent in China with the 50 Cent Army, which mostly comprised humans. Such campaigns may also suppress information by spreading tons of spam, noise, and garbage so that people do not want to tune into a particular hashtag or follow a conversation, and users grow apathetic. The 50 Cent Army is also used to attack people, suppressing ideas by going after journalists in a coordinated way. That is another major cause of concern.

Coordinated attacks on journalists are another tool for suppressing ideas.

The media and journalists play a crucial role in combating these issues, according to Woolley. When these problems first emerged in the United States in 2016, most journalists did not report on them. There was no widespread knowledge of what was happening. Nowadays, there is much more knowledge internationally.
If we look to cases like Ukraine, or like Mexico, we can learn a lot because this has been going on in Ukraine for quite a bit longer than it has been in the United States. There is a lot to learn, and the media can do a lot to teach the public.
The Way Out: More Empathy
Instead of Passive Fact-Checking
There has also been the emergence of international fact-checking, with many journalists taking on the role of fact-checking content and communicating directly to users that the things they are spreading are oftentimes false.

Facebook, for instance, has had partnerships with the Associated Press and Snopes in the U.S., as well as with organizations in other countries that have set up active fact-checking initiatives.

One of the things the media need to do, according to Woolley, is to get more creative with their fact-checking methods. One of the problems with fact-checking and targeting people who are likely to spread disinformation or conspiracy content is that those people are not very likely to buy into a top-down fact-check from a news-making organization. They are likely to think this organization is somehow controlled by the government, the U.S., or similar. The research shows that the approach to fact-checking and media literacy needs to change.

Part of this has to be people starting to think about empathy, and the media working toward more connections between people: making space for discussion and deliberation, and for understanding people who might have different political beliefs.
A lot of it is an emotional wedge, a manipulative wedge that has been driven into people — whether by Russia, Ukraine and others, or by various far-right political groups in the United States.
After studying this subject for more than ten years, Woolley thinks one of the big takeaways is that passive fact-checks are potentially beneficial in the short term, in some ways, and to some people. But a longer-term view has to be taken, and that longer-term view has to consider ways to better engage readers and bring them together.

Up until very recently, algorithms on social media were content-agnostic: they prioritized just as many fake stories as they did real news stories. Yet political scandals in the U.S. have forced tech companies to change that approach.

In his book “The Reality Game,” Woolley argues that social media companies have to design their algorithms with different priorities, such as human rights and democracy, in mind.

They should not be optimized to force and manipulate people into spending as much time as possible on the platform and having ads sold to them.

They need to optimize for the user's well-being — a very broad thing.
Together with Jane McGonigal, a U.S.-based researcher and author, Woolley co-created a toolkit called The Ethical OS — a system for technology designers to read through and think about the potential risk zones for early-stage technology as they build them.
The Ethical OS Toolkit Overview
There are eight different risk zones, talked through with provocative questions that show people how a tool could be misused for racism and other harmful agendas, Woolley explained. There are also 14 scenarios designed to spark conversation and stretch the imagination about the long-term impacts of what tech companies are building.
Additionally, the RAND Corporation keeps a list of vetted tools on its website.

These tools can be used to fight disinformation. They are apps or plugins that may identify bots, certify vetted content, check whether a video is genuine, or better analyze metadata.
However, part of the problem is that many of these tools only exist in English, and they only work on one platform or another: for instance, they will be built just for Twitter and just in English. Thus, Woolley says, tools should be built in other languages and for multiple platforms, because this is an international problem.
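As an illustration of what such a bot-identifying plugin might do under the hood, the toy heuristic below scores an account from public metadata alone. The features, weights, and threshold are assumptions made for the sake of the example, not the model of any real tool; one advantage of metadata-only signals like these is that, unlike text-based checks, they are not tied to English or to a single platform.

```python
# Toy heuristic: features, weights, and threshold are illustrative assumptions,
# not the scoring model of any real bot-detection tool.
from dataclasses import dataclass

@dataclass
class AccountMeta:
    age_days: int
    followers: int
    following: int
    posts_per_day: float
    has_default_avatar: bool

def bot_score(a: AccountMeta) -> float:
    """0..1 score; higher means more bot-like (language- and platform-agnostic)."""
    score = 0.0
    if a.age_days < 30:
        score += 0.25                       # very young account
    if a.following > 0 and a.followers / a.following < 0.05:
        score += 0.25                       # follows many, followed by few
    if a.posts_per_day > 100:
        score += 0.30                       # inhumanly high posting rate
    if a.has_default_avatar:
        score += 0.20                       # no profile customization
    return min(score, 1.0)

suspect = AccountMeta(age_days=12, followers=8, following=900,
                      posts_per_day=250, has_default_avatar=True)
print(bot_score(suspect))  # 1.0 with these toy weights
```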

Many social media companies are beginning to expand these efforts internationally, but many are not; for instance, there may be a major and overwhelming focus on the UK and North America. Scaling teams in other countries requires cultural context: going into those countries and doing the work there. That would mean hiring people from Ukraine or the Philippines who know what is going on politically and socially.

According to Woolley, Facebook has received an undue share of the criticism, whereas Google, YouTube, and many other products — many of them search products — also bear a lot of responsibility and have a lot to answer for.

Another focus should be on underrepresented or marginalized groups, who are oftentimes disproportionately or primarily targeted by these manipulation campaigns.

Today it is not enough just to report on the issues that minorities or marginalized communities are facing; it is also important to work with them and understand what they need the most and what they most hope to be told.

In recent years, the media have devoted a lot of attention to the danger of disinformation. Now it is time to test practical solutions. Since computational propaganda depends on technology, the media need to understand that technology better.

To understand how things could be improved, Woolley also recommends the book “Design Justice” by Massachusetts Institute of Technology scholar Sasha Costanza-Chock, which makes the point that many of these communities are already coming up with unique and creative ways of combating the problems they face, including informational and disinformation problems. By working with these communities more directly — particularly at a local level — their voices can be heard much more effectively, said Woolley. One needs to work with them, hear what they want, and give them airtime.

Thus technology is not only able to “break the truth” (as the title of Woolley’s book says); it could also be able to “break injustice.”
The article was prepared with the support of the Alumni Leadership Action Projects Program of the German Marshall Fund of the United States. The views of the author do not necessarily reflect the views of the Foundation.