TikTok Censorship and Algorithmic Justice

The invisible hand of algorithmic authority

TikTok remains a cultural juggernaut, refining the power of its algorithm and expanding the automated tools that make creating on the platform as easy as possible.

However, the spectre of censorship always looms over TikTok, both anecdotally and systemically.

For example, there is always talk among creators of their content being removed or censored. This is a symptom both of creators deliberately pushing boundaries and of an automated moderation system that is arbitrary and inconsistent.

The algorithm giveth, and the algorithm taketh away. That is the dual nature of algorithmically generated attention. If a creator is average, the algorithm doesn’t reward them. If a creator pushes the bounds of what is acceptable, and therefore of what gets attention, the algorithm will initially reward them, until they cross an invisible and otherwise undetectable line, at which point the algorithm can be brutal and merciless in administering justice.

While there are ample examples of this, the following, from researcher and pole-dance instructor Dr Carolina Are, has some obvious irony:

“It seems to me that TikTok is the nastiest place on social media for trolls and misogynists” — groups she says include kids and older women — “and all it takes is a couple of misleading reports or flags for your video to be taken down.” She adds, “Once again, this seems like an obvious attempt from social media platforms to make their spaces ‘safe’ from nudity and sexuality, where the actual threat comes from abuse, harassment, and disinformation.” Are has previously written about just this issue for an academic journal.

“You have zero communication with the platform to understand what’s going on as a user. Other pole dancers and instructors said the same — some were even deleted” from the app, she says. “Right now I’m speaking to you about this because I’m a researcher, and this is ironic. But if you have no access to media, or any other platform to share your content on, your livelihood and voice are gone.

“It’s also frankly quite appalling that in the war against ‘dangerous’ content, a video of my ass gets demoted quicker than, say, extremism or trolling,” she says. “Priorities!”

This isn’t the first time TikTok has banned someone researching the platform’s efforts to keep users safe. In October, Eva Fog Noer, a Danish child safety expert, was permanently banned from TikTok for multiple purported violations of the app’s community guidelines. After a reporter approached TikTok, Fog Noer’s account was reinstated.

That too was the case for Are. Input contacted TikTok on Friday for clarification as to why her account was suspended; today, it was reinstated. The social media company declined to comment, but Input understands that TikTok reviewed Are’s account, reinstated videos that had been wrongly removed, and lifted the temporary ban.

This raises the larger question of why we put up with the widespread injustice and tyranny that masquerade as content moderation policies on digital platforms.

It’s inhuman and inaccurate, and it projects values that are toxic and not reflected among platforms’ diverse range of users.

In this case, as in many others, a creator’s content is removed not because it violates any policy but because it upsets enough people that their reports trigger an automated response. Those automated responses are sometimes overruled or overturned, but not always, and only after considerable effort that can be exhausting for a creator.
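The report-triggered dynamic described above can be sketched as a simple threshold rule. This is a hypothetical illustration: TikTok’s actual signals, thresholds, and review flow are not public, and the names here are invented.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    reports: int = 0
    removed: bool = False

# Hypothetical threshold: enough reports trigger an automated takedown,
# regardless of whether the content actually violates any policy.
REPORT_THRESHOLD = 5

def register_report(video: Video) -> None:
    """Count a user report; auto-remove once the threshold is crossed."""
    video.reports += 1
    if video.reports >= REPORT_THRESHOLD and not video.removed:
        video.removed = True  # automated response, no human review yet

def appeal(video: Video, human_review_passes: bool) -> None:
    """Reinstate only if the creator appeals and a human overturns the bot."""
    if video.removed and human_review_passes:
        video.removed = False
```

The asymmetry is the point: removal is automatic and instant, while reinstatement requires the creator to notice, appeal, and win a human review.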

Assuming they even have access to justice in the first place.

It is certainly encouraging that there is a growing range of activists and advocates who try to pressure these companies into doing the right thing.

Of course, long term, pressure is not sufficient. We deserve rules that prevent these companies from having discriminatory policies or content moderation models that are as inaccurate, hypocritical, and stupid as the current ones.

In my paper “A corpo-civic space: A notion to address social media’s corporate/civic hybridity”, published in First Monday, I argue that social media governance needs to address platforms’ hybrid nature: that of being both a corporate and a civic space. Social media may have been created and owned by private companies, but they perform civic space functions too – like a shopping mall, or like the mainstream media. As a result, the freedoms and rights of those who exist and interact in those spaces need to be protected.

As I wrote in a recent piece for The Conversation, seeing social media as “corpo-civic” spaces would mean applying international human rights standards to content moderation, putting the protection of people above the protection of profits. This is not unlike what we’d expect from shopping centres, which may have their own private security policies but which must nevertheless abide by state law. Because social media are global platforms, and because many states agree to respect international human rights standards at least on paper, it will be important for the laws that govern social networks to be transnational.

While my experience of censorship is very specific, and while people may disagree with my governance frameworks, a problem remains: currently, social media are not open to everyone. Some accounts and some content are being unfairly banished, without explanation or the possibility of appeal. Some of the mechanisms meant to protect users are being used against them. All of the above highlights the lack of users’ rights in social media moderation, and this is why a social network governance based on human rights and on platforms’ corpo-civic status is at least a start in a conversation we need to be having.

Dr Are raises some important and relevant arguments that we can and will explore in a future issue.

However, I do feel it is worth providing greater context, in particular on how the technology behind this moderation system works.

The team I was part of, content moderation policymakers, plus the army of about 20,000 content moderators, have helped shield ByteDance from major political repercussions and achieve commercial success. ByteDance's powerful algorithms not only can make precise predictions and recommend content to users — one of the things it's best known for in the rest of the world — but can also assist content moderators with swift censorship. Not many tech companies in China have so many resources dedicated to moderating content. Other user-generated content platforms in China have nothing on ByteDance.

Many of my colleagues felt uneasy about what we were doing. Some of them had studied journalism in college. Some were graduates of top universities. They were well-educated and liberal-leaning. We would openly talk from time to time about how our work aided censorship. But we all felt that there was nothing we could do.

There is a wide range of reasons why TikTok has found global success. However, this account suggests that in learning how to please authorities in Beijing, ByteDance was able to build a more responsive and capable moderation system, at least with regard to censorship.

When it comes to day-to-day censorship, the Cyberspace Administration of China would frequently issue directives to ByteDance's Content Quality Center (内容质量中心), which oversees the company's domestic moderation operation: sometimes over 100 directives a day. They would then task different teams with applying the specific instructions to both ongoing speech and to past content, which needed to be searched to determine whether it was allowed to stand.
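The pipeline described above, where each new directive is applied both to incoming speech and retroactively to archived content, could be sketched as follows. The function names, data shapes, and simple substring matching are assumptions for illustration; the real systems are far more sophisticated.

```python
from typing import Iterable

def apply_directive(banned_terms: set[str],
                    incoming_posts: Iterable[str],
                    archive: list[str]) -> tuple[list[str], list[str]]:
    """Apply a new directive to both ongoing and past content.

    Returns (allowed_new_posts, surviving_archive). Past content is
    re-scanned against the new rules to decide whether it may stand.
    """
    def violates(text: str) -> bool:
        return any(term in text for term in banned_terms)

    # Ongoing speech: filter posts as they arrive.
    allowed = [p for p in incoming_posts if not violates(p)]
    # Past content: search the archive under the newly issued directive.
    surviving = [p for p in archive if not violates(p)]
    return allowed, surviving
```

With over 100 directives arriving on some days, the retroactive re-scan is the expensive half: every new rule implies another pass over the whole archive.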

During livestreaming shows, every audio clip would be automatically transcribed into text, allowing algorithms to compare the notes with a long and constantly updated list of sensitive words, dates and names, as well as Natural Language Processing models. Algorithms would then analyze whether the content was risky enough to require individual monitoring.
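The transcribe-then-score flow described above can be sketched as a keyword check feeding a risk threshold. This is a crude stand-in: the real term list, NLP models, and thresholds are unknown, and `SENSITIVE_TERMS` is a placeholder.

```python
import re

# Placeholder term list; the real one is long and constantly updated.
SENSITIVE_TERMS = {"term_a", "term_b"}

def risk_score(transcript: str) -> float:
    """Fraction of tokens in a transcript that hit the sensitive-term list."""
    words = re.findall(r"\w+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SENSITIVE_TERMS)
    return hits / len(words)

def needs_human_monitoring(transcript: str, threshold: float = 0.05) -> bool:
    # Streams scoring above the threshold are flagged for individual monitoring.
    return risk_score(transcript) >= threshold
```

In the described system this check runs continuously against live transcriptions, so a single flagged phrase can route an entire stream to a human moderator.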

All of this suggests that TikTok censorship is a far safer form of content governance, from the platform’s perspective, than the kind of moderation Western companies employ. Why err on the side of leaving risky content up when you can err on the side of removing it and restoring it later?

A dangerous but potent model for governing a platform. We elaborated on this in a segment of our Twitch show.