Social Credit: Gaming the System

Why the observer effect applies to points systems

There appears to be a growing awareness and acknowledgement that North American society already has a social credit system comparable to, if not worse than, the Chinese equivalent. What makes it worse is that it remains largely hidden: our data is collected and used in private, to judge us and determine our eligibility for services and status.

This is the second issue in our ongoing series examining the rise of social credit systems. While this issue focuses on gaming the social credit system, and on how the principle of the observer effect applies, let’s first take a moment to catch up on the latest news:

This is a fascinating article by Violet Blue, whom I’ve always considered a fantastic author, as she does not shy away from writing about some of the more unpleasant, if not deliberately ignored, dynamics of the Internet.

Violet Blue® @violetblue
"Those with a high score are rewarded, while those with a low score are punished."
engadget.com/2020/01/17/you…

Links to superlative work by @_MarkBlunden, @ejdickson, @eyecantina, @erinisaway, @BarbaraOrtutay. Includes what happened to @lilearthangelk, @AndreShakti, @thecadencelux.

Your online activity is now effectively a social ‘credit score’
Airbnb is using AI and social ‘credit scores’ to vet its users... in an Orwellian fashion.
engadget.com

In this article, Violet digs deeper into reports that Airbnb uses a scoring system to evaluate users of its network, and traces the origins of this functionality to Trooly, a start-up that Airbnb acquired (and whose name sounds as if it came straight out of the HBO show Silicon Valley):

"The way we do that is we use public and legally permissible digital footprints," Trooly co-founder and CEO Savi Baveja told press in 2016. "In about 30 seconds — using very little input information about the individual or business — we return a scorecard that does three things: it verifies whether the input information is authentic; it screens for any relevant and seriously antisocial or pro-social prior behavior; and then it runs a series of predictive models on that footprint to say what is the propensity of this individual or small business for future antisocial or pro-social behavior."

Violet goes further to connect this practice with predictive policing and the impact it is having on marginalized users:

Trooly — nee Airbnb — is combining social credit scores with predictive policing. Tools like PredPol use AI that combines data points and historical events, factors like race and location, digital footprints and crime statistics, to predict likelihood of when and where crimes will occur (as well as victims and perpetrators). It's no secret that predictive policing replicates and perpetuates discrimination.

Combine this with companies like Instagram, Facebook, YouTube, and yes, Airbnb deciding what legal behaviors are acceptable for service, and now we're looking at groups of historically marginalized people being denied involvement in mainstream economic, political, cultural and social activities — at scale.

The companies claim that they use such predictive tools to make sure people don’t do bad things on or via their platforms; however, there’s growing evidence that their actions go far beyond that, and punish people who are doing nothing wrong. Violet provides an anecdote from a friend, but it’s clear this is not an isolated incident.

But I was thinking about a queer friend of mine who is a sex worker, recently driven out of SF by the high rents and scarcity. One year after Airbnb started working with Trooly she was banned from Airbnb after three years of stellar reviews from hosts. She is a sex worker and queer activist, and had never done sex work in any Airbnb. Airbnb did not respond to requests for comment on her expulsion.

An important and relevant link found in Violet’s article is to this post about sex workers and their struggles with surveillance and algorithmic enforcement:

The article profiles a collective of sex workers called Hacking//Hustling, who focus on the intersection of technology and justice.

The group has just released a report based on their research into the impact of FOSTA-SESTA: the Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA). The legislation was designed to protect those vulnerable to sex trafficking, but instead makes people more vulnerable to human trafficking and exploitation.

The Twitter thread that includes the two tweets above also summarizes the findings of this report. There are a lot of really interesting and insightful comments, which could perhaps be summed up with:

This sentence can be inverted. If the technology is not safe for sex workers, we’ve likely created technology that is not safe for anyone.

The struggles that sex workers are facing with regard to surveillance and automated policing reflect a future that we will all face if we do not recognize the slippery slope being established via automated social credit scores.

For example, let’s circle back to the Airbnb/Trooly incident:

While I recognize that mental health issues are being destigmatized, and that this is partly happening because people are being open about their mental health, I would never be open about mine, and I definitely encourage you to exercise the same caution.

Your mental health should be your business, and your private information. To release that information is to put yourself in danger, literally in harm’s way, because of the discriminatory automated systems that are emerging to assign you a score.

Instead, my research leads me to believe that these systems are flawed, and that we should do everything we can to game them. After all, hacking is just lying to the machine, and in this world it is often our moral duty to do so.

For example, check out this relevant research:

What we found was that employees introduced various changes to their behaviour as a result of implementing analytics. One of the most prominent and at the same time spurious effects was gaming: manipulating numbers and data to improve the appearance of performance without necessarily making the actual work practices better. For example, in order to look busy and productive, users in the system could be clicking on different content items without actually studying them. Some revealed that instead of reading required documents, scrolling through them quickly would register as completing a given activity, so that their non-compliance would not be picked up by managers. Others observed that in order to boost performance data, colleagues were posting very short but frequent forum updates (one of the performance metrics in use) as the length or intellectual depth of the content was not measured by the system.
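As a minimal sketch of the gaming the researchers describe, consider a toy activity-counting metric. All event and function names here are invented for illustration:

```python
from collections import Counter

def performance_score(activity: Counter) -> int:
    # The metric only counts logged events; it cannot see depth, quality,
    # or whether a document was actually read.
    return (activity["document_opened"]
            + activity["document_completed"]
            + activity["forum_post"])

# An employee who carefully reads one required document:
diligent = Counter(document_opened=1, document_completed=1)

# An employee who rapidly scrolls through ten documents (which registers
# as completion) and posts ten one-line forum updates (neither length nor
# intellectual depth is measured):
gaming = Counter(document_opened=10, document_completed=10, forum_post=10)

print(performance_score(diligent))  # 2
print(performance_score(gaming))    # 30
```

Because the system counts events rather than measuring the work itself, the quick-scroller outscores the careful reader by an order of magnitude.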

It’s all about juking the stats: manipulating the numbers to your benefit. The more people understand the presence of social credit systems, and the influence those systems have on their lives, the more effort they will put into gaming them to get a score that benefits them.

Especially in the context of the augmented despotism that the contemporary workplace has become.

In principle, organisations hoping to employ people analytics can set out to design analytics systems and metrics that cannot be gamed by their employees. Sometimes, there may be a way of observing and evaluating performance that will not trigger reactivity, and consequently will not lead to gaming. However, our findings suggest that this will often be difficult, if not impossible, given the fact that people are reflexive: employees constantly adjust their behaviours in response to how they perceive their own circumstances and what is beneficial to them individually.

This is the observer effect of social credit systems: the idea that the mere knowledge or observation of a social credit system inevitably changes the system itself.

Just knowing that you will be assigned a score changes the score. Knowing how scores go up and down changes how the scores go up and down.

The false assumption that fuels social credit systems is that they are accurate and honest. Yet increasingly we find that they are neither, as the subjects’ desire to influence their scores inevitably corrupts the system.
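A toy simulation can illustrate that corruption. Assume, purely for illustration, that the system can only observe easily logged “signalling” activity and treats it as a stand-in for real work; once the subjects know the metric, the score rises while the quality it was meant to measure collapses:

```python
import random

random.seed(1)  # deterministic toy numbers

def observed_score(signalling: float) -> float:
    # The system cannot see real work; it scores the logged signalling
    # activity as a stand-in for it.
    return signalling

def simulate(agents: int, knows_the_metric: bool) -> tuple[float, float]:
    total_quality = total_score = 0.0
    for _ in range(agents):
        effort = random.uniform(0.5, 1.0)  # each agent has a fixed effort budget
        if knows_the_metric:
            # Observer effect: effort shifts toward what the score rewards.
            signalling, real_work = 0.9 * effort, 0.1 * effort
        else:
            signalling, real_work = 0.2 * effort, 0.8 * effort
        total_quality += real_work
        total_score += observed_score(signalling)
    return total_quality / agents, total_score / agents

print(simulate(1000, knows_the_metric=False))  # decent work, modest scores (~0.6, ~0.15)
print(simulate(1000, knows_the_metric=True))   # scores soar, real work collapses (~0.08, ~0.68)
```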

This is part of the moral of the research on the online experiences of sex workers. Measures that were meant to protect them inevitably made their lives considerably less safe. In many cases their only option is to game the system, and attempt to hide or obfuscate their work and identity.

While they’re at the front lines of algorithmic oppression, they’re also at the front lines of resisting that oppression and innovating ways to escape or defy it.

This connects to what we’ve discussed in the context of disability and diversity, as well as ageing: the most severe and disturbing impacts of AI and automated decision making fall on people who are not part of the dominant majority. Those on the margins of society are experiencing a dystopia that is already here, it’s just not evenly distributed (yet).

To what extent is our disdain for China’s social credit system a distraction from the emergence of cruel and discriminatory social credit systems in our own society? If such systems cannot be evaded, should we perhaps instead contemplate what it would take to design ones that are inclusive and in line with our morals and values?

After all, it is, in theory, possible to construct a social credit system that is dynamic, participatory, transparent, and legitimate in the eyes of its subjects. However, that’s a subject for a future issue in our social credit series.

What do you think? What kind of scoring systems are you aware of or actively try to game? Does your employer or educator score you? Do you give out scores to other people? Does our knowledge of these scores impact our behaviour in a good or bad way?

Speaking of scores, please do interact with the content I produce by posting comments, hitting the heart or like button, and sharing via social media. These “points” help influence me when it comes to the content I produce, and they impact my score on this platform. Thanks eh!

And if you’re interested in learning more about Hacking//Hustling, check out this event and discussion that happened at Eyebeam in NYC in September 2018: