The legal status of robot criminals

Is society ready for the automation of crime?

Earlier this week we discussed the ongoing rise of ransomware. Today we zoom out from that issue and discuss the rise of robot criminals.

If ransomware is automated, or operates semi-autonomously, doesn’t it then qualify as a robot criminal?

Corporations are considered persons under the law, and as such can be held criminally liable when they break it. The humans responsible for the corporation are the ones ultimately held accountable. But as we discussed last week, what if the corporation itself is autonomous?

Doesn’t that make the corporation a kind of robot? Yes, if the corporation is just software or a suite of software operating on the Internet.

How long before robots have the same rights as corporations? What about the same rights as people?

There’s already been a robot recognized as a person and granted citizenship:

This example from Saudi Arabia always seemed a bit distasteful and problematic given the status of women in the kingdom and the overall authoritarianism.

It’s interesting, then, to see that not only has a robot been granted membership in a union in Switzerland:

But the EU is also considering granting robots personhood, which would include social security payments:

In particular, it is the rise of robots in health care that is driving this issue. Health-related litigation is big business, and the question of liability can be crucial when it comes to deciding who pays for a medical error or complication.

Minimally invasive robotic-assisted surgery has increased in the last few years. In fact, one specific surgical robot has held a dominant position in this field, kind of a “monopoly”: the da Vinci, a master-slave system manufactured by Intuitive Surgical. Its practical application raises liability issues for the hospitals using it.

As of March 31, 2019, 5,114 da Vinci robots were installed around the world: 3,283 of them in the United States, 893 in Europe, 661 in Asia, and 277 in the rest of the world.

Along with the increase in the number of da Vinci systems installed in the last six months, there have already been U.S. cases involving the use of surgical robots (Taylor v. Intuitive Surgical and O’Brien v. Intuitive Surgical).

Liability for health-care robotics is where this debate is playing out, precisely because it is a matter of life and death.

Yet, if we look at the larger picture, it is fundamentally about holding corporations accountable.

That may be the metaphor behind the robot criminal: the out-of-control corporation. As automation becomes more accessible and more essential to industrial production and success, the temptation for corporations to go rogue may be too much to resist.

Robot criminals are really just a subset of corporate crime, but one that may overwhelm a bogged-down legal system and a vulnerable (and aging) society.

The first documented example of a robot criminal was in 2014, the result of an art project rather than an illicit scheme:

The Random Darknet Shopper, an automated online shopping bot with a budget of $100 a week in Bitcoin, is programmed to do a very specific task: go to one particular marketplace on the Deep Web and make one random purchase a week with the provided allowance. The purchases have all been compiled for an art show in Zurich, Switzerland titled The Darknet: From Memes to Onionland, which runs through January 11.

The concept would be all gravy if not for one thing: the programmers came home one day to find a shipment of 10 ecstasy pills, followed by an apparently very legit falsified Hungarian passport— developments which have left some observers of the bot's blog a little uneasy.
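The bot’s logic is strikingly simple: a fixed allowance, a random pick, no human in the loop. Here’s a minimal sketch of that loop in Python, assuming a hypothetical marketplace interface with listings() and buy() methods — these are stand-ins of my own, not the artists’ actual code:

```python
import random

WEEKLY_BUDGET_USD = 100  # the bot's allowance: $100 a week in Bitcoin

def weekly_purchase(marketplace):
    """Make one random purchase within the weekly allowance.

    'marketplace' is a hypothetical stand-in for the darknet market
    interface the real bot used; listings() and buy() are assumed.
    """
    affordable = [item for item in marketplace.listings()
                  if item.price_usd <= WEEKLY_BUDGET_USD]
    if not affordable:
        return None
    choice = random.choice(affordable)  # the "random" in Random Darknet Shopper
    marketplace.buy(choice)             # no human reviews what gets bought
    return choice
```

The legal puzzle falls straight out of that one random.choice call: the programmers never selected the ecstasy pills, and the bot has no intent to select anything in particular.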

The project was successful in catalyzing a broader discussion on what a robot criminal is and who should be responsible.

Then in 2015, an experiment with a hitchhiking robot out of Ryerson University raised the question of whether robots can be murdered:

As regular readers of this newsletter know, killer robots are in use, so we do know that robots can kill, and have killed, humans. Although we’re still waiting for a war crimes tribunal to determine who is actually responsible. Tragically, that may not happen anytime soon.

However, a trial that involves a robot criminal will happen, and it may not be that far off.

The research around robot criminals is starting to grow, and there are some really interesting arguments being developed.

From the abstract:

When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot is capable of making, acting on, and communicating the reasons behind its moral decisions. If such a robot fails to observe the minimum moral standards that society requires of it, labeling it as a criminal can effectively fulfill criminal law’s function of censuring wrongful conduct and alleviating the emotional harm that may be inflicted on human victims.

Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their individual criminal liability. The former is not rendered redundant by the latter. It is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Additionally, imposing criminal liability on robots might sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for individuals who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.

What I find most interesting about this paper is the emphasis on the robot’s potential ability to communicate the reasons behind its moral decisions.

On the one hand, I could see such a feature being mandated by regulators: manufacturers, developers, and even (enterprise) users could be compelled to build such explainability into the robot.
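What might mandated explainability look like concretely? Here’s a minimal sketch, assuming a regulator simply required every consequential action to be logged with the reasons and rejected alternatives behind it. The schema and field names are my own invention, not drawn from any existing standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry: what the robot did, and why.

    Purely illustrative; no current regulation mandates these fields.
    """
    action: str                # what the robot did
    alternatives: list[str]    # options it considered and rejected
    reasons: list[str]         # the factors that drove the choice
    confidence: float          # the system's own confidence, 0.0 to 1.0
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# A surgical robot, for instance, might emit:
record = DecisionRecord(
    action="pause incision",
    alternatives=["continue incision", "retract instrument"],
    reasons=["unexpected tissue resistance", "patient vitals out of range"],
    confidence=0.87,
)
print(record)
```

A log like this is exactly what a court, a regulator, or a plaintiff’s lawyer would want to subpoena.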

This is a big battle right now when it comes to fairness and transparency in automated decision-making systems. Whether for search engines or social media, there’s increasing evidence as to why the logic of these systems should be accessible, at least to regulators, if not also to users. The EU’s GDPR has a “right to explanation” clause that, while untested, demonstrates this clear trend in the governance of AI and automation.

On the other hand, that’s exactly why many AI and robotics companies may avoid having their robots communicate the reasoning behind their decisions.

We get a glimpse here of a divergent path of robotic development: legal robots that are able to explain themselves, and illegal or outlaw robots that don’t explain shit. They just do their thing. Might is right.

The surgical robots referenced above are a great example of what will probably be highly regulated robots, with a clear ability to communicate their reasons, given the high stakes of medicine and surgery.

Yet while liability may be a big driver of this debate around robot criminals, it is not the only factor.

The following paper attempts to move beyond liability, synthesizing the growing research literature around AI crime (AIC) and exploring the necessary responses.

Some excerpts from the conclusion:

The digital nature of AI facilitates its dual-use (Moor 1985; Floridi 2010), making it feasible that applications designed for legitimate uses may then be implemented to commit criminal offences. This is the case for UUVs, for example. The further AI is developed and the more its implementations become pervasive, the higher the risk of malicious or criminal uses. Left unaddressed, such risks may lead to societal rejection and excessively strict regulation of these AI-based technologies. In turn, the technological benefits to individuals and societies may be eroded as AI’s use and development is increasingly constrained (Floridi and Taddeo 2016).

I suspect we’re starting to see the early days of this as people increasingly distrust technology companies and worry about what is being done with their data. We’re now starting to understand the dual-use of social media. Perhaps this will help us interrogate the dual uses of AI and automation.

A clear example of this dual use is automated cybersecurity applications, which are rapidly being developed and deployed:

The AIC literature reveals that, within the cybersecurity sphere, AI is taking on a malevolent and offensive role—in tandem with defensive AI systems being developed and deployed to enhance their resilience (in enduring attacks) and robustness (in averting attacks), and to counter threats as they emerge (Yang et al. 2018; Taddeo and Floridi 2018a).

This is where we see the real promise of automation: accessibility, making things easier for anyone to do. The barrier to entry is lowered or eliminated. In this case, making crime easier and more accessible:

The AIC literature indicates that AI may play a role in criminal organisations such as drug cartels, which are well-resourced and highly organised. Conversely, ad hoc criminal organisation on the dark web already takes place under what Europol refers to as crime-as-a-service. Such criminal services are sold directly between buyer and seller, potentially as a smaller element in an overall crime, which AI may fuel (e.g., by enabling profile hacking) in the future. On the spectrum ranging from tightly-knit to fluid AIC organisations there exist many possibilities for criminal interaction; identifying the organisations that are essential or that seem to correlate with different types of AIC will further understanding of how AIC is structured and operates in practice. Indeed, AI poses a significant risk, because it may deskill crime, and hence cause the expansion of what Europol calls the criminal sharing economy.

The rise of the criminal sharing economy is why robot criminals matter.

Scalper bots are a modest yet relevant example of robot criminals already impacting your ability to see a live concert or event.

The problem, of course, is that they’re not illegal. Maybe they should be. That’s part of why this debate is important.

What do you think? Should robots be considered potential criminals? How can liability and responsibility be assigned in the age of automation?

Today’s issue was brought to you by our friends at Zacks Law

Exceptional and uncompromising advocacy in business, media, and professional disputes.

As you may already know, Dan Zacks was this newsletter’s first sponsor. He’s purchased ads in three different issues, spread out over three months, with this being the third.

I’m excited at the prospect of attracting sponsors who want to support my research and writing, as well as connect with the valuable network of subscribers that together we’re all building. If you are one, or know of any, do let me know.

I’ve known Dan for well over 15 years and have learned and continue to learn a great deal from him about the law and the legal profession.


Purchase a robot criminal here

And if you’re still reading this newsletter (rather than the documentation for your newly purchased robot criminal), then please share this issue with your friends and social media enemies!
