The War Will Be Won By Fooling AI

Yes, you really are smarter than the machine

If there’s one thing I seek to share with you, dear subscriber, it is the belief that machines are dumb (automated machines especially), and that your future and freedom as a human will largely come down to your ability to fool these machines.

You might regard hacking as the art of lying to machines, and, by extension, of using machines to lie to other human beings. While as a society we are growing increasingly aware of the potential to use machines to deceive, we also need to develop our ability to deceive the machines themselves.

Let’s be clear. Machines will never be conscious. Deceiving them is not about changing their minds or influencing their emotions. No. Fooling machines may simply be a matter of survival. The difference between life and death. Or, on a more mundane level, the ability to maintain autonomy in a pervasive surveillance society.

There is presently a global arms race underway when it comes to machine learning and AI. Governments and military organizations around the world are hustling to develop automated weapons and infuse machine learning into every system possible.

Yet as a subscriber to this newsletter you probably already know that AI is neither omnipotent nor free of significant flaws and shortcomings. This creates an interesting vulnerability: war machines feel pressure to adopt automation, yet are not in a position to properly test or secure their technology.

Hence the cyber war that is currently being waged across the Internet is almost guaranteed to escalate, perhaps dramatically. Today we see penetration testing of critical infrastructure combined with widespread electronic espionage. Tomorrow we’ll see penetration testing of weapons systems and widespread electronic attacks.

Adversarial algorithms will be constantly jousting and testing each other’s capabilities, if they’re not already doing so. A passage from the article above:

Using adversarial algorithmic camouflage, tanks or planes might hide from AI-equipped satellites and drones. AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets. Information fed into intelligence algorithms might be poisoned to disguise a terrorist threat or set a trap for troops in the real world.
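
That kind of adversarial camouflage isn’t hypothetical; it rests on a well-documented weakness of neural networks. Below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest published adversarial attacks, applied to a toy PyTorch model. Everything here (the model, the numbers) is invented for illustration; real attacks target trained image classifiers, but the principle is identical: nudge the input in exactly the direction that hurts the model most.

import torch
import torch.nn as nn

# A tiny stand-in classifier with random weights, purely for illustration.
# A real attack would target a trained model, e.g. an image classifier.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16, requires_grad=True)  # the honest input
true_label = torch.tensor([0])

# Ask the model: in which direction of input change does your loss grow fastest?
loss = loss_fn(model(x), true_label)
loss.backward()

# Nudge every feature slightly in that worst-case direction (FGSM).
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

# Against a trained model, a well-chosen epsilon often flips the prediction
# while the input looks essentially unchanged to a human.
print("before:", model(x).argmax(dim=1).item())
print("after: ", model(x_adv).argmax(dim=1).item())

The unsettling part is how small epsilon can be: a perturbation invisible to you can be decisive for the machine.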

The Fog of War created by technology is real, and while we cannot see the battlefield, you better believe we’re smack dab in the middle of it. Here in Canada we may not currently fear that the drones overhead will misidentify us as targets and execute a kill command, but such a scenario may not be that far off.

This is why it is crucial to recognize that these systems are flawed, and rather than fear them, we should understand them, if only so we can properly defend ourselves.

So let’s take a moment to do something we’ve yet to really do in this newsletter, and break down what exactly AI does, if only so we can understand how to fool it:

Primarily, AI engages in pattern recognition. It accomplishes this largely via sorting and ranking information. What enables AI to achieve this is scale. Where humans get bored, machines can process information without losing attention. Incredible amounts of information, and it is this volume that lets AI find patterns and then act on them.
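
To make that concrete, here is what “sorting and ranking at scale” looks like in miniature. This is a sketch, not any real system: the features and weights are invented, and a deployed system would learn its weights from data, but the shape is the same.

import random

def risk_score(record):
    # A toy linear pattern. In a deployed system these weights would be
    # learned from historical data; here they are invented for illustration.
    return (2.0 * record["late_payments"]
            + 1.5 * record["new_account"]
            - 0.5 * record["years_of_history"])

# Scale is the point: no human reviewer scores 100,000 files,
# but a machine does it without losing attention.
records = [
    {"id": i,
     "late_payments": random.randint(0, 5),
     "new_account": random.choice([0, 1]),
     "years_of_history": random.randint(0, 20)}
    for i in range(100_000)
]

# Sort, rank, act: the ten "riskiest" records get flagged automatically.
flagged = sorted(records, key=risk_score, reverse=True)[:10]
print([r["id"] for r in flagged])

And notice: anyone who knows those three weights knows exactly which numbers to massage.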

Therefore the key to beating or fooling AI is to understand and engage these patterns. For each of these functions, there are methods we could use to fool or subvert the logic behind the system. The more we know about the system, the easier it is to subvert or defeat it.

This is a big reason why AI companies insist on using black boxes. They know that if people knew how their algorithms worked, they’d be able to manipulate them. An example is the (increasingly discredited) Search Engine Optimization (SEO) industry, which spends considerable resources reverse engineering the logic behind a search engine so it can then fool it.
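
A toy illustration of that reverse-engineering game (this assumes nothing about any real engine’s algorithm; it is just a naive keyword-density ranker of the kind early SEO actually exploited):

def score(document, query):
    # Keyword density: what fraction of the page's words match the query?
    query_words = set(query.lower().split())
    words = document.lower().split()
    return sum(w in query_words for w in words) / len(words) if words else 0.0

query = "cheap flights"
honest = "We compare airline prices so you can find affordable tickets"
stuffed = "cheap flights cheap flights book cheap flights today cheap flights"

print(f"honest page:  {score(honest, query):.2f}")   # barely registers
print(f"stuffed page: {score(stuffed, query):.2f}")  # tops the ranking

Once keyword stuffing became known, the engines changed their logic; SEO reverse engineered the new logic; and the cycle continues. That arms race is the template for everything that follows.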

Now imagine that method applied across society.

Is an AI acting as the first level of human resources, screening resumes to determine who gets the job, or at least an interview with a human? No problem. Hire the company that knows how to fool that AI so you can get the job or the interview.
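
Here is a sketch of the crudest version of such a screen and how it gets gamed. The required phrases are invented for illustration, and real applicant-tracking systems are more elaborate, but exact-phrase matching remains a well-known weak point:

REQUIRED = ["project management", "stakeholder engagement", "agile"]

def passes_screen(resume_text):
    # Reject any resume missing the posting's "must have" phrases.
    text = resume_text.lower()
    return all(phrase in text for phrase in REQUIRED)

honest = "Led cross-functional teams and ran sprints to deliver projects on time."
gamed = ("Project management of agile sprints, with ongoing "
         "stakeholder engagement across cross-functional teams.")

print(passes_screen(honest))  # False: same experience, wrong vocabulary
print(passes_screen(gamed))   # True: mirrors the posting's exact phrases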

Algorithms working for the Canada Revenue Agency to determine who gets audited? Better hope your accountant knows the logic that algorithm uses so you can be sure to avoid the scrutiny of the tax man.

Facing criminal charges? Best to hire a lawyer or legal firm that understands how the algorithm will judge your plea or determine your eligibility for parole. Or, before it even gets that far, understand how predictive policing algorithms choose where to patrol and whom to arrest.

The image above is from AI Now’s 2019 report, which shows the efforts underway to resist harmful AI. While it is encouraging, it also illustrates how widespread AI has become in our society. If I believed in due process, I might believe that automated decision making will recede and pause while we work out the right balance between human and machine.

Tragically, I don’t think that will happen. Instead, our moral imperative is to understand how to fool the machines. And, more importantly, to share that knowledge with our friends and loved ones.

So what are you waiting for? Spread the word! What are some of the ways that you fool automated systems? What are some methods that you think could work, and that we should try when it comes to fooling machines? #metaviews


And have a great weekend eh!