Artificial Intelligence is a genuinely powerful tool that will dramatically impact virtually every sector, yet there’s still significant reason to be skeptical and to disbelieve the hype.
AI is neither a magic spell that will fix all of our problems, nor is it a god slouching towards Bethlehem to be born. Rather, it is just a tool, one that is primarily effective at managing, navigating, and possibly organizing vast volumes of information.
The mythology of technology that is currently driving the sale of AI seems to be showing cracks and signs of decay. While this global (and rather sensational) marketing campaign mostly targets business and government executives, our popular culture is the by-product.
While big bucks are in play with the purchase or upgrade of autonomous systems, the real impact is on society: how we view it, and how we believe it should be governed.
Which is why it is relevant and interesting that the opposition to the AI hype machine is gaining ground and visibility, making modest inroads into the public consciousness.
While this growing chorus of researchers and academics is relatively small, it is interesting to see their collective voices rising, in the form of a social movement, in response to the myth of AI. In particular, they’re addressing the assumptions that underpin and corrupt the popular conception of automation.
On the one hand, this growing critique of AI is political. It is a revolt against the authoritarian political power we seem to be creating via automated decision making systems. On the other hand, this emerging critique is also technical, drawing attention to some of the flawed methods and poor results found behind the smoke and mirrors.
It’s empowering to understand the negative effects and harms from using machine learning models that are hyped but not entirely effective. It provides us with the evidence and rhetoric to push back in our own areas, in our own spheres, and spread the incredulity that is necessary when dealing with all the hype and boosterism.
The following slide helps break down the differences between the substance of AI and the potential snake oil:
Professor Narayanan’s talk (and the panel that followed) was well received among the growing movement critical of AI. I like to watch the cognitive ripple effects of events like this as the ideas and arguments presented make their way into the public discussion.
The panel afterwards was just as fruitful and interesting as the main presentations, and Crystal Lee did a great job of taking notes and making them publicly available (via a Google Doc). This insight was particularly interesting:
Perhaps that could be seen as one of the fundamental goals of this quasi-academic social movement rising against the AI hype? The right to reject, the right to say no, and as a human being, demand a human alternative?
I’m not sure that’s possible in the present context. The language and momentum surrounding AI make it really hard to say no. To suggest that we should have, and continue to provide, human alternatives seems like a difficult thing to ask for, let alone demand.
Yet what is most significant about the research cited above is that in some, if not many, cases human or manual decision making (still) outperforms automated decision making. That’s an important area to pay attention to, especially before we move forward into a system that claims to be better but may actually be worse.
Just because someone says they’re helping or doing good, even if they believe it to be the case, does not mean that it actually is helping or doing good.
We should not and arguably cannot accept AI at face value, but instead must subject it to the rigours of science and the oversight of good government.
If only it were that easy.
Part of the challenge is that culture consistently outperforms politics or technology. It doesn’t matter if the tech is flawed, and it doesn’t matter if the politics are authoritarian.
If the culture believes that AI is inevitable, that mass unemployment is inevitable, and that robot supremacy is inevitable, then we might as well resign ourselves to being ruled by robots (and the human masters who control them). Wouldn’t you want a robot dog like Spot?
It’s the culture of AI that deceives us. And it is this culture that we must dissect, engage, and transform.
Tactical Tech (@Info_Activism): “Yes, hyena robots are scary. But they're also a cunning marketing ploy” https://t.co/8SRUCZFFba
As the debate around AI and automation advances, the effectiveness of the hype machine wanes. We’re inoculating ourselves against the myths, while also learning how to use the tool, and becoming aware of those who wish to use the tool on us.
The rise of the AI ethics industry is an important response, although some of the proposed solutions are not necessarily viable:
There’s great irony in the fact that while digital media is helping us understand the subjective nature of reality, we’re simultaneously promoting automated digital media as a potential source of objective truth?!
Why should we accept AI results as if they’re any different from the results we’d get from a human or random oracle?
AI is just a tool. However, in an age of excess information and limited literacy it can be an incredibly powerful tool. How we choose to use that tool is important, and should be part of a larger discussion around responsibility and humanity.
The paradox of course is that the social movement I’ve written this issue about, and shared with you today, is a tiny (yet influential) minority within the larger discourse surrounding AI. Just like contemporary politics, I anticipate that it will increasingly become polarized.
From the perspective of language and culture, this could mean entirely different dialects when it comes to how we describe and speak of AI. Perhaps after reading this issue you will find yourself a bit more immune to the hype, and accessing a vocabulary that will aid in your ability to spread that immunity to others.
The larger polarization of AI and the values that drive it may give us a glimpse into the political conflicts we face in the 21st century. Rather than a left wing or right wing as defined by the French Revolution and our relationship with history, it may instead revolve around our relationship with technology, and the authority we seek to give it.
What do you think? What can and should be done to demystify and dissect the AI revolution? #metaviews
This issue is sponsored by our friends at Heavy Computing
HeavyComputing.ca offers clueful bandwidth provisioning, full server and virtual private server (VPS) hosting at 151 Front St, Toronto's premier 'telco hotel' and colocation facility.
Ken Chase is a close friend and long time Metaviews member. His skills and knowledge have been an invaluable part of the Metaviews network for well over a decade. To prove it, here’s a video of a teleseminar he and I led on bitcoin and cryptocurrency in 2011!?
After I published my issue last week about my past relationship with the CBC, I got a wave of new subscribers who signed up for the free list, but didn’t go through with the full paid subscription.
Since there were so many of you who did that, I now feel a bit of pressure to feed you, and will therefore send out the occasional issue to everyone.
If you have read this far, I ask that you consider subscribing, so as to support my writing and research, or at the very least share this post with your networks, so we can continue to grow this community.