Artificial Intelligence (AI) is often sold to us as the ultimate tool for efficiency, accuracy, and innovation. Advocates promise smarter cities, faster decisions, and even moral clarity in an increasingly complex world. But beneath this glittering narrative lies a deeper, more troubling reality: AI is not just a technological phenomenon—it is an ideological project. And its ultimate goal, whether intentional or incidental, is to shift authority and autonomy away from individuals and communities and consolidate it in centralized structures of power.
The False Necessity of AI
The first myth we must challenge is that AI is necessary. Why do we need to delegate so many decisions to machines? From hiring and firing to diagnosing illnesses, AI is often presented as a neutral arbiter, free from human bias. But this assumption obscures a crucial point: the decision to use AI is itself a biased choice, driven by the interests of those who control its development and deployment. Instead of asking whether these systems should exist at all, we are told the only question is how to make them better, faster, and more ubiquitous.
In reality, AI’s expansion into decision-making processes is not about solving problems. It’s about redefining problems in ways that only AI can “solve,” creating a self-perpetuating cycle of dependence. This isn’t progress; it’s enclosure—a process by which common, collective capacities are privatized and placed under centralized control.
Power Without Accountability
The centralization of authority in AI systems represents a profound shift in how decisions are made and who gets to make them. Consider the opacity of many machine learning models. These systems are often described as “black boxes,” their inner workings so complex that even their creators struggle to explain their decisions. In practice, this means that those affected by AI-driven decisions—whether they are denied a loan, flagged as a security risk, or recommended for a medical treatment—have no meaningful recourse to challenge or understand the reasoning behind these outcomes.
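To make that opacity concrete, here is a minimal sketch of a "black box" verdict, using a toy loan-approval model. The data, the feature names, and the MLPClassifier setup are hypothetical choices made purely for illustration, not a claim about how any real lending system works; the point is only that a fitted neural network hands down a verdict while offering nothing an applicant could read or contest.

```python
# A minimal sketch of the "black box" problem with a toy loan model.
# The dataset and feature names are invented purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic applicants: [income, debt_ratio, years_employed] (made up).
X = rng.normal(size=(500, 3))
# Synthetic approval labels from an arbitrary hidden rule plus noise.
y = ((X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=500)) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0).fit(X, y)

applicant = np.array([[0.1, 0.8, -0.2]])   # one hypothetical applicant
verdict = model.predict(applicant)[0]       # 0 = denied, 1 = approved
print("verdict:", "approved" if verdict else "denied")

# The only "explanation" the model can offer is its raw weights:
n_params = (sum(w.size for w in model.coefs_)
            + sum(b.size for b in model.intercepts_))
print("parameters behind this verdict:", n_params)
```

Printing model.coefs_ would yield only matrices of raw weights; nothing in them maps to a reason a loan officer, let alone an applicant, could evaluate.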
This opacity is not a bug; it's a feature. It shields those in power from accountability while eroding the capacity of individuals to assert their own agency. If you can't question the system, you're left to accept its verdict as final. This dynamic undermines not just personal autonomy but also collective democratic oversight, replacing it with a techno-bureaucratic order.
The Myth of the Finish Line
Another persistent illusion is the idea of “completing” AI, as though we’re racing toward some finish line where the technology will finally be perfected and stable. But AI is never static; it’s an evolving process, continuously shaped by the priorities of those who fund and build it. This perpetual motion isn’t inherently bad, but it underscores a critical point: we, too, must remain adaptable, questioning not just what AI does but why it’s being applied in the first place.
A Different Path Forward
The narrative of AI as an inevitable and necessary force for progress is a distraction from more important questions about authority, autonomy, and justice. What if, instead of asking how AI can make decisions for us, we asked how we can create systems that empower individuals and communities to make their own decisions? What if we focused on tools that enhance collective agency rather than supplant it?
This is not a call to abandon technology but to reclaim it. AI should not be the engine of decision-making; it should be a tool, subordinate to human needs and democratic oversight. To achieve this, we must resist the false promises of inevitability and demand a shift in focus: from centralization to decentralization, from control to empowerment, from opacity to transparency.
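As a contrasting sketch, again with invented data and hypothetical feature names, the same kind of decision can be made by a model built to be inspected. A shallow decision tree is not a universal answer, but it suggests that legibility is a design choice rather than a technical impossibility:

```python
# A minimal contrast sketch: the same kind of toy loan data, fit with a
# small decision tree whose rules can be printed and contested. Names
# and data are hypothetical, invented purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=500)) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike an opaque network, every verdict here traces to a rule a
# person can read, question, and appeal.
print(export_text(tree,
                  feature_names=["income", "debt_ratio", "years_employed"]))
```

The printed rules are crude, but each verdict traces to conditions a person can read, question, and appeal, which is exactly the shift from opacity to transparency argued for above.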
AI is not just code and algorithms; it is a mirror reflecting the values of those who create and deploy it. As it stands, it serves the interests of centralization, hierarchy, and control. But this is not inevitable. By questioning the ideological assumptions that underpin AI’s development, we can imagine a different future—one where authority and autonomy are not ceded to machines but reclaimed by people, working together to shape the world they want to live in.
This article reminds me of how posthumanism examines the blurring boundary between human and machine, and what that blurring means for our understanding of autonomy and authority. It speaks to posthumanism's deconstruction of humanism and its reimagining of humanity, including the call to consider non-human and inorganic intelligent subjects alongside human ones (a nod to Bruno Latour). The article's observation that AI is never static, but continuously shaped by the priorities of its creators, resonates with posthumanism's view of technology as a dynamic, evolving force that interacts with and shapes human society; that perpetual evolution underscores the need to stay adaptable and to keep questioning AI's applications and impacts.