Dr. Frankenstein and the Politics of Artificial Intelligence
By Keith Porteous
Artificial intelligence (AI) has moved rapidly from speculative technology to a pervasive force shaping economics, militarism, culture, and civil liberties. While its applications in healthcare, finance, and communication are often framed in terms of efficiency and innovation, the deeper question is political: who controls AI, how is it governed, and what power relations does it reinforce or disrupt? The political implications of AI reach into state power, global competition, governance, and the very idea of human agency.
Governments view AI both as a tool of governance and as a strategic resource. Nation states have adopted AI-driven surveillance systems to monitor populations, track dissent, and manage social behaviour. "Social credit" systems, facial recognition technologies, and predictive policing illustrate how AI can consolidate centralized authority, as states deploy AI in law enforcement and immigration control. The political danger lies in the normalization of surveillance and the erosion of civil liberties, especially when such technologies are adopted without transparency or public debate.
AI has become a core element of international rivalry. The United States, China, Russia, and the European Union are racing to dominate AI research, infrastructure, and standards. This race has implications similar to the nuclear arms race of the 20th century, but with subtler tools: data monopolies, control over semiconductor supply chains, and influence in setting global AI regulation. States that lead in AI may gain massive economic and military advantages, creating new asymmetries in international relations. Smaller states and cultures risk dependency on AI systems designed and controlled elsewhere, raising the threat of digital colonialism.
Domestically, AI reshapes processes of governance in both overt and hidden ways. Algorithmic curation of information affects political discourse, amplifying certain voices while minimizing others. Disinformation campaigns across the ideological spectrum are powered by generative AI that coerces public opinion through convincing deepfakes and automated propaganda. Beyond media, AI-driven decision-making in welfare distribution, credit scoring, or policing can deepen social inequalities if the underlying systems reflect pre-existing biases or manufacture new ones.
The Canadian government, like those of most Western countries, has moved to bring forward legislation that ratifies a full spectrum of intrusions into privacy and civil liberty. This is often done under the guise of "security", or of thwarting human trafficking and the exploitation of children, issues for which laws and enforcement mechanisms already exist. By manufacturing unwarranted fears of an array of risks and perils, and so elevating perceived threat levels, governments can coerce the populace into accepting unnecessary and previously unacceptable digital surveillance and manipulation.
The political struggle over AI is also a struggle over regulation. Should AI be governed nationally, through democratic institutions, or internationally, through treaties and standards bodies? Global consensus is far from assured. In the absence of robust resistance, private corporations—particularly large technology companies—set de facto rules through the design and deployment of AI systems. This raises questions about accountability, as corporations increasingly wield wildly excessive powers.
At the deepest level, AI challenges every political philosophy by shifting the boundary between human and machine decision-making. When algorithms influence judicial rulings, financial markets, or battlefield strategy, responsibility becomes blurred. Who is accountable for an AI decision: the programmer, the corporation, the state, or the algorithm itself? This diffusion of agency undermines traditional notions of free expression, sovereignty, justice, and accountability.
While it may already be too late to mitigate the worst outcomes of AI, the task for politics in the 21st century is therefore not to adapt to AI, but to ensure that AI cannot and will not further corrupt and diminish our human experience.