In the 1970s, Michel Foucault unsettled the idea that knowledge simply reflects reality. Instead, he argued that knowledge is produced—and always entangled with power. His term “power/knowledge” emphasized that what we accept as “truth” is shaped by institutional forces, discourses, and practices. Now, in the age of artificial intelligence, Foucault’s insight feels less like theory and more like a diagnostic manual.
As algorithms increasingly mediate decisions about employment, policing, healthcare, and credit, the Foucauldian lens offers a vital way to understand what’s at stake. AI does not merely process neutral data. It reflects, encodes, and often amplifies the existing hierarchies of the societies that build it.
Power/Knowledge and Discourse
Foucault’s concept of power/knowledge insists that power is not just top-down domination. It works through language, norms, institutions, and everyday practices. Importantly, it operates by shaping what can be said, thought, or known within a particular context—that is, through discourse.
Discourse, for Foucault, doesn’t just describe the world; it constructs it. Medical discourse, for example, doesn’t just report on bodies; it defines what counts as illness, deviance, normality. Similarly, AI systems are not just technical tools—they participate in the formation of what Foucault would call “regimes of truth.”
Algorithms as Discursive Machinery
Algorithms today function like contemporary discursive infrastructures. They decide what content you see, which loan you qualify for, whether you’re flagged as a threat, or how your resume is ranked. On the surface, these systems appear objective—neutral pipelines of rationality. But beneath that veneer lies a mess of social assumptions and historical data sets.
Take algorithmic bias. Facial recognition systems have been shown to misidentify Black faces at far higher rates than white ones. Predictive policing tools target neighborhoods already over-policed. Resume-screening AIs inherit gender bias from previous hiring data. These are not isolated glitches to be patched; they are expressions of the power/knowledge nexus. The data reflects historical inequalities; the algorithms perpetuate them.
By embedding past norms into present-day decisions, AI systems naturalize and automate inequality. They make historically produced outcomes appear as logical outputs. In Foucault’s terms, they manufacture truths that reinforce dominant discourses while marginalizing others.
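The mechanism is easy to sketch. The toy Python example below, with entirely hypothetical data and feature names, shows a naive resume screener that learns per-feature hire rates from past human decisions. Gender never appears as an input, yet the model penalizes it anyway, through a proxy feature that past decision-makers happened to punish. This is a minimal sketch of proxy bias, not any real vendor's system.

```python
# A minimal sketch (hypothetical data and feature names) of how a naive
# resume screener trained on historical decisions reproduces past bias.
# The model never sees gender directly; it learns it through a proxy.

from collections import defaultdict

# Historical records: (features, hired). "captained_womens_team" acts as
# a gender proxy; past human decisions penalized it, not the skills.
history = [
    ({"python", "lead_role"}, True),
    ({"python"}, True),
    ({"python", "captained_womens_team"}, False),
    ({"python", "lead_role", "captained_womens_team"}, False),
    ({"lead_role"}, True),
    ({"captained_womens_team", "lead_role"}, False),
]

# "Training": score each feature by the hire rate of resumes containing it.
counts = defaultdict(lambda: [0, 0])  # feature -> [hires, total]
for features, hired in history:
    for f in features:
        counts[f][0] += hired
        counts[f][1] += 1

weights = {f: hires / total for f, (hires, total) in counts.items()}

def score(features):
    """Average the learned per-feature hire rates."""
    return sum(weights.get(f, 0.5) for f in features) / len(features)

# Two equally qualified candidates; only the proxy feature differs.
a = {"python", "lead_role"}
b = {"python", "lead_role", "captained_womens_team"}
print(f"candidate A: {score(a):.2f}")  # 0.50
print(f"candidate B: {score(b):.2f}")  # 0.33, penalized purely via the proxy
```

Nothing in the code is malicious; the ranking gap is simply yesterday's discrimination, compressed into a weight and replayed as today's "merit."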
AI, Surveillance, and the Digital Panopticon
Surveillance, another core Foucauldian theme, finds new life in algorithmic governance. In Discipline and Punish, Foucault examined how modern power works not through violence but through observation, normalization, and internalized discipline—the logic of the Panopticon.
Today’s digital platforms operate as dispersed, data-driven Panopticons. Our clicks, purchases, emotions, and movements are tracked and analyzed, producing knowledge that serves commercial and governmental interests. This surveillance isn't passive; it is formative. It shapes behavior through subtle nudges and predictive interventions, defining what is visible, valuable, and possible.
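That feedback loop can also be made concrete. The toy greedy recommender below (illustrative numbers only, not any real platform's algorithm) shows whatever its data predicts will perform best, and only what it shows generates new data. A small accident in the early record decides what stays visible forever.

```python
# A toy greedy recommender (illustrative numbers only): each round it shows
# the item its data says performs best, and only shown items generate new
# data. An early fluke locks in, and the system's "knowledge" of the other
# items freezes at whatever history it happened to record.

true_appeal = {"a": 0.50, "b": 0.55, "c": 0.52}   # unknown to the system
clicks = {"a": 1.0, "b": 1.0, "c": 1.2}           # noisy early history
shows  = {"a": 2.0, "b": 2.0, "c": 2.0}

for _ in range(1000):
    # Predicted value = observed click rate so far.
    best = max(clicks, key=lambda k: clicks[k] / shows[k])
    shows[best] += 1
    clicks[best] += true_appeal[best]  # expected clicks, kept deterministic

for k in true_appeal:
    print(k, f"shown {shows[k]:.0f}x, estimated rate {clicks[k]/shows[k]:.2f}")
# Item "b" is actually best, but "c"'s lucky early data makes it the only
# item ever shown: the surveillance record decided what remained visible.
```

The better option is never rediscovered, because the system only learns about what it already promotes. Its record of the world quietly hardens into the world itself.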
Why It Matters
Understanding AI through Foucault doesn’t mean rejecting technology. It means rejecting the myth of neutrality. If knowledge is always political, then so is data. If power shapes discourse, then AI—our new discourse engine—must be examined not just for accuracy but for ideology.
Rather than treating algorithmic decisions as mere technical outcomes, we should ask: Whose truths do they encode? Whose voices are excluded? Only then can we begin to imagine more just, transparent, and accountable uses of technology.
Foucault never wrote about algorithms, but he understood that power rarely announces itself. It whispers through norms, codifies itself in language, and now, increasingly, speaks in code.
See also: Algorithmic Culture and Surveillance Capitalism: what we’re really selling when we scroll