
Monday, December 29, 2025

The New Jim Code: Ruha Benjamin on How Technology Reimagines Race and Justice

In The New Jim Code: Race, Carceral Technoscience, and Liberatory Imagination, scholar Ruha Benjamin asks a deceptively simple question with deep consequences: What happens when the technologies we build reflect the biases we carry? Her answer is both urgent and subtle—technology doesn’t merely automate decisions; it reshapes social hierarchies in ways that often reinforce old injustices under the guise of innovation and neutrality.

Benjamin’s work is situated at the intersection of science and society, a place where assumptions about objectivity often obscure entrenched patterns of inequality. The “New Jim Code” she describes is a play on the infamous Jim Crow laws—legalized systems of racial segregation in the United States—and yet it signals something distinct: a coded, seemingly technical logic that licenses discrimination through data, algorithms, and “smart” systems that quietly perpetuate the biases of their human architects.


When Neutrality Isn’t Neutral

One of the central themes of Benjamin’s analysis is the myth of technological neutrality. We like to think that algorithms are impartial because they run on code, not sentiment. Yet Benjamin shows how data is never raw; it is collected from social contexts already shaped by inequality. Predictive policing tools, for example, use historical crime data to forecast where crimes are likely to occur next. But when that historical data reflects disproportionate policing of Black and brown neighborhoods, the algorithm doesn’t correct for bias—it amplifies it. The result? “Smart” policing that is anything but smart about justice.
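The feedback loop described here can be sketched in a few lines of toy code. Everything in it is invented for illustration (the neighborhoods, the rates, the patrol rule); it stands in for no real system:

```python
# Toy model of the predictive-policing feedback loop. Both neighborhoods
# have the SAME true crime rate; "A" merely starts with more recorded
# incidents because it was patrolled more heavily in the past.
TRUE_RATE = 0.10                       # identical underlying reality
recorded = {"A": 60.0, "B": 40.0}      # biased historical record

for year in range(10):
    # The "predictive" step: the hotspot with the most recorded crime
    # gets double the patrol presence next year.
    hotspot = max(recorded, key=recorded.get)
    patrols = {n: (100 if n == hotspot else 50) for n in recorded}
    # More patrols mean more recorded incidents, whatever the true rate.
    for n in recorded:
        recorded[n] += patrols[n] * TRUE_RATE

share_a = recorded["A"] / sum(recorded.values())
print(f"A's share of recorded crime after 10 years: {share_a:.2f}")
```

Despite identical underlying behavior, A's share of recorded crime climbs from 0.60 to 0.64 and keeps rising: the model never observes crime itself, only the residue of its own patrol decisions.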

Similarly, facial recognition systems are celebrated as tools of security and convenience, yet they perform far worse on darker-skinned faces—so much so that error rates can be several times higher than for lighter-skinned faces. Here, a technology marketed as universally useful betrays deeply racialized assumptions embedded in its training data and design priorities. The problem isn’t just misclassification. It’s a form of exclusion: who counts as legible to the system, and who remains invisible or misrecognized?


Carceral Technoscience: New Tools, Old Patterns

Benjamin uses the term “carceral technoscience” to describe how technology often extends the reach of punitive systems rather than dismantling them. Bail algorithms, risk assessment scores, and court sentencing tools are frequently adopted with promises of efficiency and fairness. But in practice they can harden disparities in the criminal justice system, transforming subjective judgments into “objective” metrics that are harder to contest.

Imagine a bail algorithm that scores a defendant as high-risk based on zip code, employment status, or prior convictions. Though presented as neutral, these factors are inflected with socio-economic and racial biases. The defendant isn’t just judged by their individual circumstances; they are judged by the social history of their community—a history marked by differential policing, economic exclusion, and unequal investment. Technology, in this sense, does not transcend inequality; it entrenches it.
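A minimal sketch of such a score makes the proxy problem concrete. The weights and zip codes below are invented for illustration; no deployed tool is being reproduced:

```python
# Hypothetical bail risk score of the kind described above.
# Weights and zip codes are invented, purely illustrative.
HIGH_RISK_ZIPS = {"10451", "60624"}   # stand-ins for heavily policed areas

def risk_score(zip_code: str, employed: bool, prior_convictions: int) -> int:
    score = 0
    score += 3 if zip_code in HIGH_RISK_ZIPS else 0   # proxy for place
    score += 2 if not employed else 0                 # proxy for class
    score += 2 * prior_convictions                    # shaped by past policing
    return score

# Two defendants, identical charge and conduct, different neighborhoods:
print(risk_score("10451", employed=False, prior_convictions=1))  # 7
print(risk_score("90210", employed=True, prior_convictions=1))   # 2
```

Two people with the same conduct in the case at hand receive very different "objective" numbers, purely because of where they live and whether they hold a job: the score imports the history of a place and a class into the judgment of an individual.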


Liberatory Imagination: Beyond Critique

Yet Benjamin’s work is not only diagnostic; it is generative. She insists that understanding the New Jim Code must go hand in hand with imagining alternatives. What would technologies designed not to reinforce carceral logics but to promote justice and equity look like? This is where her idea of liberatory imagination becomes essential.

A liberatory approach doesn’t start with technology as given, but with people and communities. Instead of asking how to make policing more efficient, we might ask how to reduce harm and build safety through investment in housing, education, and health—areas where technology can support, rather than replace, human judgment and care. Rather than using risk scores to predict “dangerous” individuals, data could be used to identify sites of need and opportunity. The shift is profound: from managing populations to empowering them.


Why It Matters

In an era increasingly defined by algorithms—from credit scoring to job hiring systems, from predictive policing to educational gatekeeping—The New Jim Code reminds us that we are not passive recipients of technological change. The values and assumptions we embed in systems today will shape social realities tomorrow.

Ruha Benjamin’s book is a call to intellectual vigilance and collective imagination. It asks us not just to critique the technologies that surround us, but to envision technologies—and social orders—that reflect our highest aspirations for justice, dignity, and shared humanity. In doing so, she turns the question back to us: What kind of future are we coding?



Friday, November 14, 2025

The Anomie of the Influencer: Selfhood in the Attention Economy

Durkheim warned us, in the chilly prose of 19th-century sociology, that when a society loses its moral framework—its shared norms, its collective sense of purpose—individuals suffer. They become unmoored. He called this condition anomie: a state of normlessness, where the social scripts we rely on for meaning dissolve. Now fast-forward to today, where the influencer—that curious avatar of hyper-visibility and curated intimacy—stands at the bleeding edge of this crisis.


Durkheim Meets the Algorithm

Influencers are not simply entrepreneurs of self; they are symptoms. In a world where meaning is increasingly extracted through metrics, they embody a terrifying paradox: total social presence and profound personal instability. Their job is to exist, perform, and be consumed—always.

For Durkheim, stable societies produce stable selves. But the influencer’s society is a digital marketplace in permanent flux. Trends shift hourly. Algorithms tweak behavior with opaque indifference. Audience moods swing like weather. There is no stable "we" to belong to, only a crowd to please. In this context, selfhood becomes a hustle, and identity is a brand under constant threat of irrelevance.


The Perils of Curated Intimacy

Influencers are paid to be relatable, which is another way of saying they are paid to simulate friendship. This creates a form of professionalized vulnerability: confessional captions, raw honesty, mental health check-ins—but on schedule, with lighting. The emotional labor of being "authentic" online, day after day, fractures the boundary between self and performance.

Anomie creeps in here, disguised as freedom. With no clear boundary between work and life, or public and private, the influencer loses the moral anchors Durkheim believed were essential. The more their content is validated by likes and comments, the more the "real" self becomes uncertain—a ghost behind the engagement metrics.


Suicide and Spectacle

Durkheim's Suicide identified different types, including anomic suicide, caused by social instability and sudden dislocation. Today, we see modern echoes: influencer burnout, breakdowns livestreamed, tragic deaths announced via Notes App screenshots. These are not just personal tragedies—they are structural symptoms. A society that demands constant exposure without offering collective support breeds this kind of psychic erosion.


The Audience is the Institution

For the influencer, there is no school, church, or workplace to confer social legitimacy. There is only the audience. And that audience is fickle, fragmented, and algorithmically filtered. Praise one day, backlash the next. Parasocial love turns into public execution with terrifying ease.

Durkheim taught that we become who we are through others. But what happens when those "others" are invisible, numerical, and largely unknown? When validation is externalized into a dashboard, and community is abstracted into followers, the self risks dissolving entirely.


Can Solidarity Be Streamed?

There are flickers of resistance: creator unions, digital sabbaticals, collective calls for better mental health support. But these are patchwork solutions to a deeper wound. What the influencer reveals is not just the fragility of individual identity in the attention economy, but the fragility of the society that made them necessary in the first place.

Durkheim's ghost hovers over every selfie. His lesson is clear: without shared meaning, without rituals that bind us beyond the market, we unravel. The influencer is not the disease, but the fever dream of a culture sick with loneliness and spectacle.

Saturday, October 25, 2025

The Meaning of Algorithmic Governance: On Algorithmic Governance and Governance of Algorithms

In recent decades, a new form of governance has emerged—one that operates not only through political institutions but through algorithms. Intelligent systems now analyze, evaluate, classify, and decide—sometimes replacing human judgment. The book Algorithmic Governance and Governance of Algorithms, edited by Martin Ebers and Marta Cantero Gamito, offers a comprehensive exploration of this phenomenon, distinguishing between two intertwined dimensions. The first is governance by algorithms: the use of algorithms to manage areas such as healthcare, education, security, and welfare. The second is governance of algorithms: the effort to impose regulation, ethics, and legal oversight on these algorithmic systems.

Algorithms as Social Power

Ebers and Cantero Gamito argue that algorithms are not merely technical tools but new social forces. They determine who gets a loan, who is hired, and who is flagged as “high risk” by financial or security systems. Beneath their cold, seemingly objective logic lie values, assumptions, and human biases. For example, an algorithm trained on historical data can easily replicate existing patterns of discrimination against minorities or women—not because it is “evil,” but because it learns from an already biased social reality.


The Democratic Challenge

This raises a fundamental question of legitimacy: how can we preserve accountability, transparency, and fairness in a world where decisions are made by computational “black boxes”? The book argues that democracy rests on the principle of public justification—the idea that every decision can be explained and contested. Yet when algorithms decide, the reasoning behind those decisions often becomes opaque. Hence, governance by algorithms demands governance of algorithms: a framework of oversight, regulation, and the development of principles such as algorithmic transparency, the right to explanation, and social sensitivity.


Between Efficiency and Justice

One of the book’s central tensions lies between technological efficiency and social justice. Algorithms promise speed and cost-effectiveness, but the price is often the loss of human discretion and empathy. The authors cite cases where algorithmic management improved efficiency in welfare or healthcare systems but undermined fairness or accuracy. The question, then, is not whether to use algorithms, but how—to what extent, under what boundaries, and with what ethical and legal safeguards.

Beyond the legal discussion, the book touches on deeper philosophical questions: what does governance mean in an era where decision-making power is increasingly automated? Can moral responsibility be assigned to algorithms? And are we still sovereign subjects when our public and private lives are governed by continuous computation of risk and optimization?

Algorithmic Governance and Governance of Algorithms is a key text for understanding an era in which the algorithm has become the silent legislator of our lives. The book is not anti-technology; rather, it calls for boundaries grounded in responsibility and humanity. It invites us to rethink the meaning of democracy in a world where power is measured not only by who makes the decisions—but by who programs them.


Monday, October 20, 2025

Your Phone Is an Organ Now: Technogenesis and the Myth of the 'Offline Self'

You can't log off your liver. You can't swipe away your spleen. And yet, every few months, someone tries to sell you on the fantasy of "going offline" as if it's a kind of spiritual cleanse, a return to some Edenic pre-digital state. Let me be clear: there is no offline. There hasn't been for a while. Your phone is not just an external tool; it has become an internal organ—an extension not just of your body, but of your cognition, your memory, your very sense of self. Welcome to technogenesis.

Technogenesis, a term popularized by N. Katherine Hayles, describes the co-evolution of humans and technology. It's not just that we shape tools; the tools shape us back. Glasses changed how we read, the clock changed how we perceive time, and the smartphone—well, it has changed everything. Attention, intimacy, navigation, even the way we mourn. Ever scroll through an old friend's Instagram, years after they've passed? That's not just nostalgia. That's techno-cultural necromancy.

But let’s not get spooky yet. The more immediate point is that the divide between "online" and "offline" is a comforting lie. There is no pristine, unmediated self waiting to be recovered if you just uninstall Instagram or take a yoga retreat. What you experience as "yourself" is already inextricably bound up with digital platforms, cloud servers, haptic feedback, and algorithmic nudges. In this sense, the self is no longer merely embodied; it is technologically distributed. You are partly made of circuits.

This isn’t meant to be dystopian or paranoid. It’s just the anthropology of now. Think of it like this: you don’t remember phone numbers anymore, not because you’re lazy but because your brain has adapted to outsource that function to the device. Cognitive offloading is the norm, not the glitch. And if you've ever Googled a symptom instead of calling your doctor, congratulations—you've used the technosphere as an epistemological prosthetic.

The philosopher Bernard Stiegler warned that when memory is externalized (in writing, in devices, in databases), it always comes at a cost. Something is gained—speed, efficiency, scalability—but something is lost too: slowness, depth, forgetfulness-as-a-feature. But unlike Stiegler’s melancholic tone, I don’t think the point is to judge this transformation. The point is to notice it.

Because once you see your phone as an organ, you start asking different questions. Not: "How do I detox from tech?" but "What kind of hybrid creature am I becoming?" Not: "Is this real connection?" but "How has connection itself been reshaped by mediation?" The goal isn't to escape the technosphere. It's to become literate in it. To treat the interface not as a veil, but as part of your skin.

So, no, you're not logging off. But maybe you can learn to log on with more intention, more criticality, more weird grace. Your spleen doesn't have an update cycle. Your phone does. But they're both keeping you alive in their own way. Act accordingly.


More to think about:

The Invisible Workforce: Tiziana Terranova and the Politics of Digital Labour




Sunday, October 19, 2025

The Meaning of Technosphere, Technogenesis, and Technicity: Explained and Summarized

What is the technosphere?

Think of the technosphere as Earth’s newest “sphere,” alongside the lithosphere and biosphere. It’s the vast, coupled system made of all our machines, infrastructures, energy networks, software, and the humans who keep them running. Geoscientist Peter Haff coined the term to stress that this is not just a heap of gadgets but a quasi-autonomous system with its own dynamics—fuel flows, information loops, supply chains—that can constrain what individuals and even states can do. 

In this lens, your phone isn’t just “your” device. It’s a node in larger grids—cloud services, rare-earth mining, logistics, platforms—whose momentum often outruns personal intentions. That’s why policy debates about AI, energy, or content moderation can feel like steering a tanker with a canoe paddle: local choices meet system-level inertia. 


What is technogenesis?

Technogenesis names the co-evolution of humans and technics: tools shape our bodies, attention, and culture as we in turn design tools that reflect our habits and ideals. N. Katherine Hayles popularized this framing in the digital humanities, arguing that contemporary media literally reconfigure our cognitive styles (e.g., hyperattention vs. deep attention) even as we reconfigure media. 

This is not a brand-new story—fire, writing, and the clock already rewired human life—but digital networks accelerate feedback. The everyday example is search: we externalize memory to the web, which then optimizes results for engagement, which in turn rewards certain forms of writing and thinking. Our “minds” increasingly include servers, recommender systems, and interfaces. 


What is technicity?

Technicity is the quality or mode of operation that makes something technical—its propensity to form systems, to be redesigned, and to transform its milieu. French thinkers help here. For Gilbert Simondon, technicity appears most clearly in technical elements (gears, circuits, code primitives) whose functions can transfer and recombine across devices; technology “individualizes” through ongoing integration with an associated milieu (energy, users, norms). 

Bernard Stiegler pushes further: human becoming must be thought through technicity. Tools are not mere add-ons; they’re constitutive supplements that exteriorize memory (from knotted strings to cloud drives). This exteriorization is a pharmakon—both remedy and poison—expanding capability while risking dependency and loss of individuation. 


How the three ideas interlock

  • Scale: Technogenesis describes micro-level co-evolution (bodies, habits, cognition). Technicity names the operational logic that enables such evolution. The technosphere is the macro-level system where these logics aggregate and acquire momentum.

  • Agency: Individuals innovate, but large-scale technological systems develop path dependencies that can limit what any actor can choose—think fossil-fuel infrastructures or platform economies. Recognizing technosphere dynamics reframes governance as system design rather than mere personal virtue. 

  • Ethics and politics: If technicity is constitutive of the human, ethics cannot be tech-optional. Education, regulation, and design are sites of care for how externalized memory and algorithmic attention are shaping selves and publics. 


Why this matters now

  1. Policy realism: Climate tech, AI safety, and data governance must target system couplings (energy, computation, incentives), not only user behavior. 

  2. Design for cognition: Interfaces should respect human attentional limits and support plural styles of thinking, rather than exploiting hyperattention alone. 

  3. Cultural literacy: Understanding technogenesis helps educators and parents move beyond nostalgia or panic toward practices that scaffold memory, attention, and judgment in networked life. 

  4. Institutional foresight: Organizations live inside the technosphere; resilience demands mapping dependencies (cloud providers, supply chains, standards) and planning for their shifts. 



The Invisible Workforce: Tiziana Terranova and the Politics of Digital Labour

In the early 2000s, before social media had fully reshaped everyday life, Tiziana Terranova offered a prophetic diagnosis of the digital economy. Her essay “Free Labor: Producing Culture for the Digital Economy” introduced a concept that has since become central to critical media theory: digital labour. Two decades later, her insights feel more relevant than ever. While the digital world promised liberation, participation, and creativity, Terranova revealed a hidden continuity with older forms of exploitation—only now it was our very participation that had become the site of profit.


The Gift That Isn’t Free

At the heart of Terranova’s argument lies a paradox. The internet was celebrated as a space of freedom and voluntary creation—a commons of shared information. People posted reviews, built fan communities, contributed code to open-source projects, and uploaded endless content to early platforms. This activity seemed voluntary, even joyful: a new kind of cultural gift economy.

But Terranova asked: who benefits from this free labour? Her answer was unsettling. The “free” in free labour, she argued, doesn’t mean “without cost.” It means unpaid. Every online contribution—every post, like, or click—feeds a larger system of value extraction. Digital labour blurs the line between work and play, between economy and culture, producing value for corporations while masking its own exploitation behind pleasure and participation.


Capital’s New Ecology

Terranova extends Marx’s analysis of labour into the informational age. In classical capitalism, labour produced commodities through visible, waged work. In digital capitalism, value emerges from the collective intelligence of connected users. The boundaries of work have dissolved: creativity, attention, even emotion are harnessed as productive forces.

What makes this system so insidious is its immateriality. There is no factory gate, no clocking in or out. Yet, the labour is constant: maintaining social profiles, training algorithms with our clicks, curating content streams. Terranova calls this the informational mode of production, an “open system” where the social and the economic merge. Digital networks absorb affect, knowledge, and interaction, transforming them into data and profit.

This is why Terranova’s thought often sits at the intersection of autonomist Marxism (especially the Italian operaismo tradition) and post-structuralist theory. Like Antonio Negri and Maurizio Lazzarato, she sees the new economy as one that exploits the general intellect—the collective capacity for thought and creativity. Yet, unlike techno-utopians, she refuses to romanticise this transformation. Connectivity has not freed us; it has subtly reorganised the logic of control.


From Free Labour to Platform Capitalism

What Terranova anticipated has now become the architecture of what Shoshana Zuboff calls surveillance capitalism. The attention economy, influencer culture, content moderation, gig work—all are extensions of the same dynamic she identified: the commodification of social life itself. Every gesture online generates value, whether through data mining or advertising revenue. The user is both the worker and the product.

Still, Terranova resists despair. Her work contains an undercurrent of potentiality—the idea that digital networks could yet be reclaimed as spaces of collective creation, if detached from corporate ownership. The same “free labour” that fuels capital could also power new forms of commons-based production, decentralised knowledge, or digital solidarity.


The Legacy of a Concept

To read Terranova today is to realise that the digital economy was never merely about technology—it was always about labour, value, and power. Her vision cuts through the rhetoric of innovation and disruption to expose the economic logic beneath. “Free labour” remains one of the most generative ideas in media theory precisely because it describes our condition: a society that works constantly without recognising itself as working.

In the end, Terranova leaves us with a challenge as much as a critique: if we are all digital labourers, how might we transform this invisible work into visible power?



Thursday, September 25, 2025

From Culture Industry to Culture Data: The Economics of Aesthetic Control

In 1944, Theodor Adorno and Max Horkheimer famously declared that culture had become an industry. Art, once a realm of critical reflection and spiritual nourishment, was now manufactured like soap or automobiles—standardized, predictable, and ultimately pacifying. Their critique of the “culture industry” was not just aesthetic but economic: under capitalism, cultural production obeyed the logic of the commodity.

That logic hasn’t disappeared—it’s mutated. Where the mid-century culture industry sold movies, music, and magazines, today’s platforms trade in services and, increasingly, data. Spotify doesn’t just sell you music—it personalizes your sonic environment. Netflix doesn’t sell films—it sells seamless streaming, frictionless choice, algorithmic intimacy. And TikTok? It doesn’t sell content at all—it collects your behavioral residues and turns them into predictive models.

From culture as commodity to culture as service to culture as data—we are witnessing a structural transformation of aesthetic economies. Adorno’s original framework still haunts this evolution, but it needs updating.


Culture as Commodity: The Classic Model

Adorno and Horkheimer’s “culture industry” was built on the Fordist model of production: cultural goods were mass-produced and mass-distributed, designed for maximum consumption and minimal disruption. The goal wasn’t art, but compliance. The illusion of choice masked a deeper uniformity—different genres, same formula.

In this world, the economic transaction was relatively clear: money for a cultural product. You bought a ticket, a record, a magazine. The consumer was passive, the product fixed.

But as digital infrastructures replaced physical formats, something shifted.


Culture as Service: The Platform Turn

With the rise of streaming, cultural products became services. You no longer owned the song or film—you accessed it. This shift wasn’t just about convenience; it redefined the temporal logic of cultural consumption. Instead of discrete purchases, users entered into ongoing relationships with platforms.

Spotify and Netflix don’t just deliver content—they manage your experience. Curation, recommendation, and personalization become the value-add. The old culture industry sold standardization; the new one sells differentiation at scale. But this personalization isn’t a liberation of taste. It’s the production of feelings of freedom within tightly controlled frameworks.

Adorno would likely see through the veneer. Personalization is just the latest mask of conformity: your “Discover Weekly” is drawn from a statistical average; your binge session is a behavioral feedback loop optimized for retention.

This service logic paved the way for a more abstract economy—one where culture itself becomes secondary to the extraction of data.


Culture as Data: The Invisible Commodity

Enter the age of culture-as-data. Here, the cultural product is no longer the end goal—it’s the interface for data capture. TikTok doesn’t need to sell you a song or a show. It needs you to scroll, pause, like, replay—each gesture feeding a machine that doesn’t care about art, only prediction.

This is where Adorno’s analysis becomes eerily prescient. He argued that the culture industry was not about cultural enrichment but about behavioral conditioning. Today, the conditioning is continuous and participatory. You create content, but more importantly, you train the algorithm. Culture is no longer a product but a training ground for models that anticipate—and monetize—desire.

The consumer is both subject and resource.


Why It Matters: The Disappearance of Aesthetic Autonomy

The shift from culture industry to culture service to culture data marks the deepening of capitalism’s grip on imagination. What began as mass-produced distraction has become an intimate, datafied choreography of attention.

Adorno mourned the loss of aesthetic autonomy—art’s capacity to resist, to estrange, to provoke thought. In the data age, that autonomy is not just eroded; it's rendered invisible. Culture is everywhere, but its purpose has changed. No longer a space for collective meaning-making, it is now the substrate for behavioral prediction, micro-targeting, and algorithmic governance.

What remains of critique in a world where every deviation is already mapped, modeled, and monetized?

Wednesday, August 13, 2025

Power/Knowledge in the Age of AI: The Algorithmic Structuring of Discourse

In the 1970s, Michel Foucault unsettled the idea that knowledge simply reflects reality. Instead, he argued that knowledge is produced—and always entangled with power. His term “power/knowledge” emphasized that what we accept as “truth” is shaped by institutional forces, discourses, and practices. Now, in the age of artificial intelligence, Foucault’s insight feels less like theory and more like a diagnostic manual.

As algorithms increasingly mediate decisions about employment, policing, healthcare, and credit, the Foucauldian lens offers a vital way to understand what’s at stake. AI does not merely process neutral data. It reflects, encodes, and often amplifies the existing hierarchies of the societies that build it.


Power/Knowledge and Discourse

Foucault’s concept of power/knowledge insists that power is not just top-down domination. It works through language, norms, institutions, and everyday practices. Importantly, it operates by shaping what can be said, thought, or known within a particular context—that is, through discourse.

Discourse, for Foucault, doesn’t just describe the world; it constructs it. Medical discourse, for example, doesn’t just report on bodies; it defines what counts as illness, deviance, normality. Similarly, AI systems are not just technical tools—they participate in the formation of what Foucault would call “regimes of truth.”


Algorithms as Discursive Machinery

Algorithms today function like contemporary discursive infrastructures. They decide what content you see, which loan you qualify for, whether you’re flagged as a threat, or how your resume is ranked. On the surface, these systems appear objective—neutral pipelines of rationality. But beneath that veneer lies a mess of social assumptions and historical data sets.

Take algorithmic bias. Facial recognition systems have been shown to misidentify Black faces at far higher rates than white ones. Predictive policing tools target neighborhoods already over-policed. Resume-screening AIs inherit gender bias from previous hiring data. These aren’t flaws; they are expressions of the power/knowledge nexus. The data reflects historical inequalities; the algorithms perpetuate them.
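The inheritance mechanism is almost banal when written out. In this invented miniature, a frequency-based screener "learns" from five historical hiring decisions; the resume tokens and outcomes are fabricated for illustration and describe no real dataset:

```python
# Toy illustration: a "neutral" screener trained on a biased history.
# Invented records: 1 = hired, 0 = rejected. Resumes mentioning a
# (hypothetical) women's club token were rejected in the past, so a
# frequency-based model learns to penalize that token going forward.
history = [
    ({"java", "lead"}, 1),
    ({"java"}, 1),
    ({"python", "lead"}, 1),
    ({"java", "womens_club"}, 0),
    ({"python", "womens_club"}, 0),
]

def token_hire_rate(token: str) -> float:
    outcomes = [hired for tokens, hired in history if token in tokens]
    return sum(outcomes) / len(outcomes)

print(round(token_hire_rate("java"), 2))         # 0.67
print(round(token_hire_rate("womens_club"), 2))  # 0.0
```

Nothing in the code mentions gender, yet the past rejection pattern becomes the future rule: historically produced outcomes re-emerge as "logical outputs," exactly the naturalization described above.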

By embedding past norms into present-day decisions, AI systems naturalize and automate inequality. They make historically produced outcomes appear as logical outputs. In Foucault’s terms, they manufacture truths that reinforce dominant discourses while marginalizing others.


AI, Surveillance, and the Digital Panopticon

Surveillance, another core Foucauldian theme, finds new life in algorithmic governance. In Discipline and Punish, Foucault examined how modern power works not through violence but through observation, normalization, and internalized discipline—the logic of the Panopticon.

Today’s digital platforms operate as dispersed, data-driven Panopticons. Our clicks, purchases, emotions, and movements are tracked and analyzed, producing knowledge that serves commercial and governmental interests. This surveillance isn't passive; it is formative. It shapes behavior through subtle nudges and predictive interventions, defining what is visible, valuable, and possible.


Why It Matters

Understanding AI through Foucault doesn’t mean rejecting technology. It means rejecting the myth of neutrality. If knowledge is always political, then so is data. If power shapes discourse, then AI—our new discourse engine—must be examined not just for accuracy but for ideology.

Rather than treating algorithmic decisions as mere technical outcomes, we should ask: Whose truths do they encode? Whose voices are excluded? Only then can we begin to imagine more just, transparent, and accountable uses of technology.

Foucault never saw an algorithm, but he understood that power rarely announces itself. It whispers through norms, codifies itself in language, and now, increasingly, speaks in code.


See also: Algorithmic Culture and Surveillance Capitalism: what we’re really selling when we scroll

Tuesday, August 5, 2025

Algorithmic Culture and Surveillance Capitalism: what we’re really selling when we scroll

Data‑driven advertising isn’t just a new business model; it’s a cultural and political revolution. In her book The Age of Surveillance Capitalism (2019), Shoshana Zuboff argues that tech giants have created a new economic logic built on the extraction and monetisation of human experience. Far from simply providing search engines, social networks or video platforms, these companies systematically gather behavioural data, predict our actions and sell those predictions to the highest bidder. The result is a transformation not only of the market but of subjectivity itself – an evolution of what Theodor Adorno and Max Horkheimer once criticised as the culture industry: a system that turns art and thought into commodities, shaping consciousness for profit.


What is surveillance capitalism?

Unlike industrial capitalism, which commodified labour, surveillance capitalism commodifies behaviour. Zuboff traces how Google discovered that the “data exhaust” users leave behind when they type queries or navigate maps could be analysed to predict future behaviour. Over time, the company moved from simply improving services to predicting and influencing users, selling targeted advertising based on what it knows we will do next. Facebook, Amazon and hundreds of data brokers followed suit. This process, she argues, is not consensual; users’ data are taken without meaningful permission and used to train algorithms that shape what news we see, who we date and what we buy. The system thus operates through asymmetrical power: companies know everything about us, while we know little about how their predictions are made.


Algorithmic culture and the data economy

Media scholar Ted Striphas popularised the term algorithmic culture to describe how cultural forms are now curated by code. Streaming platforms decide which songs surface and which films are recommended; social media feeds rank and filter posts. These processes are not neutral. They are designed to maximise engagement and, consequently, advertising revenue. The algorithms learn our preferences, not just to satisfy them, but to nudge them in profitable directions.

This dynamic mirrors Foucault’s idea of power/knowledge: power is productive, creating new desires and identities rather than simply repressing them. By analysing and organising our activities, algorithms generate knowledge about us that can then be used to influence us. In this sense, we participate in our own governance; we train the systems that will shape our future choices. Zuboff calls the resulting behavioural predictions surveillance assets – raw materials that can be packaged and sold in markets we rarely see.
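The feedback loop described above can be sketched in a few lines. This is a toy model with invented numbers, not any platform's actual recommender: a greedy system serves whichever content category has the best observed click rate, while a simulated user engages only marginally more with one category. Even that tiny asymmetry is enough for the feed to collapse around it:

```python
categories = ["news", "music", "outrage"]
# Hypothetical per-user engagement rates; "outrage" is only slightly stickier.
engagement = {"news": 0.50, "music": 0.50, "outrage": 0.55}

clicks = {c: 1.0 for c in categories}  # optimistic prior: everything looks good
shows = {c: 1.0 for c in categories}

def recommend():
    # Greedy policy: serve the category with the best observed click rate.
    return max(categories, key=lambda c: clicks[c] / shows[c])

for _ in range(500):
    c = recommend()
    shows[c] += 1
    clicks[c] += engagement[c]  # add expected clicks, kept deterministic

feed_share = {c: shows[c] / sum(shows.values()) for c in categories}
# After 500 rounds, "outrage" dominates the feed despite a mere
# 5-point engagement edge: the loop amplifies small differences.
```

The user never asked for a narrower feed; the system inferred it, and in doing so produced the very preference it claims to have discovered. This is power/knowledge in miniature: the knowledge the platform generates about us is immediately an instrument for governing what we see next.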

From the culture industry to digital panopticon

Adorno and Horkheimer warned that mass‑produced culture turns individuals into passive consumers. Zuboff extends this critique: we are not just consuming standardised products; we are producing data that enable corporations to anticipate and mould our behaviour. Foucault's reading of the panopticon – Jeremy Bentham's prison design, in which inmates never know when they are watched – is a useful metaphor here. In the digital realm, surveillance is continuous and often invisible. As Foucault observed, power produces subjects who police themselves. Today's apps encourage us to share our locations, monitor our workouts or track our sleep, embedding surveillance into everyday life.


Why it matters

Surveillance capitalism raises urgent questions about democracy, autonomy and resistance. When a handful of companies control the infrastructure through which we communicate, shop and learn, they exercise unprecedented influence over public discourse. The data economy can exacerbate social inequalities, as predictive policing algorithms target marginalised communities or loan algorithms deny credit based on opaque criteria. As Zuboff notes, the danger is not just privacy loss but a future in which our behaviour is subtly engineered by entities we cannot hold accountable.

Understanding these dynamics is crucial for cultural studies. Theories of power, ideology and subject formation provide tools to analyse how algorithms are reshaping society. By recognising the continuities between the culture industry of the mid‑20th century and today’s digital economy, we can better grasp the stakes of our seemingly trivial acts of liking, scrolling and sharing. Informed critique is the first step toward developing forms of collective resistance and alternative infrastructures that prioritise human autonomy over profit.


See also: 

The Invisible Workforce: Tiziana Terranova and the Politics of Digital Labour

Sunday, February 9, 2025

Yanis Varoufakis’ Technofeudalism - Summary and Explanation

Yanis Varoufakis’ Technofeudalism: What Killed Capitalism presents a bold thesis: capitalism, as we know it, is already dead, replaced by a new system he calls technofeudalism. The book argues that capitalism’s defining features—markets and profits—have been marginalized, with digital platforms and rent-extraction mechanisms taking over the economy. This transformation, he claims, has made us modern-day serfs, enriching a new class of feudal overlords: the owners of cloud capital.


The Death of Capitalism and the Rise of Technofeudalism

Varoufakis traces capitalism’s historical metamorphoses, showing how it has evolved over time. Traditionally, capitalism functioned through competitive markets where profits were extracted from wage labor. However, recent economic shifts—particularly the privatization of the internet by Big Tech and the state’s response to the 2008 financial crisis—have fundamentally altered this system.

Instead of markets, we now have platforms, which resemble markets but operate as feudal fiefdoms controlled by tech giants like Google, Amazon, and Facebook. Instead of profit, the primary economic driver has become rent extraction. Large tech firms no longer compete to sell goods and services but rather to control access to digital spaces, monetizing our online interactions and harvesting data. This is what Varoufakis calls cloud rent—the new currency of technofeudalism.


The Role of the State and the Financial Crisis

A key argument in the book is that governments and central banks, particularly in the wake of the 2008 financial crisis, have accelerated capitalism’s transformation into technofeudalism. The crisis, which should have been a moment of financial reckoning, instead led to an unprecedented expansion of state intervention. Governments and central banks funneled trillions into financial markets, not to stimulate productive investment but to prop up existing corporations and financial institutions.

Rather than reinvesting in productive enterprises, large firms used this influx of cheap money to buy back their own shares and invest in speculative assets, further shifting the economy away from production and towards rent-seeking. This process has concentrated wealth and power in the hands of a few, reinforcing the feudal-like dynamics of the new system.


Big Tech as the New Feudal Lords

Varoufakis emphasizes that the digital economy has created a new ruling class, which he calls cloudalists. Unlike traditional capitalists, cloudalists do not generate profits by selling goods or services. Instead, they control access to platforms where economic transactions take place, much like medieval lords controlled access to land.

In this model, corporations and even traditional capitalists have become vassals—dependent on Big Tech’s platforms to reach consumers, advertise products, and conduct business. Meanwhile, ordinary people have been reduced to digital serfs, providing free labor in the form of data generation, content creation, and online engagement, all of which are monetized by the cloudalists.


The Global Impact and the New Cold War

The book extends this analysis to geopolitics, arguing that technofeudalism is reshaping international relations. The competition between the United States and China is framed as a struggle over control of digital platforms, supply chains, and data networks. Varoufakis suggests that this New Cold War is not just about economic supremacy but also about controlling the infrastructure of the digital age.

At the same time, he explores how traditional economic tools—such as central banking, monetary policy, and taxation—are becoming ineffective in a world where economic activity is dominated by private digital fiefdoms. Governments are struggling to regulate these new economic structures, as they lack the means to tax or control the digital rent-extracting empires.


Escaping Technofeudalism

In the final chapters, Varoufakis explores possible ways to resist and escape technofeudalism. He argues that traditional leftist approaches—such as higher taxation on corporations or stronger labor protections—are insufficient, as they fail to address the fundamental shift in economic power. Instead, he advocates for radical solutions, such as the socialization of digital platforms and the creation of alternative economic structures that prioritize democratic control over digital infrastructure.

He suggests that only by reclaiming control over the digital commons and challenging the rent-extracting power of cloud capital can society break free from the new feudal order. This, he argues, would require political movements that recognize the urgency of confronting technofeudalism head-on.

In summary, Varoufakis’ Technofeudalism is both a warning and a call to action. He argues that capitalism is no longer the dominant economic system; it has been replaced by a more exploitative and monopolistic form of control centered around digital rent extraction. The book challenges readers to recognize this shift and to consider new forms of resistance that move beyond traditional economic and political frameworks.

By weaving together economic history, political theory, and cutting-edge analysis of digital platforms, Varoufakis presents a compelling case that we are living through a new feudal era—one where power is no longer concentrated in industrial capitalists but in the owners of digital infrastructure. Whether society can break free from this system remains an open question, but Varoufakis makes it clear that ignoring technofeudalism’s rise is not an option.


Summary of Chapter 1: Hesiod’s Lament


See also:

The Invisible Workforce: Tiziana Terranova and the Politics of Digital Labour