
When AI Stops Assisting and Starts Defining Reality
Imagine asking an artificial intelligence system which candidate to hire, which content to show, or which path to follow. At first, it looks like nothing more than a tool that helps us decide faster. In reality, today AI is no longer used only to support human decisions. More and more often, it takes on the role of an authority that defines what is valid, what matters, and what is considered “normal.”
The shift is not only about who makes decisions, but about how their legitimacy is built. When a choice is produced by a model, it is often accepted as objective, technical, and neutral. Yet models do more than predict outcomes. They shape the criteria by which we judge truth, relevance, and appropriate behavior. In practice, they don’t just tell us what will happen, but also what is worth paying attention to.
In this process, power slowly begins to move. It is no longer concentrated only in visible institutions such as governments, corporations, or public authorities, but embedded in complex and opaque systems that sit beneath our everyday actions. Because they are largely invisible, these systems can become pervasive over time, extending their influence even into the cultural sphere. Many common AI tools present themselves as neutral, but in fact they reflect the choices, values, and worldviews of those who designed, trained, and funded them.
So the real question is no longer whether artificial intelligence exercises some form of governance, an idea that might once have sounded exaggerated. The central issue is understanding how governance is turning into infrastructure: something invisible, built into daily processes, and difficult to challenge precisely because it no longer looks like explicit control. This is where power becomes most effective, and at the same time hardest to recognize and contest.
From Decision Support to Decision Supremacy
Artificial intelligence entered institutions under the promise of assistance. Early systems were framed as tools that would help humans decide faster, with more information and fewer errors. This framing was never neutral, but it was reassuring. It preserved the fiction of human sovereignty while quietly restructuring the decision-making process beneath it. Over time, the advisory layer thickened, recommendations became defaults, and defaults hardened into norms.
What marks the current phase is not improved accuracy, but a reversal of hierarchy. Human judgment is no longer the reference point that models serve; it is the deviation that must be justified. When a system produces a recommendation, declining it now requires explanation, documentation, and often personal liability. Accepting it requires nothing. In this asymmetry, support mutates into supremacy.
Decision supremacy does not announce itself as authority. It presents as efficiency, consistency, and risk reduction. Yet these virtues are selectively defined by the system itself. What counts as an error, what qualifies as success, and which outcomes are optimized are all preconfigured upstream. The human operator interacts only with the surface, a polished interface that disguises the depth at which choices have already been made.
This shift redefines responsibility without redistributing power transparently. Institutions continue to claim accountability, individuals remain formally in charge, but meaningful agency dissolves into procedural compliance. The model does not decide in the legal sense, but it structures the field so thoroughly that alternative decisions appear irrational, unsafe, or unprofessional.
Decision supremacy is therefore not about machines replacing humans. It is about humans being repositioned as executors of machine logic, while still carrying the symbolic weight of authorship. Control survives as a narrative, even as it disappears in practice.
The Myth of Neutral Computation
The authority of artificial intelligence rests heavily on a claim of neutrality. Models are presented as impartial processors of data, free from intention, bias, or interest. This claim is not sustained by technical reality, but by narrative convenience. Neutrality functions as a shield, deflecting scrutiny away from the political and economic choices embedded in systems long before they produce outputs.
Every model encodes a theory of the world. Data selection reflects historical asymmetries, labeling practices formalize contested categories, and optimization targets privilege certain outcomes over others. None of these steps are neutral, yet once compressed into statistical abstraction, their origins become difficult to trace. Computation transforms decisions into facts, and assumptions into inevitabilities.
The myth persists because it is operationally useful. Institutions can outsource judgment while retaining moral distance. When outcomes are challenged, responsibility is redirected toward the model, which in turn is framed as merely following data. This circular logic empties accountability of substance. No single actor appears to decide, yet decisions occur with increasing force and scale.
Neutral computation also reshapes how disagreement is perceived. Contesting a human decision invites debate; contesting a model output is framed as irrational or uninformed. The system’s confidence becomes epistemic pressure. Doubt is reclassified as error, and critique as inefficiency. In this environment, neutrality is less a property of computation and more a technique of governance.
By naturalizing its outputs, the model does not simply describe reality, it stabilizes it. What exists in the data becomes what is expected in the world. Neutrality thus operates as a mechanism of closure, narrowing the space in which alternative interpretations, values, or futures can be articulated.
Infrastructural Power and the Vanishing Institution
As decision supremacy normalizes and neutrality shields its operations, power begins to migrate into infrastructure itself. What once required explicit institutional authority is now exercised through systems that shape possibilities in advance. Rules are no longer only enforced, they are preempted. The institution does not disappear, but it thins out, leaving behind a technical spine that governs without deliberation.
Infrastructural power operates by setting conditions rather than issuing commands. Access thresholds, scoring mechanisms, risk flags, and automated workflows define what can occur before any human intervention takes place. By the time a decision is visible, the range of acceptable outcomes has already been narrowed. Institutions remain present as logos, departments, and procedures, but their discretionary core is increasingly absent.
This shift alters how legitimacy is perceived. Traditional institutions derived authority from law, mandate, or representation. Infrastructural systems derive it from functionality. If the system works, meaning it runs continuously, scales efficiently, and reduces friction, its authority is rarely questioned. Governance is recoded as maintenance, and political choices are reframed as technical necessities.
The vanishing institution is not a collapse, but a transformation. Responsibility fragments across vendors, platforms, compliance layers, and update cycles. No single site appears powerful enough to confront, yet the overall system exerts more control than its predecessors. Power becomes ambient, embedded in defaults and dependencies rather than directives.
In this environment, resistance becomes difficult to articulate. There is no clear decision to oppose, no official to confront, no policy text to reinterpret. What governs is the arrangement itself. Infrastructural power thus achieves a peculiar stability, not by asserting dominance, but by becoming indistinguishable from the conditions of normal operation.
Contesting Authority in Model Driven Worlds
If power now operates through models, infrastructure, and defaults, contestation must also shift its form. Traditional modes of critique target visible decisions, named actors, and explicit policies. Model driven authority resists these approaches because it rarely presents itself as choice. It appears as environment, as necessity, as the only viable configuration. To contest it requires first making it legible again.
This does not mean rejecting computation outright, but refusing its claim to inevitability. Models are constructed, maintained, and updated within specific institutional and economic contexts. Treating them as immutable facts is a political decision disguised as realism. Reopening the space of judgment begins with insisting that alternative configurations are possible, even when they are inconvenient, slower, or less profitable.
Authority in model driven worlds is also sustained by asymmetries of understanding. Technical opacity concentrates interpretive power in the hands of those who design and deploy systems. Contestation therefore depends on redistributing epistemic access, not by turning everyone into engineers, but by reasserting the legitimacy of non technical critique. Ethical, social, and political objections cannot be dismissed as misunderstandings simply because they do not speak in code.
Ultimately, the challenge is not to restore a past form of institutional control, but to prevent governance from dissolving entirely into automation. When authority becomes indistinguishable from infrastructure, accountability erodes, and agency becomes symbolic. Closing this gap requires slowing systems down, reintroducing friction, and creating points where human judgment is not an exception, but a requirement.
Model driven worlds do not eliminate power, they reorganize it. Contesting that power means refusing to accept neutrality as an endpoint, efficiency as a justification, and functionality as legitimacy. Only by re-politicizing the systems that shape decisions can authority become visible again, and therefore contestable.
fakewhale
Founded in 2021, Fakewhale advocates for the evolution of the digital art market. Viewing NFT technology as a container for art, and leveraging the expansive scope of digital culture, Fakewhale strives to shape a new ecosystem in which art and technology become the starting point, rather than the final destination.