Responsible Scaling Policies and the Privatization of Governance

How Power Reconstitutes Itself When States Fall Behind

When states lose the ability to steer execution, governance does not disappear.

It relocates.

In artificial intelligence, that relocation is already visible—not in law, courts, or treaties, but inside the operating rules of the firms that control model training and deployment.

Responsible Scaling Policies (RSPs) are often described as ethics frameworks.

They are not.

They function as internal constitutions: rule systems that govern scaling, deployment, and intervention at the point where external institutions can no longer operate at execution speed.

What a Responsible Scaling Policy Actually Is

At an operational level, Responsible Scaling Policies define thresholds, triggers, and constraints that govern when increasingly capable AI systems may be trained, deployed, slowed, or paused.

In practice, this includes capability benchmarks that require additional evaluation, internal red lines that block deployment absent escalation, governance bodies empowered to override commercial timelines, and monitoring regimes tied to model capability rather than use case alone.
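The gating logic described above can be sketched in miniature. This is a hypothetical illustration only: the level names loosely echo tiered frameworks like Anthropic's ASLs, but the thresholds, scores, and function names are invented for exposition, not drawn from any firm's actual policy.

```python
from dataclasses import dataclass

# Hypothetical safety levels with illustrative thresholds; real policies
# define these through capability benchmarks, not a single scalar score.
THRESHOLDS = {
    "ASL-2": 0.3,   # routine evaluation before deployment
    "ASL-3": 0.6,   # mandatory deep evaluation plus deployment restrictions
    "ASL-4": 0.9,   # escalation to governance body; blocked by default
}

@dataclass
class EvalResult:
    capability_score: float  # aggregate score from capability evaluations

def required_gate(result: EvalResult) -> str:
    """Return the highest safety level whose threshold the model crosses."""
    gate = "ASL-1"  # below all thresholds: no additional intervention
    for level, threshold in sorted(THRESHOLDS.items(), key=lambda kv: kv[1]):
        if result.capability_score >= threshold:
            gate = level
    return gate

def may_deploy(result: EvalResult, safeguards_approved: bool) -> bool:
    """Scaling is conditionally permitted, not continuous by default."""
    gate = required_gate(result)
    if gate == "ASL-4":
        return False  # blocked absent escalation and major safety advances
    if gate == "ASL-3":
        return safeguards_approved  # requires governance-body sign-off
    return True

print(required_gate(EvalResult(0.7)))      # ASL-3
print(may_deploy(EvalResult(0.7), False))  # False: blocked pending approval
```

Note that the check sits inside the deployment decision itself, not in an external review that happens after release; that placement is the structural point.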

These rules are not advisory norms.

They are embedded directly into deployment pipelines.

An RSP does not ask whether a system is socially acceptable.

It asks whether a system crosses an internal boundary that requires intervention.

That distinction matters.

A Concrete Case: Capability Gating in Practice

This is not hypothetical.

Anthropic’s Responsible Scaling Policy, introduced in 2023 and revised through 2024, formalized a system of capability-based gating tied to internal safety thresholds rather than application domains.¹ Earlier versions articulated AI Safety Levels (ASL-1 through ASL-4), with higher tiers triggering mandatory evaluations, deployment restrictions, and escalation to senior governance bodies. The highest thresholds were explicitly framed as levels at which deployment would be blocked absent major advances in alignment and control.

Subsequent revisions introduced more flexible capability thresholds and affirmative safety cases rather than rigid categorical blocks. But the governing logic remained unchanged: scaling is not continuous by default; it is conditionally permitted.

No regulator enforces these thresholds.

No court adjudicates them.

They are enforced internally, upstream of deployment, because no external actor can intervene fast enough once training and release are underway.

This is execution-layer governance.

Beyond a Single Firm

Anthropic is not unique in structure, even if it is unusually explicit.

OpenAI’s Preparedness Framework and Google DeepMind’s Frontier Safety Framework differ in rigidity, transparency, and scope.² ³ Some rely more heavily on continuous evaluation, others on escalation protocols or board-level review. Thresholds are defined differently, and public commitments vary.

But across these approaches, one structural commonality holds: governance is embedded upstream of deployment, inside the organizations that control training compute, release cadence, and intervention mechanisms.

The variation matters politically.

It does not change where power is exercised.

Why Governance Moved Inside Firms

States govern through law. Law operates through legitimacy, procedure, and enforcement after the fact.

AI systems operate through execution, iteration, and speed.

Once coordination shifts to machine-speed systems, the only actors capable of binding outcomes before execution are those who control architectures, training pipelines, deployment infrastructure, update cadence, and kill switches.

States can punish after harm.

They struggle to intervene continuously.

Firms can.

Responsible Scaling Policies emerge not because firms are benevolent, but because they are structurally positioned to govern what states cannot reach in time.

Governance Without Democratic Legitimacy

RSPs do not derive authority from public consent or electoral mandate.

They derive authority from control over execution.

There is no electorate, no judicial review, and no formal due process. Instead, there are internal safety committees, escalation ladders, executive veto points, and board-level interventions operating outside democratic accountability.

Calling this arrangement “post-legitimate” does not justify it.

It describes a system whose authority no longer depends on legitimacy rituals, but on control over execution.

These regimes persist because no external actor can replace them at the execution layer without forfeiting control entirely.

Law reacts.

Responsible Scaling Policies preempt.

Governance now binds at the point where systems can still be stopped.

From Regulation to Constraint

Traditional regulation constrains behavior through the threat of penalty.

RSPs constrain behavior by designing systems so certain actions cannot occur without triggering internal intervention.

This is a qualitative shift.

Law governs outcomes after harm.

RSPs govern capabilities before deployment.
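The difference between penalty and constraint can be made concrete. In the hedged sketch below (all names and the pipeline design are hypothetical, not any firm's actual system), there is simply no release path that bypasses the gate: an ungated deployment attempt does not succeed and then get punished; it triggers intervention instead.

```python
class GateViolation(Exception):
    """Raised when a release attempt reaches the pipeline without clearance."""

class DeploymentPipeline:
    """Hypothetical pipeline in which the gate is part of the release path.

    The constraint is structural: deploy() cannot complete without a
    cleared evaluation on file, so the ungoverned action never occurs.
    """

    def __init__(self):
        self._cleared_models = set()  # model IDs with a passed evaluation

    def record_evaluation(self, model_id: str, passed: bool) -> None:
        if passed:
            self._cleared_models.add(model_id)

    def deploy(self, model_id: str) -> str:
        if model_id not in self._cleared_models:
            # The action is blocked up front, not penalized after the fact.
            raise GateViolation(f"{model_id} has no cleared evaluation on file")
        return f"{model_id} deployed"

pipeline = DeploymentPipeline()
pipeline.record_evaluation("model-a", passed=True)
print(pipeline.deploy("model-a"))  # model-a deployed
```

A legal regime, by contrast, would let the deployment proceed and attach liability afterward; here the intervention point sits before execution.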

This is why RSPs increasingly resemble safety systems in aviation or nuclear engineering—domains where failure cannot be meaningfully corrected after the fact.

The governing logic here is not moral.

It is preventative.

Are RSPs Really Binding?

A fair objection is that RSPs are self-imposed and revisable. Anthropic, OpenAI, or DeepMind could, in principle, rewrite their policies tomorrow. Unlike law, RSPs do not bind across actors or time.

This is true.

RSPs are soft constitutions, not sovereign law.

But softness does not negate power. Their force comes not from permanence, but from position. They sit at the only point where scaling can be stopped before execution occurs. Even when revised, they remain the last effective choke point prior to deployment.

States may later codify, constrain, or displace these regimes.

They did not create them.

Why This Is Not “Corporate Self-Regulation”

Calling RSPs “self-regulation” understates what is happening.

Self-regulation implies voluntary restraint within an external framework of authority. RSPs exist precisely because the external framework cannot operate at the required speed or depth.

They are not supplements to state governance at the execution layer.

They are substitutes there.

States may influence these mechanisms through threat, procurement leverage, or later regulation—but they are not the source of their binding force.

The Accountability Gap

RSPs bind outcomes without public accountability.

Decisions about scaling, deployment, and capability limits are made by small groups of executives and technical leaders whose incentives and risk tolerances remain largely opaque.

There is no appeal.

There is no guaranteed transparency beyond voluntary disclosure.

This is not a scandal.

It is a structural feature of execution-layer governance.

Accountability lags because accountability mechanisms still operate at human speed.

Why States Tolerate This

States are not unaware of this arrangement.

They tolerate it because the alternative is worse.

Absent internal constraint systems, states face either uncontrollable deployment or blunt prohibitions that freeze innovation entirely.

RSPs offer a third option: governance by proxy.

States retain symbolic authority.

Firms retain operational control.

This equilibrium persists not because it is ideal, but because it is stable.

Governance Has Not Vanished. It Has Re-Layered.

Responsible Scaling Policies answer a question critics often pose:

If institutions are hollowing out, what replaces them?

The answer is not chaos.

It is private governance embedded directly in execution systems.

States govern through legibility.

Firms govern through constraint.

Together, they form a split sovereignty regime—one symbolic, one operational.

This is not an aberration.

It is the default shape of governance once coordination outruns legitimacy.

What This Means Going Forward

The most consequential governance decisions about AI are no longer made primarily in legislatures or courts.

They are made inside deployment rules, escalation protocols, and internal veto mechanisms that operate upstream of law.

Debates about transparency and democratic oversight will continue. But unless those debates reattach authority to execution capacity, they will orbit the real center of power without binding it.

The question is no longer whether this arrangement is legitimate.

It is whether any alternative can reattach legitimacy to execution without collapsing speed or control.

Governance did not disappear when institutions lost speed.

It relocated to those who could still stop the system before it ran.

Editorial Note

This essay continues the applied sequence examining how governance persists after institutional control over AI systems weakens.

The preceding case, The EU AI Act as Legibility Theater, examined how regulation adapts when it no longer operates at execution speed—governing through classification, documentation, and post-hoc responsibility rather than direct control.

This case moves inward, to the organizational layer where execution is still directly governable.

Each case in this sequence isolates a different governance surface.

This one examines corporate execution-layer control.

Footnotes

  1. Anthropic. Responsible Scaling Policy. 2023–2024.

  2. OpenAI. Preparedness Framework. 2023.

  3. Google DeepMind. Frontier Safety Framework. 2024.