AI governance is saturated with rules—risk classifications, conformity assessments, transparency mandates, alignment specifications, model cards, and impact assessments. What it still lacks is an institutional mechanism for judging whether those rules are being followed in substance rather than appearance.

This paper argues that the absence is not incidental but structural. A significant strand of the debate starts from the opposite premise: Kahneman et al. argue that human judgment is noisy and that algorithmic decision-making improves consistency. That view has force, but it presumes that rules can exhaustively determine their own application. At the governance boundary, where compliance must be assessed rather than merely executed, the Wittgensteinian critique applies to algorithmic and non-algorithmic rules alike. Transparency mandates, risk classifications, and alignment specifications all aspire to make situated human judgment unnecessary, and each therefore creates room for regulated entities to perform compliance while evading accountability.

The result is what this paper terms performative alignment: formal adherence to governance rules that functions as public relations rather than substantive constraint. Performative alignment is the predictable institutional consequence of governance architectures that multiply rules while providing no mechanism for judging whether those rules are genuinely operative. Its characteristic mechanism—waivers of agency—is the strategic oscillation between claiming AI agency in contexts where it benefits the company and disclaiming that same agency in contexts where it would create accountability.

A comparative Rechtsdogmatik analysis of Anthropic’s Claude Constitution, OpenAI’s Model Spec, and Google’s AI Principles reveals this pattern at work across all three leading AI developers.

What institution, then, can supply the missing judgment? Corporate law has developed a mechanism for exactly this situation. When a corporation’s legal structure is used to externalise harm onto third parties, courts are empowered to “pierce the corporate veil”: to look behind formal structures and assess operational reality. The doctrine is deliberately non-algorithmic: no checklist determines when the veil should be pierced, and courts must exercise contextual judgment about the totality of circumstances. It is, in Aristotelian terms, institutionalised phronesis: a judgment mechanism that operates precisely where rules reach their limits.

Three claims structure the argument: that the gap between rules and their application is the operational reality of AI governance today (§2); that this gap is already being strategically exploited through the “waivers of agency” pattern (§3); and that corporate veil-piercing doctrine, adapted through four governance-specific triggers, provides the institutional judgment mechanism that current AI governance lacks (§4). The paper then maps this proposal against regulatory architectures in the EU, China, and the UK (§5) and addresses limitations (§6). The contribution is philosophical, not legislative: a diagnosis and an institutional remedy, not a statutory proposal.

The current debate rests on four assumptions: that judgment and rules are closely connected; that algorithmic and non-algorithmic rules differ; that algorithmic rules exhaustively determine their application; and that judgment merely supplements rules in non-algorithmic cases. This paper accepts the first two assumptions, but in strengthened form. Judgment is not merely connected to rules but constitutive of them: rules are “dead signs” (Wittgenstein) without the judgment that applies them. And while the paper follows Gori in treating computational rules as standing in a more determinate relationship to their application than legal rules, a real difference in degree, the critical question is whether that difference in degree entails a difference in kind at the governance boundary. The paper challenges the third assumption: even algorithmic rules cannot, at the governance boundary, exhaustively determine their application; the meta-recursive problem (§2.1) demonstrates that no amount of output inspection can distinguish genuine from performative compliance. And it reframes the fourth: judgment is not supplementary but constitutive. Rules without judgment produce exploitable governance, the “waivers of agency” pattern diagnosed in §3.
