Abstract
AI governance does not suffer from a deficit of rules but from a deficit of judgment. This paper argues that dominant governance approaches share a structural flaw: they seek conditions under which situated human judgment becomes unnecessary, thereby creating room for entities to perform compliance while evading accountability, a structural condition the paper terms performative alignment. The gap between rules and their application, it contends, is constitutive rather than contingent: rules cannot determine their own application, training on approval cannot access the grounds of approval, and the resulting gap is already being strategically exploited. A comparative Rechtsdogmatik analysis of Anthropic’s Claude Constitution, OpenAI’s Model Spec, and Google’s AI Principles identifies the characteristic mechanism of performative alignment: waivers of agency, the strategic oscillation between claiming AI agency in marketing contexts and disclaiming it in liability contexts. Against this diagnosis, the paper argues that corporate veil-piercing doctrine, adapted through four governance-specific triggers, supplies the institutional judgment mechanism that current AI governance lacks.
Keywords
AI governance; judgment; performative alignment; waivers of agency; veil-piercing; corporate accountability