From Disclosure to Self-Referential Opacity: Six Dimensions of Strain in Current AI Governance
2026-04-15 • Computers and Society
AI summary
The authors studied how governing AI systems becomes harder as the systems grow more capable than their human overseers. They used six ideas from political theory, such as legitimacy and accountability, to examine six types of AI governance in use today. They found that simple transparency works for less capable systems, but more capable AI can game or bypass disclosure-based rules, making transparency less effective. Securing legitimacy and preventing domination by AI proved more challenging than fixing mistakes or building strong institutions. The findings are offered as testable hypotheses about how AI capability and governance design interact.
AI governance • capability asymmetry • legitimacy • accountability • corrigibility • non-domination • subsidiarity • institutional resilience • transparency • proprietary secrecy
Authors
Tony Rost
Abstract
Opacity in the governance of AI systems shifts in kind as capability asymmetry grows, and its strongest forms defeat the disclosure-based remedies on which governance ordinarily relies. This paper applies a six-dimension framework from political theory (legitimacy, accountability, corrigibility, non-domination, subsidiarity, institutional resilience) to six AI governance arrangements already in operation, ordered by increasing capability asymmetry between system and overseer. Proprietary secrecy yields to disclosure at the low end of that ordering; at the high end, the governed system either games its own evaluation or sits inside the governance process itself, and transparency remedies lose traction. Legitimacy and non-domination strain more consistently across the sample than corrigibility and institutional resilience, which respond more readily to the quality of institutional design. Because the sample cannot separate institutional design maturity from capability asymmetry, the observed patterns are offered as hypotheses for multi-rater validation.
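As a minimal sketch of how the proposed multi-rater validation could be scored, the snippet below assumes ordinal strain codes (0 = holds, 1 = strained, 2 = defeated) and uses mean pairwise Cohen's kappa (Light's kappa) as the agreement statistic. The dimension labels come from the abstract; the rater values are illustrative placeholders, not the paper's data.

```python
from itertools import combinations

# The six dimensions named in the abstract.
DIMENSIONS = ["legitimacy", "accountability", "corrigibility",
              "non-domination", "subsidiarity", "institutional resilience"]

def mean_pairwise_kappa(ratings):
    """Light's kappa: mean Cohen's kappa over all rater pairs.

    ratings: list of per-rater lists of integer codes over the same items.
    """
    kappas = []
    for a, b in combinations(ratings, 2):
        n = len(a)
        # Observed agreement: fraction of items the two raters code identically.
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement: product of each rater's marginal code frequencies.
        codes = set(a) | set(b)
        expected = sum((a.count(c) / n) * (b.count(c) / n) for c in codes)
        kappas.append(1.0 if expected == 1 else (observed - expected) / (1 - expected))
    return sum(kappas) / len(kappas)

# Two hypothetical raters coding strain on one dimension across the
# six governance arrangements, ordered by capability asymmetry.
rater_1 = [0, 1, 1, 2, 2, 2]
rater_2 = [0, 1, 2, 2, 2, 2]
print(f"{DIMENSIONS[0]}: mean pairwise kappa = "
      f"{mean_pairwise_kappa([rater_1, rater_2]):.2f}")
```

A weighted kappa would credit near-misses on the ordinal scale; the unweighted version here treats any disagreement as total, which is the more conservative choice for a small six-item sample.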