A Dissent That Reveals More Than It Conceals
At the AI summit in India this week, delegates from dozens of nations converged on a shared statement about how humanity should govern rapidly evolving AI systems. The Trump administration stood apart, opposing global AI guardrails while the broader international consensus moved in the other direction. This is being reported as a geopolitical story about American exceptionalism versus multilateral cooperation. That framing is not wrong, but it misses the more analytically interesting problem. The administration's dissent is not simply a policy disagreement. It is a signal about what kind of coordination failure we are actually dealing with - and organizational theory gives us sharper tools to diagnose it than diplomatic commentary does.
The Coordination Problem Beneath the Political Surface
Standard accounts of international AI governance treat this as a negotiation problem: actors have different interests, those interests need to be reconciled, and multilateral forums are the mechanism for doing that. But this framing assumes that all parties share a common structural understanding of what they are trying to govern. The more I look at what happened in India, the less confident I am that this assumption holds. The United States, the European Union, and nations like India do not merely disagree about what rules to impose on AI systems. They appear to operate from fundamentally different schemas about what AI systems are, how they produce value, and what governance is actually meant to constrain.
This is a schema problem, not a preferences problem. Kellogg, Valentine, and Christin (2020) identified something analogous in workplace algorithmic management: workers and managers interacting with the same algorithmic systems developed divergent mental models about how those systems operated, and these divergent models produced coordination failures that procedural interventions could not resolve. The fix was not more rules. It was shared structural understanding. The India summit failure looks structurally similar. If the U.S. delegation and the multilateral consensus are operating from different schemas about what AI governance is for, then producing a joint statement would have been a procedural artifact, not a genuine coordination outcome.
Why Procedural Solutions Will Fail Here
There is a recurring impulse in policy responses to AI risk: produce more documentation, more frameworks, more compliance checklists. The European Union's AI Act is partly this. International summits that produce signed statements are partly this. My concern, grounded in the distinction Hatano and Inagaki (1986) draw between routine and adaptive expertise, is that procedural governance instruments are being asked to do work they structurally cannot do. Routine expertise - knowing which compliance box to check - fails when the environment changes. AI systems are not static. The procedures written to govern GPT-4 may be poorly matched to whatever comes next. Adaptive expertise, by contrast, requires understanding the structural principles well enough to respond to novel configurations. International AI governance needs the adaptive form, and summits producing declarative statements do not build it.
Rahman (2021) showed that the most consequential features of algorithmic control are often the least visible to those being governed by them. The same principle applies here at the international level. Governance bodies are largely reactive to visible outputs of AI systems - deepfakes, job displacement, bias in hiring. The structural features of how these systems coordinate behavior, amplify initial differences, and create path dependencies in capability development are less visible and therefore under-governed. The Trump administration's rejection of global guardrails does not solve this problem. But neither, if I am being direct, does a consensus statement that addresses surface-level outputs rather than structural mechanisms.
The Variance Problem in National Capability Development
One element of this story that deserves more attention is the asymmetric stakes. Not all nations at the India summit face the same exposure to AI-driven coordination failures. The variance puzzle that motivates my own research - platform workers with identical access producing dramatically different outcomes due to algorithmic amplification of initial differences (Schor et al., 2020) - has a direct analogue at the national level. Countries with strong existing AI research infrastructure will compound advantages through AI adoption in ways that countries without that infrastructure will not. Global guardrails, depending on their design, could either reduce this variance or entrench it. The Trump administration's position forecloses that design conversation, but the multilateral consensus was not clearly engaging with the variance problem either.
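To make the amplification mechanism concrete, here is a minimal toy model of my own - not drawn from Schor et al. - in which a fixed pool of new resources is allocated each period in proportion to current capability raised to a power greater than one, a stylized stand-in for any allocation rule that rewards current standing more than proportionally. A one percent starting gap widens steadily under this rule, whereas a strictly proportional rule (gamma equal to one) would leave the relative gap unchanged.

```python
def simulate_compounding(start=(1.00, 1.01), rounds=50, pool=1.0, gamma=2.0):
    """Toy model of algorithmic amplification of initial differences.

    Each round a fixed pool of new resources is split in proportion to
    capability raised to gamma. With gamma > 1 the rule rewards current
    standing more than proportionally, so a small initial gap compounds;
    with gamma == 1 the relative gap stays constant.
    """
    caps = list(start)
    for _ in range(rounds):
        weights = [c ** gamma for c in caps]
        total = sum(weights)
        # Allocation step: each actor's share of the pool tracks its weight.
        caps = [c + pool * w / total for c, w in zip(caps, weights)]
    return caps

caps = simulate_compounding()
gap = 100 * (max(caps) / min(caps) - 1)
print(f"final capabilities: {caps[0]:.2f} vs {caps[1]:.2f} (gap: {gap:.1f}%)")
```

The point is not the specific numbers but the structural claim: whether guardrails dampen or entrench this dynamic depends on whether they reach the allocation rule itself or only its visible outputs.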
What This Means for Governance Design
I am not arguing that global AI governance is impossible or that the India summit was misconceived. I am arguing that the dissent moment illuminates a prior problem: governance mechanisms are being proposed before participants have developed a shared structural schema for what they are governing. Gentner's (1983) structure-mapping theory suggests that productive analogical reasoning - and productive coordination - depends on identifying shared relational structure, not surface similarity. The nations at this summit share surface-level concern about AI risk. Whether they share a structural understanding of AI's coordination mechanisms is a different and more important question. Until that schema alignment problem is addressed, the procedural disagreement between the U.S. and the multilateral consensus is somewhat beside the point.
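A crude way to see the distinction - this is a toy comparison of my own, not Gentner's structure-mapping algorithm, and the schema contents below are hypothetical - is to represent each delegation's schema as both the topics it invokes and the relations it believes governance participates in, then measure overlap on each separately.

```python
# Hypothetical, simplified schemas: "attributes" are the surface topics a
# delegation invokes; "relations" encode what it believes governance does.
schema_a = {
    "attributes": {"risk", "safety", "innovation", "competitiveness"},
    "relations": {("constrains", "governance", "deployment"),
                  ("dampens", "governance", "capability_variance")},
}
schema_b = {
    "attributes": {"risk", "safety", "innovation", "sovereignty"},
    "relations": {("enables", "governance", "market_access"),
                  ("protects", "governance", "domestic_industry")},
}

def jaccard(x, y):
    """Share of elements two sets have in common (0.0 to 1.0)."""
    return len(x & y) / len(x | y) if x | y else 0.0

surface = jaccard(schema_a["attributes"], schema_b["attributes"])
structural = jaccard(schema_a["relations"], schema_b["relations"])
print(f"surface overlap: {surface:.2f}, relational overlap: {structural:.2f}")
```

High surface overlap combined with near-zero relational overlap is exactly the configuration in which a joint statement can be signed without any genuine coordination being achieved.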
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt