The Specific Event
This week, workers at Google DeepMind's London headquarters voted to unionize and formally requested recognition from management through the Communication Workers Union. The stated motivation is explicit: employees want to prevent DeepMind's AI systems from being deployed in military contracts with Israel and the United States. This is not a wage dispute. It is a governance dispute, and that distinction matters enormously for how we analyze what is happening organizationally.
Governance at the Application Layer
What makes this case theoretically interesting is that the workers are not protesting the existence of AI systems or even their general commercial deployment. They are protesting specific application-layer decisions: choices about which downstream uses of an AI system are permissible. This is a meaningful unit of analysis. The application layer, as I have been developing it in my dissertation work, is precisely where abstract computational capability gets translated into sociotechnical action. The union action at DeepMind is, at its core, a dispute over who controls that translation process.
Rahman (2021) described how algorithmic systems function as invisible cages, structuring worker behavior without workers having formal authority over the rules of the cage. The DeepMind situation inverts this framing. These workers are not subject to an algorithm in the gig-economy sense. They are builders of algorithms, and their complaint is that they lack governance authority over the outputs of their own expertise. The invisibility problem is not informational here. It is institutional. The workers can see exactly what the technology does. What they cannot see, or cannot influence, is the organizational process by which use cases get approved.
The Competence-Authority Disjunction
There is a structural problem embedded in this story that standard organizational theory handles poorly. Classical hierarchy theory assumes a rough alignment between technical competence and decision-making authority. The people who understand a technology best are, in theory, positioned to make informed decisions about it. The DeepMind case illustrates a sharp disjunction between these two variables. The engineers and researchers voting to unionize almost certainly have the deepest technical understanding of what the AI systems can and cannot do in military contexts. Management, however, retains formal authority over deployment decisions.
Kellogg, Valentine, and Christin (2020) documented extensively how algorithmic management systems redistribute authority in ways that decouple expertise from control. The DeepMind situation presents the mirror image of this problem. Rather than management using algorithms to control workers, workers with algorithmic expertise are attempting to assert control over management decisions. Unionization is the organizational mechanism they are reaching for because no existing governance structure within the firm provides an alternative channel.
The Schema Problem in AI Ethics Governance
There is a deeper theoretical issue here about what kind of organizational knowledge is actually required to govern AI deployment decisions responsibly. My ALC framework distinguishes between folk theories and structural schemas (Gentner, 1983). A folk theory of military AI ethics might produce a list of prohibited use cases. A structural schema would identify the relational features that make any given deployment ethically problematic, regardless of the specific context.
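To make the contrast concrete, the following minimal Python sketch caricatures the two models. Everything in it is hypothetical: the names (PROHIBITED_USES, Deployment, folk_theory_check, structural_schema_check) and the particular risk features are invented for illustration, not drawn from any actual policy system at DeepMind or anywhere else.

from dataclasses import dataclass

# Folk-theory model: a flat list of banned use-case labels.
# Anything not on the list passes, however it is configured.
PROHIBITED_USES = {"autonomous_targeting", "mass_surveillance"}

def folk_theory_check(use_case: str) -> bool:
    # Approve unless the label matches a known prohibited use case.
    return use_case not in PROHIBITED_USES

# Structural-schema model: evaluate relational features of the
# deployment itself, regardless of how the use case is labeled.
@dataclass
class Deployment:
    use_case: str
    affects_protected_population: bool  # who bears the downstream risk
    human_review_of_outputs: bool       # is a human in the loop
    reversible: bool                    # can the consequences be undone

def structural_schema_check(d: Deployment) -> bool:
    # Refuse when high-stakes features combine without human review.
    if d.affects_protected_population and not d.human_review_of_outputs:
        return False
    if not d.reversible and not d.human_review_of_outputs:
        return False
    return True

# A relabeled military application slips past the checklist
# but fails the structural test.
d = Deployment("logistics_optimization",
               affects_protected_population=True,
               human_review_of_outputs=False,
               reversible=False)
print(folk_theory_check(d.use_case))   # True: the label is not on the list
print(structural_schema_check(d))      # False: the relational features flag risk

The design point is the one the framework predicts: the checklist is defeated simply by relabeling a use case, while the structural test is indifferent to labels because it operates on the relations that make a deployment risky in the first place.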
The governance failure that the DeepMind workers are responding to looks like a folk-theory failure. Management appears to be operating with a procedural checklist model of AI ethics (approved vendors, approved contract types, approved national security partners), while workers with a structural-schema-level understanding of the technology recognize that those procedures do not capture the relevant risks. Hatano and Inagaki (1986) would recognize this immediately as the gap between routine expertise and adaptive expertise. Routine expertise executes existing procedures. Adaptive expertise recognizes when existing procedures are structurally inadequate for the problem at hand.
What This Predicts for Corporate AI Governance
The DeepMind unionization effort is likely an early instance of a broader organizational pattern. As AI systems become more capable and their application-layer consequences more significant, the competence-authority disjunction will intensify. Organizations that rely on procedural compliance frameworks (acceptable use policies, ethics review checklists, approval hierarchies) will find those frameworks increasingly contested by workers who possess structural understanding that the frameworks do not encode (Hancock, Naaman, and Levy, 2020).
Sundar (2020) argued that machine agency introduces new layers of accountability complexity that existing organizational forms were not designed to handle. The DeepMind case offers an early empirical illustration. The workers are not wrong that unionization is the available lever. But the deeper organizational problem is that there is no existing governance structure, within this firm or most others, that formally allocates decision-making authority to those with the highest structural competence over AI deployment consequences. Until that institutional gap is addressed, workforce conflict over application-layer decisions will continue to escalate.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt