The Trump administration's recent decision to classify Anthropic as a "supply chain risk" - effectively barring the company from federal contracts and isolating it from private firms doing business with the military - raises a question that procurement policy rarely asks directly: what exactly is being excluded? The framing of the decision as a national security measure treats Anthropic as a vendor of deliverables. But Anthropic is more accurately described as a vendor of coordination infrastructure, and that distinction carries consequences that standard procurement logic is not built to handle.
When Exclusion Operates on the Wrong Unit of Analysis
Federal procurement frameworks were designed around a relatively stable assumption: that the competence required to use a procured system sits with the procuring organization, and that the vendor's role is to deliver a specified capability. That assumption breaks down with AI systems in a way it does not with, say, steel or logistics software. The relevant competence for extracting value from a system like Claude cannot be transferred through a contract specification. It develops endogenously, through repeated interaction with the system's structural features (Kellogg, Valentine, and Christin, 2020). Blocking access to Anthropic's systems does not preserve that competence in federal agencies - it simply ensures it does not develop there.
This is not an abstract theoretical concern. Research on platform coordination consistently shows that workers and organizations with nominally identical access to algorithmic systems produce dramatically different outcomes, with distributions following power-law rather than normal patterns (Schor et al., 2020). The variance is not explained by differences in formal training or documented procedures. It is explained by differences in what I would call a structural schema: an accurate internal representation of how the system's underlying logic operates, as distinct from surface-level procedural knowledge about which buttons to press.
The Procurement Version of the Awareness-Capability Gap
There is a well-documented gap in the algorithmic literacy literature between awareness and capability. Knowing that an algorithm exists, or even knowing that it shapes outcomes, does not by itself produce improved performance (Gagrain, Naab, and Grub, 2024). Federal procurement policy, as applied to AI vendors, appears to operate at the awareness level. Decision-makers are aware that AI systems introduce supply chain dependencies. What the policy does not engage with is the capability question: what would it actually take for federal agencies to develop the structural understanding necessary to use, evaluate, or audit AI systems independently?
The Anthropic exclusion answers the capability question by foreclosing it. This is not unusual in procurement - exclusion is a standard risk management tool. But the specific risk being managed here is misidentified. The actual risk to federal AI capability is not vendor dependence on Anthropic specifically. It is the broader pattern in which organizations treat AI integration as a procurement problem rather than a coordination problem. Hancock, Naaman, and Levy (2020) note that AI-mediated communication fundamentally alters the epistemic conditions under which decisions are made. Excluding a vendor does not alter those conditions - it simply changes which vendor is absent from them.
What Routine Expertise Cannot Solve Here
Hatano and Inagaki (1986) draw a distinction between routine expertise, which is the efficient execution of known procedures, and adaptive expertise, which is the ability to apply principles to novel configurations. Federal AI governance, as currently structured, is investing heavily in the routine side of this distinction. Procurement guidelines, approved vendor lists, and supply chain audits are all procedural instruments. They are well-suited to environments where the relevant task structure is stable and knowable in advance.
AI systems are not that environment. The structural features of large language model behavior are not fully specified by any vendor, including Anthropic (Sundar, 2020). This means that the competence required to govern AI use in federal contexts is inherently adaptive rather than routine. Blocking Anthropic does not make that competence easier to develop - it removes one of the environments in which it might have developed, while leaving the underlying structural challenge unaddressed.
The Broader Implication for Organizational AI Governance
The Anthropic case is worth watching precisely because it is not primarily a story about one company's federal contracts. It is an early, high-visibility example of what happens when organizations apply classical coordination logic - vendor selection, exclusion, contract specification - to a class of systems that operates through a fundamentally different coordination mechanism. Gentner's (1983) structure-mapping framework would predict that organizations will transfer the procedural schema from familiar procurement contexts to AI procurement, and that this transfer will produce systematic errors because the underlying relational structure of the two domains does not align. That prediction appears to be playing out in real time.
References
Gagrain, A., Naab, T., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt