A Deadline Without a Schema
The EU AI Act's first major compliance threshold passed in February 2025, when its prohibitions on certain AI practices and its AI literacy obligation took effect. Enforcement attention is now turning toward a second wave of obligations for high-risk AI systems taking effect in August 2026, which will require deploying organizations to demonstrate documented risk management procedures, data governance standards, and human oversight mechanisms. What has emerged in the intervening months is not a story about regulatory compliance as such. It is a story about organizational competence, and specifically about the kind of competence that compliance frameworks cannot produce by design.
Across European enterprises, the dominant response to AI Act obligations has been procedural. Legal teams have produced documentation. HR departments have launched AI literacy modules. Governance committees have been formed. The assumption embedded in each of these responses is that the problem is informational: if workers and managers know what the rules require, they will be able to act accordingly. This assumption deserves serious scrutiny.
The Proceduralization Trap
Hatano and Inagaki (1986) drew a distinction between routine expertise and adaptive expertise that is directly relevant here. Routine expertise is the capacity to execute known procedures reliably. Adaptive expertise is the capacity to respond effectively when the procedure does not fit the situation. Regulatory compliance frameworks are, almost by definition, engines of routine expertise production. They define categories, specify documentation requirements, and assign accountability. What they do not produce is the structural understanding that would allow an organization to recognize when a novel AI deployment falls outside the categories already defined.
The EU AI Act's risk classification system is illustrative. The Act sorts AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Each category carries different obligations. But the Act cannot anticipate every system, and classification itself requires judgment about how a given system functions, in what context, and with what potential for harm. That judgment depends on what I would call schema-level understanding: an accurate model of how algorithmic systems work as a structural class, not merely familiarity with the Act's enumerated examples. Organizations whose AI literacy programs have focused on procedural compliance will struggle precisely at these novel boundary cases, because they have trained for topography rather than topology (Kellogg, Valentine, and Christin, 2020).
The Awareness-Capability Gap at the Organizational Level
Research on algorithmic literacy has consistently found that awareness of an algorithm's existence or general logic does not translate into improved outcomes (Gagrain, Naab, and Grub, 2024). Workers who know they are being evaluated by an algorithm do not, on that basis alone, perform better. They develop folk theories: locally plausible narratives about how the system responds that may or may not correspond to its actual structural logic. The same dynamic appears to be playing out in organizational responses to AI regulation.
Corporate AI governance teams have developed detailed awareness of the Act's requirements. What is less clear is whether this awareness corresponds to accurate structural understanding of the AI systems the Act is meant to govern. Sundar (2020) notes that machine agency introduces a distinct layer of communicative complexity that human institutional actors are poorly equipped to model. When an organization documents its "human oversight mechanism" for a high-risk AI system, the quality of that documentation depends entirely on whether the documenters understand what the system is actually doing well enough to know what oversight would need to catch. Awareness of the regulatory obligation does not supply this understanding.
What the Compliance Industry Is Not Selling
A consulting and legal services industry has grown rapidly around EU AI Act compliance, and its product is predominantly procedural: gap analyses, documentation templates, training certificates. This is not a criticism of the industry. It is producing what organizations are asking for, and what organizations are asking for reflects a genuine belief that procedural compliance and organizational competence are the same thing. They are not.
Hancock, Naaman, and Levy (2020) argue that AI-mediated communication requires new frameworks precisely because existing communicative competencies do not transfer reliably to algorithmically structured environments. The same principle applies to governance. Algorithmic systems are opaque, context-sensitive, and prone to emergent behaviors that were never specified in design; these structural features require organizations to develop something closer to adaptive expertise than to regulatory fluency. Gentner's (1983) structure-mapping framework suggests that this kind of transfer depends on schema induction rather than instance-level training: on learning the relational structure of a problem class, not on memorizing exemplars.
The Organizational Implication
The August 2026 compliance wave will likely reveal wide variance in outcomes across organizations with nominally equivalent compliance programs. That variance will not be fully explained by the quality of their documentation. It will be explained by whether the people making classification and oversight decisions have accurate structural models of the systems they are governing. Procedure can mandate that a decision be made. It cannot supply the schema required to make the decision well.
References
Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt