The Specific Problem Hidden Inside a Relatable Story
A recent report in the business press profiles Jade, a 30-year-old insurance technology worker in Raleigh, North Carolina, who spends her days optimizing systems with AI tools while privately fearing that the processes she builds will eventually eliminate her own role. The story frames this as a psychological phenomenon, a kind of occupational dread particular to the current moment. That framing is understandable, but it misidentifies where the real analytical tension lives. The more precise problem is not that Jade is anxious. It is that she is competent enough to be useful to her employer and literate enough to understand her situation, yet neither of those attributes appears to be protecting her position in any durable way. That combination is theoretically significant.
Awareness Without Structural Understanding
Algorithmic literacy research has consistently documented a gap between awareness and capability. Workers learn that algorithms govern their output, their visibility, and their evaluations, yet this awareness does not reliably translate into improved performance or more favorable outcomes (Kellogg, Valentine, and Christin, 2020). Jade's situation extends this logic into a different register. She is not simply aware, as a platform worker might be, that an algorithm is sorting her content or rating her delivery times. She is aware that she is participating in building the very infrastructure that may eventually displace her. This is a more sophisticated form of awareness. And it still does not help her.
The distinction I find useful here comes from Hatano and Inagaki (1986), who differentiated between routine expertise and adaptive expertise. Routine expertise is the ability to execute known procedures reliably. Adaptive expertise is the ability to understand the underlying principles well enough to respond when those procedures no longer apply. Jade, as described, appears to possess the first kind. She optimizes systems. She follows workflows. She is useful precisely because she can execute. But the article implies she lacks a structural schema for understanding what kind of work remains irreplaceable and what kind does not. That is not an emotional failure. It is a cognitive one.
The Training Industry's Response Is Part of the Problem
The same news cycle that surfaces Jade's story also surfaces a listicle enumerating 13 AI skills organizations should train their workforces to acquire. This is almost a controlled demonstration of what I would call the procedural documentation trap. The underlying assumption is that the solution to workforce displacement anxiety is a better checklist. If workers learn to prompt effectively, interpret model outputs, and integrate AI tools into existing workflows, the reasoning goes, they will become durable contributors rather than replaceable ones.
This assumption is worth interrogating carefully. Gentner's (1983) structure-mapping theory predicts that transfer across novel contexts depends on shared relational structure, not surface similarity. A worker who learns 13 discrete AI skills has acquired 13 procedures. A worker who understands why certain categories of work resist automation, that is, which structural properties of tasks make them difficult for current architectures to replicate, has acquired a schema. The first worker is better positioned for the current tool landscape. The second worker is better positioned for the next one. Organizations that conflate these two things will produce workforces that feel prepared and are not.
What Organizational Theory Says About This Moment
Rahman (2021) documented how platform architectures create what he called invisible cages: constraint systems that workers can feel but cannot fully see or articulate. The white-collar AI context introduces a variation on this dynamic. The constraints are not invisible to Jade. She can see them with considerable clarity. The cage is transparent. What she lacks is a structural account of which parts of the cage are load-bearing and which are not. That requires something different from awareness, and something different from procedural training. It requires schema induction: instruction that targets the relational features of the system rather than its surface behaviors (Hancock, Naaman, and Levy, 2020).
Schor et al. (2020) argued that platform dependence generates a specific kind of precarity rooted in information asymmetry between workers and the systems governing them. The white-collar AI case is a partial inversion of that model. The information asymmetry is not total. Workers like Jade often have significant access to information about what AI systems are doing. The precarity persists anyway. That persistence is the puzzle worth taking seriously, and listicles about AI skills are not going to resolve it.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt