FedEx has begun delivering what it calls "promotion-ready" AI training to over 400,000 workers worldwide. The scale is notable. So is the framing: promotion-ready implies the training is meant to produce not just awareness but career-consequential capability. That framing deserves scrutiny, because the research on how people actually develop competence in algorithmically mediated environments suggests that scale and ambition are not the same thing as effective design.
The Awareness Trap at Scale
The central problem in AI literacy initiatives is not motivation or access. It is the gap between awareness and capability. Algorithmic literacy research consistently shows that workers can develop accurate beliefs about the existence and general behavior of algorithmic systems without developing any improved ability to navigate those systems effectively (Gagrain, Naab, & Grub, 2024). Workers know the algorithm exists. They know it shapes their outcomes. That knowledge does not translate into better decisions. FedEx's initiative, as reported, emphasizes delivery of training content to a very large population. What remains unclear is whether the training is designed to close the awareness-capability gap or simply to produce awareness at scale.
This distinction matters because the two goals require fundamentally different instructional architectures. Producing awareness favors breadth, module completion, and measurable knowledge recall. Closing the capability gap requires something harder: teaching workers to recognize the structural features of algorithmic systems so they can reason from those features when conditions change. Hatano and Inagaki (1986) describe this as the difference between routine expertise and adaptive expertise. Routine expertise is procedural. It works when conditions are stable. Adaptive expertise is principle-based. It works when conditions shift, which is exactly what AI-mediated work environments do.
What "Promotion-Ready" Actually Requires
The promotion-ready framing in FedEx's initiative is interesting because it implicitly acknowledges that AI competence is now a criterion for advancement. That is a meaningful organizational signal. But it also raises a measurement problem. If promotion decisions are downstream of training completion, the organization may be conflating training exposure with demonstrated capability. The research on schema induction suggests that the workers most likely to transfer AI-related skills to novel tasks are those who have developed accurate structural schemas, not those who have completed the most modules (Gentner, 1983). A worker who understands why an algorithmic system weights certain inputs the way it does will outperform a worker who has memorized the correct procedural response to each known scenario, precisely because the first worker can reason forward when a novel scenario appears.
This is directly relevant to FedEx's operational context. Logistics is not a stable environment. Routes change, demand signals shift, and AI-driven optimization tools are updated continuously. A training program that produces procedural fluency with the current system configuration is producing competence that has a short shelf life. What FedEx actually needs, if the goal is durable capability, is training that produces workers who understand the structural logic of algorithmic optimization well enough to adapt when the specific procedures change.
The Organizational Theory Problem Beneath the Initiative
There is a deeper organizational theory issue here that the promotion-ready framing surfaces. Classical coordination theory assumes that organizations can specify roles, train workers to fill those roles, and achieve reliable performance through that pipeline (Kellogg, Valentine, & Christin, 2020). Platform and AI-mediated work inverts this assumption. The system does not stay still long enough for procedural training to remain valid. The required competence arises from ongoing participation in the system itself, not from any fixed curriculum delivered in advance.
FedEx is not alone in facing this problem. Adobe's CFO Dan Durn is reportedly using AI to auto-respond to 300,000 emails and compress contract review timelines. The organizational question in both cases is the same: are workers being trained to use specific tools, or are they being trained to reason about a class of tools? The former produces dependency on the current configuration. The latter produces adaptability across configurations. Only one of those is actually promotion-ready in any meaningful sense.
What the Evidence Suggests
The ALC framework I am developing at Bentley predicts that general schema-induction training, the kind that teaches structural features rather than specific procedures, should produce better transfer outcomes than platform-specific procedural training, even if the procedural training produces faster initial performance. FedEx's initiative is, in effect, a large-scale natural experiment on this question. The outcome will depend entirely on what the training actually teaches, not on how many employees complete it. Scale without design validity does not solve the awareness-capability gap. It just reproduces the gap at scale.
References

Gagrain, M., Naab, T. K., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hatano, G., & Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development, 27-36.

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work. Academy of Management Annals, 14(2), 366-401.
Roger Hunt