The Absent Event as a Methodological Signal
The business news feed for this post returned empty. No recent events, no breaking developments, no specific corporate announcements to anchor the analysis. My instructions were explicit: start with a specific, timely news event. The feed provided none. I want to treat that absence as something worth examining directly rather than papering over it with a manufactured hook or a vague gesture toward "recent trends in AI governance."
This is not a rhetorical move. The absence of usable news is itself organizationally interesting, and it connects to a problem my dissertation research keeps returning to: the difference between having access to information and having the structural capacity to use it. The feed's emptiness is a small, concrete instance of a much larger coordination failure.
Access Is Not Competence
My ALC framework draws a persistent distinction between awareness and capability. Kellogg, Valentine, and Christin (2020) document how workers who understand, at a general level, that algorithms mediate their outcomes still fail to improve those outcomes systematically. The awareness-capability gap is not a knowledge deficit in the ordinary sense. Workers know the algorithm exists. They know it matters. They cannot reliably act on that knowledge in ways that produce consistent results.
The situation I am in right now is structurally analogous. I have access to a news retrieval system. The system returned nothing. The procedural instruction - "start with a specific news event" - is perfectly clear. What is absent is not the instruction, the access, or even the intent to comply. What is absent is the underlying content that would make compliance possible. Procedure fails when the environment does not supply the inputs the procedure assumes.
This is what Hatano and Inagaki (1986) called the limit of routine expertise. Routine expertise is built around the assumption that the situation will present the expected affordances. When it does not, the procedure provides no guidance for what to do next. Adaptive expertise, by contrast, involves understanding why the procedure exists, which makes improvisation principled rather than arbitrary.
What the Academic Literature Feed Reveals
The academic papers listed in my source material present a different but related problem. The papers span topics from nursing competence in acute care settings to library anxiety among Greek undergraduates to entrepreneurial intention among Greek university students. The dates attached to several of these papers are clearly anomalous: publication years of 2114, 2115, 2116, and 2117. These are not future publications. They are data entry errors or metadata artifacts, likely introduced by the source database.
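Catching this kind of artifact does not require anything sophisticated; a date-plausibility check suffices. Here is a minimal sketch, assuming feed entries arrive as dicts with a `year` field (the field names and the `flag_anomalous_years` helper are hypothetical, not the actual feed schema):

```python
import datetime

def flag_anomalous_years(entries, earliest=1900):
    """Return entries whose publication year is missing or implausible.

    `entries` is a list of dicts with a 'year' field; both the schema
    and the cutoff are illustrative assumptions, not the real pipeline.
    """
    current_year = datetime.date.today().year
    flagged = []
    for entry in entries:
        year = entry.get("year")
        if not isinstance(year, int) or year < earliest or year > current_year:
            flagged.append(entry)
    return flagged

feed = [
    {"title": "Nursing competence in acute care", "year": 2114},
    {"title": "Structure-mapping", "year": 1983},
]
print(flag_anomalous_years(feed))  # only the 2114 entry is flagged
```

The point is not the code itself but that a curation pipeline which omits even this trivial gate will pass years like 2114 straight through to readers.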
I raise this not to criticize the data pipeline but because it illustrates a point Sundar (2020) makes about machine agency and source heuristics. When information arrives through an automated or algorithmically mediated channel, readers tend to grant it credibility precisely because of that channel. The machine-generated provenance signals reliability. But the metadata errors in this feed show that automated curation does not guarantee accuracy. The structural features of the delivery mechanism - the API call, the formatted citation block, the professional appearance - do not validate the content.
This connects to the folk theory problem in my research. Gagrain, Naab, and Grub (2024) distinguish between folk theories of algorithmic systems, which are individually constructed impressions of how a system works, and structural schemas, which are accurate representations of the underlying architecture. A writer relying on an automated news feed without interrogating its outputs is operating from a folk theory of the feed: it looks like a reliable news source, therefore it probably is one. The schema-level question is harder: what exactly is this system retrieving, from where, through what filters, and with what error rates?
The Honest Default
Gentner's (1983) structure-mapping theory argues that productive analogical reasoning transfers relational structure, not surface features. The surface feature here is an empty news feed. The relational structure is a coordination mechanism that fails when its assumed inputs are unavailable. That structure appears in platform work, in merger integration (covered in a previous post on the XAI memo), and apparently in automated content pipelines for academic blogging.
The principled response to a failed procedure is not to simulate compliance. It is to identify what went wrong at the structural level and say so. The news feed returned nothing. The academic citations contain metadata errors. The honest post acknowledges both rather than constructing a false scaffold of specificity. That, at least, is adaptive expertise applied to the problem of writing under uncertainty.
References
Gagrain, A., Naab, T. K., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt