The Announcement and What It Actually Means
Google this week introduced AppFunctions, a new feature set, currently in early beta, designed to transform Android into what the company explicitly calls an "agent-first" operating system. The architecture is specific: apps no longer present interfaces for users to navigate directly. Instead, apps provide functional building blocks, and AI agents or assistants call those functions on the user's behalf to fulfill task-level goals. The user specifies intent; the agent determines execution. This is not a marginal update to how Android handles notifications or permissions. It is a structural reorganization of who, or what, mediates between human intention and application behavior.
This announcement deserves careful organizational analysis, because it instantiates a coordination problem that existing theory handles poorly. The question is not whether AI agents will become more capable. The question is what happens to the competencies users and developers previously held when the platform absorbs those competencies into its own mediation layer.
The Competence Absorption Problem
Classical coordination theory, whether through markets, hierarchies, or networks, assumes that actors arrive with some pre-existing competence relevant to the coordination task (Kellogg, Valentine, and Christin, 2020). AppFunctions inverts this assumption in a specific and interesting direction. Under the previous Android model, users developed what I would call topographical knowledge: they learned the layout of specific apps, specific menus, specific gesture sequences. This knowledge was procedural, fragile under updates, but functional within stable environments.
The agent-first model does not simply replace those procedures with better procedures. It relocates the procedural layer entirely. The AI agent now holds topographical knowledge of application structure. The user is expected to operate at the level of intent specification. This is, in theory, a promotion: users graduate from navigating menus to articulating goals. In practice, it creates a new and largely unmapped competence requirement. Specifying intent precisely enough for an agent to execute correctly is not a simpler skill than navigating an interface. It is a different skill, and one that has received almost no formal treatment in either AI literacy research or HCI design literature.
Folk Theories Will Not Save Users Here
Research on algorithmic literacy consistently documents what I have elsewhere called the awareness-capability gap: workers and users develop awareness that algorithms govern their outcomes, but this awareness does not reliably translate into effective behavior (Gagrain, Naab, and Grub, 2024). The gap exists because awareness tends to produce folk theories rather than accurate structural schemas. A folk theory of AppFunctions might be: "the AI knows what I want if I describe it clearly." A structural schema would involve understanding how the agent maps natural language intent to discrete function calls, where ambiguity in intent maps to ambiguity in function selection, and where the failure modes concentrate.
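The structural point, that ambiguity in intent maps to ambiguity in function selection, can be made concrete with a deliberately naive toy model. Nothing below is from the AppFunctions API; the function names, keyword matching, and ranking scheme are all illustrative assumptions, far simpler than what a real agent does, but the failure mode they expose is the same.

```kotlin
// Toy sketch (hypothetical names, not a real API): an agent-side catalog
// of callable functions, each tagged with keywords for matching.
data class AgentFunction(val id: String, val keywords: Set<String>)

val catalog = listOf(
    AgentFunction("messaging.sendMessage", setOf("send", "message", "text")),
    AgentFunction("email.sendEmail", setOf("send", "email", "mail")),
    AgentFunction("calendar.createEvent", setOf("schedule", "meeting", "event"))
)

// Naive intent-to-function mapping: score each function by keyword overlap
// with the intent and return every function tied for the top score. Real
// agents rank far more cleverly, but the underlying ambiguity persists.
fun bestCandidates(intent: String): List<String> {
    val words = intent.lowercase().split(" ").toSet()
    val scored = catalog
        .map { it.id to it.keywords.intersect(words).size }
        .filter { it.second > 0 }
    val top = scored.maxOfOrNull { it.second } ?: return emptyList()
    return scored.filter { it.second == top }.map { it.first }
}

fun main() {
    // Vague intent: "send" alone ties two functions; the agent must guess.
    println(bestCandidates("send this to Alex"))
    // One extra word of precision collapses the tie to a single function.
    println(bestCandidates("send an email to Alex"))
}
```

A user holding the folk theory ("the AI knows what I want if I describe it clearly") has no way to predict which phrasings produce a tie; a user with a structural schema can see that precision matters exactly where the candidate functions' vocabularies overlap.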
Sundar (2020) argues that as machine agency increases, users shift from active navigators to passive recipients of machine-generated outputs, with downstream effects on both engagement and accuracy judgments. AppFunctions accelerates this shift by design. The user's role in the interaction loop shrinks. This is fine when agent execution is accurate. It becomes systematically problematic when the agent misinterprets intent, because the user has progressively fewer touch points at which to notice and correct the error.
The Developer Coordination Problem Is Equally Underspecified
AppFunctions also creates a non-trivial coordination problem on the developer side. Developers must now expose functional building blocks that are legible to an AI agent rather than designing interfaces that are legible to human users. These are not the same design task. Human interface design relies on conventions, visual affordances, and accumulated HCI research. Designing functions for agent consumption requires a different kind of specification: precise, unambiguous, context-independent function signatures that an LLM-based agent can reliably invoke. Hatano and Inagaki (1986) distinguish between routine expertise, which handles expected cases, and adaptive expertise, which handles novel ones. Most Android developers have built routine expertise in human-facing UI design. The agent-first model demands adaptive expertise in a domain where best practices do not yet exist.
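To illustrate what "precise, unambiguous, context-independent" means in practice, consider the following sketch. The annotation and all names here are hypothetical, invented for illustration rather than taken from the AppFunctions API; the point is the design constraint, not the syntax.

```kotlin
// Hypothetical annotation marking a function as exposed to an agent.
// This is an illustrative assumption, not the actual AppFunctions surface.
@Target(AnnotationTarget.FUNCTION)
annotation class AgentCallable(val description: String)

// A typed identifier rather than a raw string: the agent cannot "see"
// which note is on screen, so the reference must be explicit.
data class NoteId(val value: String)

class NotesFunctions {
    // Context-independent: everything the function needs arrives as an
    // explicit, typed parameter. Nothing is inferred from screen state,
    // navigation history, or which menu the user last opened.
    @AgentCallable(description = "Append text to the end of an existing note.")
    fun appendToNote(noteId: NoteId, text: String): Boolean {
        // A real implementation would persist the change; input
        // validation stands in for it to keep the sketch self-contained.
        return noteId.value.isNotBlank() && text.isNotEmpty()
    }
}
```

Contrast this with a human-facing flow, where "append to a note" is distributed across an open document, a cursor position, and a tap target. Collapsing that implicit context into an explicit signature is precisely the design task for which routine UI expertise offers little guidance.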
Why This Matters Beyond Android
The AppFunctions announcement is worth tracking not because Android is uniquely important, but because it is an early, observable instance of a broader architectural shift. If platforms increasingly mediate between user intent and application execution through AI agents, then the competence questions I examine in my dissertation research, specifically around schema induction and transfer, become directly relevant to mainstream consumer technology. The ALC framework predicts that users who develop accurate structural schemas of how agent-mediated coordination works will outperform users who accumulate platform-specific procedural knowledge, precisely because the agent layer will continue to change in ways that invalidate any specific procedure. Google's announcement this week gives that prediction a concrete test environment worth watching.
References
Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt