
From Automation to Autonomy: Agentic AI, Organizational Power, and the New Political Economy of Work

Writer: OUS Academy in Switzerland

Author: Alex Morgan

Affiliation: Independent researcher


Abstract

Organizations in 2025 are moving from rule-bound automation toward agentic AI—software agents that plan, decide, and act with a degree of autonomy in complex environments. While mainstream management narratives focus on productivity gains, a fuller academic account requires socio-theoretical lenses that reveal how agentic systems redistribute power, reshape professional fields, and reorganize global value chains. This paper develops an interdisciplinary, critical sociology of agentic AI that speaks to management, technology, and service sectors (including tourism and hospitality). Using Bourdieu’s concept of capital and field, world-systems theory, and institutional isomorphism, the article explains why adoption is accelerating, how benefits and risks are unevenly distributed, and what governance principles can align autonomous systems with human flourishing. We propose a conceptual model that distinguishes five design and governance dimensions—autonomy, adaptivity, accountability, alignment, and auditability—and we derive a staged roadmap for responsible deployment. The analysis highlights labor transformation, capability complementarities, and the emergence of “algorithmic authority” as organizations entrust coordination and decision rights to agents. We conclude with implications for research and policy, emphasizing comparative, longitudinal, and sector-sensitive inquiry into the social consequences of agentic AI.


Keywords: agentic AI; autonomous systems; organizational governance; institutional isomorphism; Bourdieu; world-systems; digital transformation


1. Introduction

The near-term future of work is being shaped not only by automation—the execution of predefined tasks by software—but increasingly by agency, the capacity of systems to pursue goals, adapt, and coordinate actions in dynamic settings. Agentic AI differs from conventional automation in three critical ways. First, it plans: agents generate and update action sequences in response to changing constraints. Second, it learns: agents refine policies based on feedback, sometimes generalizing across tasks. Third, it coordinates: agents communicate with humans and with each other, negotiate resource trade-offs, and trigger processes without being explicitly instructed at every step. These differences matter for management because they change the distribution of decision rights within organizations, influence the skills employees need, and reconfigure the interfaces between firms, regulators, and markets.

To date, managerial discourse has emphasized efficiency—faster workflows, fewer errors, and lower costs. Yet, a purely instrumental lens obscures deeper social dynamics. Which actors gain or lose symbolic authority when an “autonomous teammate” writes a financial memo or reroutes a supply chain? How do sectoral norms and accreditation regimes discipline or accelerate adoption? Do agentic systems amplify existing core–periphery patterns in the global political economy, or can they enable upgrading in peripheral regions? Addressing these questions demands concepts from critical sociology and political economy, alongside technical and managerial analysis.

This article proceeds as follows. Section 2 develops the theoretical background, using Bourdieu's capital and field theory, world-systems theory, and institutional isomorphism to construct a multidimensional explanation of adoption trajectories. Section 3 proposes a conceptual model—the 5A framework (autonomy, adaptivity, accountability, alignment, auditability)—to guide design and governance. Section 4 analyzes organizational transformation: labor substitution and augmentation, process orchestration, and the rise of algorithmic authority. Section 5 explores sectoral and global dynamics, including tourism and hospitality, public administration, and cross-border services, situating them within world-systems analysis. Section 6 offers a practical roadmap for staged deployment and metrics for evaluation. Section 7 examines power, culture, and the political economy of agentic AI, and Section 8 illustrates the argument with cross-sector vignettes. Sections 9 through 11 address ethical considerations, implications for research and practice, and limitations, and Section 12 concludes with implications for strategy and policy.

Throughout, the writing favors clear, human-readable language while maintaining academic rigor, so that managers, technologists, and social scientists can use a shared vocabulary to discuss the future of agentic AI.


2. Theoretical Background

2.1 From automation to agency

Automation is reactive. It relies on rules or learned mappings applied to inputs within a narrow range. By contrast, agentic systems embody a representation of goals and environment, plan actions, monitor progress, and revise plans. They may instantiate reinforcement learning policies, planning algorithms, or hybrid architectures that interact with enterprise software, data services, and human operators. Autonomy is not binary; it exists on a spectrum—from assisted execution (suggestions that a human approves) to supervised autonomy (human can veto) to conditional autonomy (agent acts within bounded risk thresholds).
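The autonomy spectrum described above can be sketched as a simple policy gate. This is an illustrative sketch only: the function name, risk scores, and thresholds are assumptions, not a standard; real systems would derive risk from domain-specific models.

```python
from enum import Enum

class AutonomyMode(Enum):
    ASSISTED = "assisted"        # human approves every action
    SUPERVISED = "supervised"    # agent acts; human can veto
    CONDITIONAL = "conditional"  # agent acts alone within bounded risk

def autonomy_mode(risk_score: float,
                  conditional_below: float = 0.2,
                  supervised_below: float = 0.6) -> AutonomyMode:
    """Map an action's estimated risk (0-1) to an autonomy mode.

    Thresholds are illustrative; a deployment would calibrate them
    per domain and revisit them as evidence accumulates.
    """
    if risk_score < conditional_below:
        return AutonomyMode.CONDITIONAL
    if risk_score < supervised_below:
        return AutonomyMode.SUPERVISED
    return AutonomyMode.ASSISTED
```

The key design choice is that autonomy is a property of the action's risk, not of the agent: the same agent runs in all three modes depending on what it proposes to do.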

Agency moves organizations from “push-button” efficiency to continuous sense–plan–act cycles embedded in business processes. In management terms, agentic AI reallocates decision rights from humans to machines under governance constraints, creating new questions of accountability, risk, and legitimacy.

2.2 Bourdieu: capital, habitus, field

Bourdieu’s sociology gives us tools for analyzing adoption beyond cost savings. Organizations inhabit fields—structured social spaces with their own rules, positions, and struggles for dominance. In each field (finance, higher education, hospitality), actors compete for capital in multiple forms:

  • Economic capital: budget, assets, and revenue that fund transformation.

  • Cultural capital: specialized knowledge and credentials (e.g., data science capability; process engineering expertise).

  • Social capital: networks spanning vendors, regulators, clients, and epistemic communities that diffuse practices.

  • Symbolic capital: prestige and legitimacy that make certain moves (e.g., launching an “AI-first” strategy) recognized as appropriate or visionary.

Habitus—the durable dispositions of managers and professionals—shapes how they perceive the promises and dangers of agency. In some fields, the habitus values standardization and auditability (e.g., aviation), while in others it values bespoke judgment (e.g., luxury hospitality). The distribution and convertibility of capitals condition which organizations can implement agentic AI successfully. A firm rich in cultural and social capital (skilled teams and vendor ties) can convert these assets into symbolic capital (“innovation leadership”), attracting clients and talent and thus reinforcing economic capital.

2.3 World-systems theory: core–periphery dynamics

World-systems theory directs attention to the geopolitical and economic stratification of the AI ecosystem. Compute capacity, large-scale data assets, and high-end talent are concentrated in core regions; semi-peripheral regions leverage outsourcing, specialized services, or regulatory niches; peripheral regions often import technology and provide lower-value labor. Agentic AI may reinforce this stratification if autonomy depends on access to proprietary models, vast datasets, and high-performance infrastructure. Conversely, carefully designed open architectures and regulatory sandboxes can enable peripheral upgrading, allowing local firms to orchestrate services and serve regional markets with agentic systems tailored to local languages and norms.

2.4 Institutional isomorphism: coercive, normative, mimetic

DiMaggio and Powell’s theory predicts that organizations in the same field tend to become more alike over time through three mechanisms:

  • Coercive isomorphism: mandates and procurement rules (e.g., audit logs, explainability requirements, data protection law) that structure how agents are designed and governed.

  • Normative isomorphism: professional standards (e.g., model risk management, safety guidelines, data governance) diffused by associations and certification bodies.

  • Mimetic isomorphism: uncertainty pushes firms to imitate “market leaders” by adopting similar architectures, vendor stacks, and governance boards.

These mechanisms explain why agentic AI rollouts often converge on similar patterns—Centers of Excellence, “human-in-the-loop” review, red-team testing—even across different sectors and countries.

3. A Conceptual Model for Agentic AI: The 5A Framework

We propose the 5A framework to analyze both the design of agentic systems and the governance that constrains them:

  1. Autonomy: the scope of decisions an agent can make without human approval; defined by risk thresholds, financial limits, and domain boundaries.

  2. Adaptivity: the system’s capacity to learn and adjust policies; includes safeguards against harmful drift.

  3. Accountability: assignment of responsibility for outcomes; includes audit trails, incident reporting, and role clarity.

  4. Alignment: explicit mapping of agent objectives to organizational strategy, legal norms, and ethical values; includes reward shaping and constraint programming.

  5. Auditability: the ability to reconstruct decisions post hoc; requires logging, versioning of models/prompts, and reproducible workflows.

The 5A lens integrates technical and sociological perspectives: autonomy/adaptivity relate to system capability; accountability/alignment/auditability address legitimacy and power. The framework guides both pre-deployment design and post-deployment oversight.
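One way to make the 5A lens operational is to keep a governance record per deployed agent and check it mechanically. The sketch below is a hypothetical encoding, assuming one named owner per agent and a simple rule set; the field names and checks are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AgentGovernanceSpec:
    """Illustrative 5A governance record for one deployed agent."""
    spending_cap_chf: float      # Autonomy: bound on unsupervised action
    allowed_domains: list        # Autonomy: domain boundaries
    online_learning: bool        # Adaptivity: does the agent update itself?
    drift_check_days: int        # Adaptivity: safeguard against harmful drift
    accountable_owner: str       # Accountability: a named human owner
    objectives: list             # Alignment: objectives traceable to strategy
    decision_logging: bool = True    # Auditability: reconstructable decisions
    model_version: str = "unversioned"

    def violations(self) -> list:
        """Return the 5A checks that fail for this configuration."""
        issues = []
        if self.spending_cap_chf <= 0:
            issues.append("autonomy: no positive spending cap")
        if self.online_learning and self.drift_check_days <= 0:
            issues.append("adaptivity: learning without drift checks")
        if not self.accountable_owner:
            issues.append("accountability: no named owner")
        if not self.objectives:
            issues.append("alignment: no explicit objectives")
        if not self.decision_logging:
            issues.append("auditability: decision logging disabled")
        return issues
```

A pre-deployment gate could then refuse to activate any agent whose spec returns a non-empty violations list.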


4. Organizational Transformation and Algorithmic Authority

4.1 Labor substitution, augmentation, and recomposition

Agentic systems do not simply “replace” jobs; they recompose workflows. Routine tasks (data retrieval, document drafting, basic triage) migrate to agents; humans focus on exceptions, negotiation, and relational work. Three patterns emerge:

  • Substitution: tasks become fully autonomous (e.g., low-risk routing).

  • Complementarity: agents propose actions; humans supervise, negotiate, or customize.

  • Reconfiguration: entire processes are redesigned around sense–plan–act cycles, creating new roles (AI product owner, safety officer, prompt architect) and hybrid teams.

Labor market effects depend on institutional contexts—training systems, collective bargaining, and professional licensing. Without reskilling pathways, substitution can produce localized displacement. With strong learning support, complementarity yields productivity and job quality gains.

4.2 Process orchestration and BPM with agents

In business process management (BPM), agentic AI can monitor process health, diagnose bottlenecks, and act by triggering tasks or reallocating resources. Over time, processes become self-tuning: agents update routing rules, propose revised service-level agreements, and coordinate human teams. However, orchestration raises governance concerns: Who approves changes? How are unintended consequences detected? The 5A framework implies embedded gates—simulation, sandboxing, and staged rollouts with rollback paths.
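The embedded gates implied by the 5A framework can be sketched as a sense–plan–act loop with an approval gate and a rollback path. Everything here is an assumption for illustration: the callbacks stand in for process-mining, planning, and governance components a real BPM stack would supply.

```python
def sense_plan_act(sense, plan, act, approve, rollback, max_cycles=3):
    """Minimal sense-plan-act loop with a governance gate and rollback.

    sense()    -> current process state
    plan(s)    -> proposed change, or None if nothing to do
    approve(p) -> True only if the change passes the governance gate
    act(p)     -> True on success; False triggers rollback()
    """
    for _ in range(max_cycles):
        state = sense()
        proposal = plan(state)
        if proposal is None:       # process healthy: stop tuning
            return state
        if not approve(proposal):  # human or policy veto: try again later
            continue
        if not act(proposal):      # action failed: restore previous state
            rollback()
    return sense()
```

The point of the structure is that the agent never mutates the process directly: every change flows through `approve`, which is where simulation, sandboxing, and staged-rollout checks live.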

4.3 The rise of algorithmic authority

When agents write reports, summarize regulations, or recommend pricing, they accumulate symbolic capital—managers come to rely on their outputs as authoritative. This algorithmic authority can be productive (reducing noise and bias) but risky if it crowds out deliberation. To prevent over-reliance, organizations can enforce argumentation protocols: agents must provide reasons and alternatives; humans must record acceptance or rejection with justifications. This practice maintains human accountability while capturing learning signals.
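The argumentation protocol above can be made concrete as a decision record: the agent's reasons and alternatives are logged alongside the human's acceptance or rejection with a justification. This is a minimal sketch under stated assumptions (flat in-memory log, free-text fields); a production system would persist and sign these entries.

```python
import datetime

def record_agent_decision(log, proposal, reasons, alternatives,
                          reviewer, accepted, justification):
    """Append one argumentation-protocol entry.

    Captures the agent's reason-giving (reasons, alternatives) and the
    human's recorded acceptance or rejection with a justification, so
    accountability stays with the reviewer and learning signals accrue.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "proposal": proposal,
        "agent_reasons": reasons,
        "alternatives": alternatives,
        "reviewer": reviewer,
        "accepted": accepted,
        "human_justification": justification,
    }
    log.append(entry)
    return entry
```

Requiring a non-empty justification for every override is what turns the log from an audit artifact into a training signal for both the agent and the organization.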

4.4 Risk, safety, and compliance

Agentic systems operate under uncertainty. Safety regimes should include: (a) explicit risk catalogs, (b) guardrails on actions (spending caps, data access), (c) red-teaming for adversarial and social harms, (d) post-incident reviews with corrective actions, and (e) model lifecycle management (versioning, monitoring for drift). Legal compliance includes data protection, sector-specific regulations, and procurement rules. In high-stakes contexts (finance, health, public services), policy should mandate human-in-the-loop checkpoints for non-reversible or rights-affecting decisions.
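Guardrails of type (b) can be sketched as a pre-action check that every proposed action must pass before execution. The action schema and limits below are illustrative assumptions; real deployments would enforce these server-side, not inside the agent.

```python
def within_guardrails(action, spending_cap, allowed_data_scopes,
                      reversible_only=True):
    """Pre-action guardrail check: spending cap, data access, reversibility.

    Returns (ok, reason). `action` is assumed to be a dict with optional
    "cost", "data_scopes", and "reversible" keys (an illustrative schema).
    """
    if action.get("cost", 0) > spending_cap:
        return False, "exceeds spending cap"
    for scope in action.get("data_scopes", []):
        if scope not in allowed_data_scopes:
            return False, f"data scope not allowed: {scope}"
    # Irreversible actions are routed to a human, per the checkpoint policy
    if reversible_only and not action.get("reversible", False):
        return False, "irreversible action requires human consent"
    return True, "ok"
```

The reversibility check implements the human-in-the-loop mandate for non-reversible decisions: the agent may propose them, but cannot execute them unattended.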


5. Sectoral and Global Dynamics

5.1 Tourism and hospitality: from automation to curated autonomy

Tourism organizations increasingly deploy agents for itinerary planning, real-time service recovery, and multilingual guest support. The value proposition is not only speed but contextual personalization: agents integrate guest history, local events, and sustainability preferences to propose nuanced choices. Bourdieu’s lens clarifies why leading brands move first: they hold cultural capital (service design expertise) and social capital (destination partnerships) that convert into symbolic capital (reputation for effortless, personalized service). World-systems analysis warns, however, that gains may concentrate in core hubs with better data, connectivity, and talent. To foster peripheral upgrading, destination management organizations can standardize data APIs, pool resources for shared agent infrastructure, and cultivate local cultural capital via training.

5.2 Public administration and permits

Administrative agencies adopt agents for case triage, deadline tracking, and citizen response. Isomorphic pressures are strong: coercive (digital government mandates), normative (professional public administration standards), and mimetic (copying models perceived as “modern”). The challenge is balancing autonomy with procedural fairness: agents can schedule inspections or request documents, but rights-affecting determinations require human adjudication, transparent reasoning, and appeal mechanisms. Auditability is especially important: logs must preserve the chain of reasoning for judicial review.

5.3 Professional services and finance

Consultancies and financial institutions use agentic systems for document synthesis, risk dashboards, and portfolio monitoring. Because symbolic capital is central to these fields, algorithmic authority can be both a differentiator and a liability. Firms with thick social capital (client trust networks) can experiment with semi-autonomous outputs that humans validate. Others may face skepticism. Governance boards should define thresholds for unsupervised actions (e.g., small hedging adjustments) versus supervised recommendations (e.g., credit decisions).

5.4 Global value chains and compute geopolitics

Agentic AI accentuates compute geopolitics. Access to high-end chips, cloud capacity, and multilingual datasets shapes who can deploy cutting-edge agents. Core regions set technical standards; semi-peripheral regions specialize in domain-specific agents; peripheral regions risk dependency on opaque systems. Yet the periphery can upgrade through strategic complementarities: local language models, regulatory innovation, public–private training programs, and consortia that negotiate fair data access. World-systems theory predicts that without such strategies, surplus value from agentic AI will flow to the core; with them, semi-peripheral hubs can capture more value.


6. Governance, Design, and the Human in the Loop

6.1 Principles for responsible autonomy

Building on the 5A framework, seven practical principles can guide deployment:

  1. Purpose limitation: define the specific goals an agent optimizes; prevent scope creep.

  2. Proportional autonomy: match autonomy to risk; start with low-risk actions, expand with evidence.

  3. Explainability by design: require reason-giving, alternatives, and uncertainty estimates.

  4. Dual-track accountability: pair human sign-off with machine logs; both are necessary and auditable.

  5. Data governance and dignity: minimize data collection; respect user consent; prevent surveillance drift.

  6. Continual oversight: monitor performance, fairness, and drift; plan for rollback and graceful degradation.

  7. Participatory change management: include workers and affected stakeholders in design and review.

6.2 A staged roadmap

  • Stage 1 — Assess and Align: map processes; identify pain points; analyze capital endowments (cultural, social) that condition feasibility; align with strategy.

  • Stage 2 — Design and Simulate: specify autonomy boundaries; design guardrails; create testbeds using synthetic and curated real data; stress-test failure modes.

  • Stage 3 — Pilot with Humans-in-the-Loop: deploy in limited scope; collect decision logs; require human justification for approvals/overrides to generate learning signals.

  • Stage 4 — Scale and Institutionalize: formalize governance boards; adopt periodic audits; integrate metrics into performance management; rotate staff through AI oversight roles to retain skills.

  • Stage 5 — Reflect and Renew: evaluate broader impacts on job quality, equity, and resilience; update policies as the field evolves.
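The roadmap's stage transitions can be expressed as evidence gates: an organization advances only when the current stage's exit criteria are met. The gate conditions and the 10% override-rate threshold below are illustrative assumptions, not prescriptions.

```python
STAGES = ["assess", "design", "pilot", "scale", "renew"]

def next_stage(current, evidence):
    """Advance one roadmap stage only when its evidence gate is met.

    `evidence` is a dict of illustrative indicators; missing keys
    default to the conservative (non-advancing) value.
    """
    gates = {
        "assess": evidence.get("processes_mapped", False),
        "design": evidence.get("failure_modes_tested", False),
        "pilot":  evidence.get("override_rate", 1.0) < 0.10,
        "scale":  evidence.get("audit_passed", False),
    }
    i = STAGES.index(current)
    if i == len(STAGES) - 1 or not gates[current]:
        return current
    return STAGES[i + 1]
```

Encoding the gates makes "evidence-based scaling" auditable: the log of gate evaluations records why autonomy was (or was not) expanded at each step.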

6.3 Metrics beyond productivity

Firms often chase turnaround time or cost per task. A sociologically literate dashboard includes:

  • Human flourishing metrics: job variety, autonomy, learning opportunities, workload balance.

  • Equity and access: distribution of benefits across roles, regions, and demographics.

  • Resilience: performance under shock scenarios; recovery time after failures.

  • Legitimacy: user trust, complaint rates, appeal outcomes, and regulator feedback.

  • Environmental footprint: compute energy use and efficiency improvements from process redesign.


7. Power, Culture, and the Political Economy of Agentic AI

7.1 Capital conversion and symbolic struggles

Agentic AI creates new opportunities to convert cultural capital (technical expertise) and social capital (ecosystem relationships) into symbolic capital (reputation for reliable autonomy). Early adopters can lock in advantages through standards-setting and preferred vendor status. But symbolic capital is fragile: high-profile incidents can liquidate trust quickly. Hence, symbolic capital hinges on visible governance—third-party audits, clear redress mechanisms, and transparent improvement cycles.

7.2 Habitus and professional identity

Professionals define themselves through expertise and judgment. Introducing autonomous agents can threaten identity or, if designed well, enhance it by offloading drudgery and spotlighting high-skill tasks (negotiation, empathy, strategy). Training that frames agents as colleagues and requires reflective practice—writing brief rationales when accepting or rejecting agent proposals—helps update habitus without eroding dignity.

7.3 Global inequalities and data colonialism

World-systems theory alerts us to data colonialism—the extraction of behavioral data from peripheral regions to train models that primarily benefit the core. Ethical deployment demands reciprocal arrangements: revenue sharing for data value, local capacity building, and equitable access to resulting tools. Otherwise, autonomy becomes another vector of dependency.

7.4 Institutional isomorphism and the diffusion of governance

As adoption diffuses, isomorphic pressures can be beneficial—codifying minimum safety and transparency. However, they can also cement suboptimal designs if early templates are mimicked uncritically. Professional communities should promote “principled isomorphism”: copy only those practices that pass empirical checks and ethical review; allow variation where local conditions differ.


8. Mini-Vignettes Across Sectors

  • Hospitality operations: An agent monitors room readiness, guest messages, and local events. When a flight delay affects arrivals, it reschedules housekeeping, pre-heats rooms, adjusts restaurant staffing, and messages guests with new options. Human supervisors maintain oversight, approve compensation offers for high-value guests, and review end-of-day logs to refine policies.

  • Destination management: A regional tourism board deploys multilingual agents to answer traveler questions and coordinate small operators. The board pools data across firms to build a shared knowledge base, reducing the core–periphery gap by boosting local cultural capital and enabling synergies among SMEs.

  • Public permitting: A city uses agents to triage applications, request missing documents, and schedule inspections. Decisions that affect rights remain human-adjudicated, but waiting times fall, and transparency improves through explanations attached to each action.

  • Supply chain resilience: A retailer’s agent predicts disruptions from weather and port congestion, reroutes shipments, and negotiates spot contracts within budget caps. Humans approve exceptions; the firm tracks resilience metrics to ensure the agent improves not just cost but shock recovery.

These vignettes show that value arises when autonomy is bounded, auditable, and aligned with institutional norms.


9. Ethical Considerations and Social Risk

  • Fairness: Agents must be stress-tested for disparate impacts. Equity dashboards should be standard.

  • Privacy and dignity: Minimize surveillance; separate service optimization from intrusive profiling.

  • Transparency: Provide “right to explanation” for consequential interactions; allow appeals.

  • Safety: Maintain kill-switches; limit irreversible actions without human consent.

  • Collective voice: Include worker councils and user representatives in oversight committees, especially where agentic decisions shape working conditions or access to services.


10. Implications for Research and Practice

Research:

  • Comparative field studies across sectors to identify when complementarity vs. substitution dominates.

  • Longitudinal analyses to track how agentic systems shift habitus and capital distributions.

  • Measurement science for autonomy boundaries and decision rights.

  • Cross-national studies to map world-systems predictions onto actual compute access and data governance regimes.

  • Methodologies for auditing alignment and adaptivity without revealing proprietary details.

Practice:

  • Build interdisciplinary teams (engineering, legal, ethics, operations, labor relations).

  • Invest in cultural capital (training) and social capital (ecosystem partnerships).

  • Treat symbolic capital as a risk asset—protected through transparency and accountability.

  • Use staged autonomy and evidence-based scaling rather than “big bang” deployments.


11. Limitations

This is a conceptual synthesis. While it draws on established theories and practitioner observations, it does not present new empirical measurements. Some sectoral dynamics—such as real-time safety incidents or regulatory interventions—are discussed at a high level. Future work should integrate quantitative and qualitative evidence to test the 5A framework, measure outcomes, and evaluate equity impacts across regions and demographics.


12. Conclusion

Agentic AI marks a step-change in how organizations coordinate work. Moving beyond automation, autonomous agents plan, adapt, and collaborate—redistributing decision rights and reshaping professional fields. A critical sociology perspective reveals that adoption is conditioned by capital endowments, global stratification, and institutional pressures. The 5A framework and the staged roadmap help decision-makers align agentic capability with human dignity, fairness, and resilience. If we get the governance right—purpose limitation, proportional autonomy, explainability, dual accountability, strong data governance, continual oversight, and participatory change—agentic AI can augment human judgment, upgrade capabilities in semi-peripheral regions, and strengthen service quality in sectors such as tourism, hospitality, and public administration. If we neglect these principles, we risk consolidating power, deepening dependency, and eroding trust. The next phase of digital transformation will therefore be judged not only by speed and savings but by whether autonomous systems are aligned with the social good.


References / Sources

  • Bourdieu, P. (1986). The Forms of Capital. In J. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education.

  • Bourdieu, P. (1990). The Logic of Practice.

  • Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology.

  • DiMaggio, P., & Powell, W. (1983). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review.

  • Wallerstein, I. (1974). The Modern World-System.

  • Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age.

  • Davenport, T. H., & Ronanki, R. (2018). Artificial Intelligence for the Real World. Harvard Business Review.

  • Floridi, L. (2013). The Ethics of Information.

  • Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network Theory.

  • Malone, T. (2004). The Future of Work.

  • Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.).

  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.).

  • van der Aalst, W. (2016). Process Mining: Data Science in Action.

  • Zuboff, S. (2019). The Age of Surveillance Capitalism.

  • Wooldridge, M. (2002). An Introduction to MultiAgent Systems.

  • Westerman, G., Bonnet, D., & McAfee, A. (2014). Leading Digital: Turning Technology into Business Transformation.

  • Winner, L. (1986). The Whale and the Reactor: A Search for Limits in an Age of High Technology.

  • Susskind, R., & Susskind, D. (2015). The Future of the Professions.

  • Pasquale, F. (2015). The Black Box Society.

  • O’Neil, C. (2016). Weapons of Math Destruction.



This article is licensed under CC BY 4.0.


Open Access License Statement

© The Author(s). This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits unrestricted use, distribution, adaptation, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, and any changes made are indicated.

Unless otherwise stated in a credit line, all images or third-party materials in this article are included under the same Creative Commons license. If any material is excluded from the license and your intended use exceeds what is permitted by statutory regulation, you must obtain permission directly from the copyright holder.

A full copy of this license is available at: Creative Commons Attribution 4.0 International (CC BY 4.0).

Submit Your Scholarly Papers for Peer-Reviewed Publication:

Unveiling Seven Continents Yearbook Journal "U7Y Journal"

(www.U7Y.com)

U7Y Journal

Unveiling Seven Continents Yearbook (U7Y) Journal
ISSN: 3042-4399 (registered by the Swiss National Library)

Published under ISBM AG (Company capital: CHF 100,000)
ISBM International School of Business Management
Industriestrasse 59, 6034 Inwil, Lucerne, Switzerland
Company Registration Number: CH-100.3.802.225-0


The views expressed in any article published on this journal are solely those of the author(s) and do not necessarily reflect the official policy or position of the journal, its editorial board, or the publisher. The journal provides a platform for academic freedom and diverse scholarly perspectives.

All articles published in the Unveiling Seven Continents Yearbook Journal (U7Y) are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
