Autonomous AI Agents and the Reorganization of Power: A Critical Sociology of Management, Tourism, and Technology in 2025


Author: Miguel López

Affiliation: Independent researcher


Abstract

Autonomous AI agents—software systems able to plan, decide, and act with minimal human oversight—have moved from pilot experiments to strategic enterprise capabilities in 2025. This paper offers a critical sociology of agentic AI’s rise in management, tourism, and technology sectors. Using Pierre Bourdieu’s concepts of capital, field, and habitus; world-systems theory’s core–periphery dynamics; and institutional isomorphism’s coercive, mimetic, and normative pressures, the paper analyzes how agentic AI restructures organizational power, labor, and value capture across global industries. After defining agentic AI and mapping its organizational diffusion, we examine five transformations: (1) the redistribution of economic, social, cultural, and symbolic capital inside firms; (2) the emergence of “agent governance fields” and new professional habitus; (3) core–periphery asymmetries in data, compute, and standards; (4) sector-specific consequences in management operations and tourism experience design; and (5) institutional pressures that accelerate convergence toward similar AI operating models. We propose a seven-layer governance model to align human oversight and agent autonomy and outline a research agenda spanning comparative field studies, longitudinal labor impacts, and cross-regional political economy. The analysis argues that agentic AI is not only a technical innovation but also an organizational and geopolitical force that reorders who holds power, how value is produced, and which actors can credibly claim legitimacy in the new “agentic” economy.


Keywords: autonomous AI agents; agentic AI; organizational power; Bourdieu; world-systems; institutional isomorphism; management; tourism; technology; AI governance; digital transformation


1. Introduction

In 2025, organizations increasingly deploy autonomous AI agents to handle multistep tasks—procurement checks, routing and scheduling, content generation, risk monitoring, customer support orchestration, and more. These agents work across human and digital environments, integrating with enterprise systems and acting within defined constraints. Their spread represents more than efficiency gains; it signals a reallocation of decision rights, authority, and expertise within firms and across global markets.

This paper takes a critical sociology perspective on one of 2025's dominant technology trends—agentic AI—and situates it within broader transformations of management, tourism, and technology ecosystems. Rather than asking only whether agents “work,” we ask: Who gains or loses different forms of capital? How do fields of practice adapt? Which regions and firms accumulate advantage in the evolving world-system of data, compute, and standards? Why do organizations, even with diverse contexts, converge on similar agent governance structures?

To answer these questions, we combine three theoretical lenses:

  1. Bourdieu’s theory of capital, field, and habitus: How agentic AI reshapes access to economic, social, cultural, and symbolic capital; how “agent governance” becomes a field with its own rules; how managerial habitus adapts to supervising software actors.

  2. World-systems theory: How agentic AI reinforces, complicates, or potentially disrupts core–periphery relationships via data access, model control, and cloud concentration.

  3. Institutional isomorphism: How coercive (regulatory), mimetic (imitation under uncertainty), and normative (professional standards) forces drive organizations to adopt similar agent architectures and governance templates.

The contribution is a synthetic, theory-informed framework that remains practical: it clarifies what leaders should measure, how to design governance, and where risks of exclusion, dependency, or symbolic domination may arise.


2. Defining Agentic AI and Its Organizational Diffusion

2.1 What is an autonomous AI agent?

An autonomous AI agent is a system that can interpret goals, plan actions, call tools or services, monitor outcomes, and adapt behavior to achieve objectives with limited human intervention. Unlike static automation, agents can reason iteratively, coordinate with other agents, and escalate to humans when uncertainty or constraints demand it. In enterprise settings, they are increasingly embedded in ERP, CRM, supply-chain, and analytics platforms.
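
To make this definition concrete, the sketch below shows one minimal agent loop in Python. It is purely illustrative: the planner callable, tool registry, confidence floor, and step cap are assumptions for the sketch, not features of any particular framework.

```python
# A minimal, illustrative agent loop: interpret the goal, plan a step, call a tool,
# monitor the outcome, and escalate to a human when uncertainty or constraints demand it.
# All names (plan_next_step, tools, escalate, CONFIDENCE_FLOOR) are hypothetical placeholders.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.7   # below this, the agent hands control back to a human
MAX_STEPS = 10           # hard cap on autonomous iterations

@dataclass
class Step:
    tool: str            # which enterprise tool or service to call
    arguments: dict      # parameters for the call
    confidence: float    # the agent's own uncertainty estimate for this step

def run_agent(goal, plan_next_step, tools, escalate):
    """Pursue `goal` with limited autonomy, logging every action for later audit."""
    log = []
    for _ in range(MAX_STEPS):
        step = plan_next_step(goal, log)                # reason over the goal and history
        if step is None:                                # planner judges the goal satisfied
            break
        if step.tool not in tools or step.confidence < CONFIDENCE_FLOOR:
            escalate(goal, step, log)                   # constraint breach or low confidence
            break
        outcome = tools[step.tool](**step.arguments)    # act by calling the tool
        log.append({"step": step, "outcome": outcome})  # monitor and adapt on the next pass
    return log
```

In practice the planner would be backed by a model and the tools by enterprise connectors; the essential pattern is the bounded loop with an explicit escalation path.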

2.2 Why 2025 matters

This year marks a scale-up phase. Organizations have shifted from isolated demos to agent-in-the-loop workflows and, in some domains, to agent-led execution under human-in-the-loop oversight. Vendors package agent frameworks, orchestration layers, and governance features; enterprises pilot in back-office functions, then extend to revenue-critical areas. The shift is not uniform, but it is widespread enough to alter managerial routines and labor processes.

2.3 Diffusion pathways

Agentic AI diffuses through three overlapping pathways:

  • Infrastructural path: Cloud, vector databases, event streams, and connectors lower integration costs.

  • Organizational path: “Agent ops” roles emerge (policy design, safety, monitoring, red-teaming); councils or committees adjudicate escalation rules and ethical guardrails.

  • Cultural path: Managers learn to supervise software; frontline staff shift from execution to exception handling; metrics evolve to evaluate quality, alignment, and accountability.


3. Theoretical Framework

3.1 Bourdieu: Capital, field, habitus

Bourdieu’s framework helps us see that agentic AI is also a struggle over capital:

  • Economic capital: budget, compute, data acquisition capacity.

  • Cultural capital: technical know-how (ML engineering), governance literacy (risk, compliance), and domain expertise (e.g., pricing, logistics).

  • Social capital: networks that provide high-quality training data, strategic partnerships, and preferential access to models or standards.

  • Symbolic capital: prestige and legitimacy—who can credibly claim to run “responsible” agents or to be at the frontier of AI?

These capitals operate within fields—relatively autonomous spaces (e.g., enterprise software, hospitality operations, digital marketing) with rules about what counts as legitimate action. As agentic AI matures, it creates sub-fields—agent governance, agent operations, agent assurance—where new forms of capital (e.g., “auditability capital”) emerge. Habitus, the embodied dispositions of managers and professionals, adapts: supervisors must become comfortable reading agent logs, interpreting confidence scores, and setting escalation thresholds.

3.2 World-systems theory: Core–periphery dynamics

World-systems theory positions the global economy as a hierarchy of core, semi-periphery, and periphery regions. In AI, core actors control foundational models, large-scale compute, and standards; peripheral actors may depend on platform access and pay rents for data and inference. Agentic AI can deepen this asymmetry if proprietary interfaces, licensing, or data gravity anchor value extraction in the core. Conversely, open standards, regional data commons, and localized agent stacks could empower semi-periphery regions to negotiate better terms and align agents with local norms.

3.3 Institutional isomorphism

DiMaggio and Powell’s mechanisms explain convergent adoption patterns:

  • Coercive pressures: regulations on safety, data protection, and algorithmic accountability push firms toward similar audit trails and human-override features.

  • Mimetic pressures: under uncertainty, organizations imitate early movers’ agent architectures (guardrails, oversight committees).

  • Normative pressures: professional communities (risk officers, auditors, engineers) shape “best practices” that become obligations to maintain legitimacy.

Together, these lenses move us beyond technical feasibility toward an analysis of power, legitimacy, and global unevenness.


4. Transformations Inside the Firm

4.1 Redistribution of capital

Agentic AI redistributes capital in at least four ways:

  1. Economic capital shifts toward teams that control data pipelines and orchestration layers. Budget authority accrues to leaders who can demonstrate measurable ROI and risk control.

  2. Cultural capital rises for hybrid profiles—professionals who understand both domain logic (e.g., yield management in tourism) and agent safety principles (guardrails, alignment).

  3. Social capital concentrates around integration alliances (cloud, data providers, compliance partners). Access to these networks determines speed and scope of deployment.

  4. Symbolic capital accrues to units that publish governance frameworks, pass audits, and communicate responsible adoption credibly to boards and regulators.

This redistribution can unsettle established hierarchies. Functions once central (manual reconciliation, sequential approvals) may lose status, while agent operations becomes a site of strategic prestige.

4.2 The agent governance field

A recognizable field emerges with its own doxa (taken-for-granted rules). Typical components include:

  • Policy layer: permitted actions, risk classes, escalation rules.

  • Assurance layer: red-teaming, scenario testing, robustness evaluations.

  • Telemetry layer: logs, rationales, uncertainty measures.

  • Oversight institutions: cross-functional councils with veto power.

  • Symbolic practices: internal “conformity badges” or readiness levels that signal legitimacy.

Within this field, firms compete not just on capability but on governability—the capacity to demonstrate control without sacrificing efficiency.
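
As a purely illustrative example, the sketch below expresses a policy layer as plain data, with permitted actions, risk classes, and escalation rules that an oversight council could version and review. The action names and thresholds are hypothetical assumptions, not drawn from any specific product.

```python
# Hypothetical policy-layer sketch: permitted actions, risk classes, and escalation rules
# expressed as plain data so they can be versioned, audited, and reviewed by an oversight council.
AGENT_POLICY = {
    "agent": "procurement-assistant",
    "permitted_actions": ["read_catalog", "draft_purchase_order", "request_quote"],
    "forbidden_actions": ["approve_payment", "modify_supplier_contract"],
    "risk_classes": {
        "read_catalog": "low",
        "draft_purchase_order": "medium",
        "request_quote": "medium",
    },
    "escalation_rules": {
        "max_order_value_eur": 10_000,      # above this, route to a human approver
        "min_confidence": 0.8,              # below this, pause and ask for review
        "always_escalate_risk_classes": ["high"],
    },
}

def requires_human_review(action: str, order_value: float, confidence: float) -> bool:
    """Return True when the policy layer says a human must decide."""
    rules = AGENT_POLICY["escalation_rules"]
    risk = AGENT_POLICY["risk_classes"].get(action, "high")   # unknown actions default to high risk
    return (
        action not in AGENT_POLICY["permitted_actions"]
        or risk in rules["always_escalate_risk_classes"]
        or order_value > rules["max_order_value_eur"]
        or confidence < rules["min_confidence"]
    )
```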

4.3 Habitus shift for managers and staff

Managers absorb new dispositions: reading agent dashboards becomes as natural as reading P&L statements. Frontline staff pivot from execution to exception craftsmanship—resolving edge cases, feeding back lessons, and refining policies. Human labor becomes more deliberative and synthetic, less repetitive and sequential.


5. Sector Analyses

5.1 Management and operations

In general management, agents compress the time between signal (market change) and response (price move, inventory shift). They reduce latency in coordination across procurement, logistics, and customer service. However, Bourdieu reminds us that efficiency rhetoric can mask struggles over symbolic capital: which team “owns” the decision? Whose metrics define success? Agent dashboards can become instruments of symbolic domination if they privilege certain KPIs (e.g., cost) over others (e.g., fairness, employee well-being).

Implications:

  • Decisional clarity: define which decisions are agent-authorizable and which require human consent.

  • Metric pluralism: balance financial, ethical, and service quality indicators.

  • Reflexive governance: periodic reviews to prevent “metric capture” by any single coalition inside the firm.

5.2 Tourism and hospitality

Tourism offers a vivid testbed. Agents can orchestrate dynamic packaging, personalized itineraries, real-time service recovery, and revenue management. On the supply side, they schedule housekeeping, optimize staffing, and allocate resources during peaks. On the demand side, they tailor offers by preferences and constraints (budget, mobility, dietary needs).

From a world-systems angle, global distribution platforms often sit in the core, capturing intermediary rents. Peripheral destinations may rely on these platforms’ agent ecosystems, accepting default rules that shape visibility and pricing power. To counteract dependency, destination management organizations and hotel associations can form regional data cooperatives—pooling anonymized demand signals and hosting local agent stacks that negotiate better terms with core platforms while honoring local norms (sustainability, cultural heritage, resident quality of life).

From Bourdieu’s view, cultural capital (local knowledge, storytelling, language fluency) remains critical. Agents that embed local cultural scripts deliver more symbolic value—experiences perceived as authentic. Firms that translate embodied cultural capital into agent-readable prompts and constraints can differentiate without surrendering identity to generic global models.

Risks: algorithmic homogenization of experiences; symbolic domination where global templates eclipse local voice; unequal access to real-time data between small operators and large chains.

5.3 Technology suppliers and platforms

Technology vendors consolidate economic capital in compute, data, and orchestration. They also build symbolic capital by setting “reference architectures” for safe agents. Institutional isomorphism increases their influence: regulators, consultants, and industry groups circulate the same templates, making vendor designs de facto standards.

Counter-moves for buyers:

  • Request interoperability by default and favor open interfaces.

  • Maintain shadow optionality (second-source models/connectors) to avoid lock-in.

  • Develop internal cultural capital (agent policy engineering, evaluation science) so governance skill does not fully externalize to vendors.


6. Global Political Economy

6.1 Data, compute, and extractive asymmetries

In the agentic economy, three levers define global advantage:

  • Data: access to high-quality, timely, and lawful datasets, including operational telemetry.

  • Compute: ability to train/fine-tune agents, run high-throughput inference, and support robust simulation.

  • Standards: control over formats, safety taxonomies, and audit expectations.

Core actors tend to dominate all three. Peripheral regions risk data and standards dependency, where the cost of deviation from core templates is high. Yet the periphery is not passive. Regional alliances, public procurement policies, and investment in local compute can shift bargaining power. Semi-periphery coalitions—universities, public labs, and industry consortia—can co-develop contextual agents anchored in local languages and regulatory traditions.

6.2 Tourism as a case of uneven development

Tourism showcases value capture asymmetry: core platforms often capture margins through fees and targeted advertising, while peripheral destinations carry environmental and social costs. Agentic AI could exacerbate this if itinerary and ranking agents systematically prioritize high-rent segments. Conversely, policy-aligned agents can encode sustainability constraints (caps on fragile sites, local-vendor quotas) and redirect demand to under-represented communities, turning agentic AI into a tool for equitable distribution.

6.3 Symbolic capital and legitimacy

Legitimacy battles intensify. Who has the authority to declare an agent “responsible”? Certification schemes, audit badges, and conformance rubrics become sources of symbolic capital. In some contexts, symbolic legitimacy may be as decisive as technical performance for winning public contracts or large enterprise deals.


7. Institutional Isomorphism in Practice

7.1 Coercive pressures

Regulatory regimes demand auditability, risk classification, human override, and traceability. Firms converge on similar logging schemas (rationales, tool calls, uncertainty), lifecycle controls (versioning, rollback), and incident reporting.

7.2 Mimetic pressures

Under uncertainty about ROI and safety, organizations imitate perceived leaders’ architectures: policy-reasoner-executor agent stacks; safety filters; tiered autonomy (read-only, propose, execute under limits). This imitation reduces variance but can also stifle local innovation.
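
The tiered-autonomy pattern can be sketched as a simple gate, as below; the tier names and the approval flow are assumptions made for illustration rather than an industry standard.

```python
# Illustrative tiered-autonomy gate: read-only -> propose -> execute under limits.
# Tier names and the approval flow are assumptions made for this sketch.
from enum import Enum

class AutonomyTier(Enum):
    READ_ONLY = 1    # the agent may observe and summarize, never act
    PROPOSE = 2      # the agent drafts actions; a human approves each one
    EXECUTE = 3      # the agent acts directly, but only within preset limits

def dispatch(tier, action, approve, execute, within_limits):
    """Route an agent-proposed action according to its autonomy tier."""
    if tier is AutonomyTier.READ_ONLY:
        return "logged"                       # observation only, nothing is executed
    if tier is AutonomyTier.PROPOSE:
        if approve(action):                   # explicit human sign-off required
            execute(action)
            return "executed"
        return "rejected"
    if tier is AutonomyTier.EXECUTE and within_limits(action):
        execute(action)
        return "executed"
    return "escalated"                        # outside limits: fall back to human review
```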

7.3 Normative pressures

Professional communities (risk, legal, ML safety) circulate standards through conferences, handbooks, and certifications. A new professional habitus forms: shared language about red-teaming, hallucination taxonomies, off-policy evaluation, and systemic risk.


8. A Seven-Layer Governance Model

To align human aims and agent autonomy, we propose a layered model:

  1. Purpose & Scope: define “why agents,” success metrics, and out-of-scope zones (e.g., high-stakes employment decisions).

  2. Policy & Constraints: permissible actions, data access, rate limits, cost caps, and escalation thresholds.

  3. Safety & Assurance: pre-deployment red-teaming; simulation of adversarial inputs; stress tests for distribution shifts.

  4. Observability: standardized logs (prompts, tool calls, outputs), rationales, uncertainty estimates, and replayability for audits.

  5. Control & Intervention: graded autonomy levels (observe → propose → execute); human-in-the-loop checkpoints; emergency stop.

  6. Accountability: explicit role mapping (who approves models, who signs off on policies, who reports incidents); periodic board reporting.

  7. Learning & Adaptation: feedback loops from human overrides; post-incident reviews; scheduled retraining with change-control gates.

This model recognizes that governability is a competitive asset: firms that can prove control without paralyzing innovation accumulate both symbolic and economic capital.
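
For layer 4 in particular, the sketch below shows one way a standardized, replayable trace record might be structured; the field names are assumptions for illustration, not a published schema.

```python
# Illustrative observability record (layer 4): enough context to replay and audit one
# agent decision. Field names are assumptions for this sketch, not a published schema.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentTraceRecord:
    agent_id: str
    policy_version: str        # links the action to the policy that authorized it (layer 2)
    autonomy_tier: str         # observe, propose, or execute (layer 5)
    prompt: str
    tool_calls: list           # ordered tool invocations with their arguments
    output: str
    rationale: str             # the agent's stated reason for acting
    uncertainty: float         # self-reported or calibrated confidence
    human_override: bool       # whether a person intervened (feeds layer 7 learning loops)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def write_trace(record: AgentTraceRecord, sink) -> None:
    """Append one serializable record so auditors can later replay the episode."""
    sink.write(json.dumps(asdict(record)) + "\n")
```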


9. Measuring What Matters

To avoid metric capture, use a balanced scorecard:

  • Operational: cycle time reduction; error rates; cost per task; service-level adherence.

  • Safety & Ethics: incident counts; severity; time-to-containment; fairness/impact audits.

  • Human Factors: override rates; staff satisfaction; skill development; task diversity.

  • Strategic: revenue lift from personalization; market share shifts; partner stickiness; reputation indices.

In tourism specifically, add destination well-being indicators: congestion levels, resident sentiment, heritage conservation metrics, and local-vendor share.
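
One hypothetical way to keep these metric families side by side is a single composite record, as in the sketch below; every field name is an assumption chosen to mirror the indicators listed above.

```python
# Hypothetical balanced-scorecard record: operational, safety, human, strategic, and
# destination well-being indicators kept side by side so that no single metric family
# can quietly become the only thing the organization optimizes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentScorecard:
    # Operational
    cycle_time_reduction_pct: float
    error_rate: float
    cost_per_task: float
    service_level_adherence_pct: float
    # Safety & ethics
    incident_count: int
    mean_time_to_containment_hours: float
    fairness_audit_passed: bool
    # Human factors
    override_rate: float
    staff_satisfaction: float            # e.g., survey score on a 0-10 scale
    # Strategic
    personalization_revenue_lift_pct: float
    # Destination well-being (tourism deployments only)
    congestion_index: Optional[float] = None
    resident_sentiment: Optional[float] = None
    heritage_conservation_score: Optional[float] = None
    local_vendor_share_pct: Optional[float] = None
```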


10. Labor, Skills, and the New Habitus

Agentic AI changes work but rarely eliminates it. The composition of tasks shifts:

  • From repetitive execution to curation, supervision, and synthesis.

  • From siloed functions to cross-field collaboration (ops + data + risk).

  • From tacit knowledge hoarding to codified policy that agents can read.

Organizations should invest in:

  • Cultural capital development: training in agent policy writing, evaluation science, and ethical reasoning.

  • Social capital building: communities of practice for sharing edge cases and governance patterns.

  • Symbolic capital strategies: transparent reporting and credible external assurance.


11. Counterfactuals and Failure Modes

Common pitfalls include:

  • Over-automation: delegating high-ambiguity decisions without robust escalation.

  • Opaque logging: insufficient observability prevents learning and accountability.

  • Lock-in risk: single-vendor designs that restrict adaptability.

  • Homogenization: mimetic isomorphism that ignores local context, especially damaging in tourism and public services.

  • Periphery dependence: outsourcing policy and evaluation to core vendors, eroding local cultural capital.

Mitigations: maintain dual-sourcing, invest in internal evaluation teams, require policy portability, and negotiate data repatriation rights.


12. Outlook and Research Agenda

Three lines of inquiry deserve attention:

  1. Comparative field studies: How do agent governance fields vary across sectors and regions?

  2. Longitudinal labor impacts: Which skills rise or fall; what new forms of precarity or empowerment emerge?

  3. Cross-regional political economy: Under what conditions can semi-periphery coalitions build sustainable, context-sensitive agent ecosystems?

Policy-makers should explore collective standards that preserve room for local innovation—metrics that embrace cultural and environmental goals, not just throughput.


13. Conclusion

Agentic AI is reorganizing power inside firms and across the global economy. Bourdieu’s lens shows a redistribution of capitals and the birth of new fields and habitus; world-systems theory highlights asymmetries in data, compute, and standards; institutional isomorphism explains why firms converge on similar governance structures. The managerial challenge is to govern for difference: build interoperable, audited, and human-aligned agent systems that honor local contexts while achieving global efficiency.

For management, tourism, and technology, the strategic question is no longer whether to adopt agents, but how to do so in ways that expand—not constrict—organizational and societal possibilities. Firms that cultivate governability as a form of capital, invest in cultural and social capital around agent operations, and negotiate fair positions within the global AI world-system will earn both performance and legitimacy in 2025 and beyond.


References / Sources

  • Bain & Company. (2025). Technology Report 2025: The Enterprise Reinvented. Bain Global Technology Practice.

  • Bourdieu, P. (1977). Outline of a Theory of Practice. Cambridge University Press.

  • Bourdieu, P. (1986). The Forms of Capital. In J. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education (pp. 241–258). Greenwood Press.

  • Bourdieu, P. (1990). The Logic of Practice. Stanford University Press.

  • Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

  • Castells, M. (1996). The Rise of the Network Society. Blackwell Publishing.

  • Davenport, T. H., & Ronanki, R. (2018). Artificial Intelligence for the Real World. Harvard Business Review, 96(1), 108–116.

  • DiMaggio, P., & Powell, W. W. (1983). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 48(2), 147–160.

  • Fitsilis, P., Tsoutsa, P., Damasiotis, V., & Kyriatzis, V. (2022). Uncovering Key Trends in Industry 5.0 through Advanced AI Techniques. Technological Forecasting and Social Change, 184, 121969.

  • Floridi, L. (2013). The Ethics of Information. Oxford University Press.

  • Gartner, Inc. (2025). Top Strategic Technology Trends 2025: Agentic AI and Autonomous Systems. Gartner Research.

  • Katsamakas, E., Pavlov, O. V., & Saklad, R. (2023). Artificial Intelligence and the Transformation of Higher Education Institutions. Education and Information Technologies, 28(6), 6421–6440.

  • Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press.

  • March, J. G. (1991). Exploration and Exploitation in Organizational Learning. Organization Science, 2(1), 71–87.

  • March, J. G., & Simon, H. A. (1958). Organizations. John Wiley & Sons.

  • Mazzucato, M. (2013). The Entrepreneurial State: Debunking Public vs. Private Sector Myths. Anthem Press.

  • McKinsey & Company. (2025). Technology Trends Outlook 2025. McKinsey Global Institute Report.

  • Polanyi, K. (1944). The Great Transformation: The Political and Economic Origins of Our Time. Beacon Press.

  • Sekaki, Y., et al. (2021). Artificial Intelligence in Management Studies (2021–2025): A Bibliometric Mapping. Journal of Business Research and Innovation, 12(3), 212–234.

  • Srnicek, N. (2016). Platform Capitalism. Polity Press.

  • Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.

  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.




 
 
 


Open Access License Statement

© The Author(s). This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits unrestricted use, distribution, adaptation, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, and any changes made are indicated.

Unless otherwise stated in a credit line, all images or third-party materials in this article are included under the same Creative Commons license. If any material is excluded from the license and your intended use exceeds what is permitted by statutory regulation, you must obtain permission directly from the copyright holder.

A full copy of this license is available at: Creative Commons Attribution 4.0 International (CC BY 4.0).
