
Generative AI in Student Support as a 2025 Turning Point: A Critical Sociology of Adoption, Equity, and Institutional Change


Author: Alex Morgan

Affiliation: Independent Researcher


Abstract

The beginning of the 2025–26 academic year has intensified a worldwide trend: universities are rapidly adopting generative AI (GenAI) to extend student support—advising, tutoring, administrative assistance, and early alert services. This article offers a critical sociology of this trend, analyzing how GenAI mediates relations among students, staff, and institutions. Using Pierre Bourdieu’s concept of capital, world-systems theory, and institutional isomorphism, the paper explains why adoption is accelerating, what forms of advantage and exclusion may be produced, and how organizational fields are converging on shared models of “AI-enabled support.” The argument is interdisciplinary yet written in accessible language: while GenAI offers the promise of scalable, 24/7 help and personalized guidance, its deployment also redistributes cultural, social, and technological capital; reconfigures academic labor; and risks deepening center–periphery inequalities in the global system. To align GenAI with educational values, the paper proposes governance principles, impact evaluation routines, and hybrid human-AI service designs that preserve care, fairness, and academic integrity. The contribution is a conceptual framework and set of practical guardrails suitable for policy, management, and faculty decision-making during this pivotal moment.


Keywords: generative AI; higher education; student support; advising; equity; institutional isomorphism; Bourdieu; world-systems; governance; digital inequality.


1. Introduction: Why This Moment Matters

September 2025 marks a new intensity in GenAI adoption across higher education. As the academic year opens, institutions face a familiar set of pressures—rising student expectations for immediate answers, financial constraints on staffing, the need for continuous online services, and an increasingly competitive global market for enrollments. GenAI, especially large language model (LLM)–based assistants, appears to square the circle: it promises real-time responses, broad coverage across subjects and services, and the potential to triage routine queries so human staff can focus on complex cases.

Yet a critical lens is required now more than ever. The promise of efficiency can divert attention from deeper sociological questions: Whose capacities are enhanced, and whose are diminished? What forms of student and institutional capital are being accumulated or devalued? Which actors in the global higher-education system stand to benefit, and which risk dependency on external platforms? Why do universities, often quite different in mission and context, move in similar directions at the same time?

To address these questions, this article synthesizes theory and practice. Section 2 establishes the theoretical framework. Section 3 reviews the current landscape of GenAI in student support. Section 4 interrogates equity using Bourdieu’s forms of capital. Section 5 situates adoption in the core–periphery dynamics of world-systems theory. Section 6 explains why organizational fields converge through institutional isomorphism. Section 7 analyzes consequences for academic labor and student agency. Section 8 proposes governance, evaluation, and design principles, and Section 9 addresses ethics and law in practice. Sections 10 and 11 sketch institutional typologies and future scenarios, and the article closes with a practical checklist (Section 12) and conclusions (Section 13).


2. Theoretical Framework

2.1 Bourdieu’s Capital in a Digital University

Bourdieu’s concepts of economic, cultural, social, and symbolic capital help diagnose how GenAI reorders advantage.

  • Economic capital shapes access to computational infrastructure, paid tools, and staff with AI literacy.

  • Cultural capital (embodied dispositions, competencies, and credentials) shapes students’ ability to prompt well, interpret AI output, and integrate feedback into coursework.

  • Social capital (networks and relationships) influences who learns about AI-enabled services early and who gets help in making them work.

  • Symbolic capital determines legitimacy: when elite universities brand an AI assistant as “trusted,” it accrues reputational value that legitimizes similar moves elsewhere.

GenAI can convert one capital into another. For example, institutions with economic capital can invest in safe deployments and faculty training (cultural capital), which then convert into symbolic capital (“innovative, student-centered”). Conversely, students lacking cultural capital may misapply AI guidance, yielding lower learning gains even when tools are available.

2.2 World-Systems Theory: Core, Semi-Periphery, Periphery

World-systems theory, associated with Wallerstein, frames higher education as part of a global system structured by uneven development. In this system:

  • Core institutions (highly resourced, often in wealthy countries) set norms and produce standards;

  • Semiperipheral institutions adopt and translate those standards while aspiring to core status;

  • Peripheral institutions often depend on external platforms without negotiating power.

GenAI’s technical stack and governance regimes are largely shaped in the core. The diffusion of those models—licensing agreements, privacy frameworks, proprietary APIs—can create new dependencies. If core institutions secure favorable terms and robust safeguards, while peripheral institutions accept default settings or “black-box” models, global inequalities may widen even as advertised “access” improves.

2.3 Institutional Isomorphism

DiMaggio and Powell’s concept of coercive, mimetic, and normative isomorphism explains why universities converge on similar AI strategies:

  • Coercive pressures include regulation, funding conditions, and accountability metrics demanding “innovation” and “efficiency.”

  • Mimetic pressures arise under uncertainty; institutions copy early adopters, especially prestigious ones.

  • Normative pressures include professional communities (IT associations, advising networks, quality agencies) that consolidate a “right way” to deploy AI.

Together, these forces normalize the idea that “student support must be AI-enabled,” even if local evidence of effectiveness is still emerging.


3. Anatomy of the Trend: GenAI in Student Support

3.1 Service Domains

  1. Academic Advising: AI triages common questions (deadlines, course eligibility, degree maps) and drafts plan options, leaving complex cases to humans.

  2. Tutoring and Feedback: LLMs provide formative feedback on drafts, problem-solving hints, and step-by-step explanations.

  3. Administrative Helpdesks: Virtual assistants answer queries on fees, housing, visas, and campus logistics.

  4. Early Alerts and Nudges: AI analyzes engagement signals (LMS clicks, assignment patterns) to suggest timely outreach; a minimal flagging rule is sketched after this list.

  5. Well-Being Gateways: Some institutions deploy AI as first-line, low-stakes check-ins that escalate to human counselors when risk indicators appear.
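
To make the early-alert pattern concrete, the short Python sketch below illustrates one plausible flagging rule: a student is flagged when recent LMS activity drops sharply below their own baseline, or when assignments are repeatedly missed. The signals, thresholds, and function name are illustrative assumptions, not a description of any deployed product, and any real rule would require validation against outcomes and bias review (Section 8).

    def needs_outreach(weekly_logins: list[int], missed_assignments: int) -> bool:
        # Require enough history to form a personal baseline.
        if len(weekly_logins) < 4:
            return False
        baseline = sum(weekly_logins[:-1]) / (len(weekly_logins) - 1)
        # Flag a sharp drop against the student's own baseline, or repeated missed work.
        return weekly_logins[-1] < 0.5 * baseline or missed_assignments >= 2

    print(needs_outreach([12, 10, 11, 3], missed_assignments=0))  # True: activity drop
    print(needs_outreach([5, 6, 5, 5], missed_assignments=0))     # False: stable pattern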

3.2 Promises and Trade-offs

  • Speed and Scale: 24/7 response coverage reduces wait times; however, faster answers can normalize atomized interactions and reduce relationship-building.

  • Personalization: Tailored feedback and guidance can improve persistence; however, personalization depends on data depth, raising privacy concerns.

  • Efficiency: Routine tasks shift from humans to machines; however, time “saved” is not always reinvested in high-touch care, and cost-cutting incentives can erode service quality.

3.3 Infrastructure and Integration

A practical challenge is orchestration: aligning LLMs with student information systems, learning management systems, and identity providers. Institutions must govern data flows, manage prompt libraries, and implement guardrails (e.g., allowed domains of advice). The trend this season is toward hybrid architectures: small, local models for sensitive tasks; larger external models for general language tasks; and retrieval-augmented generation for policy-consistent answers.
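
As a concrete illustration of this hybrid pattern, the Python sketch below routes sensitive queries to a stand-in for a locally hosted model, grounds policy questions in retrieved institutional text (the search step of retrieval-augmented generation), and sends general queries to a stand-in for an external model. The term lists, policy snippets, and model functions are hypothetical placeholders, not a reference implementation.

    SENSITIVE_TERMS = {"visa", "disability", "medical", "counseling"}

    POLICY_SNIPPETS = {  # stands in for a retrieval index over institutional policy
        "extension": "Policy 4.2: extensions of up to five days need coordinator approval.",
        "withdrawal": "Policy 7.1: withdrawal without penalty is possible before week eight.",
    }

    def local_model(query: str) -> str:
        # Placeholder for a small, locally hosted model; sensitive data stays on-premises.
        return f"[local model] {query}"

    def external_model(query: str, context: list[str]) -> str:
        # Placeholder for an external LLM call, grounded in retrieved policy text.
        grounding = " | ".join(context) if context else "no policy context"
        return f"[external model | {grounding}] {query}"

    def retrieve(query: str) -> list[str]:
        # Naive keyword lookup standing in for the retrieval step of RAG.
        q = query.lower()
        return [text for key, text in POLICY_SNIPPETS.items() if key in q]

    def answer(query: str) -> str:
        if any(term in query.lower() for term in SENSITIVE_TERMS):
            return local_model(query)  # sensitive data never leaves the institution
        return external_model(query, retrieve(query))  # general query, grounded when possible

    print(answer("Can I get an extension on my essay?"))

In production, the keyword lookup would be replaced by a vector index over current policy documents, and the placeholder functions by governed model endpoints.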


4. Equity Through Bourdieu: Capital, Conversion, and Closure

4.1 Cultural Capital and Prompt Literacy

The same tool can amplify inequalities if students differ in prompt literacy—the capacity to ask good questions, structure tasks, and critically appraise output. Students with strong cultural capital are likelier to use AI feedback as a scaffold rather than a substitute, gaining deeper understanding. Those without may accept surface fluency as mastery, risking assessment misalignment.

Implication: Universities must treat prompt literacy as a teachable academic skill, with explicit instruction and formative practice rather than assuming “digital natives” already possess it.

4.2 Social Capital and Invisible Gatekeeping

Access to “better” AI configurations often flows through informal networks—friends who share prompt templates, lab groups with custom tools, or mentors who explain constraints. Such social capital can become tacit gatekeeping, producing inequalities even where formal access is universal.

Implication: Institutions should publicize exemplars, host open prompt libraries, and ensure staff mentoring is widely available.

4.3 Economic Capital and the Cost of Confidence

Paid tiers, premium compute, or licensed add-ons can improve accuracy, safety, and integration—but they require economic capital. The risk is capital closure, where only certain students or programs benefit from higher-quality AI services, reproducing advantage.

Implication: If AI is integral to student support, essential capabilities must be provided as public goods within the university, not optional extras borne by students.

4.4 Symbolic Capital: Legitimacy and the “Trusted Assistant”

When prestigious institutions confer symbolic capital on an AI—branding it “trusted,” featuring leaders in media—the tool gains legitimacy that accelerates field-wide adoption. Yet symbolic capital can mask unresolved issues (bias, hallucinations, over-delegation). Legitimacy must be earned through evidence, not marketing.

Implication: Institutions should couple deployment with transparent evaluation and publish criteria for “trusted” status.


5. World-Systems Perspective: Platform Power and Peripheral Dependency

5.1 Standard-Setting at the Core

Core institutions influence norms by publishing toolkits and shaping procurement templates. Vendors prioritize integrations requested by the core, setting default choices for others. These standards become de facto global baselines for “responsible AI” in support services.

5.2 Semi-Periphery as Translators

Semiperipheral institutions often pilot core-designed systems under local constraints—limited bandwidth, multilingual contexts, different academic calendars. They translate standards into workable routines, but with less bargaining power over pricing and features.

5.3 Peripheral Constraints and the Risk of Lock-In

Peripheral institutions may rely on black-box systems with minimal customization. Data sovereignty concerns and the lack of locally hosted models intensify dependency. A global “support divide” emerges: universal access to some form of AI, but uneven access to adaptable, transparent, and context-aware AI.

Policy Response: Promote open standards, regional model hubs, and data-locality options; fund shared evaluation labs in the semi-periphery; encourage knowledge transfer from core to periphery without extracting rents.


6. Institutional Isomorphism: Why Everyone Moves Together

6.1 Coercive Isomorphism

Funding agencies and ministries increasingly expect “innovation,” digital performance indicators, and scalability. Procurement and quality audits can pressure institutions to adopt AI to demonstrate “efficiency” and “student responsiveness.”

6.2 Mimetic Isomorphism

Under uncertainty about best practices, universities copy prestigious peers: “If they have an AI advising assistant, we should too.” Over time, the field converges on similar tool stacks, interface patterns, and risk frameworks.

6.3 Normative Isomorphism

Professional bodies, conferences, and standards codify “responsible use.” Training programs for advisors and IT staff diffuse shared vocabularies and playbooks. Norms stabilize even before long-term evidence accumulates, which can obscure context-specific considerations.

Guardrail: Maintain space for pluralism—sandbox multiple designs, compare outcomes, and publish negative results to prevent premature closure of the design space.


7. Consequences for Academic Labor and Student Agency

7.1 Task Recomposition, Not Simple Replacement

Academic work decomposes into micro-tasks: drafting emails, generating study plans, providing first-pass feedback. AI performs some of these, while staff curate, verify, and escalate complex cases. New roles emerge (prompt librarians, AI stewards), while traditional roles shift toward oversight and care work.

7.2 Care, Empathy, and the Limits of Automation

Student support is not merely information delivery; it is relational. AI can reduce waiting and repetition, but empathy, ethical judgment, and pastoral care remain human strengths. Institutions that offload too much to machines risk degrading the hidden curriculum of care that underpins belonging and persistence.

7.3 Student Agency and Assessment Integrity

When AI assists with planning and feedback, assessments must evaluate process transparency as well as outcomes. Students should learn to disclose AI assistance appropriately and reflect on how feedback changed their work. The goal is augmented agency, not passive consumption.


8. Governance, Evaluation, and Design for Fairness

8.1 Governance Principles

  1. Purpose Limitation: Define what the assistant may and may not do (e.g., course information and study-skills guidance, but no medical or legal advice).

  2. Transparency: Clearly indicate when students interact with AI; publish data flows and retention policies.

  3. Human-in-the-Loop: Require human review for high-stakes decisions and sensitive contexts.

  4. Equity by Design: Budget for universal access to core capabilities; monitor usage disparities.

  5. Contestability: Provide channels for students to challenge AI outputs or data inferences.

  6. Open Documentation: Maintain a living model card for the institution’s AI services, including updates, evaluation results, and known limitations.

8.2 Evaluation Routines

  • Effectiveness: Track response times, resolution rates, and learning outcomes (e.g., improvement in draft quality across iterations).

  • Fairness: Compare outcomes by program, language background, disability status, and other protected characteristics, with privacy safeguards; a minimal disparity check is sketched after this list.

  • Safety: Log hallucination rates, escalation frequencies, and categories of errors; use red-teaming and scenario testing.

  • Usability: Collect student and staff feedback; run think-aloud studies; analyze drop-offs.

  • Cost Realism: Measure total cost of ownership (compute, integration, monitoring, training) vs. benefits; reinvest efficiencies into high-touch support.
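
As a minimal sketch of the fairness routine above, the following Python fragment computes resolution rates per student group from anonymized interaction records and reports the largest between-group gap, suppressing any group below a minimum cell size as a basic privacy safeguard. The record format, group labels, and threshold are assumptions chosen for illustration.

    from collections import defaultdict

    MIN_CELL = 30  # suppress groups below this size to limit re-identification risk

    records = [  # stands in for anonymized helpdesk logs
        {"group": "first_language_english", "resolved": True},
        {"group": "first_language_other", "resolved": False},
    ]

    def resolution_rates(records):
        counts = defaultdict(lambda: [0, 0])  # group -> [resolved, total]
        for r in records:
            counts[r["group"]][1] += 1
            counts[r["group"]][0] += r["resolved"]
        # Privacy safeguard: report only groups meeting the minimum cell size.
        return {g: res / tot for g, (res, tot) in counts.items() if tot >= MIN_CELL}

    rates = resolution_rates(records)
    if rates:  # with real volumes, report the largest between-group gap
        print(f"largest resolution-rate gap: {max(rates.values()) - min(rates.values()):.1%}")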

8.3 Design Patterns

  • Hybrid Triage: AI answers routine queries and routes ambiguous or emotional cases to humans (see the routing sketch after this list).

  • Context-Bound Answers: Retrieval-augmented generation restricted to current institutional policies; avoid open-domain “best guesses.”

  • Prompt Scaffolds: Built-in “templates” teach students how to ask for help responsibly (e.g., “explain your current understanding and where you are stuck”).

  • Learning Mode vs. Answer Mode: Distinct modes set different expectations, separating open-ended exploration from official institutional guidance.

  • Accessible Multilingual Support: Prioritize accessibility features and multilingual corpora; avoid second-class experiences for non-dominant languages.
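
A minimal sketch of the hybrid-triage pattern appears below, under the simplifying assumption that routine intents and distress signals can be keyword-matched; a real deployment would use trained classifiers and human review. The router answers only when exactly one routine intent is recognized and escalates anything ambiguous or emotionally loaded, reflecting the human-in-the-loop principle of Section 8.1.

    DISTRESS_SIGNALS = {"overwhelmed", "hopeless", "panic", "crisis"}
    ROUTINE_INTENTS = {"deadline", "fee", "timetable", "housing", "enrolment"}

    def triage(query: str) -> str:
        q = query.lower()
        if any(signal in q for signal in DISTRESS_SIGNALS):
            return "escalate_to_human"        # well-being signals are never automated
        matched = [i for i in ROUTINE_INTENTS if i in q]
        if len(matched) == 1:
            return f"ai_answer:{matched[0]}"  # one clear routine intent: AI may answer
        return "escalate_to_human"            # ambiguous or unrecognized: a person decides

    print(triage("What is the housing office email?"))  # ai_answer:housing
    print(triage("When is the fee payment deadline?"))  # escalates: two intents matched

The deliberately conservative default, escalating whenever the system is unsure, trades some efficiency for the relational care discussed in Section 7.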


9. Ethics and Law in Practice

9.1 Privacy and Data Minimization

Student support often involves sensitive data. Collect only what is necessary; default to short retention; anonymize logs used for quality improvement; and avoid secondary uses without explicit consent.

9.2 Bias and Representation

Bias audits should include domain-specific checks (e.g., advising bias across first-gen students), not just generic benchmarks. Engage diverse stakeholders in defining “error severity” and acceptable risk.

9.3 Academic Integrity

Policies should recognize responsible assistance (e.g., brainstorming, outline feedback) while prohibiting delegated authorship. Provide clear disclosure templates to normalize transparent use.


10. Regional and Institutional Typologies

  • Research-Intensive Core Universities: Likely to build bespoke systems, integrate with data lakes, and lead on governance frameworks; may pilot small local models for privacy-sensitive tasks.

  • Teaching-Focused Institutions: Prioritize advising and tutoring at scale; rely on platform partners; focus governance on procurement checklists and staff development.

  • Open and Distance Providers: Emphasize 24/7 multilingual support, adaptive pacing, and nudges; invest in analytics-informed coaching.

  • Emerging Universities (Peripheral/Semi-Periphery): Seek low-cost, high-impact solutions; face bandwidth and localization constraints; benefit from open standards and regional collaborations.


11. Scenarios: Where the Field Might Head Next

  1. Human-Centered Augmentation (Preferred): AI handles routine tasks, while staff redeploy time to mentoring and community-building; equity gaps narrow through proactive support.

  2. Automation Drift: Efficiency imperative dominates; staff reductions outpace reinvestment; relational support erodes; inequities deepen.

  3. Platform Lock-In: Proprietary ecosystems dictate terms; innovation slows; peripheral institutions become passive consumers of updates.

  4. Open Standards and Shared Evaluation: Consortia produce evidence-based patterns; multiple vendors and local models interoperate; institutions retain strategic autonomy.


12. Practical Checklist for Leaders

  • Set Purpose: Define student support outcomes (access, persistence, well-being).

  • Map Risks: Identify privacy, bias, and integrity risks for each use case.

  • Choose Architecture: Combine local models for sensitive data with external LLMs for general language tasks; use retrieval for policy accuracy.

  • Train People: Build AI literacy for staff and prompt literacy for students as a core academic skill.

  • Measure What Matters: Evaluate learning gains, equity of access, student satisfaction, and total cost of ownership.

  • Publish and Improve: Release evaluation summaries; invite peer review; iterate on guardrails and pedagogy.


13. Conclusion: A Turning Point That Demands Judgment

This September, GenAI is becoming a normal part of student support. That normalization is not value-neutral: it shifts how help is requested, how advice is given, and how academic communities care for their members. Using Bourdieu’s capital, we see how advantages cluster unless deliberate measures cultivate prompt literacy, shared resources, and mentoring. Using world-systems theory, we recognize how platform power can widen global gaps without regional capacity-building. Using institutional isomorphism, we understand why convergence is happening—and why local evidence and plural experimentation are essential to avoid premature lock-in.

The question is not whether GenAI will be used, but how. Institutions have agency: to design hybrid services that elevate human care, to measure equity rather than assume it, and to insist that trust is earned through transparent, ongoing evaluation. If the sector treats this moment as an opportunity to align technology with educational purpose, the promise of broader access and better support can become real—without sacrificing the relationships and values that make education transformative.



References / Sources

  • Bourdieu, P. (1986). “The Forms of Capital.” In J. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education.

  • Bourdieu, P., & Passeron, J.-C. (1977). Reproduction in Education, Society and Culture.

  • DiMaggio, P., & Powell, W. (1983). “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review.

  • Wallerstein, I. (1974–1989). The Modern World-System (Vols. 1–3).

  • Zuboff, S. (2019). The Age of Surveillance Capitalism.

  • Selwyn, N. (2016). Education and Technology: Key Issues and Debates.

  • Williamson, B. (2017). Big Data in Education: The Digital Future of Learning, Policy and Practice.

  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.

  • Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism.

  • Susskind, R., & Susskind, D. (2015). The Future of the Professions.

  • OECD (2025). Trends Shaping Education 2025.

  • UNESCO (2021). AI in Education: Guidance for Policy-Makers.

  • Floridi, L., et al. (2018). “AI4People—An Ethical Framework for a Good AI Society.” Minds and Machines.

  • Luckin, R. (2017). Machine Learning and Human Intelligence: The Future of Education for the 21st Century.

  • Biesta, G. (2010). Good Education in an Age of Measurement: Ethics, Politics, Democracy.

  • Freire, P. (1970). Pedagogy of the Oppressed.

  • Goffman, E. (1959). The Presentation of Self in Everyday Life.

  • Scott, W. R. (2014). Institutions and Organizations: Ideas, Interests, and Identities.

  • Marginson, S. (2016). The Dream Is Over: The Crisis of Clark Kerr’s California Idea of Higher Education.

  • Williamson, B., & Hogan, A. (2020). “Commercialisation and Privatisation in/of Education in the Context of Covid-19.” Education International Research.

 
 
 


This article is licensed under CC BY 4.0.
