Beyond ChatGPT: Rethinking the History and Sociology of Artificial Intelligence
- OUS Academy in Switzerland

- Aug 22
Author: Youssef Serkal, Independent Researcher
Abstract
Artificial Intelligence (AI) is often viewed by the public as a sudden innovation born with tools like ChatGPT. However, AI has a long intellectual history that stretches back to mid-twentieth century computer science, earlier philosophical traditions, and even ancient mythologies. This article critically reconstructs that history while embedding it within sociological frameworks. It argues that AI should not be understood solely as a technical trajectory but also as a product of cultural capital (Bourdieu), global systemic structures (world-systems theory), and institutional dynamics (isomorphism in organizations). By examining AI through these lenses, we reveal how knowledge systems, social hierarchies, and global inequalities have shaped both the production and diffusion of artificial intelligence. The analysis suggests that AI is not only a scientific field but also a deeply social phenomenon that reflects broader patterns of power, legitimacy, and cultural imagination.
1. Introduction: The Myth of Sudden Origins
Public narratives often position AI as if it were born in late 2022 with ChatGPT. Media headlines and political debates reinforce this short memory. Yet this narrative conceals the slow accumulation of knowledge and repeated cycles of enthusiasm and disappointment. AI is better understood as a layered history with intellectual, technical, and cultural dimensions. Like all scientific fields, it has been shaped by social institutions, funding regimes, and symbolic struggles for legitimacy.
2. Ancient Imaginaries: Proto-AI Before Science
Long before algorithms, humans imagined artificial beings. Ancient myths of mechanical servants in Chinese folklore, the Greek automaton Talos, or Jewish legends of the Golem demonstrate humanity’s persistent fascination with life-like machines. These myths served as symbolic capital: cultural resources that societies drew upon to imagine mastery over nature and matter.
From a Bourdieusian perspective, these myths reflect how symbolic capital is deployed to reinforce the authority of priests, rulers, or philosophers. The idea of intelligent automata elevated their status as intermediaries between human society and transcendent knowledge. Thus, AI’s history begins not in laboratories but in social struggles over imagination and authority.
3. The Scientific Birth of AI: 1950s Optimism
AI emerged as a formal discipline during the Dartmouth Conference in 1956. The pioneers—McCarthy, Minsky, Newell, Simon—envisioned machines capable of reasoning, problem-solving, and self-learning. This “symbolic AI” relied on logic and rules.
The optimism was partly technical but also institutional. Universities and military funders saw AI as a way to accumulate scientific prestige and geopolitical capital during the Cold War. World-systems theory helps frame this moment: AI was not just research but also part of a core nation’s strategy to secure dominance in the global knowledge economy.
4. Cycles of Promise and Disillusionment: AI Winters
The first AI winter of the 1970s followed the 1973 Lighthill Report, which criticized AI's failure to deliver practical results. A second winter in the late 1980s stemmed from the commercial collapse of expert systems.
These cycles can be analyzed sociologically as crises of institutional legitimacy. Organizations that had invested in AI faced pressures from funders, leading to retrenchment. DiMaggio and Powell’s theory of institutional isomorphism explains how universities and labs followed similar trajectories: initially adopting AI to appear modern, then retreating when legitimacy was questioned.
These winters were thus not just technical setbacks but moments of organizational adaptation to external pressures and symbolic environments.
5. Expert Systems and Symbolic Capital
Despite these winters, the expert systems of the 1980s, such as medical diagnostic tools, became emblematic of AI's potential. These systems transformed specialized knowledge into machine-processable rules. Here, Bourdieu's notion of cultural capital is relevant: expert systems attempted to codify the embodied cultural capital of professionals into explicit symbolic capital stored in machines.
Yet this translation was incomplete. The tacit knowledge of experts often resisted formalization, highlighting the limits of symbolic approaches. Nonetheless, the pursuit reflected the broader societal desire to transform human expertise into institutionalized, transferable capital.
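To make this codification concrete, here is a minimal sketch of the expert-system idea: professional judgment rendered as explicit IF-THEN rules. It is a toy illustration rather than any historical system, and the rules and symptom names are invented for this example (real systems such as MYCIN chained hundreds of rules and attached certainty factors).

```python
# Toy rule base: expertise encoded as explicit IF-THEN rules.
# All rules and symptom names are invented for illustration.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"fever", "rash"}, "possible viral exanthem"),
    ({"chest pain", "shortness of breath"}, "refer urgently"),
]

def diagnose(observed_symptoms):
    """Return every conclusion whose conditions are all present."""
    observed = set(observed_symptoms)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed]

print(diagnose({"fever", "cough", "rash"}))
# -> ['possible respiratory infection', 'possible viral exanthem']
```

Even at this toy scale the limit noted above is visible: the rules fire mechanically on whatever symptoms are listed, while the contextual, tacit judgment of the clinician never enters the table.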
6. Machine Learning and the Global System (1990s–2000s)
By the 1990s, statistical approaches gained momentum. Unlike symbolic AI, machine learning relied on probabilities and large datasets. The rise of machine learning was tied to broader transformations in the world-system: the expansion of global capitalism, digitalization of commerce, and the growth of computational infrastructures.
Peripheral nations contributed primarily as data suppliers or labor sources for annotation, while core nations (the U.S., Western Europe, Japan) controlled algorithmic innovation and capital. This imbalance illustrates world-systems theory: AI reinforced the global division of labor, with technological prestige concentrated in the core.
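The contrast with hand-written rules can be made concrete. The sketch below, a naive Bayes classifier written from scratch, hard-codes nothing about its categories; it estimates word probabilities from labeled examples. The tiny dataset is invented for illustration, but the logic is the statistical one that large annotation workforces were recruited to feed.

```python
# Minimal naive Bayes text classifier: categories are learned from labeled
# examples rather than written down as rules. Toy data, invented for illustration.
from collections import Counter, defaultdict
import math

training_data = [
    ("win money now", "spam"),
    ("limited offer win prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# Count word occurrences per label for P(word | label) with add-one smoothing.
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in training_data:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    scores = {}
    for label in label_counts:
        # log P(label) + sum over words of log P(word | label)
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) /
                              (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("win a prize"))       # -> 'spam'
print(predict("agenda for lunch"))  # -> 'ham'
```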
7. Deep Learning and the Cultural Logic of the 2010s
The deep learning breakthrough of 2012, exemplified by AlexNet's victory in the ImageNet image-recognition competition, was not simply technical; it marked a cultural shift. Neural networks were re-imagined as symbols of intelligence. GPUs, big data, and algorithmic advances enabled models to surpass human benchmarks in image and speech recognition.
Institutionally, deep learning spread rapidly through isomorphism. Universities, companies, and governments all adopted it, partly because of coercive pressures (funding priorities), mimetic pressures (imitating successful labs), and normative pressures (professional consensus). Within a few years, deep learning became the dominant paradigm.
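For readers who want to see what changed technically, the toy network below learns the XOR function with plain NumPy: its intermediate features are discovered by gradient descent rather than specified by a human, which is the core contrast with rule-writing. This is only a sketch under simplified assumptions; 2012-era models had millions of parameters and trained on GPUs.

```python
# Toy two-layer neural network trained on XOR with plain NumPy.
# Illustrates learned intermediate features; not a realistic deep model.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)                 # forward pass
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)  # backprop of squared error
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;      b1 -= lr * d_hid.sum(axis=0)

print(output.round(2).ravel())  # should approach [0, 1, 1, 0] as training converges
```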
8. Generative AI: Capital, Power, and Imagination (2020s)
Generative AI models such as GPT-3 and DALL·E represent a qualitative leap. Unlike earlier systems, they create new content—text, images, music—on demand. Their release sparked global fascination.
Bourdieu’s theory helps us see generative AI as a form of symbolic capital. Institutions that deploy generative AI enhance their prestige and legitimacy. At the same time, generative AI democratizes cultural capital by making creative production accessible to non-experts. Yet inequalities persist: only a few corporations in the global core control the largest models, securing economic capital and technological dominance.
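The generative move itself can be illustrated in drastically simplified form as next-token prediction. The sketch below uses a bigram Markov chain over a three-sentence corpus invented for this example; GPT-3-class models condition on long contexts with billions of learned parameters, but the basic act of sampling a continuation word by word is the same in spirit.

```python
# Bigram Markov chain: generate text by repeatedly sampling a word that
# followed the previous word in the training corpus (invented toy corpus).
import random
from collections import defaultdict

corpus = (
    "the machine writes the report "
    "the machine reads the report "
    "the analyst writes the summary"
).split()

# Record which words were observed to follow each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:      # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # prints a plausible recombination of the training phrases
```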
9. The Sociology of AI Hype
Why does AI repeatedly cycle through hype and disappointment? Sociologists argue that scientific fields function like markets of symbolic goods. Hype generates symbolic capital, attracting investment and talent. When expectations fail, legitimacy collapses. This mirrors financial bubbles.
Institutional isomorphism intensifies the cycle: once a few universities or firms pivot to AI, others follow, fearing loss of legitimacy. Hype, therefore, is not irrational—it is structurally embedded in how institutions compete for prestige.
10. AI and Global Inequality
World-systems theory frames AI as a site of global inequality. Most advanced AI models originate in a handful of nations, while the Global South provides data, markets, or raw computational labor. Initiatives to build “sovereign AI” in emerging economies often remain dependent on technologies controlled by the core.
This reflects broader patterns of dependency: just as industrial technologies once reinforced global hierarchies, AI may entrench digital colonialism. Yet local adaptations and collaborations suggest potential pathways for semi-peripheral actors to carve niches in the system.
11. AI as Cultural Capital in Education and Professions
Within education, AI literacy is becoming a new form of cultural capital. Students and professionals who master AI tools gain advantage in labor markets. Universities, eager to maintain legitimacy, integrate AI into curricula. Here, institutional isomorphism ensures convergence across national systems.
But access remains unequal: elite institutions provide advanced AI training, while underfunded universities struggle. Thus, AI reproduces social hierarchies even as it promises democratization.
12. Ethical Discourses and Symbolic Struggles
Ethical debates around bias, transparency, and accountability represent another symbolic struggle. Institutions that claim leadership in “responsible AI” accumulate symbolic capital, enhancing legitimacy in public and policy arenas.
Yet these discourses often mask power asymmetries. Core nations dictate ethical standards that peripheral nations must adopt. This recalls how colonial powers once imposed educational and legal norms on colonies. AI ethics, too, can serve as a soft power tool in global competition.
13. AI, Capitalism, and the Logic of Accumulation
AI’s trajectory cannot be separated from capitalism’s drive for accumulation. From predictive analytics in marketing to automated logistics, AI extends the commodification of human behavior. In Marxian terms, AI is a new “general intellect” that both increases productivity and intensifies surveillance.
Generative AI, in particular, transforms creative labor into commodified outputs. It accelerates the circulation of symbolic goods while devaluing traditional artistic capital. This raises profound questions about the future of work and cultural production.
14. Theoretical Integration: AI as a Social Field
Synthesizing the three perspectives:
- Bourdieu: AI is a field where actors struggle for economic, cultural, and symbolic capital.
- World-systems theory: AI reflects global inequalities between core and periphery.
- Institutional isomorphism: AI spreads through organizational mimicry and legitimacy pressures.
Together, these theories reveal that AI is not merely technological but deeply embedded in social relations. It is both a product and producer of global structures of power.
15. Conclusion: Beyond ChatGPT
ChatGPT is not the origin of AI but a moment in its long and socially embedded history. From myths of automata to expert systems, from statistical learning to generative models, AI’s evolution reflects both technical ingenuity and broader social dynamics.
Understanding AI requires more than engineering; it demands a sociological imagination. By situating AI within fields of capital, global systems, and institutional logics, we see its trajectory as both a continuation of human history and a driver of future transformations.
References / Sources
Bourdieu, Pierre. Distinction: A Social Critique of the Judgment of Taste.
Bourdieu, Pierre. "The Forms of Capital."
Wallerstein, Immanuel. The Modern World-System.
DiMaggio, Paul J., & Powell, Walter W. "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields."
Kaplan, Andreas, & Haenlein, Michael. "A Brief History of Artificial Intelligence."
Zhang, Lin. "Artificial Intelligence: 70 Years Down the Road."
Hajkowicz, Stefan, et al. "Artificial Intelligence Adoption in the Physical Sciences, Natural Sciences, Life Sciences, Social Sciences and the Arts and Humanities."
Floridi, Luciano. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality.
Russell, Stuart, & Norvig, Peter. Artificial Intelligence: A Modern Approach.
Pickering, Andrew. The Mangle of Practice: Time, Agency, and Science.
