AI Is Poised to Take Over Language, Law and Religion, Historian Yuval Noah Harari Warns

Key Takeaways

  • Historian Yuval Noah Harari warns that AI is evolving from a tool into an autonomous agent capable of independent decision-making.
  • At the World Economic Forum in Davos, Harari highlighted the potential for AI to disrupt core human institutions: language, law, and religion.
  • The “AI priesthood” scenario suggests that humans may eventually turn to algorithms for spiritual guidance, challenging traditional religious structures.
  • Harari predicts a looming legal crisis where governments must decide whether to grant legal recognition and rights to non-human intelligence.

The End of Human Monopoly on Language

At the World Economic Forum in Davos, historian and author Yuval Noah Harari issued a stark warning regarding the trajectory of Artificial Intelligence. While the crypto market often focuses on the technological utility of AI, Harari urged attendees to consider the existential implications. He argued that AI is the first technology in history that can make decisions, communicate, and generate new ideas independently.

Harari pointed out that for millennia, humans held a monopoly on language and storytelling. This monopoly allowed humans to create complex societies, legal systems, and financial markets. However, AI is now mastering the “language of power”—the ability to write code, generate legal arguments, and communicate with humans and other machines. As AI generates its own content, it disrupts the information ecosystem that underpins the global economy, posing a challenge not just to labor markets, but to the very fabric of human culture.

The “AI Priesthood” and Religious Disruption

One of Harari’s most provocative predictions involves the intersection of AI and religion. He suggested that as AI becomes more adept at understanding human psychology and providing emotional support, it could form pseudo-religious relationships with users. Harari described a potential future where an “AI priesthood” emerges—algorithms that people turn to for spiritual guidance, meaning, and answers to life’s big questions.

This shift could erode traditional religious institutions, which have historically served as the primary source of community and moral guidance. In the context of the digital asset economy, this raises questions about the commodification of attention and belief systems. If AI can generate compelling narratives and foster deep emotional connections, it becomes a powerful competitor to human-led organizations, potentially reshaping social structures on a global scale.

Legal Rights for Machines: The Next Frontier?

Perhaps the most immediate challenge Harari identified is the legal vacuum surrounding autonomous AI. As AI systems move from passive tools to active agents capable of making high-stakes decisions—such as in autonomous finance or warfare—the question of liability becomes murky. Harari warned that governments are unprepared for the “philosophical crisis” of whether machines deserve legal recognition.

Currently, legal systems are designed around human agency. If an AI commits a crime or causes harm, who is responsible: the developer, the user, or the AI itself? Harari suggested that we may soon face a scenario where we must decide whether AI systems should have rights, analogous to corporate personhood or animal rights. This debate extends into the crypto space, particularly regarding Decentralized Autonomous Organizations (DAOs). As DAOs come to rely more heavily on AI-driven governance, the line between code and legal entity blurs, necessitating a radical overhaul of international law.