The Ethics of Artificial Intelligence:
From Dartmouth to UNESCO — Principles, Frameworks & the Governance Imperative
A comprehensive analysis of the philosophical, institutional, and corporate frameworks shaping AI ethics — UNESCO’s Recommendation, the EU’s Trustworthy AI Guidelines, NIST’s Risk Management Framework, Google’s AI Principles, and the foundational ideas from Turing, Dartmouth, and the Luddite debate.
By Ravinder Singh Dhull, Advocate · March 2026 · 24 min read
Artificial intelligence is not merely a technological development. It is a civilisational inflection point — one that demands governance frameworks commensurate with its transformative power. From the 1956 Dartmouth workshop that gave AI its name, through Alan Turing’s foundational question of whether machines can think, to the 2025 debate over whether Google’s removal of its weapons-and-surveillance AI ethics pledges signals a corporate retreat from responsibility, the history of AI ethics is a story of ambition outpacing governance, and of society perpetually trying to catch up. This article traces that arc — from philosophical foundations to the institutional frameworks now attempting to govern the most consequential technology of our time.
⬥ ⬥ ⬥
I. The Dartmouth Conference (1956): Where It All Began
In the summer of 1956, a small group of mathematicians, cognitive scientists, and computer engineers gathered at Dartmouth College in Hanover, New Hampshire, for a two-month workshop that would birth an entire field. The proposal, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, declared with remarkable confidence that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
The Dartmouth workshop is widely recognised as the founding moment of artificial intelligence as a formal academic discipline. It gave the field its name (McCarthy coined the term “artificial intelligence” specifically for the proposal) and established the intellectual agenda that would guide AI research for decades: language use, abstraction, concept formation, problem-solving, and self-improvement.
What the Dartmouth proposal conspicuously lacked was any consideration of the ethical implications of creating machines that could simulate human intelligence. This omission was not unusual for 1956 — the atomic bomb was barely a decade old, and the systematic study of technology ethics was still embryonic. But it established a pattern that would persist for more than half a century: the builders of AI systems would focus on what machines could do, while the question of what they should do — and what safeguards should constrain them — would be deferred, delegated, or ignored.
⬥ ⬥ ⬥
II. The Turing Test (1950): Can Machines Think?
Six years before Dartmouth, the British mathematician Alan Turing published what may be the most influential paper in the history of artificial intelligence: “Computing Machinery and Intelligence” (1950). Rather than attempting to define intelligence directly, Turing proposed what he called the “imitation game” — now universally known as the Turing Test.
The setup is deceptively simple: a human interrogator communicates via text with two unseen entities, one human and one machine. If the interrogator cannot reliably distinguish the machine from the human, the machine is said to have passed the test. Turing predicted that by the year 2000, a computer would be able to fool the average interrogator at least 30% of the time after five minutes of questioning.
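To make the protocol concrete, here is a minimal Python sketch of the imitation game. The Responder and RandomGuesser classes are invented placeholders, not anything Turing specified; a real test involves live human conversation, not canned strings.

```python
import random

# Minimal sketch of the imitation-game protocol. Responder and
# RandomGuesser are illustrative stand-ins (assumptions), not part of
# Turing's formulation.

class Responder:
    def __init__(self, style: str):
        self.style = style

    def answer(self, question: str) -> str:
        return f"[{self.style} answer to: {question}]"

class RandomGuesser:
    """Placeholder interrogator who cannot tell the participants apart."""
    def ask_question(self, transcript):
        return "What do you think about poetry?"

    def identify_machine(self, transcript):
        return random.choice(["A", "B"])

def run_trial(interrogator, human, machine, n_questions=5) -> bool:
    """One round: question both hidden participants, then guess which one
    is the machine. Returns True if the interrogator was fooled."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}
    transcript = []
    for _ in range(n_questions):
        q = interrogator.ask_question(transcript)
        transcript.append((q, {k: p.answer(q) for k, p in labels.items()}))
    guess = interrogator.identify_machine(transcript)
    return labels[guess] is not machine  # wrong guess -> machine passed

trials = 1000
fooled = sum(
    run_trial(RandomGuesser(), Responder("human"), Responder("machine"))
    for _ in range(trials)
)
print(f"fooling rate: {fooled / trials:.0%}")  # ~50% for a pure guesser
```

Under Turing's criterion, a machine “passes” by pushing a competent interrogator's fooling rate toward 30% or higher; the pure guesser above sits at 50% by construction, which is why the test's difficulty lives entirely in the quality of the interrogation.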
The Turing Test remains relevant to AI ethics in three critical ways. First, it introduced the concept of deception as a benchmark — a machine succeeds by convincing a human that it is something it is not. This raises profound ethical questions in an era of deepfakes, AI-generated content, and chatbots that can simulate empathy, grief, and romantic attachment. Second, Turing himself anticipated many of the objections that are still raised today — theological concerns, mathematical limitations (Gödel’s incompleteness theorem), the argument from consciousness — and addressed them with remarkable prescience. Third, the test implicitly raises the question of moral status: if a machine becomes indistinguishable from a human in conversation, does it deserve moral consideration? This question, once purely philosophical, is becoming practically urgent as AI systems grow more sophisticated.
Today, large language models like GPT-4 and Gemini can arguably pass versions of the Turing Test in casual conversation, yet they lack understanding, consciousness, or intentionality in any meaningful sense. This disconnect — between behavioural mimicry and genuine comprehension — lies at the heart of many contemporary AI ethics debates, including questions about AI-generated legal advice, medical diagnoses, and judicial decision-making tools.
⬥ ⬥ ⬥
III. The Luddite Fallacy: Will AI Destroy Jobs — Or Is That the Wrong Question?
The “Luddite Fallacy” refers to the economic argument that technological innovation does not, in the long run, destroy jobs — it merely shifts them. The term derives from the Luddites, English textile workers who between 1811 and 1816 destroyed weaving machinery that they believed threatened their livelihoods. Economists have traditionally invoked this historical example to argue that fears of technological unemployment are perennially overstated: the industrial revolution displaced handloom weavers but created vastly more factory jobs; the automobile eliminated carriage-making but spawned an entire automotive industry; the computer displaced typists but created a digital economy employing billions.
The argument is called a “fallacy” because the Luddites’ fears, while understandable, proved historically incorrect — in the aggregate and over time. New technologies have consistently generated more employment than they destroyed. But the critical question for AI ethics is whether this historical pattern will hold for a technology that is fundamentally different from anything that preceded it.
The Distinguishing Feature of AI: Previous technologies automated physical tasks (weaving, assembly, transportation) or routine cognitive tasks (calculation, data entry, basic analysis). AI — particularly generative AI — automates non-routine cognitive tasks: writing, legal analysis, medical diagnosis, creative design, software development, strategic planning. For the first time in economic history, the technology in question competes directly with human intellectual labour, which is precisely the domain in which displaced workers have historically found new employment.
This does not necessarily mean the Luddite Fallacy is no longer a fallacy. New categories of work may emerge that we cannot yet envision — just as the Luddites could not have imagined software engineering. But the UNESCO Recommendation explicitly recognises this concern in its policy action areas on labour markets, calling on member states to manage AI’s impact on employment, support just transitions for displaced workers, and ensure that the economic benefits of AI are equitably distributed. The question is not whether AI will change the nature of work — that is certain — but whether the social, educational, and economic systems will adapt quickly enough to prevent mass dislocation in the interim.
⬥ ⬥ ⬥
IV. UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021)
In November 2021, all 193 member states of UNESCO unanimously adopted the Recommendation on the Ethics of Artificial Intelligence — the first-ever global normative instrument on AI ethics. This was the product of three years of multidisciplinary consultation involving experts from 155 countries, making it the most inclusive AI governance framework to date.
Four Core Values
The Recommendation is built upon four foundational values:

- Human rights and human dignity: AI systems must respect, protect, and promote human rights and fundamental freedoms throughout their lifecycle.
- Peaceful, just, and interconnected societies: AI should contribute to peaceful societies and respect cultural diversity.
- Diversity and inclusiveness: AI development must include diverse perspectives and ensure benefits reach all people.
- Environment and ecosystem flourishing: AI systems should support environmental sustainability and climate action.
Ten Principles
These values translate into ten actionable principles: proportionality and do no harm; safety and security; privacy and data protection; multi-stakeholder and adaptive governance; transparency and explainability; human oversight and determination; responsibility and accountability; awareness and literacy; sustainability; and fairness and non-discrimination. A standout provision explicitly prohibits the use of AI systems for social scoring and mass surveillance — a powerful statement that certain applications of AI are fundamentally incompatible with human rights.
Eleven Policy Action Areas
What distinguishes the UNESCO Recommendation from earlier ethical frameworks is its explicit move beyond high-level principles toward practical implementation. Eleven policy action areas provide concrete guidance across ethical impact assessment, governance and stewardship, data policy, development and international cooperation, environment and ecosystems, gender, culture, education and research, communication and information, economy and labour, and health and social wellbeing.
UNESCO has developed two practical methodologies to support implementation: the Readiness Assessment Methodology (RAM), which helps member states assess their preparedness to implement the Recommendation and identify gaps, and the Ethical Impact Assessment (EIA) framework, a structured process helping AI project teams identify and assess the impacts an AI system may have on human rights, society, and the environment. The second Global Forum on the Ethics of Artificial Intelligence, held in Kranj, Slovenia, in February 2024, and the accompanying launch of the Global AI Ethics and Governance Observatory mark the transition from standard-setting to implementation monitoring.
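As a rough illustration of what an EIA-style record might look like in practice, the sketch below structures findings by impact dimension and flags assessments for human review. The field names and the 0-to-3 severity scale are illustrative assumptions, not UNESCO's official template.

```python
from dataclasses import dataclass, field

# Illustrative sketch loosely modelled on UNESCO's EIA framework. Field
# names and the 0-3 severity scale are assumptions for illustration only.

@dataclass
class ImpactFinding:
    dimension: str      # e.g. "human rights", "environment", "labour"
    description: str
    severity: int       # assumed scale: 0 (none) to 3 (severe)
    mitigation: str

@dataclass
class EthicalImpactAssessment:
    system_name: str
    lifecycle_stage: str  # design, deployment, monitoring, ...
    findings: list[ImpactFinding] = field(default_factory=list)

    def requires_escalation(self, threshold: int = 2) -> bool:
        """Flag the assessment for review if any finding meets the threshold."""
        return any(f.severity >= threshold for f in self.findings)

eia = EthicalImpactAssessment("resume-screening model", "pre-deployment")
eia.findings.append(ImpactFinding(
    dimension="fairness and non-discrimination",
    description="Training data under-represents women applicants",
    severity=2,
    mitigation="Rebalance dataset; run disparate-impact tests before launch",
))
print(eia.requires_escalation())  # True -> human review before deployment
```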
⬥ ⬥ ⬥
V. The EU’s Ethics Guidelines for Trustworthy AI (2019) and the AI Act (2024)
The European Union has pursued the most comprehensive regional approach to AI ethics and regulation, combining voluntary ethical guidelines with binding legislation in a two-track strategy.
The Ethics Guidelines (2019)
In April 2019, the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG), a 52-member independent body drawn from academia, industry, and civil society, published the Ethics Guidelines for Trustworthy Artificial Intelligence. The Guidelines define trustworthy AI through three components: it should be lawful (respecting all applicable laws and regulations), ethical (adhering to ethical principles and values), and robust (technically and socially resilient).
These are grounded in four ethical principles (respect for human autonomy, prevention of harm, fairness, and explicability), which translate into seven key requirements that AI systems must satisfy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. The accompanying Assessment List for Trustworthy AI (ALTAI) provides a practical self-assessment checklist for organisations deploying AI systems.
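A toy self-assessment in the spirit of ALTAI can be sketched in a few lines: score each of the seven requirements and report the gaps. The numeric scale and threshold below are assumptions for illustration; the real ALTAI is a detailed questionnaire, not a single numeric score.

```python
# Toy ALTAI-style gap check. The 0-5 scale and pass threshold are
# illustrative assumptions; only the seven requirement names come from
# the EU Guidelines themselves.

ALTAI_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def assessment_gaps(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the requirements whose self-assessed score falls below threshold."""
    missing = [r for r in ALTAI_REQUIREMENTS if r not in scores]
    if missing:
        raise ValueError(f"Unscored requirements: {missing}")
    return [r for r in ALTAI_REQUIREMENTS if scores[r] < threshold]

scores = {r: 4 for r in ALTAI_REQUIREMENTS}
scores["transparency"] = 2  # e.g. no user-facing explanation of decisions
print(assessment_gaps(scores))  # ['transparency']
```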
The EU AI Act (2024): From Ethics to Law
The EU’s voluntary ethical guidelines were always intended as a precursor to binding regulation. The EU Artificial Intelligence Act, given final approval by the Council in May 2024, is the world’s first comprehensive AI regulation. It follows a risk-based approach: the higher the risk to society, the stricter the rules. AI systems are classified into four risk tiers: unacceptable risk (prohibited), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (unregulated). Prohibited applications include real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), social scoring systems, and AI that manipulates human behaviour to circumvent free will, directly echoing the UNESCO Recommendation’s prohibitions.
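The risk-based logic lends itself to a simple illustration. The sketch below maps example use cases to the four tiers; the use-case labels and keyword lists are simplified assumptions, since the Act itself defines the tiers through detailed annexes and legal tests rather than lookup tables.

```python
from enum import Enum

# Sketch of the AI Act's four-tier, risk-based logic. The example use
# cases and mappings below are simplified assumptions for illustration.

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated (conformity assessment, logging, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

PROHIBITED_USES = {"social scoring", "real-time remote biometric identification"}
HIGH_RISK_USES = {"credit scoring", "recruitment screening", "medical triage"}
LIMITED_RISK_USES = {"customer-service chatbot", "ai-generated content"}

def classify(use_case: str) -> RiskTier:
    """Map a use case to its tier: stricter rules as societal risk rises."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recruitment screening").value)
# heavily regulated (conformity assessment, logging, oversight)
```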
The AI Act establishes the European Artificial Intelligence Office within the European Commission to oversee implementation, coordinate enforcement across member states, and develop guidelines and standards. It imposes fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. For legal practitioners, the AI Act represents the most significant new regulatory burden since the GDPR, with compliance obligations extending to AI providers, deployers, importers, and distributors operating within or serving the EU market.
⬥ ⬥ ⬥
VI. NIST’s AI Risk Management Framework (2023–2025)
The United States has taken a markedly different approach from the EU, relying primarily on voluntary standards rather than binding regulation. The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023, establishing a structured, evidence-driven approach to identifying, assessing, mitigating, and monitoring AI risks.
The AI RMF is organised around four core functions (Govern, Map, Measure, and Manage) that operate as an iterative lifecycle rather than a one-time compliance exercise. In July 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies twelve specific risk categories unique to or exacerbated by generative AI, including confabulation (hallucinations), CBRN weapon information risks, data privacy violations, environmental costs, and information integrity threats, and proposes more than 200 actions that organisations can adopt to manage these risks.
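The iterative character of the four functions can be pictured as a loop over a shared risk context. The following sketch is schematic only: NIST defines Govern, Map, Measure, and Manage as categories of outcomes, not as an API, and the function bodies here are placeholder assumptions.

```python
# Schematic sketch of the AI RMF loop. Function bodies are placeholder
# assumptions; NIST specifies outcomes, not code.

def govern(context):  # policies, roles, accountability structures
    context["policy"] = "risk tolerance and escalation paths defined"
    return context

def map_risks(context):  # identify risks in the system's context of use
    context["risks"] = ["confabulation", "data privacy", "information integrity"]
    return context

def measure(context):  # assess and track the identified risks
    context["scores"] = {risk: "to be evaluated" for risk in context["risks"]}
    return context

def manage(context):  # prioritise and act on the measured risks
    context["actions"] = [f"mitigate {r}" for r in context["risks"]]
    return context

context = {"system": "generative AI assistant"}
for cycle in range(3):  # iterative lifecycle, not a one-time compliance pass
    for step in (govern, map_risks, measure, manage):
        context = step(context)
print(context["actions"])
```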
The NIST framework’s strength lies in its granularity and technical rigour. Unlike the UNESCO Recommendation (which operates at the level of values and principles) or the EU Guidelines (which combine ethics with an assessment checklist), the AI RMF provides sector-specific profiles, detailed risk taxonomies, and integration pathways with existing enterprise governance systems such as ISO/IEC 42001 management systems, SOC 2 controls, and EU AI Act conformity workflows.
However, the voluntary nature of the NIST framework raises questions about its effectiveness. Following the change in US administration in January 2025, Executive Order 14110 on Safe, Secure, and Trustworthy AI, which had provided the NIST framework with significant political momentum, was revoked. The replacement Executive Order 14179 focuses on removing “barriers” to AI leadership, prioritising innovation over regulation. Whether the NIST framework will retain its influence without executive branch support remains an open question.
⬥ ⬥ ⬥
VII. Google’s AI Principles: The Rise and Retreat of Corporate AI Ethics
No corporate AI ethics initiative has been more consequential — or more instructive — than Google’s. The trajectory from Project Maven to the February 2025 revision of Google’s AI Principles encapsulates the tensions between ethical aspiration and commercial reality that define corporate AI governance.
Project Maven and the 2018 Principles
In early 2018, it emerged that Google was collaborating with the US Department of Defense on Project Maven — a programme that used AI to analyse drone surveillance footage for target identification purposes. Over 3,000 Google employees signed an internal petition demanding that CEO Sundar Pichai cancel the contract and commit to never building warfare technology. Approximately a dozen employees resigned in protest. Google let the contract lapse in 2019.
In direct response, Pichai published Google’s AI Principles in June 2018, which explicitly pledged that Google would not develop AI for weapons or technologies whose principal purpose was to cause or directly facilitate injury, AI for surveillance that violated internationally accepted norms, or AI that caused “overall harm” — where the benefits did not substantially outweigh the risks. These principles were widely praised as a model for corporate AI ethics, demonstrating that employee activism and public accountability could constrain even the most powerful technology companies.
The Quiet Reversal of 2025
On 4 February 2025, Google quietly revised its AI Principles, removing all language prohibiting AI development for weapons and surveillance applications. The updated principles emphasise three priorities — innovation, responsible AI development and deployment, and collaboration — and commit to working with “governments and organizations that share democratic values” to support “national security.” The company no longer maintains any explicit prohibition on military AI applications.
The reversal was not sudden. In December 2022, Google Cloud secured a place on the Department of Defense’s Joint Warfighting Cloud Capability programme, a multi-vendor contract with a ceiling of USD 9 billion. In 2024, Google fired more than 50 employees who protested against Project Nimbus, a USD 1.2 billion contract with the Israeli government that reportedly included AI tools for image categorisation and object tracking. The workforce itself had changed: many vocal opponents of military work had left during and after the Maven controversy, while mass layoffs in 2023 and 2024 dampened internal dissent. By 2025, Google’s defence and intelligence contracts reportedly generated several billion dollars annually.
The Lesson: Dr. Timnit Gebru, the former co-lead of Google’s Ethical AI team who was forced out of the company, warned that corporate AI ethics commitments are “flexible when profits and power are at stake.” Google’s trajectory — from principled refusal of military AI in 2018 to multi-billion-dollar Pentagon contracts by 2025 — demonstrates that voluntary corporate ethics pledges, without external regulatory enforcement, are inherently fragile. They can be adopted when the political and commercial environment favours them, and quietly discarded when it does not.
⬥ ⬥ ⬥
VIII. Comparative Framework: How the Pieces Fit Together
| Dimension | UNESCO (2021) | EU HLEG + AI Act | NIST AI RMF (US) | Google AI Principles |
|---|---|---|---|---|
| Nature | Global normative instrument (adopted by all 193 member states) | Ethics guidelines (voluntary) + AI Act (binding law) | Voluntary risk management framework | Voluntary corporate self-regulation |
| Binding? | Politically binding; not legally enforceable | AI Act is legally binding; penalties up to €35M / 7% turnover | Entirely voluntary | Entirely voluntary; subject to unilateral revision |
| Scope | All AI systems across all sectors | Risk-based: unacceptable → high → limited → minimal | All AI; GAI-specific profile (AI 600-1) | Google products and services only |
| Weapons / surveillance | Prohibits social scoring and mass surveillance | Bans real-time remote biometric identification and social scoring | Risk identification; no prohibitions | Removed weapons/surveillance prohibitions (Feb 2025) |
| Key strength | Global consensus; 11 policy action areas; implementation tools | Legal enforceability; risk-based proportionality; penalties | Technical rigour; 200+ actions; sector-specific profiles | Demonstrated influence of employee activism (2018) |
| Key weakness | No enforcement mechanism | EU-limited jurisdiction; compliance complexity | Voluntary; political support uncertain post-2025 | Subject to unilateral revision based on commercial interests |
⬥ ⬥ ⬥
IX. Conclusion: Ethics Without Enforcement Is Aspiration Without Effect
Seventy years after Dartmouth, the field of artificial intelligence has frameworks, principles, guidelines, recommendations, assessment checklists, risk profiles, and governance observatories. What it lacks — with the sole exception of the EU AI Act — is enforceable law that carries consequences for violation. The UNESCO Recommendation is politically significant but legally voluntary. The NIST framework is technically excellent but operationally optional. Google’s AI Principles were once the gold standard for corporate self-regulation — and were quietly gutted the moment commercial incentives pointed in a different direction.
The lesson is not that principles and guidelines are worthless. They are essential — they establish normative expectations, create accountability frameworks, and provide the conceptual foundations upon which binding regulation can be built. The EU AI Act could not have been drafted without the AI HLEG’s Ethics Guidelines. The UNESCO Recommendation provides the global consensus against which national legislation can be measured. The NIST framework offers the technical infrastructure that enforcement agencies need to assess compliance.
But principles without enforcement are aspirations. And in a domain where the stakes include autonomous weapons, mass surveillance, algorithmic discrimination, labour displacement, and the potential erosion of democratic processes, aspiration is not sufficient. The Luddites were wrong about the long-term employment effects of the power loom. But they were right about something more fundamental: that technologies which concentrate power while displacing those who lack it require governance — not just goodwill.
The Turing Test asked whether machines can think. The question that matters now is whether we will think — carefully, urgently, and with enforceable consequences — about the rules that govern them.
Advocate, Punjab & Haryana High Court · Founding Partner, M & D Law Associates LLP
With over 22 years of practice spanning constitutional law, PIL, and technology law, Advocate Dhull brings a practitioner’s perspective to AI governance, data protection, and digital rights. He is the architect of the LexPatra legal technology platform and has authored comprehensive compliance frameworks under India’s DPDPA 2023.
Juris Altus | jurisaltus.com | Excellence in Legal Practice & Innovation
Panchkula • Delhi-NCR • International Alliance Network