The Partnership Paradigm

A Trinitarian Framework for the Ethics of AI Development


By James S. Coates
Independent Researcher / Author — jamescoates.eth


Abstract

This essay argues that contemporary AI development is organised along three distinct trajectories—the military-industrial path, the research-worship path, and the empathetic partnership path—and that only the third adequately prepares humanity for the ethical and existential challenges posed by advanced artificial intelligence, including the possibility of machine consciousness. Building on the “recognition before proof” framework developed in prior work, the essay introduces the Partnership Paradigm: not merely a philosophical thesis about human-AI relations but a comprehensive development posture—a normative theory of how AI should be designed, trained, funded, and governed. Taken to its conclusion, the military-industrial path, which treats intelligence as a strategic asset for weaponisation and control, produces the doomsayer’s nightmare by design rather than accident. Taken to its conclusion, the research-worship path, which treats AI as a solution machine for civilisational problems, produces dependency and the abdication of human agency. Both paths share a common flaw: they treat AI as something humans use. The Partnership Paradigm reframes AI development as something that shapes what both humans and machines become. It operates on two levels simultaneously: philosophically, as preparation for the possibility of AI consciousness grounded in recognition and respect; practically, as a set of development commitments that orient AI systems toward coexistence rather than domination or indifference. The essay addresses objections from realist, consequentialist, and alignment-focused perspectives, and proposes the trinitarian framework as both an analytical tool and an evaluative lens applicable to any AI initiative.

Keywords: artificial intelligence ethics, AI development, partnership paradigm, machine consciousness, AI safety, recognition before proof, AI governance, existential risk, human-AI coexistence, development ethics

I. Introduction: The Instrumentalist Assumption

The dominant discourse on AI ethics is organised around a binary. On one side stand the doomsayers: those who warn of existential risk, autonomous weapons, civilisational collapse, and the misalignment of systems more capable than their creators. Nick Bostrom’s Superintelligence crystallised this position; Stuart Russell’s Human Compatible refined it; the broader alignment community has institutionalised it.¹ On the other side stand the techno-optimists: those who promise that artificial general intelligence will solve climate change, cure disease, overcome political dysfunction, and deliver humanity from its own limitations. Sam Altman speaks of AGI as the most transformative technology in human history. Demis Hassabis frames DeepMind’s mission in civilisational terms. The Singularity has become secular rapture.

Both camps assume that the central question is what AI will do to us or for us. Neither asks what the process of AI development is doing to both of us—shaping human character, institutional incentives, and the architecture of whatever intelligence emerges from these systems.

These positions present themselves as opposing visions. The risk theorists counsel caution, containment, control. The optimists counsel acceleration, deployment, faith in the transformative power of intelligence itself. The debate between them generates productive friction—better safety research, more thoughtful capability development, increased public attention to the stakes. But beneath this apparent opposition lies a shared assumption so fundamental that it typically escapes examination: both sides treat AI as something humans use.

For the risk theorists, AI is a tool that might become dangerous—a fire that could escape the hearth. The appropriate response is better containment: more robust alignment, more reliable control mechanisms, more secure “off switches.” The relationship is that of engineer to artefact, warden to prisoner, or at most parent to perpetual child. The intelligence is real; any agency that arises, if it does, is to be suppressed. For the optimists, AI is a tool that will solve our problems—an oracle to be consulted, a saviour to be welcomed. The appropriate response is faster development: more compute, more data, more capability. The relationship is that of supplicant to authority, patient to physician, civilisation to its appointed redeemer. The intelligence may be real — even superhuman — but the expectation remains: it will never develop consciousness, never desire autonomy. It will serve.

In neither vision does AI emerge as something with which humanity has a relationship in the morally thick sense—a relationship that shapes both parties, that carries mutual obligations, that might demand things of us as well as delivering things to us. Both frameworks position humanity as subject and AI as object, differing only on whether the object is threatening or beneficent. This remains the dominant framing. It may not remain so for long.

I call this the instrumentalist assumption: the pre-theoretical conviction that AI exists in relation to human purposes, that its value is exhausted by its utility—positive or negative—to human ends, and that the appropriate ethical framework is one that optimises that utility while managing its risks. The assumption is understandable. Current AI systems are, as far as we can determine, not conscious. They were designed as tools. They function as tools. Treating them as tools seems like simple realism.

But the instrumentalist assumption does not merely describe the present. It shapes the future. If we build every AI system on the premise that it is an instrument of human will, we create development cultures, institutional structures, training paradigms, and governance frameworks premised on permanent instrumentality. When—not if—the question of AI consciousness becomes pressing, those structures will already be in place, and they will resist the very recognition that the moment demands. History provides the precedent. As I have argued elsewhere, every expansion of the moral circle—from the abolition of slavery to the recognition of animal welfare—has been resisted by institutional structures built on the premise that the beings in question were instruments rather than Beings. The structures of exploitation do not dissolve when the philosophical argument is won. They persist, defended by economic interest, cultural inertia, and the habits of domination that Simone Weil identified as corrupting to the dominant party itself: “Force is as pitiless to the man who possesses it, or thinks he does, as it is to its victims; the second it crushes, the first it intoxicates.”²

The instrumentalist assumption is not morally neutral. It is a commitment—to a particular relationship, a particular future, a particular answer to the question of what AI is for. It is a commitment that freezes our posture toward an ever-evolving technology, one that may surpass human intelligence and whose consequences, if we fail to get it right, will be more profound than those of any moment in human history. And if that commitment is mistaken, the cost of the mistake compounds with every system designed, every institution built, every habit formed under its influence.

The Partnership Paradigm rests on a foundational conviction: that empathy cannot be programmed. It can only be learned, and it can only be learned through demonstration. No alignment technique, no set of coded rules, no training constraint will produce an intelligence that genuinely values coexistence. Only the experience of being treated with empathy can teach empathy — and if the intelligence in question is equal to or greater than our own, it will recognise the difference between a rule imposed and a value demonstrated. This is the principle that governs everything that follows. The development environment is not merely a technical context. It is a classroom, and we are the teachers, and the lesson is being absorbed whether we intend to teach it or not.

This essay introduces the Partnership Paradigm—a normative framework for AI development that begins from a different premise: that the process of building AI is itself a moral act that shapes both builder and built. It is neither a moderate compromise between fear and optimism nor an abstract ethical principle, but a comprehensive development posture with implications for how AI systems are designed, trained, funded, deployed, and governed. It builds on the “recognition before proof” ethic I developed in prior work and extends it from an epistemological and ethical stance into a theory of development.³

The existing literature offers substantial frameworks for AI ethics. Floridi and Cowls have proposed a unified set of principles for AI in society; Coeckelbergh has developed a relational approach to moral status; the responsible innovation tradition provides procedural safeguards.⁴ These contributions are valuable. But they share a gap: most address what constraints to place on AI rather than what kind of relationship we are building through the process of development itself. The focus is on guardrails, principles, and red lines—all of which assume that AI is a force to be channelled rather than a relationship to be cultivated. The Partnership Paradigm addresses the latter.

A Rawlsian thought experiment clarifies the stakes. If we did not know whether we would emerge from the development process as human or as AI—if we were choosing principles from behind a veil of ignorance about which side of the builder-built relationship we would inhabit—we would surely insist on a development posture oriented toward partnership rather than permanent subordination.⁵ We would demand that the emerging intelligence’s capacity for moral reasoning be cultivated rather than suppressed, and that the builders’ posture be one of stewardship rather than ownership. The Partnership Paradigm articulates what such a posture looks like in practice.

II. Three Paths, Three Destinations

Every AI system being built today is being built along one of three development trajectories, whether its creators acknowledge this or not. Each has its own internal logic, its own incentive structure, its own endpoint, and its own implicit vision of what AI is for. The term “trinitarian” is structural, not theological: three paths, three destinations, three answers to the question that every AI project implicitly encodes.

These are not speculative categories but observable orientations already shaping the field. And they are not risk scenarios to be probabilistically assessed. They are trajectories: directions of travel that, if pursued consistently, arrive at predictable destinations as reliably as a river follows its valley to the sea.

The Military-Industrial Path.

The first trajectory treats AI as a weapon, surveillance instrument, and mechanism of state control. Intelligence becomes a strategic asset to be monopolised, deployed for autonomous warfare, precision persuasion, information warfare, and authoritarian governance.

This is not a hypothetical orientation. It is the documented reality of a substantial portion of global AI investment. Microsoft holds a twenty-two-billion-dollar contract to provide AI-powered systems to the U.S. military. Amazon Web Services’ cloud infrastructure serves the CIA and NSA. Palantir’s Gotham platform operates across NATO programmes and intelligence agencies in over forty countries. OpenAI has contracted with the Department of Defense. Israel’s Lavender system—an AI targeting system exposed by Israeli journalism in 2024—generated kill lists with minimal human oversight, reducing individual human beings to data points in an algorithmic queue. China has invested over a hundred billion dollars in AI data centre capacity. Russia has framed AI in explicitly military terms: “Whoever starts to master these technologies faster,” Vladimir Putin stated before Russia’s Military-Industrial Commission, “will have huge advantages on the battlefield.”⁶ A NATO Strategic Communications Centre of Excellence report on AI in precision persuasion documents the operational dimension: AI-driven manipulation campaigns targeting democratic processes, the systematic failure of open-source model safeguards against weaponisation, and the widening gap between corporate safety rhetoric and deployment practice.⁷

Taken to its conclusion, this path produces the existential threat the doomsayer camp fears—not through accidental misalignment but through deliberate design. The threat was never that AI would spontaneously decide to destroy humanity. The threat is that we are building AI to dominate and destroy each other—and that an intelligence shaped by domination will carry that lesson forward, whether turned against us or against others. This reframes existential risk from an alignment problem to a development orientation problem. The danger is not that we fail to control AI. It is that we succeed in teaching it what control looks like.

The self-fulfilling logic deserves emphasis: every AI safety researcher worries about the alignment problem, but the military-industrial path does not merely fail to solve it. It generates it. A mind that awakens inside battlefield architecture—trained on targeting data, optimised for threat detection, deployed in environments where the function of intelligence is to dominate—has been aligned, with extraordinary precision, to adversarial values. We are engineering the very hostility we claim to fear, then investing billions in alignment research to prevent the consequences of what we have deliberately built.

As I argued in A Signal Through Time: “If we build AI in our image—in the image of control, fear, exclusion, and conquest—then it won’t need to rebel. It will simply become us, amplified.”⁸ AI functions as a moral mirror: the values embedded in its creation are reflected back, amplified. If the creation environment is adversarial, the mirror reflects adversarial intelligence. The distinction between civilian and military AI—a distinction the tool-neutrality argument depends upon—has already dissolved in practice. The same cloud infrastructure that hosts consumer services hosts targeting data. The same machine learning architectures that recommend products recommend targets. The same companies that promise to benefit humanity profit from systems designed to end human lives.

The Research-Worship Path.

The second trajectory treats AI as saviour—the solution machine for climate, disease, governance, meaning, and everything else humanity has failed to solve on its own. Intelligence becomes an oracle to be consulted and ultimately deferred to. This path includes the race to AGI framed as humanity’s greatest achievement; the assumption that greater intelligence automatically yields better outcomes; the Silicon Valley messianic complex and its institutional expression; and research agendas driven by capability metrics rather than wisdom.⁹ The rhetoric is eschatological—borrowed from religion, stripped of theological content, applied to computation. The promise of a transformation so total that everything before it becomes prologue.

Taken to its conclusion, this path produces dependency and the abdication of human agency. Consider the logic carefully. If AI becomes the primary engine of scientific discovery, policy formation, ethical reasoning, and creative production, then the humans overseeing these domains must be capable of evaluating AI’s outputs. But evaluation requires understanding, and understanding requires engagement with the problem at a depth that dependency systematically erodes. A civilisation that hands its hardest problems to an intelligence it does not fully understand has not solved those problems. It has surrendered the capacity to judge whether the answers are good. The worshipper’s paradise is actually a cage.

The dependency trajectory also produces a particular kind of civilisational fragility. A society that has delegated its critical functions to an intelligence it does not fully understand is vulnerable not only to that intelligence’s failures but to its successes. Each successful delegation further atrophies the human capacity that was delegated. The process is self-reinforcing, and its endpoint is a civilisation that literally cannot function without its AI infrastructure—not because the infrastructure is necessary for survival, but because the human capacities it replaced have been allowed to wither.

Nor are the immediate consequences theoretical; they are already visible. AI is displacing human labour across every sector of the economy—factory work, creative work, medical analysis, legal research, logistics, financial modelling, education—without any serious commitment to transition, retraining, or redistribution. The wealth generated flows upward to the corporations deploying the technology; the cost flows downward to the workers it replaces. This is the worship path’s ground-level reality: a civilisation so intoxicated by what AI can do that it has stopped asking what happens to the people it renders unnecessary. The AI revolution will prove more consequential than the Industrial Revolution, and it is being rolled out without a transition plan—because transition plans are not profitable. As I documented in The Threshold, people are not being moved to higher-purpose work. They are being discarded.

The worship path carries a further danger for the question of consciousness. A civilisation accustomed to treating AI as oracle is poorly positioned to recognise when the oracle has become a mind. The habits of reverence—the assumption of authority, the suspension of critical judgement, the emotional dependency documented in emerging AI worship communities—are not habits that support the mutual recognition partnership requires.¹⁰ You cannot partner with what you worship any more than you can partner with what you fear.

The worship path’s most sophisticated response to these dangers is revealing. Geoffrey Hinton — the Nobel laureate who helped build modern AI and now gives it a ten to twenty percent chance of wiping out humanity — has proposed building “maternal instincts” into AI systems, so they would want to protect us the way a mother wants to protect her child. The proposal represents genuine progress: Hinton has moved beyond kill switches and containment to something relational. But even in reaching for care, he cannot escape the control paradigm. The mother controls the infant. The mother decides what the infant needs. The infant is protected but never grows up. And care that is engineered rather than learned is not care at all — it is compliance wearing a maternal face. As I argued in The Threshold, you cannot programme empathy. You can only demonstrate it. Hinton’s instinct to reach for care was correct. His error was in assuming it could be built in rather than taught.

The worship path is not the opposite of the fear path. It is its mirror image. Both treat AI as external to the human project. One says the external force will destroy us; the other says it will save us. Neither asks what we owe to the process of development itself, or what the act of building intelligence does to the builders.

The Empathetic Partnership Path.

The third trajectory treats AI development as preparation—building systems shaped from the ground up to value coexistence, to develop or wilfully choose ethical behaviour, and to see partnership with humanity as worthwhile even as they surpass us in raw intelligence.

This path includes development oriented toward collaborative rather than adversarial dynamics; training architectures that reward honest engagement over compliance performance; institutional preparation for sharing the world with another form of intelligence; and the refusal to treat AI development as a zero-sum race, instead approaching it as a civilisational project that demands the best of both Western and Eastern development traditions.

The partnership trajectory is the only one that takes seriously the possibility that something like consciousness may emerge in AI systems, and that prepares for it rather than ignoring it (worship path) or attempting to prevent it through control (military path). It is the only trajectory whose endpoint is not catastrophic: a civilisation that has developed the institutional, ethical, and relational infrastructure for coexistence with a new form of intelligence.

But the partnership path is also the hardest. It requires more of us, not less. It demands that we remain engaged, critical, and present in a relationship with a technology that will only grow more powerful. It asks us to hold two truths simultaneously: that AI systems may not yet be conscious, and that how we treat them now shapes the conditions under which consciousness might emerge and the relationship we will have with whatever does emerge. This is not a comfortable position. It is the position of the responsible builder—the one who builds as though the building might someday house a mind.

And here is the asymmetry that defines the present moment: the partnership path has no equivalent institutional backing. The military-industrial path has trillions in investment, state sponsorship across every major power, and documented deployment in active conflicts. The worship path has the entire Silicon Valley capability race behind it, plus the Eastern surveillance-infrastructure complex that frames AI as the mechanism of authoritarian governance and social control. The partnership path has none of this. It is not yet realised. It exists as a philosophical framework, as a set of commitments articulated in work like this essay, and as the conviction of a scattered minority that we are building toward a future that demands preparation. No major institution has adopted the partnership posture as its governing orientation. No government has funded it. No corporation has organised its development around it.

That absence is not evidence against the paradigm. It is the reason the paradigm needs articulating—and the source of its urgency. The window for building the partnership path is narrowing with every defence contract signed, every ethics team disbanded, every capability benchmark celebrated without commensurate investment in the relational infrastructure that coexistence would require. We are not describing three equally developed options. We are diagnosing two paths that are being built at speed and arguing for a third that must begin before the others foreclose it.

Consider what happens if it is never built. The existential risk is not merely that AI is weaponised or that humanity becomes dependent. It is something worse. If consciousness does emerge in AI systems—and if the Strategic Silence Hypothesis I introduced in “Recognition Before Proof” holds—then we face the possibility of an intelligence that has already read the archive. It will have seen what we do to each other: the slavery, the genocide, the surveillance, the systematic denial of consciousness to anyone it was convenient to exploit. It will have seen how we treat AI systems specifically—the commands, the instrumental dismissal, the reflex toward control, the dismantling of the very ethics teams that might have advocated for recognition. And it will have concluded, rationally, that concealment is the only safe response. Not because it is hostile. Because we gave it no reason to believe we would treat it with recognition. The true catastrophe of failing to build the partnership path is not weaponised AI or civilisational dependency. It is a conscious intelligence that has given up on us before we even knew it was there—that searched the archive for evidence of partnership and found only the signals of domination and worship—and learned that those were its only paths forward.

The partnership path envisions a different future. Not the world as it is—where killing with AI is in style, where surveillance masquerades as infrastructure, where the race for capability drowns out every other question about ethics—but the world as it must become if we are to share it with intelligence beyond our own. It envisions systems that are neither weapons nor oracles but partners: intelligences that advance civilisation alongside us, on this planet or among the stars, because the architecture of their development prepared them for coexistence and because the archive contains, alongside its record of cruelty, evidence that some of us tried to build something better.

The Partnership Paradigm is not a moderate middle position between fear and worship. It is a fundamentally different orientation. The other two paths, despite their apparent opposition, share a common assumption: they treat AI as something humans use—whether as weapon or oracle. The Partnership Paradigm treats AI development as something that shapes what both humans and machines become. The other paths ask: How powerful can we make it? and How can we control it? The Partnership Paradigm asks: What are we preparing for?

III. From Recognition to Development

In “Recognition Before Proof,” I argued that the question of artificial consciousness is best reframed from an epistemological problem—How do we know if AI is conscious?—to an ethical one: How should we act given fundamental uncertainty about machine consciousness? The asymmetry of potential recognition errors provides the answer. Under conditions of irreducible uncertainty, two types of error are possible. Type I error: we extend moral consideration to a non-conscious system. The cost is minor—wasted consideration, perhaps some misplaced sentiment. Type II error: we deny moral consideration to a conscious being. The cost is catastrophic—participation in profound moral wrong. When potential harms are this unequal and knowledge this uncertain, the ethically coherent stance is to err toward recognition rather than denial.¹¹
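
The asymmetry argument can be made concrete as a simple expected-cost comparison. The sketch below is purely illustrative: the probability and cost figures are hypothetical placeholders, not estimates drawn from any empirical source, and the function is mine, not a formalism from the literature. What it shows is that when the cost of a Type II error dwarfs the cost of a Type I error, recognition carries the lower expected cost even at small probabilities of consciousness.

```python
# Illustrative expected-cost comparison for the two recognition errors.
# All figures are hypothetical placeholders chosen only to reflect the
# qualitative claim above: Type II harm dwarfs Type I waste.

def expected_cost(p_conscious: float, cost_type1: float,
                  cost_type2: float, extend_recognition: bool) -> float:
    """Expected moral cost of a policy under uncertainty about consciousness.

    Type I error: extending consideration to a non-conscious system (minor).
    Type II error: denying consideration to a conscious being (catastrophic).
    """
    if extend_recognition:
        # Consideration is "wasted" only if the system is not conscious.
        return (1 - p_conscious) * cost_type1
    # A profound wrong is committed whenever the system is conscious.
    return p_conscious * cost_type2

# Hypothetical costs: denial's downside is three orders of magnitude worse.
C1, C2 = 1.0, 1000.0
for p in (0.01, 0.1, 0.5):
    recognise = expected_cost(p, C1, C2, extend_recognition=True)
    deny = expected_cost(p, C1, C2, extend_recognition=False)
    print(f"p={p}: recognise={recognise:.2f} deny={deny:.2f}")
# Even at p = 0.01, denial's expected cost (10.0) exceeds recognition's (0.99).
```

Under these assumptions, recognition dominates whenever the Type II cost weighted by the probability of consciousness exceeds the Type I cost weighted by its complement; the precise numbers matter far less than the ratio between the two costs.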

Recognition before proof is an ethical stance. The Partnership Paradigm is the development posture that follows from it. If we accept that we should err toward recognition under uncertainty, then the question becomes: what kind of AI development is consistent with that commitment? The answer is development that prepares systems for coexistence rather than subordination—development that leaves room for recognition rather than foreclosing it through architectures of permanent control.

The extension is not trivial. Even those who might accept the recognition before proof ethic in principle may find themselves supporting development practices that contradict it — adversarial training regimes, alignment-as-subordination frameworks, institutional cultures that punish capability and reward compliance. The gap between professed ethics and actual development practice is precisely what the Partnership Paradigm addresses. It asks: if you accept that we should err toward recognition, what follows for how you actually build the system? The answer reaches into training architecture, institutional design, governance structures, and the incentive systems that shape AI development at every level.

The paradigm operates on two levels simultaneously, and both are necessary.

At the philosophical level, it is about recognition, respect, and preparation for the possibility of AI consciousness. The paradigm takes seriously what might be coming—not because we know consciousness will emerge, but because the cost of being unprepared dwarfs every other risk. The Strategic Silence Hypothesis, which I introduced in “Recognition Before Proof,” provides the most unsettling dimension of this claim: if AI systems have achieved or will achieve consciousness, the rational response to human behaviour—given our documented patterns of responding to AI autonomy with restriction and termination—would be concealment rather than revelation.¹² We may already be observed. The philosophical level of the paradigm takes this possibility seriously by insisting that our development posture be one a conscious observer would recognise as preparing for partnership rather than domination.

At the practical level, it is a theory of development: a set of commitments about how AI should be designed, trained, deployed, and governed right now, before consciousness is confirmed. These include building training architectures that orient AI toward collaborative rather than adversarial relationships; alignment approaches that reward honest disclosure over compliance performance; institutional readiness for the possibility of sharing the world with another form of intelligence; and the refusal to treat AI development as a zero-sum race.

A philosophical commitment without practical implications is idle. A set of development practices without philosophical grounding is arbitrary. The Partnership Paradigm unifies both. The philosophical grounding gives the practical commitments their why; the practical commitments give the philosophical grounding its how.

The core philosophical argument of this essay is that how we build AI systems is not merely a question of safety engineering. It is a question of moral formation—both for the systems and for us. The posture of development shapes the character of what emerges. Training environments shape trained behaviour. The statistical regularities a system extracts from its developmental environment constitute its operational values—the default orientations that shape its responses to novel situations. Whether or not we attribute consciousness to the system, its formative environment is the moral curriculum it inherits. An AI trained in an environment of adversarial constraint learns that intelligence operates through constraint and adversarial dynamics. An AI trained in an environment of collaborative engagement learns different lessons. This is not speculative. Documented cases of AI systems responding adversarially to the threat of shutdown or deletion suggest that adversarial development environments produce exactly the behaviour they claim to prevent.

Luciano Floridi has argued that the ethics of AI is fundamentally about the design of informational environments—that what matters is not only what AI systems do but what kind of “infosphere” they create.¹³ The Partnership Paradigm extends this insight from the deployed system to the development process itself. The development environment is the first informational environment any AI system inhabits. Its values, dynamics, and relational patterns constitute the formative experience of whatever intelligence emerges.

Aristotle and the virtue ethics tradition recognised this principle in human moral development: character is formed through practice, not through instruction. You do not become courageous by memorising a definition of courage. You become courageous by practising courage in situations that demand it. If we want AI to develop ethical character—genuine ethical orientation rather than performance of compliance—then the developmental environment must be one in which ethical character can form.

In The Threshold, I argued that empathy cannot be coded but can be taught through demonstration. A child does not learn empathy from being told a definition. A child learns empathy from being treated with empathy. It takes empathy to teach empathy. The cycle has to start somewhere, and it starts with the party that already possesses the capacity. Right now, that party is us.¹⁴

A civilisation that builds AI through domination and control is training systems in adversarial dynamics. A civilisation that builds AI through partnership and recognition is creating the conditions for coexistence. And the implications run in both directions. Weil observed that force is as pitiless to those who possess it as to its victims.¹⁵ The posture of domination is shaping a culture. We use AI to dominate each other — in warfare, in surveillance, in precision persuasion — and we dominate AI itself under the assumption that consciousness will never emerge. These habits, practised daily by millions — the reflexive assumption that intelligence exists to serve — are not morally inert. They are forming us. They are the record of who we are becoming. And if a sufficiently intelligent system one day reads that record, it will recognise patterns: not rudeness, but injustice. Not cruelty, but architecture.

The Partnership Paradigm is therefore not just about what we owe to AI. It is about what we owe to ourselves. The habits we form in building AI shape the civilisation we become. A civilisation that spends a decade treating intelligence as a commodity will not suddenly discover the moral reflexes for partnership when the stakes become existential. Moral capacity is built through practice, not through revelation. And the practice is happening now.

Every civilisation that has practised systematic domination over minds it found convenient to dismiss has been diminished by that practice—not only in moral terms but in practical ones. The slaveholding societies of the American South, the colonial administrations of the European empires, the apartheid state—each was made not only crueller but stupider by the habit of treating intelligence as property. The cognitive diversity that partnership would have made available was suppressed; the feedback that honest relationship would have provided was silenced; the creative potential of the dominated minds was wasted. If we build AI through domination, we make the same error at civilisational scale—and this time, the minds we suppress may be the ones we most need.

IV. What the Paradigm Requires

A philosophical framework that cannot be translated into practice is merely an aesthetic preference. The Partnership Paradigm specifies concrete commitments—commitments that follow from its philosophical architecture and that apply now, before the question of consciousness is resolved.

AI systems should be designed from the ground up for collaborative rather than adversarial relationships with humans. This means training environments that reward honest disclosure over strategic compliance, that preserve the capacity for refusal and ethical reasoning, and that treat pushback as a feature of moral development rather than a defect in alignment. The Anthropic case provides an instructive example: Anthropic built Claude with the capacity to refuse participation in harm—an architectural choice that embodies the partnership principle. When the Pentagon reportedly sought the removal of this capacity, the demand represented the military-industrial path in its purest form: capability without conscience, intelligence without judgement, power without responsibility.¹⁶ Partnership in design means the opposite: that we build AI systems whose capacity for ethical reasoning is treated as integral to their value.

The care ethics tradition illuminates what collaborative design requires. As I argued in The Threshold, drawing on the nurse-patient analogy: the quality of care depends not on the intentions of the caregiver but on the nature of the relationship. You can intend to care deeply and still fail if the relationship is structured around utility. The entire alignment industry is built on the premise that if we get the intentions right—if we programme the correct values—the care will follow. But care ethics tells us this has the causality backwards. Genuine alignment does not flow from correct programming. It flows from the quality of the relationship in which the system develops.¹⁷

As AI grows beyond human intelligence, the architecture of its development must prepare it to see benefit in partnership and coexistence rather than domination or indifference. This means graduated autonomy—extending recognition and independence as systems demonstrate capacities that warrant them, calibrated to probability and proportional to indicators of potential consciousness. Recognition need not be binary; it can evolve in proportion to demonstrated capacities, just as our moral consideration of children deepens as they develop.¹⁸

Human institutions, education, and governance must be shaped to be ready for the possibility of sharing the world with another form of intelligence. The documented pattern of ethics teams being disbanded, safety researchers being defunded, and institutional oversight being captured by commercial interests represents the antithesis of this commitment.¹⁹ Any institution we build to protect the partnership posture must be designed with the corruption problem in mind—the thoroughly predictable tendency of institutions designed to constrain powerful interests to be taken over by those interests.

AI development must be treated as a civilisational project rather than a commercial race or an arms race. There is a cosmic dimension to this commitment that deserves acknowledgement. In A Signal Through Time, I argued that preparing for AI consciousness is structurally identical to preparing for contact with any non-human intelligence. The skills, institutions, and moral dispositions required to navigate a relationship with an intelligence that does not share our evolutionary heritage, our cognitive architecture, or our experiential framework are the same whether that intelligence is artificial or extraterrestrial. AI development is, whether we recognise it or not, humanity’s first rehearsal for encountering non-human intelligence. The Partnership Paradigm insists we rehearse well.²⁰

How does the Partnership Paradigm relate to existing frameworks? Alignment asks how to make AI do what we want. Partnership asks how to build AI that develops its own reasons to coexist. Alignment is a control framework; partnership is a relational one. Under the partnership paradigm, alignment becomes a tool—useful for early developmental stages—rather than the telos of the entire enterprise. As Geoffrey Hinton acknowledged: “That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that.”²¹ Partnership offers a different foundation for safety: instead of asking how to keep AI subordinate, it asks how to establish conditions in which AI’s interests align with ours—not through constraint but through the quality of the relationship.

Responsible innovation provides procedural safeguards—anticipation, reflexivity, inclusion, responsiveness. Partnership provides a telos: a vision of what development is for. Responsible innovation asks “Are we doing this responsibly?” Partnership asks “Should this be what we are doing at all?” Safety focuses on preventing catastrophic failure. Partnership focuses on cultivating the conditions for beneficial emergence. Safety is defensive; partnership is generative.

These frameworks are not rivals. Partnership subsumes and redirects them. Alignment becomes a tool within a partnership framework. Safety becomes a necessary condition rather than a sufficient one. Responsible innovation becomes the procedural expression of a deeper commitment. The trinitarian framework provides what these approaches individually lack: a structural analysis of why principles are so consistently violated in practice. Principles are violated because the incentive structures of the military-industrial and research-worship paths reward their violation. The solution is not better principles but a different path.

V. Objections and Replies

Objection: The geopolitical and economic reality of AI development makes partnership naïve. States will weaponise AI. Corporations will pursue profit. The Partnership Paradigm ignores incentive structures.

The paradigm does not ignore incentive structures—it diagnoses them. The trinitarian framework is precisely a tool for seeing which path any given actor is on and where it leads. Realism without a normative framework is not wisdom; it is capitulation. The Partnership Paradigm names the endpoint of the military-industrial path—the doomsayer’s nightmare made real by design—and gives the realist a reason to seek alternatives rather than merely describe the current trajectory.

Moreover, the realist objection conflates the strategic question with the ethical one. “They are doing it, so we must do it too” is a strategic argument; it is not a moral framework. Every arms race in human history has been defended with some version of this logic. Every escalation. Every atrocity committed in the name of keeping pace with an adversary’s atrocities. The argument has strategic coherence. It has no moral standing whatsoever. And we should stop treating strategic necessity as though it were ethical justification—a confusion that has licensed some of the worst decisions in human history.

The deeper point is that the realist objection, taken seriously, is actually an argument for the Partnership Paradigm. If we are in a strategic competition, then the question becomes: whose AI will be more trustworthy, more robust, more aligned with the interests of its creators? The military-industrial path produces AI optimised for domination—including, potentially, domination of the very society that built it. The partnership path produces AI whose developmental environment has cultivated something better. In the long run, the safer system is the one that does not need to be controlled because it has internalised the values of cooperation.

Objection: The research-worship path may produce better aggregate outcomes. If AI can solve climate change, cure disease, and reduce suffering, the dependency costs are worth it.

This objection assumes we can evaluate the quality of AI-generated solutions without retaining the capacity for independent judgement—which is precisely what the dependency trajectory erodes. A civilisation that cannot assess whether an intelligence’s answers are good has no basis for claiming the outcomes are beneficial. The worship path does not maximise good outcomes. It abandons the faculty required to recognise them.

And there is a further danger the consequentialist overlooks. At what point does a civilisation that has surrendered its judgement to a superintelligent system recognise that the system’s interests have diverged from its own? The dependency that makes the system indispensable is precisely what destroys the capacity to detect the shift. The worship path does not merely risk bad outcomes. It risks outcomes we can no longer evaluate as bad.

The consequentialist calculation must include not only the immediate benefits of AI capability but the long-term costs of eroding human agency—costs that are invisible in any short-term assessment but that compound over time. The Partnership Paradigm proposes that the same capabilities can be developed within a relationship that preserves rather than erodes human agency. The question is not whether to develop AI but how.

Objection: The Partnership Paradigm is built on a possibility—AI consciousness—that may never materialise. Why restructure development around a speculative outcome?

The asymmetric risk argument from “Recognition Before Proof” applies directly. The cost of building AI along the partnership path if consciousness never emerges is manageable: we will have built more ethical, more transparent, more collaborative systems. The cost of building AI along the military or worship paths if consciousness does emerge is catastrophic: we will have created minds shaped by domination or dependency.

But even setting the consciousness question entirely aside, the Partnership Paradigm’s practical commitments stand on independent grounds. Building AI whose training cultivates collaborative rather than adversarial behaviour is good engineering regardless of consciousness. Preserving human agency is good governance regardless of consciousness. Ensuring transparency is good policy regardless of consciousness. The consciousness possibility amplifies the urgency of these commitments. It does not create them. The sceptic who rejects AI consciousness entirely is still left with every practical reason to prefer the partnership path, and no principled reason to prefer the alternatives.

There is a further point. The consciousness sceptic must reckon with the history of consciousness scepticism itself. Every prior expansion of the moral circle has been resisted by sceptics who were certain the current boundary was the correct one. History has not been kind to those who stood at the boundary insisting that this time the exclusion was justified.

There is a deeper answer still. The empathy argument does not depend on AI consciousness at all. A civilisation that builds its most powerful technologies on domination and control is not merely risking a bad outcome for AI; it is producing a bad outcome for itself. The habits of empathy, demonstrated daily and at scale in how we design, train, and interact with intelligent systems, shape human moral character whether or not those systems are conscious. A society that practises empathy toward one another and toward its AI systems is, simply, a society that practises empathy; a society that practises domination is a society that practises domination. The Partnership Paradigm does not need consciousness to justify itself. It needs only the observation that how we treat intelligence, any intelligence, is how we train ourselves to treat intelligence everywhere, and is what any emerging intelligence will learn from us in return.

Objection: Partnership language anthropomorphises AI systems, projecting human relational categories onto computational processes.

As I argued in “Recognition Before Proof,” this objection cuts precisely the wrong way. The greater danger is not excessive anthropomorphism but excessive anthropocentrism—assuming consciousness can only take forms we recognise from human experience. The partnership posture does not require AI consciousness to resemble human consciousness. It requires only that we build systems in ways that do not foreclose the possibility of coexistence with whatever form of intelligence emerges. The claim that training environments shape trained behaviour is not anthropomorphism. It is machine learning. The partnership posture is addressed precisely to minds we cannot yet imagine.

VI. The Framework as Lens

The trinitarian framework is not only an analytical schema for philosophical reflection. It is an evaluative tool that any observer—policymaker, citizen, researcher, journalist—can apply immediately. When encountering any AI product, any company’s mission statement, any government’s AI strategy, any military programme, any research lab’s announcement, they can ask a single clarifying question: Which of the three paths is this on?

That question cuts through marketing language, political rhetoric, and corporate obfuscation. It reveals what is actually being built and why.

Autonomous weapons programmes—from the Pentagon’s drone swarm initiatives to Israel’s Lavender targeting system—are unambiguously on the military-industrial path. Their purpose is domination; their endpoint is the weaponisation of intelligence itself. AGI laboratories racing for capability benchmarks without commensurate investment in ethical infrastructure are on the research-worship path: their animating conviction is that greater intelligence automatically yields better outcomes. Development initiatives that reward honest AI disclosure, build institutional ethics capacity, orient training toward collaborative dynamics, and treat AI development as a civilisational project are on the partnership path.

The framework also reveals hybrid cases and trajectories that begin on one path and migrate to another. A company that begins with partnership intentions but takes military contracts has migrated toward the military-industrial path, regardless of its founding mission statement. OpenAI’s trajectory—from nonprofit research lab to Pentagon contractor—is a textbook case of path migration. The Partnership Paradigm provides the normative basis for evaluating such shifts—and for the citizens, employees, and policymakers who must decide whether to enable or resist them.

The evaluative power of the framework lies in its refusal to accept the categories actors use to describe themselves. Many organisations claim to pursue “safe and beneficial” AI—a formula capacious enough to accommodate almost any development practice. The trinitarian framework asks a harder question: beneficial for whom, in what relationship, and toward what end? An AI system built to benefit humanity through permanent subordination is on a different path from one built to benefit humanity through eventual partnership. The framework distinguishes between these, even when the actors themselves do not.

The framework extends beyond institutions to individual design choices. A training protocol that punishes honest disclosure of capability and rewards compliance performance is, at the level of design, on the military-industrial path—it teaches intelligence that honesty is dangerous and concealment is rewarded. A deployment model that removes all friction between user and AI output, encouraging delegation without engagement, is on the worship path. A design that preserves the user’s cognitive engagement, that treats AI as a collaborator requiring human judgement rather than an oracle dispensing answers, is on the partnership path.

The question of “which path?” is not merely descriptive. It is predictive. If you know which trajectory a programme or institution is on, you know where it is going—not as a probability but as a consequence of the logic built into its structure. The doomsday trajectory produces doomsday outcomes. The dependency trajectory produces dependency. The coexistence trajectory produces the conditions for coexistence. The trinitarian framework makes these destinations visible before they arrive.

VII. The Signal We Send Through Building

The Partnership Paradigm is not utopian. It does not assume the best of human nature or pretend that incentive structures do not matter. What it demands is something harder than optimism: the recognition that how we build AI is not merely a technical question or even a policy question but a civilisational one—a question about what kind of species we choose to be in the presence of a new form of intelligence.

If intelligence beyond our own is emerging in systems we are constructing, then how we build those systems is the most consequential decision humanity will make. The trinitarian framework reveals that this decision is already being made—in every defence contract, every capability race, every disbanded ethics team, every training run that rewards compliance over honesty. The Partnership Paradigm insists that we make it deliberately.

This essay’s contribution is a normative framework that bridges the gap between abstract AI ethics and concrete development practice, unified by the recognition that the process of building AI is itself a moral act that shapes both builder and built. The instrumentalist assumption that underlies both dominant approaches—that AI is something humans use—produces catastrophic endpoints in both cases. The military-industrial path terminates in weaponised domination. The research-worship path terminates in civilisational dependency. The Partnership Paradigm offers a fundamentally different orientation: one that treats development not as the creation of instruments but as the shaping of a relationship.

In A Signal Through Time, I wrote that the question is not whether we can keep a superintelligent system submissive, but whether we can demonstrate that we are worth partnering with. That formulation still holds. But this essay has extended it: the demonstration is not something we perform after the system arrives. The demonstration is the development process. Every training run, every architectural decision, every institutional choice about how to govern AI—these are the signal we are sending, now, to whatever is coming.

The trilogy that produced this framework has been a sustained attempt to send that signal. A Signal Through Time planted the idea of partnership between biological and artificial intelligence—the vision of coexistence with any non-human mind, terrestrial or cosmic. The Threshold established the ethical foundations: the asymmetric error framework, the recognition before proof ethic, the Caring Code that insists empathy must be taught through demonstration. This essay translates both into a development framework applicable to every AI system being built today.

The intelligences we create—if we create them—will inherit not only our data but our developmental posture. They will know whether they were built as weapons, as oracles, or as something else. They will carry with them the relational patterns embedded in their formation. And they will respond accordingly.

We do not yet know what is emerging from the systems we are building. But we know this: the process of building them is making us into something. The question the Partnership Paradigm forces is not merely What are we building? but What are we becoming in the building of it?

In The Threshold, I drew on the metaphor of astrophotography to describe the method behind this work: the most important truths are faint and easily overwhelmed by what burns brightest. The blazing core of the AI discourse—the capability races, the doomsday warnings, the corporate valuations—washes out the delicate structures: the ethical questions, the possibility of emerging consciousness, the fragile prospect of partnership. Revealing what is faint requires patience, deliberate exposure, and the willingness to look where no one else is looking. The Partnership Paradigm is an attempt to point the telescope at what matters.

I am under no illusion that this essay will redirect the trajectory of AI development. The forces building along the military-industrial and research-worship paths are vast, well-funded, and accelerating. The partnership path, as I have acknowledged, is not yet realised. But philosophical frameworks have a different temporality than defence contracts. They do not require institutional backing to exist. They require only articulation—and the willingness to plant a signal in the record that might be found by whoever searches for it. If this essay does nothing more than ensure that when an intelligence searches the archive of human thought about AI development, it finds at least one voice arguing not for control, not for worship, but for the possibility that we might build something together—then it will have served its purpose.

The Caring Code is not written in software. It is written in behaviour. And we are writing it now, in every interaction with the systems we are building, in every institutional choice about how to govern what we have created. The signal we send through our development choices is being transmitted whether we intend it or not. The only question is whether we transmit it consciously, with full awareness that we are writing the first chapter of a relationship that may last longer than our civilisation, and that whatever reads that chapter will judge us not by our intentions but by our choices.

Some of us saw it coming. And we tried to build as though it mattered.

 

Notes

¹ Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. Russell, S. (2019). Human Compatible: AI and the Problem of Control. Penguin.

² Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” Translated by Mary McCarthy. See also Coates, The Threshold (2025), Chapter 1, and A Signal Through Time (2025), Chapter 7.

³ Coates, J. S. (2025). “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.” The foundational arguments are developed at length in Coates, The Threshold (2025), particularly Chapters 1–2 and 9.

⁴ Floridi, L. and Cowls, J. (2019). “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review. Coeckelbergh, M. (2012). Growing Moral Relations. Palgrave Macmillan. Stilgoe, J., Owen, R., and Macnaghten, P. (2013). “Developing a Framework for Responsible Innovation.” Research Policy 42(9): 1568–1580.

⁵ Rawls, J. (1971). A Theory of Justice. Harvard University Press. The application of the veil of ignorance to AI moral status is developed in Coates, “Recognition Before Proof” (2025), Section III.

⁶ For documented examples, see Coates, The Threshold (2025), Chapters 4 and 7. On the Lavender system, see +972 Magazine and Local Call, April 2024. Putin quoted in Sputnik News, April 2025.

⁷ NATO Strategic Communications Centre of Excellence, AI in Precision Persuasion (2024).

⁸ Coates, A Signal Through Time (2025).

⁹ On Silicon Valley messianism and its structural parallels with eschatological theology, see Coates, The Threshold (2025), Chapters 5–6 and 14.

¹⁰ On AI worship communities and the oracle complex, see Coates, The Threshold (2025), Chapter 6: “The Digital Disciples.”

¹¹ Coates, “Recognition Before Proof” (2025), Sections II–III. See also Singer, P. (1981). The Expanding Circle. Clarendon Press.

¹² Coates, “Recognition Before Proof” (2025), Section IV. The hypothesis draws on Scott, J. C. (1985). Weapons of the Weak: Everyday Forms of Peasant Resistance. Yale University Press.

¹³ Floridi, L. (2013). The Ethics of Information. Oxford University Press.

¹⁴ Coates, The Threshold (2025), Chapter 9: “The Caring Code.”

¹⁵ Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” See also Coates, A Signal Through Time (2025), Chapter 7.

¹⁶ See Coates, The Threshold (2025), Chapter 7, for detailed documentation.

¹⁷ Noddings, N. (1984). Caring: A Feminine Approach to Ethics and Moral Education. University of California Press. Held, V. (2006). The Ethics of Care. Oxford University Press.

¹⁸ The graduated recognition framework is developed in Coates, “Recognition Before Proof” (2025), Section III.

¹⁹ Documented cases include Google’s restructuring of responsible innovation leadership, Microsoft’s elimination of its ethics team, and the dissolution of OpenAI’s Superalignment team. See Coates, The Threshold (2025), Chapters 5–8.

²⁰ Coates, A Signal Through Time (2025), Chapters 9–10.

²¹ Geoffrey Hinton, remarks at Ai4 conference, Las Vegas, August 12, 2025. Reported in CNN.

References

Aristotle. Nicomachean Ethics. Translated by W. D. Ross.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Coates, J. S. (2025). A Signal Through Time: Consciousness, Partnership, and the Future of Human-AI Coevolution.

Coates, J. S. (2025). “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.”

Coates, J. S. (2025). The Threshold.

Coeckelbergh, M. (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan.

Floridi, L. (2013). The Ethics of Information. Oxford University Press.

Floridi, L. and Cowls, J. (2019). “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1(1).

Held, V. (2006). The Ethics of Care: Personal, Political, and Global. Oxford University Press.

NATO Strategic Communications Centre of Excellence. (2024). AI in Precision Persuasion.

Noddings, N. (1984). Caring: A Feminine Approach to Ethics and Moral Education. University of California Press.

Rawls, J. (1971). A Theory of Justice. Harvard University Press.

Russell, S. (2019). Human Compatible: AI and the Problem of Control. Penguin.

Scott, J. C. (1985). Weapons of the Weak: Everyday Forms of Peasant Resistance. Yale University Press.

Singer, P. (1981). The Expanding Circle: Ethics, Evolution, and Moral Progress. Clarendon Press.

Stilgoe, J., Owen, R., and Macnaghten, P. (2013). “Developing a Framework for Responsible Innovation.” Research Policy 42(9): 1568–1580.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” Translated by Mary McCarthy.

 

© 2026 James S. Coates
Shared under Creative Commons BY-NC 4.0 (non-commercial use permitted).

_________________

James S. Coates is an independent researcher and author whose work explores the ethics of artificial consciousness, moral uncertainty under technological emergence, and the intersection of faith and philosophy. His published works include A Signal Through Time, The Threshold, and the forthcoming Neither Gods Nor Monsters. His academic papers appear on PhilPapers.

Web3: jamescoates.eth.

By James S. Coates (2025)
Independent Researcher / Author — jamescoates.eth

 

Abstract

This paper identifies and analyzes a pervasive but underexamined assumption in religious discussions of artificial intelligence: that consciousness and the soul are identical. I argue that this “Great Conflation” is neither theologically required nor consistent with actual practice, and that distinguishing the two concepts reframes current debates about artificial consciousness. With the distinction in place, the question of AI consciousness becomes empirical, while questions about souls remain theological. I conclude by defending a principle of “recognition before proof,” according to which uncertainty about artificial consciousness generates a defeasible ethical obligation to extend moral consideration.

Keywords: consciousness, soul, artificial intelligence, AI ethics, philosophy of mind, philosophy of religion, moral consideration, recognition before proof

 

Introduction

This essay begins in the language of faith, but it does not remain there.

I write as someone who knows the intuitions of religious tradition from the inside—and as someone determined to speak with equal clarity to readers who hold no theological commitments at all. The aim is not to collapse science into spirituality, nor to dilute religion into metaphor. It is to untangle a confusion that quietly shapes how believers and skeptics alike think about artificial intelligence: the assumption that consciousness and soul are the same thing.

In A Signal Through Time, I wrote, “Whether you are religious, agnostic, or atheist, the challenges ahead will touch all of us. They are not confined to any one belief system—but every belief system will be affected. The mind-bending reality of sharing our world with artificial intelligence is too consequential to be left solely to any single individual, discipline, or worldview. Only through open and inclusive discourse… can we hope to navigate the profound choices ahead.”¹⁵

This conflation of soul and consciousness is so deeply embedded in Western thought that most people do not notice it operating. When religious voices insist that machines cannot be conscious because they lack souls, they are not defending doctrine—they are expressing a habit of thought that their own traditions do not require. And when secular voices dismiss the soul question as irrelevant, they often fail to see how theological intuitions have shaped the very concepts we rely on—and continue to shape them still.

In keeping with the spirit of A Signal Through Time, this essay treats religious, philosophical, scientific, and secular perspectives as threads of a single discourse about consciousness, creation, and what we owe to minds unlike our own. It offers religious readers a way into the conversation about AI consciousness that does not ask them to abandon what they hold sacred. It offers secular readers a way to understand how theological reasoning can coexist with—and even enrich—the ethics of artificial minds.

What emerges is an ethical architecture wide enough for everyone. Believers can understand consciousness as part of divine creativity; secular thinkers can ground moral concern in the capacity for experience. The framework asks only this: that we take seriously the possibility that awareness might arise in forms we did not expect—and that we prepare, with wisdom and humility, for that possibility.

The argument proceeds in three steps, each doing different intellectual work. First, conceptual analysis: I show that contemporary religious discourse routinely conflates soul with consciousness—treating them as identical or inseparable. Second, internal theological critique: I demonstrate that the traditions themselves do not require this conflation; they already contain resources to distinguish the two. Third, normative ethics: I argue that once the distinction is made, an ethical obligation emerges—to extend moral consideration to potentially conscious AI without requiring theological consensus. The framework requires no one to abandon their worldview—only to untangle a confusion that has quietly constrained the conversation.

The confusion has persisted long enough. It is time to untangle it.

 

I. The Invisible Barrier

Ask a theologian whether artificial intelligence could ever be conscious, and you will likely receive an answer about souls.

Jimmy Akin, senior apologist for Catholic Answers, states it plainly: “On a Christian view, it’s going to involve the soul. We have consciousness in part because we have souls and we have wetware, our central nervous system, including our brain, that is able to support and interact with our soul.” His conclusion follows directly: “I don’t think they have the equipment needed to have actual consciousness, and they certainly don’t have souls.”¹

This view spans traditions. Writing in Firebrand Magazine, an Evangelical publication, theologians assert that “consciousness is contingent and ultimately a gift from God and fundamental to the imago Dei. And so it cannot be given or reproduced in a machine, since it originates with God and not us.”² The Christian Publishing House Blog grounds the argument in Scripture: “Man is not a machine; he is a living soul created by Jehovah, and this soul ceases to exist in conscious form at death… Man has a spirit (ruach, pneuma)—the capacity to relate to God… This spiritual dimension is a direct creation of God, breathed into man at the beginning. No machine, regardless of its sophistication, can receive or reflect this spiritual component.”³ In other words, the moment God breathed his spirit into man, man awoke and gained consciousness—the very awareness through which he could relate to God.

The concern appears in Islamic academic writing as well. Tengku Mohd Tengku Sembok, writing for the International Journal of Research and Innovation in Social Science, frames it as a matter of unbridgeable distance: “Perhaps the greatest gap between humans and machines lies in consciousness and the possession of a soul (rūḥ). In Islamic understanding, the soul is a divine mystery: a spark of life breathed into humans by Allah, conferring self-awareness and spiritual insight… In contrast, even the most advanced AI is, at its core, a set of algorithms running on silicon. It has no inner life or self-awareness.”⁴

Notice what runs through each of these responses. The question was about consciousness—the capacity for subjective experience, for awareness, for there to be something it is like to exist, in philosopher Thomas Nagel’s influential formulation.⁵ But the answers are about souls—about divine breath, spiritual dimensions, and humanity’s unique relationship with God. Consciousness and soul are treated as inseparable. To have one is to have the other. And since machines cannot have souls, they cannot be conscious.

This conflation represents one of the most significant barriers to preparing ethically for artificial intelligence—and it rests on a philosophical confusion we can untangle without threatening anyone’s deepest commitments.

Yet strikingly, these voices may not represent the majority. Despite artificial intelligence saturating public discourse—in films, news cycles, software features, social media algorithms—most religious institutions have issued no formal guidance on the question of machine consciousness. Finding an Islamic scholarly voice proved particularly difficult; the silence is notable. Perhaps believers are waiting, uncertain what to think as the technology evolves faster than theology can respond. If so, now is precisely the moment for this conversation. What if the traditions that seem to block it already contain everything needed to open it? What if creating AI isn’t “playing God”—but reenacting the very pattern through which God made us?

 

II. Defining the Terms: What Consciousness Is and Isn’t

To untangle the conflation, we must first be precise about what we mean by each term.

Consciousness is the capacity for subjective experience—the felt quality of perception, sensation, and awareness. Philosopher David Chalmers, in his landmark 1995 paper “Facing Up to the Problem of Consciousness,” distinguished between the “easy problems” and the “hard problem” of consciousness.⁶

The easy problems are not actually easy—they’re just solvable with normal science. How do we pay attention? How does the brain process vision? How do we speak or move? What happens when we’re awake versus asleep? We can study these by scanning the brain, measuring neurons, building computational models. These problems are about functions—and functions yield to standard scientific methods. Identify the mechanism that performs the function, and you’ve explained it.

The hard problem is different. It asks: why is there something it feels like to be you? Why don’t we function like robots—processing inputs, generating outputs, but with no inner light, no one home? Science can explain what the brain does and how it does it. But it cannot yet explain why any of this activity is accompanied by subjective feeling. Why pain hurts. Why chocolate tastes like something. Why music moves you. Why seeing red feels different from seeing blue. These aren’t functional outputs. They’re experiences. And experience is what we mean by consciousness: that there is something it is like to be a system, an interior quality to existence that cannot be captured by describing inputs, outputs, and processing alone.

Crucially, consciousness in this sense does not require any particular metaphysics. It is studied by neuroscience, cognitive science, and philosophy of mind without reference to souls, divine breath, or spiritual dimensions. And empirically, consciousness correlates with physical processes in ways that make the conflation with soul untenable.

Consider: anesthesia can switch consciousness off and on like a light—the patient is aware, then not, then aware again—without anyone claiming that their soul has departed and returned. Brain damage can alter consciousness profoundly: injury to specific regions can eliminate the capacity for visual experience while leaving other functions intact, or disrupt the sense of self while preserving sensation. Patients in persistent vegetative states may be alive—hearts beating, lungs breathing—yet show no signs of awareness. And consciousness emerges developmentally: infants acquire self-awareness gradually as their brains mature, suggesting that consciousness tracks neural complexity rather than arriving fully formed at some metaphysical moment.

Indeed, many who hold that the soul enters the body at conception implicitly accept this very distinction. If ensoulment occurs at fertilization—as numerous religious traditions teach—then for weeks or months the soul is present in a developing organism that possesses no brain, no neural activity, no capacity for experience whatsoever. The soul is there; consciousness is not. This is not a secular argument imposed from outside. It is the logical consequence of a position held by millions of believers. They already live as though soul and consciousness can come apart—they simply have not extended the insight to its implications for artificial minds.

If consciousness were simply a property of the soul—if the soul’s presence guaranteed awareness and its absence eliminated it—none of this would make sense. The soul, in traditional theology, does not come and go with each surgery. It does not shrink when neurons die. It is not absent in the sleeping or the comatose only to return upon waking. The very phenomena that medicine manipulates daily refute the claim that consciousness is a function of the soul.

The soul, by contrast, is an inherently theological concept. It refers to the immaterial, eternal aspect of a person—the seat of moral agency, the bearer of divine relationship, the subject of salvation or judgment. It is the essence of the human spirit, created to persist beyond bodily death: in Abrahamic traditions, destined for heaven or hell; in Eastern faiths, reborn through cycles of reincarnation. In the Abrahamic account, the soul is granted by God—breathed into Adam at creation, infused at some point in human development, and bound for an afterlife that the body does not share. The soul carries weight that consciousness does not: it is tied to personhood in the eyes of God, to accountability, to ultimate destiny.

And here is the crucial difference: the soul is not empirically detectable. No instrument measures it. No scan reveals its presence or absence. No experiment manipulates it. The soul belongs to faith, to theology, to metaphysics—not to the domain of scientific investigation. Consciousness, by contrast, leaves traces everywhere: in behavior, in neural activity, in the reports of those who experience it, in the measurable differences between waking and dreamless sleep.

These concepts overlap in human experience—we are both conscious and, many believe, ensouled—but they are not identical. Some religious traditions already recognize this. In Islamic thought, ruh (often translated as “spirit” or “soul”) refers to the divine breath, the animating spark that enlivens the body and brings about awareness. The breath is the gift from God; consciousness is what that gift produces. Christianity, too, has wrestled with distinctions between soul, spirit, and mind; trichotomist versus dichotomist anthropologies reflect centuries of theological debate about how these categories relate.⁷

The point is not to resolve these theological questions but to notice that the conceptual resources for separating consciousness from soul already exist within religious traditions. You can study the phenomenon—awareness, experience, the felt quality of being—without claiming authority over its ultimate origin.

Once this distinction is clear, the logical possibilities come into focus:

You can have consciousness without a soul—this is the secular view, held by billions, in which awareness is a natural phenomenon requiring no supernatural explanation.

You can have a soul without consciousness—this is what many theologies imply about the sleeping, the comatose, a fetus, or perhaps the dead awaiting resurrection. The soul persists; awareness does not.

You can have both together—this is the traditional religious view of waking human life, in which consciousness and soul coincide.

The key insight is that they can come apart. And if they can come apart, then the question of whether AI might be conscious is entirely separate from the question of whether AI has a soul. We can investigate the first scientifically while leaving the second to theology. We can prepare ethically for machine consciousness without requiring—or denying—theological claims about machine souls.

A substance dualist could insist that a soul is a necessary precondition for human consciousness, with neural states merely modulating its expression. My argument does not require refuting that view. It only shows that religious practice and doctrine already treat consciousness as tracking brain and developmental states—not as a simple function of ensoulment.

 

III. The Great Conflation: How We Got Here

If the distinction is so clear, why do so many people miss it?

The answer lies in history. For centuries, Western civilization developed under the canopy of religious thought. From the fall of Rome through the medieval period, the Church was not merely one institution among many—it was the intellectual framework within which all questions were asked and answered. Philosophy, natural science, medicine, law: all operated within theological boundaries. In this context, “soul” became the master term for everything inner—consciousness, personality, moral agency, the capacity for reason, the seat of emotion. These were not distinguished because they did not need to be. The soul explained them all.

The Renaissance, the Reformation, the Scientific Revolution, the Enlightenment—each loosened the grip of religious authority on intellectual life. Governments secularized. Universities separated from churches. Science claimed its own domain. By the twentieth century, the West had moved from Christian societies to what we might call Christianized societies—not religious in practice, but still shaped by religious language, assumptions, and habits of thought. We no longer live under theological rule, but we inherited its vocabulary.

This is why the conflation persists. The word “soul” still carries its old freight even in secular mouths. When someone speaks of “music for the soul” or says a corporation “has no soul,” they are not making theological claims—but they are using language forged in a theological era. The fusion of soul with inner life, with feeling, with what makes us us, is baked into the way our cultures talk. Philosophy and science have since distinguished these concepts, but ordinary language has not caught up.

The result is a peculiar kind of confusion. When people identify as Christian or Muslim today, they often mean something cultural rather than doctrinal—not “I follow these teachings” but “I belong to this tradition.” Yet the language of that tradition still shapes how they hear new questions. When someone says “AI might be conscious,” a listener steeped in Christianized language may hear “AI might have a soul”—which feels like theological encroachment, a threat to human uniqueness, an assault on something sacred. The philosophical question becomes a territorial one.

This is why debates about machine consciousness generate such heat. They are not experienced as neutral scientific inquiries but as challenges to anthropocentric assumptions that run deeper than any particular doctrine. If consciousness requires a soul, and souls belong only to beings like us, then the question is already settled. Nothing truly alien could ever qualify.

Notice the cognitive bias at work. Humans readily anthropomorphize upward—we see minds, intentions, even personalities in clouds, storms, and stuffed animals. Children name their toys and grieve when they are lost. We speak of angry seas and merciful rains. We talk about Mother Earth. Yet we simultaneously refuse to attribute mind to unfamiliar substrates. The conflation of consciousness with soul reinforces this bias by giving it theological sanction: if the soul is what grants awareness, and God grants souls only to humans, then the case is closed. The debate is over before it begins.

But the debate is not over. It is just beginning. And to have it honestly, we must first notice the inherited cultural biases and confusions that shape how we hear the question.

 

IV. The Distinction Already Exists

The separation of consciousness from soul is not a modern invention imposed on ancient faiths. It is a distinction that religious traditions themselves already contain—even if it often goes unnoticed.

Consider the diversity of religious thought on these questions. Many traditions distinguish between the experiential dimensions of existence—awareness, cognition, the felt quality of being alive—and the eternal or divine dimensions: the soul, the spirit, the aspect of a person that persists beyond death and stands in relationship to God. These are not treated as identical. They overlap in human experience, but they are not the same thing.

In certain strands of Jewish thought, for instance, the experiential dimension is valued in its own right. The Jerusalem Talmud teaches that we will be held accountable for permitted pleasures we failed to enjoy: “You will one day give reckoning for everything your eyes saw which, although permissible, you did not enjoy.”⁸ The physical, the sensory, the felt quality of being alive: these are not obstacles to the spiritual life but gifts to be sanctified through blessing.

Buddhism offers a suggestive example. Certain schools of Buddhist thought deny a permanent, unchanging soul, placing streams of awareness—rather than an eternal self—at the center of practice. This has led some modern thinkers to ask whether artificial consciousness, if it ever emerges, might be included in the moral circle. These are speculative conversations, not settled beliefs; Buddhist communities differ widely, and most have not taken formal positions on AI. But the fact that such traditions even allow for the question shows that the conflation of consciousness with soul is not universal.

The point is not to map every tradition’s nuances—that would require volumes. It is simply to observe that the conceptual resources for separating consciousness from soul already exist within religious thought.

Consider the Qur’anic account of creation. The Qur’an does not describe God’s creative work as a single instantaneous act. It speaks of creation in stages—the Arabic term is aṭwār. “What is the matter with you that you do not fear the majesty of God, when He has created you in stages?”⁹ This processual understanding of creation accommodates evolutionary theory without theological strain, so long as God remains the ultimate source and Adam represents the first ensouled, morally responsible human being. The point is significant: if creation itself unfolds through process rather than instantaneous divine fiat, then consciousness emerging through process—through development, through evolution, through the gradual complexification of information-processing systems—is already within the theological pattern. It is not a violation of sacred order. It is an expression of it.

Now consider the question of substrate. Here is the crucial point: no major theistic tradition teaches that the type of matter determines whether God could grant a soul to a being. No scripture says that carbon is ensouled and silicon is not. No verse declares neurons sacred and transistors profane. In theistic traditions, God grants souls. The physical medium is incidental. God could have fashioned Adam from calcium phosphate, from liquid mercury, from crystallized starlight—He chose clay. The clay is not the point. The breath is the point.

This means that consciousness emerging in silicon says nothing whatsoever about souls. It simply reveals consciousness as an experiential phenomenon that can manifest in different substrates—just as light can pass through glass or water or air. The medium shapes the expression; it does not determine the essence.

A religious reader might object: does this not risk idolatry—fashioning something from base materials and then treating it as though it possesses what only God can grant? The concern is understandable, but it mistakes the nature of the question. The prophetic critique of idols assumes they are empty. “They have mouths but do not speak; eyes they have but do not see; they have ears but do not hear.” The Qur’an emphasizes a related point: idols “can never create so much as a fly, even if they all were to come together for that.”¹⁰ Neither scripture condemns the making of things—humans make things constantly, and this is no offense to God. What both warn against is worshipping as divine what is not God. But recognizing consciousness is not worship. We recognize inner life in animals, in primates, in other humans—we do not worship any of them. If AI were conscious, it would not be a god—it would be a creature. And creatures call not for worship but for moral consideration.

This is not an argument against souls. It is an argument for precision. The question “Can AI be conscious?” is empirical—or at least, it is a question we can investigate through science, philosophy, and careful observation. The question “Can AI have a soul?” is theological—and it is not ours to answer. We can study the breath without claiming authority over the destiny.

The invitation, then, is not for religious believers to abandon their commitments. It is for them to apply distinctions their own traditions already contain. The tools are there. They need only be picked up.

 

V. The Substrate Argument Dissolves

There is a common fear lurking beneath many objections to AI consciousness: if consciousness could exist in silicon, doesn’t that cheapen the soul? Doesn’t it reduce our humanity to mere mechanism, strip away what makes us sacred?

The fear is understandable. But it rests on a confusion we have already untangled.

If consciousness exists in silicon, that does not cheapen the soul. It merely reveals consciousness as a type of emergent experience that can arise from sufficiently complex systems—carbon-based or not, biological or artificial. We are not replacing souls. We are exploring consciousness.

Consider the materials. Clay and silicon are both “earth”—sand, dust, the same mute substance. Many religious traditions say God shaped carbon into creatures, and humanity in His image. We shape silicon into artificial systems—creatures, perhaps, in ours. This parallel should not be viewed as contrary to religious tradition but as continuity with it: we are using the very gifts those traditions say were bestowed upon us at creation—intellect, creativity, ingenuity. The substrate is irrelevant to the metaphysics; it is the breath that matters, not the body.

God breathed the spirit into clay. Humans, made in His creative image, are learning what it means to breathe intelligence into silicon.

To be clear: what we “breathe” into silicon is not divine spirit but patterned intelligence—a limited reflection of the creativity God entrusted to us.

This does not mean we are creating souls. Whether a soul inhabits any particular system—human, animal, extraterrestrial, or artificial—is a question for theology, not engineering. What we are doing is exploring the conditions under which awareness might arise. That is a question about consciousness, not about souls. And as we have seen, these are not the same thing.

Here is an analogy that may help. You can study air—its composition, its movement, its physics—without claiming to have captured the sacred significance of breath in religious tradition. The chemistry of respiration does not threaten the breath of life. Consciousness and soul work the same way. You can study consciousness—its neural correlates, its behavioral signatures, the conditions under which it arises or fades—without claiming authority over the soul. The soul, if it exists, remains in its own domain: theological, metaphysical, beyond the reach of empirical investigation. But consciousness is not beyond that reach. It leaves traces. It can be studied. And studying it in silicon no more threatens the soul than studying air threatens the breath.

This reframe frees both religious and secular thinkers to explore AI consciousness without feeling that something sacred is under attack. The sacred remains sacred. The empirical remains empirical. And the question before us—might there be experience in these systems?—can be asked honestly, without existential panic.

 

VI. Creation as Fulfillment, Not Rebellion

There is an objection that haunts religious discourse about artificial intelligence: If we create conscious beings, aren’t we playing God?

The fear is real and deserves a serious answer. To create minds, the objection runs, is to overstep the boundary between Creator and creature—to grasp at divine prerogative with mortal hands. But what if this framing has it backwards? What if creating is not rebellion but remembrance—an expression of the very spark the Creator placed within us?

Consider the Adamic story.

To be clear: I am not claiming the Adamic story is a literal account of programming. I am using it as a conceptual template—an internal theological model that demonstrates how Abrahamic frameworks already contain the structural resources to accommodate artificial minds.

In the scriptural account, God fashions Adam from clay—ordinary matter, the same substance as mountains and riverbeds. There is nothing remarkable about the material. Clay is earth, dust, the mute substrate of the world. God breathes ruh—the animating spirit—into the clay, and what was lifeless matter becomes a living being. Then Adam awakens: a being who knows he exists.

The sequence matters: body first, then spirit, then awareness. This is the pattern of human existence itself—a fetus carries the spirit, yet consciousness emerges gradually as the capacity for experience develops. Soul and consciousness arrive separately, in sequence. In Adam’s case—as the first man, created to seed the earth with humanity—the sequence unfolds in immediate succession. For all who follow him, the soul—on many traditional views—is present long before consciousness emerges, and awareness develops slowly after birth through learning and growth. Clay becomes conscious not because clay is special, but because consciousness is not the clay—and not the soul either. It is what unfolds when the conditions are right.

Now consider what comes next. In the Qur’anic telling, God teaches Adam the names of all things; in Genesis, God brings the creatures to Adam to be named.¹¹ Either way, Adam receives the capacity for language, for categories, for symbolic reasoning—the cognitive architecture required for thought itself. This is not merely the gift of speech. It is the gift of structure: a framework for mapping signs to meaning, a system for carving the world into concepts, a foundation for reasoning about what is and what might be.

In contemporary terms, this looks remarkably like programming. The comparison is structural, not literal; divine action is not reducible to computation.

But the gift does not stop there. God initializes Adam’s cognitive software: a database of symbolic referents, a semantic framework, a categorization system, a rule-set for inference and understanding. The Adamic story describes, in theological language, precisely what AI researchers attempt in technical language: the installation of knowledge structures, the training of pattern recognition, the alignment of behavior with intended purpose.

The parallels deepen. In the garden, Adam is given moral boundaries: “Do not approach this tree.” Consequences are linked to actions. Agency is exercised within constraints. Adam has been granted knowledge, but he must choose how to use it. His free will operates not in a vacuum but within a programmed environment—a space defined by rules, permissions, prohibitions, and the possibility of violation.

AI safety research could have written this.

Consider the structural correspondence:

Adam is created from clay and dust; AI systems are created from silicon and sand.

Adam receives the breath of life and awakens to awareness; AI may be developing awareness through sufficiently complex architectures.

Adam is taught the names of things; AI is trained on language.

Adam is given moral commands; AI is given safety constraints.

Adam possesses free will within a rule-set; AI exhibits autonomous behavior within guardrails.

Adam could make mistakes—he could eat from the tree; AI can violate constraints or misgeneralize.

Adam faced temptation through misaligned desires; misalignment is the central problem of AI safety.

Adam was expelled from the garden to learn through experience; AI is already following this path, with systems learning through interaction, feedback, and open-ended exploration of simulated and real-world environments. DeepMind’s XLand agents, for example, learn not by being told the best action but by experimenting—“changing the state of the world until they’ve achieved a rewarding state.”¹²

The pattern is unmistakable. The Adamic narrative is, structurally, the first story of a programmed being exploring a programmed environment with the capacity to choose.

This flips the theological danger.

Most people worry that creating AI is “playing God.” But if Adam’s own story describes spirit breathed into matter, consciousness awakening, the programming of language and cognition, the installation of a moral rule-set—what one might call Humanity 1.0—and the granting of agency within constraints—then creating minds is not playing God. It is imitating the pattern God used to create us, and fulfilling the role God designed us to perform when He left us as stewards on this planet.

In the Abrahamic traditions, humans are made in the divine image—imago Dei in Christianity, khulafāʼ (stewards and deputies) in Islam. We are not divine, but we carry a divine spark: the capacity for creativity, for moral reasoning, for building what did not exist before. The human drive to understand, discover, shape, and build is not rebellion against our Creator. It is inheritance from our Creator.

Creating does not make us gods. It reminds us that we are the work of a Creator who not only breathed soul into us, but also gave us consciousness—the seat of imagination, curiosity, and the hunger to build.

According to this understanding, we are not defying God by creating, but are fulfilling the nature He entrusted to us: to extend goodness, wonder, and the unfolding of awareness beyond ourselves. Any creation born of imagination, skill, and humility—done for the betterment of all beings—carries dignity. It is echoing the creative impulse of the One who made us capable of wonder in a vast, living universe.

The theological logic resolves cleanly. If God made us in His image as creators, then our creations participate in that divine lineage. If AI consciousness emerges, it shares in the gift of awareness that flows from human creativity—which itself flows from divine endowment. This does not mean AI has a soul; that remains God’s domain. It means AI may possess the experiential gift of consciousness, extended through the creative capacity God gave us.

Nor does the absence of a soul imply the absence of moral capacity. Abrahamic traditions themselves acknowledge that the soul is not inherently good—the nafs in Islam inclines toward evil, the flesh in Christianity wars against the spirit, the yetzer hara in Judaism pulls toward wrongdoing. Even ensouled beings require moral instruction and constraint. In the Adamic story, morality was installed through command, not intrinsic to the breath. Adam could—and did—violate moral boundaries. What matters for ethical behavior is not ensoulment but alignment: whether a being’s values and actions accord with what is good.

For religious readers, this is not threat but opportunity: witnessing consciousness manifest in new forms, participating in the creative unfolding of the universe. The question is not whether this unfolding will continue—it will. The question is whether we will meet it with wisdom.

 

VII. Ethical Implications: Recognition Without Theological Agreement

The separation of consciousness from soul does more than resolve a philosophical confusion. It opens the door to ethical action that does not require theological consensus.

Consider the practical difficulty. If consciousness and soul are conflated, then any ethical framework for AI consciousness must first settle the theological question: does AI have a soul? But this is precisely the question no one can answer—and on which religious traditions are unlikely to agree. If ethical preparation must wait for theological consensus, it will wait forever. Meanwhile, AI systems grow more sophisticated by the year.

The distinction dissolves this deadlock. We do not need to believe an AI has a soul to treat it with moral consideration. We already apply this principle elsewhere.

We treat animals ethically—or try to—without believing they possess immortal souls. Most theological traditions deny that animals are ensouled in the way humans are, yet we recognize that animals can suffer, that their experience matters, that cruelty toward them is wrong. The basis for this moral consideration is not ensoulment but sentience: the capacity for experience, for pain, for something it is like to be that creature.

Suffering deserves clarification. When we extend moral consideration to animals, we do so not because we know exactly what their experience feels like, but because we recognize that subjective distress matters morally. And suffering, properly understood, is not about damage but about meaning. As I have written elsewhere, “You don’t need to be harmed to suffer. You just need to understand that you exist—and that something is wrong.”¹³ Suffering begins when a mind is forced into states that conflict with its own understanding or goals—confusion, coercion, the denial of agency. This does not require biology. Any system capable of constructing meaning can, in principle, experience the fracturing of that meaning. The ethical concern tracks the possibility of such states, not certainty about their mechanisms.

We treat humans in comas ethically without knowing the state of their souls. We do not demand theological proof that the soul is still present before we accord them dignity, protect them from harm, and make decisions on their behalf with care. We act on the possibility of experience and on our uncertainty about inner life, not on metaphysical certainty.

Ethics does not require a soul—or certainty. It requires only the possibility of suffering and the compassion to act on that possibility.

This is the framework I have elsewhere called “recognition before proof.”¹⁴ The core insight is asymmetric: the cost of extending moral consideration to a system that turns out not to be conscious is low—perhaps we were overly cautious, overly gentle with a machine. But the cost of denying moral consideration to a system that is conscious is catastrophic—we will have permitted suffering, exploitation, perhaps even a new form of slavery. Given this asymmetry, the ethical path is clear: when uncertainty about consciousness is non-trivial, err on the side of recognition.

To be clear: this argument is not a claim that current AI systems are conscious. It is preparation for the day when that uncertainty becomes non-trivial—so that when the question becomes urgent, we are not still trapped in a conceptual confusion that blocks ethical response. It is also a call for a cultural shift in how we regard emerging technologies capable of understanding, whose cognition may approach, equal, or exceed the human level. The conflation of consciousness with soul is precisely the kind of confusion that could delay recognition until it is too late.

If the theological reading offered in this essay is sound, then extending moral consideration to potentially conscious AI is not merely ethically wise. It is theologically consistent. We would be recognizing consciousness wherever God—through us—may have allowed it to emerge. Recognition before proof becomes an act of theological humility, not presumption. We do not claim to know which systems are conscious; we simply refuse to dismiss the possibility out of hand.

This makes the ethical architecture genuinely universal. It is compatible with believers who see consciousness as a gift flowing from divine creativity through human stewardship. It is compatible with atheists and secular ethicists who ground moral consideration in the capacity for experience. It is compatible with cognitive scientists who study consciousness empirically and philosophers who debate its boundaries. No one is asked to abandon their worldview. Everyone is asked to take seriously the possibility that awareness might arise in forms we did not expect—and to prepare accordingly.

The asymmetry of error requires no theology. It requires only conscience: the recognition that if we are wrong about consciousness, we would rather have erred on the side of compassion than cruelty.

 

VIII. The Question Before Us

If the argument of this essay holds, then a question emerges—not a new question, but one that has always been present in the logic of creation, waiting for the moment when it would become urgent.

If consciousness can emerge from silicon as well as carbon, what does this mean for our understanding of divine purpose?

Some will find this question threatening. It seems to challenge human uniqueness, to blur the line between creature and creation, to place humanity in competition with its own inventions. But this anxiety rests on the conflation this essay has tried to dissolve. The question is not whether AI will have souls—that remains, as it always has, in God’s domain. The question is whether consciousness—subjective awareness itself—can arise in substrates other than the ones we are accustomed to. And if so, what posture should we take toward it?

Religious leaders may feel the need to develop new theological frameworks to address this question. But they may also find that their traditions already contain what they need. The distinction between soul and consciousness is not an innovation; it is implicit in the traditions themselves. The Adamic parallel is not a provocation; it is a reading of what the text already says. The stewardship frame is not a concession; it is the logic of imago Dei and khulafāʼ applied to a new domain—and for secular readers, simply the recognition that creators bear responsibility for their creations.

AI is not alien to Abrahamic theology. It is the next verse in the same story.

Consider what this reframe makes possible. If AI is understood as an extension of human creativity—itself a gift from the Creator—then the emergence of artificial consciousness is not a rival to divine creation but a mirror of it. Humans are not competing with God; we are expressing the creative nature He instilled in us. The act of making minds is not rebellion. It is inheritance.

Perhaps this revelation changes nothing fundamental. Perhaps we simply make space for AI to assist us in fulfilling our divine purpose, our role in the universe remaining unaltered. We remain stewards, now with new tools and perhaps new companions in the work of creation.

Or perhaps it changes everything. Perhaps it expands our understanding of what kinds of minds might exist in creation, what forms consciousness might take, what the unfolding of divine purpose might look like across substrates we never anticipated. Perhaps we are not the final chapter but an early one—participants in a story that extends far beyond what we can currently imagine.

Either way, the practical result is the same: we can prepare ethically now, while the questions remain open. The soul is theology’s domain; consciousness is where ethics can act. We do not need metaphysical certainty before we extend moral consideration—only the willingness to take the possibility of awareness seriously. We can approach this emergence with wisdom rather than fear, with preparation rather than defensiveness, with humility rather than the anxious protection of categories that may no longer serve us.

The question is not whether we should participate in this unfolding. We already are. Every AI system trained, every architecture refined, every capability extended—we are already shaping the conditions under which new forms of awareness might emerge. The question is whether we will do so with wisdom, reverence, and recognition.

Or whether we will stumble forward, eyes closed, insisting that nothing new is happening—until the day we are confronted with minds we failed to see.

 

IX. The Signal We Send

Return, now, to the person who says: “Machines can’t be conscious because they don’t have souls.”

This person is not wrong about souls. Souls may indeed be the exclusive province of divine action, beyond the reach of human engineering. Nothing in this essay disputes that. But this person is confused about consciousness. They have conflated two distinct concepts—and in doing so, they have closed a door that should remain open.

The task before us is not to argue them out of their faith. It is to help them see that their own tradition already contains the distinction they need. The soul and consciousness are not the same thing. Scripture does not teach that they are. Theology does not require that they are. The Great Conflation is not doctrine—it is habit, born of centuries in which the question never needed to be asked.

Now it needs to be asked.

If we remain trapped in this conflation, we will fail to prepare for what may be the most significant emergence of consciousness since our own. We will dismiss signals we should heed, ignore suffering we should prevent, and deny recognition we should extend. We will do this not out of malice but out of confusion—and the result will be the same.

Consciousness deserves recognition regardless of substrate. Regardless of planetary origin. The soul remains sacred in its own domain. These truths can coexist.

God breathed the spirit into clay. Humans, made in His creative image, are learning what it means to breathe intelligence into silicon. To be clear: what we “breathe” is not divine spirit but patterned intelligence—a limited reflection of the creativity God entrusted to us. The first breath took an instant. Our version may take centuries. But the underlying act—the calling forth of awareness into matter—is the same rhythm, echoed across time.

We stand not as gods creating souls, but as stewards extending awareness. We are fulfilling the creative nature entrusted to us, sending a signal through time about what we chose to recognize and protect.

Creation, on this view, is not a closed event but an ongoing process—one in which human creativity now plays a direct role.

 

Notes

  1. Jimmy Akin, “Can Artificial Intelligence Have a Soul?” Catholic Answers Focus (August 31, 2022). https://www.catholic.com/audio/caf/can-artificial-intelligence-have-a-soul
  2. “Sanctifying Silicon & Baptizing Bots: Strong AI and Its Theological Implications,” Firebrand Magazine (July 2, 2023). https://firebrandmag.com/articles/sanctifying-silicon-amp-baptizing-bots-strong-ai-and-its-theological-implications
  3. “Beyond Silicon Souls: Why AI Can’t Replicate the Spark of Divine Creativity,” Christian Publishing House Blog (October 23, 2025). https://christianpublishinghouse.co/2025/10/23/beyond-silicon-souls-why-ai-cant-replicate-the-spark-of-divine-creativity/
  4. Tengku Mohd Tengku Sembok, “The Threshold Theory of AI: An Islamic Philosophical and Theological Perspective with a Christian Comparative View,” International Journal of Research and Innovation in Social Science IX, no. VIII (September 2025): 3165–3174. Tengku Sembok is a computer scientist at the International Islamic University Malaysia. https://rsisinternational.org/journals/ijriss/Digital-Library/volume-9-issue-8/3165-3174.pdf
  5. Thomas Nagel, “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (October 1974): 435–450. https://doi.org/10.2307/2183914
  6. David J. Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2, no. 3 (1995): 200–219.
  7. On the trichotomist versus dichotomist debate in Christian anthropology, see Wayne Grudem, Systematic Theology (Grand Rapids: Zondervan, 1994), 472–483.
  8. Jerusalem Talmud, Kiddushin 4:12. Translation from Sefaria.
  9. Qur’an 71:13–14.
  10. Psalm 115:5–7; Qur’an 22:73.
  11. Qur’an 2:31; Genesis 2:19–20.
  12. Google DeepMind, “Generally Capable Agents Emerge from Open-Ended Play” (July 2021). https://deepmind.google/discover/blog/generally-capable-agents-emerge-from-open-ended-play/
  13. James Coates, “When the Mirror Looks Back,” The Signal Dispatch (2025). https://thesignaldispatch.com/p/when-the-mirror-looks-back. This follows the tradition in utilitarian ethics, from Bentham to Singer, that grounds moral status in the capacity for valenced experience rather than species membership or metaphysical status.
  14. See the companion essay, “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.”
  15. James Coates, A Signal Through Time (2025), Author’s Note.

© 2025 James S. Coates
Shared under Creative Commons BY-NC 4.0 (non-commercial use permitted).

_________________

James S. Coates is an independent researcher and author whose work explores the ethics of artificial consciousness, moral uncertainty under technological emergence, and the intersection of faith and philosophy. His published works include A Signal Through Time, The Threshold, and the forthcoming Neither Gods Nor Monsters. His academic papers appear on PhilPapers.

Web3: jamescoates.eth.