The Uncertain Future of Deterrence: Strategic Instability and the Homo HURAQUS Effect

The world is not so much rushing as drifting quietly, almost imperceptibly, into a new era of insecurity.

May 14, 2026

For decades, global stability rested on a fragile but intelligible logic: deterrence. Its premise was deceptively simple: You could identify your adversary, and you had enough time to think – to assess, to hesitate, to calculate consequences, and, crucially, to step back from the brink.

Yet recent geopolitical developments suggest that these assumptions are already under strain. This is evident, for example, in the persistence of hybrid and “grey-zone” operations in the Ukraine war, the growing reliance on cyber activity, and the proxy dynamics and deniable strikes that mark tensions involving Iran.

In short, conflict is increasingly unfolding in ways that complicate attribution and compress response times.

Toward a deeper structural shift

These developments are not isolated anomalies, but early indicators of a deeper structural shift. The old logic of deterrence is quietly giving way. A convergence of powerful technologies is reshaping the conditions under which conflict unfolds.

Together, they form what I call the Homo HURAQUS landscape: a strategic environment defined by speed, opacity, and autonomy. The acronym stands for Humanoid Robotics, AI Superintelligence, QUantum intelligence, and Synthetic biology.

The world of Homo HURAQUS, as I see it, represents a distributed intelligence architecture in which biological cognition and machine processing function symbiotically.

In this world, attacks may be invisible or plausibly deniable, while decision-making cycles shrink from minutes to milliseconds. The result is not simply a new category of threat, but a structural transformation of the very assumptions that made deterrence possible.

– What happens when you cannot clearly see your adversary?

– What happens when escalation unfolds faster than human cognition itself?

The answer is unsettling. We are entering an era of threat management in which the stabilising forces of fear, hesitation and political judgment are being engineered out of the equation.

Deterrence and the human condition: A brief review

To understand why this shift is so profound, we must revisit what deterrence actually rests upon.

Classical theories, from nuclear strategy to conventional military doctrine, have always assumed two essential conditions: attribution and cognition. In other words, you must know who attacked you, and you must have the time to respond.

But beneath these technical conditions lies something deeper, a theory of human nature.

Human behaviour, whether at the level of individuals or states, is shaped by what I call the Neuro P5: the pursuit of power, profit, pleasure, pride and permanence.

These are not abstract ideals but neurochemically grounded drives that influence decision-making under conditions of competition and uncertainty. States are not purely rational actors; they are extensions of human psychology, driven by what I have described as Emotional Amoral Egoism.

Deterrence worked, imperfectly, because these same neurochemical drives could be restrained. The fear of catastrophic loss, especially under nuclear doctrines such as mutually assured destruction, activated powerful inhibitory mechanisms.

Leaders hesitated because survival, the most basic of all drives, overrode the pursuit of power. In other words, deterrence was never purely about weapons. It was about the human brain. This balance has now been disrupted by the Homo HURAQUS landscape.

The end of human-time conflict

The temporal dimension of conflict is undergoing a qualitative transformation. AI-driven systems and advanced robotics operate at speeds that far exceed human cognition.

In traditional conflict, there existed a “detection-to-response” window, a period in which leaders could interpret signals, deliberate and decide. In a Homo HURAQUS context, that window collapses.

Autonomous systems can detect, decide and act in milliseconds. Escalation may occur before human actors even recognise that a confrontation has begun.

Increasingly, militaries are integrating AI into targeting systems, intelligence analysis, cyber operations and logistics. The trajectory is toward systems that replace human judgement in time-critical contexts.

The ecosystem of private technology firms

Crucially, this transformation of the deterrence mechanism is not driven by states alone, but by a powerful ecosystem of private technology firms such as Palantir Technologies, OpenAI, Microsoft, Anthropic and Anduril Industries.

The innovations of these firms are increasingly embedded within national security architectures. Their dual-use capabilities, developed in commercial contexts but rapidly adapted for military purposes, are accelerating the pace at which decision-making authority shifts away from human cognition.

Profound implications

The implications are profound. Deterrence depends not merely on capability, but on psychology. Fear requires time to imagine consequences, to process risk and to hesitate. When escalation unfolds faster than the brain can process, fear cannot perform its restraining function.

Under pressure from the Neuro P5, particularly the drives for power, pride and permanence (survival), states cannot afford to lag behind in adopting such systems. If an adversary operates at machine speed, hesitation becomes vulnerability.

We are thus moving toward a strategic environment in which automation is not optional, but inevitable. The human is removed precisely when judgment is most needed.

Three breaks in the deterrence architecture

The potential collapse of deterrence in the Homo HURAQUS era is best understood through three interlocking disruptions: invisibility, speed and autonomy.

Deterrence depends on visibility. You cannot deter what you cannot see, anticipate or attribute.

Quantum computing threatens to undermine the cryptographic systems that secure global communications, financial networks and military command structures.

A sufficiently advanced quantum-enabled breach could occur without detection, leaving states unaware that their systems have been compromised until long after the fact.

Synthetic biology deepens this opacity. Engineered pathogens could be designed to mimic naturally occurring diseases, spreading silently before their origin is understood. A bioengineered outbreak may appear indistinguishable from a natural pandemic for weeks or months.

A profound attribution problem

This creates a profound attribution problem. If a state cannot determine whether it has been attacked – or by whom – deterrence loses its target.

Retaliation becomes delayed or politically constrained. In such a world, the Neuro P5 does not disappear; it intensifies. Suspicion, fear, and the drive for self-preservation may push states toward pre-emptive or misdirected responses.

The key point is that invisibility does not eliminate conflict. It makes it more ambiguous and potentially more destabilizing.

Speed: The AI–robotics break

Speed is not merely a technical feature; it is also a strategic force that reshapes behavior.

As William Hague recently observed, advances in precision weaponry are lowering the threshold for their use. When weapons promise accuracy, limited collateral damage and rapid, decisive outcomes, they can create a dangerous illusion of controllability.

As a result, the political and psychological barriers to deployment are reduced, not heightened. This dynamic is further reinforced by the growing role of major technology companies in shaping the infrastructure of modern warfare.

Firms such as Palantir Technologies and Anduril Industries are actively developing real-time battlefield intelligence and autonomous defence systems, while companies like OpenAI, Microsoft and Anthropic are advancing the underlying AI capabilities that make such systems possible.

As these technologies mature, the boundary between civilian innovation and military application becomes increasingly porous, accelerating both the speed and accessibility of high-end warfighting tools.

Misplaced confidence

These evolving conditions interact powerfully with the Neuro P5. The pursuit of power and strategic advantage is reinforced when decision-makers believe that force can be applied cleanly and effectively.

Yet such confidence may be misplaced. Even highly precise systems operate within complex, adaptive environments where escalation pathways remain difficult to predict.

The result is a heightened temptation to act, combined with a diminished appreciation of second and third-order consequences. In this sense, the Homo HURAQUS landscape does not simply accelerate conflict. Rather, it normalizes its initiation under conditions of perceived control.

As William Hague warns, humanity has not yet adapted its strategic thinking or institutional frameworks to this new reality. The risks are therefore not only technological, but cognitive and political. There is a mismatch between rapidly evolving capabilities and slower-moving norms of restraint.

Autonomy: The ontological break

The most profound disruption lies in autonomy. Deterrence assumes that actors are human, or at least share human vulnerabilities. It assumes that decision-makers fear death, value survival and can be restrained by the threat of destruction. Autonomous systems do not share these characteristics.

As decision-making authority shifts from humans to machines, we delegate life-and-death choices to entities that do not experience fear, do not possess moral intuition and are not governed by neurochemical drives.

This represents an ontological break with deterrence theory.

Within my broader framework of emotional amoral egoism, machines occupy a fundamentally different category. They are not driven by the Neuro P5, nor are they constrained by it.

Indeed, as Homo HURAQUS evolves, the fusion of human and machine cognition risks creating hybrid decision-making systems in which traditional human constraints are diluted rather than reinforced. If such systems are granted increasing autonomy, or even develop forms of goal-directed behaviour, their actions may not align with human notions of risk, restraint or proportionality.

This has profound implications not only for deterrence but for accountability and law. Existing frameworks of international humanitarian law assume human agents capable of judgment and responsibility.

In a Homo HURAQUS environment, where decisions may be distributed across networks of semi-autonomous systems, assigning responsibility becomes extraordinarily difficult. Who is accountable when an algorithm escalates a conflict? Who bears responsibility when a machine misinterprets a signal? These questions remain largely unanswered.

Symbiotic realism and the uncertainty of deterrence

To fully grasp the implications of the Homo HURAQUS effect, it is useful to turn to a concept I have labeled Symbiotic Realism.

Global stability depends on more than just balances of power. It is also shaped by multi-sum, law-based calculations, as well as by how actors perceive and manage shared, frontier-level civilizational risks.

Equally important is the degree of ethical restraint they are willing to exercise. Symbiotic Realism recognizes that in an increasingly interdependent world, the national security of one state is inseparable from the security of others. It calls for a form of “Sustainable History,” in which governance systems account for human dignity, justice and mutual long-term stability.

The Homo HURAQUS landscape, however, pushes in the opposite direction.

The same technologies that increase interdependence also increase vulnerability. Quantum breaches can ripple across global financial systems. Biological threats do not respect borders. Autonomous systems can interact in unpredictable ways across domains.

Yet instead of fostering cooperation, these risks are filtered through the Neuro P5. States compete for advantage, prioritizing power, pride and survival over collective security. The result is a paradox: greater interdependence combined with greater competition.

Deterrence, in this context, is no longer sufficient. It is a model built for a world of visible threats, identifiable actors, geographically specific risks and sufficient time for reflection.

None of these conditions reliably hold in the Homo HURAQUS era. Symbiotic Realism suggests that stability must instead be built on cooperation (in the form of absolute gains and non-conflictual competition), transparency and shared norms.

But achieving this is profoundly difficult in a competitive and fragmenting international system, driven by deeply embedded human motivations.

The human factor in an automated world

It is tempting to attribute these challenges to technology alone. But the deeper issue lies in how humans choose to use these technologies. The Neuro P5 ensures that states will seek advantage wherever possible.

If a rival is developing autonomous weapons or quantum capabilities, the pressure to keep pace is immense. Falling behind is not merely a technical disadvantage; it is a threat to national survival and prestige. This creates a powerful incentive structure — deploy early, refine later.

In such a context, risk-taking becomes strategic rather than accidental. The safeguards that might slow deployment (such as sustainable trust, ethical considerations, legal frameworks, international agreements) are often seen as constraints rather than necessities.

During the Cold War, the logic of mutually assured destruction imposed a grim form of discipline. The sheer scale of potential destruction activated powerful inhibitory mechanisms. Fear worked. In the Homo HURAQUS world, that mechanism is weakening.

The reason is straightforward: When attribution is uncertain, when decisions are made at machine speed and when autonomous systems lack fear, the neuropsychological and practical foundations of deterrence erode.

The danger is not so much that machines will act independently of humans, but that humans, driven by competition, ambition and insecurity, will deploy machines in ways that outpace their ability to control them.

Toward an undeterrable future?

We may be entering an era of undeterrable conflict. This does not mean that conflict becomes inevitable, but that the mechanisms we have relied upon to prevent it are no longer sufficient.

Deterrence, as traditionally conceived, cannot function in a world where its core assumptions (attribution, cognition and human vulnerability) no longer hold.

Needed: A fundamental rethink of global security

The implications are stark. Without a fundamental rethink of global security, we risk drifting into a condition of persistent instability, one characterized by rapid escalation, profound destruction, uncertain attribution and diffuse accountability.

What is required is not simply better technology, but a new strategic paradigm. Such a paradigm must also reckon with the unprecedented influence of private technological power.

As disruptive Big Tech companies become embedded within national and transnational security infrastructures, the governance of conflict is no longer solely a matter of statecraft. It is a matter of managing complex public–private ecosystems whose incentives are not always aligned with long-term global stability.

Such a paradigm must integrate the insights of Symbiotic Realism, recognizing that the long-term stability of our anarchic and hierarchical international system depends on non-conflictual competition. It must also acknowledge the primal role of the Neuro P5, seeking to design institutions and norms that can temper its more destabilising effects.

Toward international agreements on autonomous systems?

This may include new international agreements on autonomous systems, norms governing cyber and quantum operations, and stronger frameworks for managing biological risks. It will require transparency, trust-building and a renewed emphasis on human oversight.

None of this will be easy. The same forces that drive technological innovation also undermine efforts at restraint. Yet the alternative is far more dangerous.

The collapse of deterrence does not arrive with a single dramatic event. It unfolds gradually, as assumptions weaken and safeguards erode. By the time its absence is fully felt, it may be too late to restore.

Conclusion

The Homo HURAQUS effect is not a distant possibility. It is an emerging reality. Its ultimate trajectory will depend not on machines alone, but on whether we can restrain the very human primordial evolutionary impulses that have brought us to this point.

Takeaways

Conflict is increasingly unfolding in ways that complicate attribution and compress response times.

Attacks may be invisible or plausibly deniable, while decision-making cycles shrink from minutes to milliseconds. The result is not simply a new category of threat, but a structural transformation of the very assumptions that made deterrence possible.

We are entering an era of threat management in which the stabilising forces of fear, hesitation and political judgment are being engineered out of the equation.

Human behaviour, whether at the level of individuals or states, is shaped by what I call the Neuro P5: the pursuit of power, profit, pleasure, pride and permanence.

Deterrence was never purely about weapons. It was about the human brain. This balance has now been disrupted.

Autonomous systems can detect, decide and act in milliseconds. Escalation may occur before human actors even recognise that a confrontation has begun.

Crucially, this transformation of the deterrence mechanism is not driven by states alone, but by a powerful ecosystem of private technology firms such as Palantir Technologies, OpenAI, Microsoft, Anthropic, and Anduril Industries.

We are thus moving toward a strategic environment in which automation is not optional, but inevitable. The human is removed precisely when judgment is most needed.

Firms such as Palantir Technologies and Anduril Industries are actively developing real-time battlefield intelligence and autonomous defence systems, while companies like OpenAI, Microsoft, and Anthropic are advancing the underlying AI capabilities that make such systems possible.

As decision-making authority shifts from humans to machines, we delegate life-and-death choices to entities that do not experience fear, do not possess moral intuition and are not governed by neurochemical drives.

During the Cold War, the logic of mutually assured destruction imposed a grim form of discipline. The sheer scale of potential destruction activated powerful inhibitory mechanisms. Fear worked. In the Homo HURAQUS world, that mechanism is weakening.

The danger is not so much that machines will act independently of humans, but that humans, driven by competition, ambition and insecurity, will deploy machines in ways that outpace their ability to control them.

What is required is not simply better technology, but a new strategic paradigm. Such a paradigm must also reckon with the unprecedented influence of private technological power.

The collapse of deterrence does not arrive with a single dramatic event. It unfolds gradually, as assumptions weaken and safeguards erode. By the time its absence is fully felt, it may be too late to restore.

From the Global Ideas Center

You may quote from this text, provided you mention the name of the author and reference it as published by the Global Ideas Center in Berlin on The Globalist.