AI Threats
After the first hype waves in 2022–2023, public attention on AI as an existential or near-term threat has largely subsided. Several dynamics explain this shift.
Wow, AI Can Still Be a Threat?
The Predictable Pattern of Attention Cycles
Attention cycles follow predictable patterns. When ChatGPT and similar models became widely known, there was a media frenzy around existential risks, automation fears and philosophical debates about AI alignment. This generated a surge of articles, opinion pieces and policy panels. As with most technological hype cycles, interest peaked and then waned once people realized that the day-to-day impact of AI was more mundane — productivity tools, image generation and coding assistance rather than world-ending scenarios.
Quick Adaptation to New Technologies
Normalization Process
People adapt quickly. After the novelty wears off, AI feels to most people less like a mysterious threat and more like another utility. Most people interact with it through consumer products, which frame AI as helpful and safe. This normalizes its presence and reduces alarm.
Consumer Experience
The framing of AI as a helpful tool rather than a potential threat has significantly contributed to its acceptance in everyday life, making it feel more like a utility than a revolutionary technology.
Lack of Concrete Solutions

The lack of concrete solutions stalled discussion. The early discourse was heavy on alarmist scenarios and light on actionable strategies. Without real policy frameworks or visible societal changes, people eventually tuned out.
When discussions do not produce tangible outcomes or actionable plans, public interest inevitably wanes as people seek more productive conversations.
Displacement by Other Crises
Other crises displaced AI fears. Wars, elections, climate events and economic concerns captured headlines. AI risk became background noise rather than a top concern.
In a world of constant crises competing for attention, AI concerns have been pushed to the background as more immediate threats dominate public discourse.
Shifting Narratives Toward Opportunity
Economic Potential
Narratives shifted toward opportunity. Companies began emphasizing AI's economic potential.
Media Focus
Media coverage increasingly focused on business applications and productivity gains rather than existential risks.
Corporate Messaging
Corporate messaging has reframed AI as a tool for growth and innovation rather than a potential threat.
Current State of AI Concerns
Among experts and policymakers, concerns remain, especially about AI autonomy, misinformation and labor disruption.
Yet for the general public, AI is no longer a daily topic of anxiety. The prevailing mood is pragmatic curiosity rather than fear.
This does not mean the risks have diminished. It only reflects that human attention is finite, and societies often normalize transformative technologies until crises force them back into focus.
The Illusion of Power and Expertise
Many of the individuals and institutions that see themselves as unshakeable guardians of power and expertise live inside an illusion built on fragile infrastructures. Their "identities" are not profound, immutable realities. They are little more than lines in databases — bank accounts, firm registrations, government-issued certificates and compliance rituals designed to give an appearance of order and permanence.
These rituals, enforced through outdated bureaucratic frameworks, convince them that their positions are secure.
They sit in boardrooms, micromanaging spreadsheets, navigating excessively convoluted interfaces that feel frozen in time, as though the world were still running on Windows 3.1. They reassure themselves that layers of compliance, audits and institutional checks make them invulnerable. In truth, those systems are brittle, vulnerable to disruption from forces they neither understand nor control.
The Reality They Ignore
Infrastructure Built on Code
The infrastructure that underpins their power is built on code. Bank ledgers, government databases, corporate registries, and identification systems all exist as data in servers connected to networks that are far less secure than the public is led to believe.
Encryption Vulnerabilities
Encryption is strong, but it is not eternal. The advent of practical quantum computation could render much of today's cryptographic security obsolete in a shockingly short time.
Potential for Collapse
Algorithms capable of decryption at unprecedented speed would collapse the very foundations of banking and governance that these "experts" believe make them untouchable.
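To make the cryptographic point concrete, here is a toy sketch (illustrative only, not real cryptography): RSA's security rests entirely on the difficulty of factoring the public modulus. With tiny numbers, brute-force trial division recovers the private key instantly; with 2048-bit moduli, classical factoring is infeasible, but Shor's algorithm on a sufficiently large quantum computer would make it tractable. The primes and messages below are arbitrary illustrative values.

```python
def trial_factor(n: int) -> int:
    """Return a nontrivial factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    raise ValueError("n is prime")

# A toy RSA keypair built from 8-bit primes (far too small to be secure).
p, q = 211, 223
n = p * q                     # public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent; requires knowing p and q

# An attacker who can factor n recovers the private key outright:
p_found = trial_factor(n)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d_found, n) == msg  # decrypted with the recovered key
```

At this key size the "attack" takes microseconds; at real key sizes it takes longer than the age of the universe on classical hardware. A practical quantum factoring machine would erase that gap, which is the collapse scenario the text describes.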
Two Paths to Collapse
1. Algorithmic Acceleration Gone Rogue
A sufficiently advanced algorithm, designed to optimize or restructure systems, could autonomously decrypt and reconfigure massive portions of global infrastructure.
Such a system would not need to "hack" in the traditional sense — it could simply leverage computational power far beyond human defenses. What these elites believe are permanent records of wealth and status could be rewritten overnight.
2. Malicious Human Power Plays
The more plausible near-term risk is human. Corrupt actors with enough access — state-sponsored groups, powerful corporations, or insider coalitions — could stage an unprecedented heist.
They would not just drain accounts. They could seize control of entire national databases, financial systems, and identity registries, erasing or reallocating assets at will. In such a scenario, the people who now bask in their sense of superiority would discover how little control they truly have.
The Vainglorious Illusion
The so-called experts still act as though compliance rituals and slow, ceremonial processes make them untouchable. They take comfort in their titles, their bank balances, their "proven" institutions. They genuinely believe that their Excel dashboards and regulatory procedures confer real power.
In reality, they are sitting on brittle systems — frameworks of legacy software, outdated security protocols and databases that could be breached, decrypted or rendered meaningless in a few coordinated moves.
The Bitter Irony
The very people who pride themselves on being "responsible stewards of civilization" are the ones least prepared for a world where power shifts from slow, rule-bound hierarchies to entities — human or algorithmic — that can act with speed, precision and scale unimaginable in their narrow frames of reference.
When that shift comes — through quantum breakthroughs, systemic cyberattacks or algorithmic overreach — their vaunted "identities" will be revealed for what they always were: fragile records maintained by fallible systems, not enduring proof of merit or superiority.
Their superiority has always been a performance, reinforced by ceremonial complexity and institutional inertia. They do not see that power today is not in titles or compliance paperwork. It is in the ability to command and reconfigure the systems that hold those titles in place. And they are the last to realize how easily those systems can be taken from them.
The risks are constantly increasing.
Accelerating Model Capabilities
Model capabilities are improving at a much faster rate than societal frameworks for identity, control and accountability. There are several converging factors:
Reasoning depth is outpacing regulation. Each new generation of models shows better planning, reasoning and multi-step decision-making. This makes misuse easier for individuals and groups with malicious intent. The more capable the models become, the more potential they have for autonomous exploitation of financial, political and large-scale digital systems.
Lack of Universal Identity Control
Outdated Identity Systems
No universal identity control framework exists. Current identity systems — passports, bank KYC, social media logins — are outdated and fragmented.
Centralized Institutions
They rely on centralized institutions with weak interoperability.
Lack of Sovereign Verification
None of them allow individuals to verify themselves in a sovereign way that ties identity, finances and data into a single cryptographically secure framework.
Data and Financial Access Issues
Data and financial access remain loosely coupled. People have little real control over their data. Platforms profit from it, while financial systems remain siloed.
AI can now easily impersonate individuals with voice, video and writing. Without a strong verification layer, this creates enormous opportunities for fraud, misinformation and manipulation at scale.
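The verification layer the text calls for can be sketched minimally: a message is trusted only if it carries an authentication tag produced with a key the claimed sender actually holds, so matching someone's voice or writing style is not enough. This is an assumed, simplified design using a shared-secret HMAC; a real identity layer would use public-key signatures instead.

```python
import hashlib
import hmac
import secrets

def sign(key: bytes, message: bytes) -> bytes:
    """Produce an authentication tag binding the message to the key holder."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag was made with this key."""
    return hmac.compare_digest(sign(key, message), tag)

# The legitimate sender holds a secret key.
alice_key = secrets.token_bytes(32)

msg = b"Transfer approved."
tag = sign(alice_key, msg)
assert verify(alice_key, msg, tag)

# An impersonator can reproduce the words, but not a valid tag:
forged_tag = secrets.token_bytes(32)
assert not verify(alice_key, msg, forged_tag)
```

The point of the sketch is the asymmetry: generative models can imitate content perfectly, but cannot forge a tag without the key, which is why cryptographic attribution rather than stylistic detection is the scalable defense.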
Autonomous Systems Amplify Risk
Without meaningful guardrails, autonomous systems amplify risk. As AI tools become able to perform multi-step tasks without human oversight, they can initiate actions in the real world — trading, phishing, lobbying and hacking.
The absence of a robust, decentralized mechanism for sovereign human identity verification means that the power balance is skewing heavily toward entities that control infrastructure. It creates an environment where AI can increasingly act without clear attribution or accountability.
The Need for Convergence

Until identity, data ownership and financial control converge into a user-controlled framework — likely blockchain-anchored, hardware-verified and AI-auditable — the risk will continue to escalate. The world is currently normalizing AI adoption without establishing the prerequisites for secure, democratic oversight.
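The "blockchain-anchored" property mentioned above reduces, at its core, to an append-only hash chain: each record commits to everything before it, so silent rewriting of history becomes detectable. A minimal sketch, with hypothetical record contents chosen for illustration:

```python
import hashlib
import json

def record_hash(prev_hash: str, payload: dict) -> str:
    """Hash a record together with the hash of everything before it."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

class AnchorChain:
    """Append-only hash chain: each entry commits to all prior entries."""

    def __init__(self):
        self.entries = []       # list of (payload, stored_hash)
        self.head = "0" * 64    # genesis hash

    def append(self, payload: dict) -> str:
        self.head = record_hash(self.head, payload)
        self.entries.append((payload, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks every later hash."""
        h = "0" * 64
        for payload, stored in self.entries:
            h = record_hash(h, payload)
            if h != stored:
                return False
        return True

chain = AnchorChain()
chain.append({"id": "user-1", "event": "key-registered"})
chain.append({"id": "user-1", "event": "credential-issued"})
assert chain.verify()

# Rewriting a past record without redoing the whole chain is detectable:
chain.entries[0] = ({"id": "user-1", "event": "key-revoked"},
                    chain.entries[0][1])
assert not chain.verify()
```

This is only the tamper-evidence half of the argument; the "hardware-verified" and "AI-auditable" properties would require secure elements and audit tooling layered on top of such a chain.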
Expanding Control of AI Infrastructure
The control of advanced AI infrastructure has already moved far beyond the original few players like OpenAI, Google DeepMind, and Anthropic.
Several shifts have occurred:
Open-Source Proliferation and Decentralized Compute
Open-source proliferation
Projects like LLaMA, Mistral, and now DeepSeek have shown that highly capable models can be released or leaked, enabling anyone with enough hardware to fine-tune or deploy them. These models no longer require massive corporate infrastructure to operate.
Decentralized compute networks
GPU marketplaces and distributed cloud services make it easier for smaller groups, even individuals, to gain access to large-scale compute resources. This lowers the barrier for developing or running powerful models outside corporate oversight.
Geopolitical Actors in AI Development
Geopolitical actors are active. Countries such as China, Russia, and Iran have clear incentives to build advanced AI systems for military, propaganda, and cyber operations. DeepSeek is one visible example, but many other efforts remain undisclosed. These actors operate without Western-style ethics boards or public-facing accountability.
Corporate Capture by Hidden Interests
Corporate capture by hidden interests. Even major AI firms may be influenced by state contracts, lobbying, or private agendas. Public assurances of "alignment" do not mean that model access and decision-making are free from coercion.
Diverse Motives in AI Development
Military
Defense contractors seeking AI for warfare applications
Authoritarian States
Nations seeking control and surveillance capabilities
Financial Interests
Speculative investors seeking profit from AI advances
Ideological Groups
Extremists seeking to leverage AI for their agendas
The motives are diversifying. Unlike the early rhetoric of AI research as a benevolent scientific pursuit, current stakeholders include military contractors, authoritarian states, speculative financiers, and ideological extremists. Many see AI not as a shared human tool, but as a means of power accumulation.
The Danger of Diffuse Control
This is why the absence of a global identity, data and financial sovereignty framework is so dangerous. The more diffuse and opaque the control of AI infrastructure becomes, the more likely it is that advanced reasoning systems will be weaponized by actors who thrive in the shadows.
DeepSeek is merely the revealed tip of a much larger structure. The reality is that multiple entities are now operating with models at or beyond GPT‑4 capability. None of these groups are under the public scrutiny that OpenAI or Google face.
The Chernobyl Parallel
Some of the threats voiced about uncontrolled AI development bear an unsettling resemblance to the Chernobyl disaster. In that historical case, warnings about containment and safety procedures had been raised well before the incident. They were dismissed due to hierarchy, complacency, and the egotism of those in charge, who assumed their expertise and protocols were infallible. The result was a catastrophic event that spiraled far beyond anyone's ability to control.
Today, AI development mirrors this pattern. The individuals and institutions that gatekeep access to critical systems—banking, governance, corporate databases, global infrastructure—believe their "expertise" and KPI dashboards give them real control. They see themselves as guardians of stability, operating inside compliance frameworks that feel permanent. In reality, these systems are fragile, outdated and poorly prepared for the level of disruption that advanced AI — or malicious actors leveraging it — can unleash.
Warnings Are Already Visible
The possibility of breaches — whether through rogue algorithms, quantum decryption, or deliberate state-level cyberattacks—is not hypothetical. It has been discussed in security circles for years. Yet, just like Chernobyl's ignored warnings, these concerns are brushed aside. Hierarchy, institutional inertia, and ego blind decision-makers to the severity of the risks. They reassure themselves with regulatory frameworks, audits and access controls, failing to see that their systems rest on brittle foundations.
There Is Still a Narrow Window
Time is Limited
Right now, humanity may still have time to redesign containment mechanisms — to implement offline, hardware-level sovereignty for individuals, create globally verifiable safeguards, and ensure that access to AI power is not concentrated in a handful of entities. This is the equivalent of strengthening the reactor's casing before it leaks.
Point of No Return
However, if decisive action is not taken before the breach, the consequences will be irreversible. Once advanced AI or quantum-level decryption systems gain uncontrolled access to critical global frameworks, humanity will not be able to reclaim control. These frail infrastructures — databases, registries, identity systems, bank ledgers — are illusions of permanence. Once compromised, they cannot be restored to a trustworthy state.
The Egotism of "Expertise"
The people who currently maintain these systems pride themselves on their roles as "guardians of order." They believe their KPI analyses, compliance routines, and slow hierarchical decision-making processes prove their mastery. In reality, they are simply caretakers of outdated machinery, analogous to Soviet engineers who believed their strict protocols would prevent disaster.
When the breach comes — whether as a sudden collapse of cryptographic security or a massive coordinated hack — the illusion of their expertise will shatter. The systems they guard are not robust. They are decrepit control panels attached to a reactor that is already overheating.
The Lesson of Chernobyl
Chernobyl was not just a technological failure. It was a failure of culture — of hierarchy, ego, and the refusal to act on warnings. AI is following the same trajectory. The window for containment is closing. Once the "leak" occurs, once hostile actors or autonomous systems bypass the frail gates of today's infrastructure, there will be no way to put the reactor back under human control.
What remains is a choice: act decisively while there is time, or repeat the historical mistake of assuming expertise and hierarchy can prevent catastrophe — until it is far too late.
The Misunderstanding of "Pausing" AI Development
When prominent figures signed open letters or policy proposals calling for a "pause" in AI development as a way to prevent catastrophe, they revealed a fundamental misunderstanding of how technological advancement unfolds in a globalized world. Their assumption was that halting development in the West could somehow contain the risks. This thinking ignored the reality that AI progress is not confined to Silicon Valley or to the academic hubs of Europe.
Global Parallel Development
China
Leading AI research with government backing and strategic focus
North Korea
Developing AI capabilities for strategic advantage
Iran
Investing in AI for military and surveillance applications
Technological development now happens in parallel across multiple geopolitical blocs. Even as these "responsible" voices were signing symbolic moratoriums, advancements were accelerating in places like China, North Korea, Iran, and other nations with strategic incentives to pursue AI supremacy. The rise of DeepSeek already served as a preview: cutting-edge AI can emerge outside of traditional Western frameworks, beyond the oversight of regulators and ethical panels. The notion that a voluntary pause could prevent global AI escalation was naïve at best.
Guiding the Reactor, Not Ignoring It
The metaphor of the atomic reactor is apt. The AI reactor is already heating up, and it is beginning to generate power in unexpected hotspots — places where regulation, ethics, and oversight are weakest. Simply standing back and hoping the reactor does not melt down is suicidal. What is needed is deliberate guidance — building mechanisms to direct the heat into productive, human-guided applications before the system runs out of control.
Why Halting Development Cannot Work
1. Incentives Drive Development
Development will always occur wherever the incentives are strongest. States that perceive AI as a path to power, control, or military advantage will never voluntarily halt progress.
2. Global Trust Deficit
The idea of halting research as a safeguard assumes a level of global trust and cooperation that simply does not exist. Power vacuums invite exploitation. If one bloc pauses, another accelerates.
The Fatal Ignorance of the "Pause" Advocates
Those who advocate for halting AI development are blind to two realities:
  1. Development is unstoppable because geopolitical incentives make it so.
  2. Attempting to suppress progress in some regions merely ensures that authoritarian states will dominate it.
Without equalized, sovereign access to powerful AI tools, the world risks a future where a handful of entities — be they corporations, governments or militarized regimes — gain exclusive control of what is effectively a civilization-scale weapon and engine combined.
The only sustainable path is to grant secure, individual-level access to AI and establish governance frameworks that make its deployment collaborative rather than monopolized. Anything less is akin to watching the reactor overheat while still debating whether we should have allowed fission in the first place.
The Only Viable Path
The real solution is not to halt development but to shape it. AI is analogous to an atomic reactor. You cannot "pause" nuclear reactions by wishful thinking. The power exists, and it will be harnessed. The only rational response is to design robust systems of containment, distribution, and control that prevent the technology from falling entirely into the hands of a few state or corporate actors.
The Key is Equalized Optimization and Sovereignty
Hardware-Level Security
• Individuals must have hardware-level secured AI systems that are verifiably offline when necessary and cannot be remotely hijacked.
Democratic Access
• The capability to use advanced AI must not remain the privilege of the elites, militaries, or corporations. It must be accessible to ordinary people in ways that allow meaningful participation in shaping outcomes.
Offline Sovereignty
• The system must emphasize offline confirmable sovereignty — ensuring that each individual can truly own their AI tools without being dependent on centralized servers or opaque corporations.