RegulatingAI Podcast with Sanjay Puri: Dr. Hoda A. Alkhzaimi on Sovereign AI & the Future of Global Governance


Dr. Hoda A. Alkhzaimi, Associate Vice Provost for Research Translation & Innovation at New York University Abu Dhabi, with Sanjay Puri, President of RegulatingAI

On the RegulatingAI Podcast, Sanjay Puri speaks with Dr. Hoda A. Alkhzaimi on sovereign AI & how nations can balance innovation with AI governance.

“AI sovereignty is not just about models. It is about energy, minerals, compute, talent, and data all working together.”
— Dr. Hoda A. Alkhzaimi
WASHINGTON, DC, UNITED STATES, March 11, 2026 /EINPresswire.com/ -- In a special Women’s Day episode of the RegulatingAI Podcast, host Sanjay Puri sat down with cybersecurity expert and policy leader Dr. Hoda A. Alkhzaimi to explore the deeper forces shaping the future of artificial intelligence governance.
Dr. Hoda A. Alkhzaimi is Associate Vice Provost for Research Translation and Innovation at New York University Abu Dhabi and co-chair of the cybersecurity council at the World Economic Forum. In her conversation with Sanjay Puri on the RegulatingAI Podcast, she offered a rare perspective that blends mathematics, cybersecurity, economics, and geopolitics—arguing that AI governance must look far beyond algorithms alone.

From Sovereign Wealth to Cryptography
One of the most fascinating aspects of Dr. Alkhzaimi’s career journey, discussed on the RegulatingAI Podcast with Sanjay Puri, is her transition from sovereign wealth fund management into cryptography research.
While the shift might appear dramatic, she explained that both fields share a common foundation: managing uncertainty. Sovereign wealth funds allocate capital based on probabilistic risk models and strategic foresight. Cryptanalysis, similarly, requires understanding mathematical complexity and identifying structural weaknesses in encryption systems.
For Alkhzaimi, both disciplines revolve around reducing uncertainty through rigorous analysis. This intellectual curiosity eventually led her to co-author independent research analyzing cryptographic systems designed by the National Security Agency.

Trust, Transparency, and Testing Technology
During the RegulatingAI Podcast discussion with Sanjay Puri, Alkhzaimi emphasized that independent verification is essential for building trust in any technological system.
Whether the system is cryptographic infrastructure or an AI model, transparency and auditability are critical. Technologies designed by governments or corporations must withstand rigorous external testing; without independent scrutiny, claims of security or reliability cannot be verified.
This principle, she noted, directly applies to the governance of artificial intelligence.

The Danger of “AI Myopia”
A major theme of the RegulatingAI Podcast conversation was what Dr. Alkhzaimi calls “AI myopia.” Too often, policymakers regulate AI in isolation without considering the broader technological ecosystem.
In reality, AI is deeply interconnected with many other domains—from semiconductors and supply chains to energy systems and biotechnology. According to Alkhzaimi, more than 200 emerging technologies are converging to shape the AI landscape.
Ignoring this convergence can produce incomplete regulations that fail to address real risks, such as resource concentration, geopolitical dependencies, and infrastructure vulnerabilities.

What Sovereign AI Really Means
Another critical issue discussed on the RegulatingAI Podcast with Sanjay Puri was the idea of sovereign AI. While many governments focus on building national AI capabilities, Alkhzaimi argues that sovereignty extends far beyond developing models.
True AI sovereignty includes control over compute infrastructure, energy resources, data governance, mineral supply chains, and talent development. Countries must also consider the sovereignty of datasets, algorithms, and the workforce required to maintain AI systems.
At the same time, she emphasized that national capabilities should not come at the expense of global collaboration.

A Path Forward for the Global South
For countries with fewer resources, Alkhzaimi offered practical guidance during her conversation on the RegulatingAI Podcast. Instead of trying to replicate large-scale AI ecosystems from scratch, emerging economies should focus on strategic governance frameworks.
This includes building transparent AI systems, implementing strong audit standards, ensuring lifecycle accountability for models, and investing in talent development. By combining these policies with economic incentives and cross-border partnerships, nations can still participate meaningfully in the AI economy.

Agility as the UAE’s Governance Strategy
The conversation concluded with insights into how the United Arab Emirates is positioning itself as a leader in AI governance.
According to Alkhzaimi, the UAE’s key strategy is agility—creating flexible regulatory frameworks and innovation sandboxes that allow experimentation while maintaining accountability. This balance between speed and oversight may prove crucial as technological development accelerates.
As the RegulatingAI Podcast discussion between Sanjay Puri and Dr. Hoda A. Alkhzaimi makes clear, the future of AI governance will depend not only on regulating algorithms but also on understanding the complex technological, economic, and geopolitical systems surrounding them.

Upasana Das
Knowledge Networks

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.