By Esther George and the Zyber Global Research Team
As professionals from all over the world convened for the annual conference of the Global Cyber Security Capacity Centre (GCSCC), the late September sunlight slanted across the glass façade of the Sultan Nazrin Shah Centre at Worcester College, Oxford. This year’s theme, “Securing the Cyber Future: Cyber Resilience in the Age of AI and Geopolitical Uncertainty”, encapsulated both ambition and urgency.
Conversations felt different inside. The days of AI being viewed as a side topic in cybersecurity circles are long gone. It now occupies a central position in the threat landscape and is increasingly at the centre of the defence solutions being developed.
I was invited to speak on Panel 3: The Evolution of Transnational Cybercrime with AI, and the shift in tone across the sector was unmistakable. AI is now the most potent enabler of cybercriminals, not just a supplementary tool. I noted that cybercriminals are increasingly exploiting trust, scale, and speed, all of which AI amplifies, in addition to system vulnerabilities.
A Centre Evolving with the Times
The University of Oxford’s Department of Computer Science is home to the GCSCC, which has long been a cornerstone of international cyber capacity-building. The GCSCC’s Cybersecurity Capacity Maturity Model for Nations (CMM) is the de facto framework for evaluating how prepared countries are to prevent, detect, and respond to cyber threats. For many years, the Centre’s work focused mostly on workforce development, critical-infrastructure protection, CERTs, legal frameworks, and benchmarking and bolstering national capabilities.
However, it was evident at Oxford’s 2025 AI Cybersecurity Conference that the GCSCC’s mission is changing. The Centre is currently addressing the intersection of artificial intelligence and cybersecurity, a field where machine intelligence transforms both the defence and the threat. GCSCC is evolving from a research centre that focuses on “what makes a cyber-mature nation,” to a forum for discussion on how AI is changing global resilience and governance.
This change is appropriate. “AI is making old crimes faster, smarter, and harder to trace,” as I noted in my remarks.

How AI Is Reshaping Cybercrime
Experts on the panels that day discussed the same concern: AI is making it easier for cybercriminals to get started. A generative model and a few prompts can now perform tasks that previously required technical expertise.
Phishing emails are now linguistically flawless and context-aware, free of the grammatical errors that once gave them away. Deepfakes and synthetic identities make social engineering nearly indistinguishable from reality. Criminal organisations now use AI to modify attacks in real time, personalising scams at scale and exploiting psychological triggers more skilfully than ever before.
I told the audience, “AI is a crime multiplier, making existing crimes more efficient and enabling new crimes that were previously too complex or resource heavy.”
This creates an asymmetry for law enforcement that is as much about speed as it is about sophistication. While investigators are still constrained by cumbersome legal and institutional processes, criminals are free to experiment with open-source AI tools.
The Challenge of Law and International Relations
A major focus of my panel was how AI confuses traditional legal frameworks. “Current laws were written for crimes committed by humans, not algorithms,” I contended.
When an AI system “decides” to behave in a way that its designers didn’t intend, who bears the blame? When a neural network produces quantifiable harm but lacks consciousness, how do you attribute intent? These are not academic questions; they sit at the core of contemporary cross-border investigations.
Safe havens have already emerged because jurisdictions are regulating AI at varying rates. Some nations have jumped ahead of others in AI governance, leaving gaps that criminals exploit to conduct cross-border operations with impunity.
“When crimes are committed at machine speed, laws designed for human intent struggle,” I cautioned. “If AI creates gaps in countries’ response capabilities, those gaps will become the new safe havens for cybercriminals.”
The Capacity Divide: A Two-Speed Justice System
I also expressed concern about the possibility of a global divide between countries with and those without AI-enabled investigative capabilities.
I warned: “We must be careful of creating a two-speed justice system where criminals use AI globally and law enforcement remains constrained by local laws and outdated tools.”
Heads nodded around the room. The inability of forensic tools to handle deepfake evidence, courts’ inability to evaluate algorithmic outputs, and training programmes that continue to emphasise legacy cyber threats were among the common frustrations expressed by several delegates.
The GCSCC was established to promote the very ideals of international cooperation, coordinated training, and shared platforms, all of which are necessary to close this gap. Closing it increasingly entails integrating AI-enabled investigation and evidence handling into pre-existing frameworks rather than treating them as an afterthought.
Developing Adaptive, Global Frameworks
The peril of overly rigid regulation was one of the most resonant topics of the day. In my remarks, I urged lawmakers to refrain from enacting rigid rules for a rapidly changing technology.
“Frameworks need to be adaptable,” I stated. “Laws we enact too strictly today may already be outdated tomorrow.”
Instead, I put forth a model that accommodates technological advancement while upholding fundamental ethical tenets, grounded in the principles of transparency, accountability, and proportionality. I also emphasised the importance of public–private collaboration: industry, not government, is home to much of the AI tooling and expertise. Without industry, we cannot construct defences.
The Growing Role of GCSCC
Many of these imperatives are reflected in the recent initiatives of the GCSCC. In addition to traditional cyber indicators, its researchers are investigating how to incorporate AI-specific metrics into national cybersecurity assessments, to gauge preparedness for AI-enabled threats. The Centre is also encouraging discussions that go beyond national frameworks to transnational governance models, acknowledging the unrestricted cross-border flow of AI risk and cybercrime.
This development marks a substantial shift from the Centre’s initial efforts. It reflects an understanding that cyber capacity building must now evolve as quickly as the threats it aims to neutralise.
From Oxford to the World
By the end of the conference, everyone agreed that AI presents both opportunities and challenges. The same algorithms that enable automated phishing and deepfakes can also strengthen digital forensics, identify anomalies, and predict attacks.
“AI is not only the problem; it must also be part of the solution,” I said, calling for balance at the end of my talk. The same algorithms that drive cybercrime can power cyber defence.
“The unequal pace of AI development will widen the gap between criminals and law enforcement, as well as between nations, if we do not act together,” I continued. But if we succeed in acting together, AI can help level the playing field.
In Oxford, that message struck a deep chord. Following the panels, delegates talked about new international alliances, cooperative training initiatives, and pilot projects to speed up capability-building. There was a sincere spirit of collaboration and forward momentum.
The Way Ahead
The work at hand is clear. Capacity-building efforts must be flexible, collaborative, and truly global. In addition to training people, nations must create learning systems. To address machine-speed crime, legal frameworks must change from static rules into dynamic mechanisms. And AI itself needs to be incorporated into all facets of cyber defence, from detection to prosecution.
As I left the GCSCC conference, I felt inspired and challenged. Both the threats and the resolve to confront them are genuine. The work being done at Oxford demonstrates that the cybersecurity community is learning to co-evolve with AI rather than catching up to it.
Because, as I reminded the audience, the question in this new environment is not just, “How secure are our networks?” but also, “How secure are our models, our data, and our assumptions about the adversary?”
Whether we secure not only our systems but also our collective digital future depends on how we collectively respond to that question.
Note from the author: On September 30, 2025, I gave a presentation at the GCSCC AI Cybersecurity Conference held at the University of Oxford. I want to express my gratitude to the session moderators, my fellow panellists on panel 3, the organisers, and all delegates for an insightful discussion.
Global Cyber Security Capacity Centre (GCSCC) - https://gcscc.ox.ac.uk/home-page
GCSCC Hosts AI Cybersecurity Conference: Securing the Cyber Future, ‘Cyber Resilience in the Age of AI and Geopolitical Uncertainty’ - https://gcscc.ox.ac.uk/article/gcscc-hosts-ai-cybersecurity-conference-securing-cyber-future-cyber-resilience-age-ai-and