GameSec 2025

Conference on Game Theory and AI for Security

October 13-15, 2025, Athens, Greece

Important Dates for Papers

  • Submission: July 7, 2025 (final deadline; extended from June 13 and June 30, 2025)
  • Decision Notification: July 25, 2025
  • Camera-ready: August 20, 2025
  • Author Registration Deadline: August 29, 2025

The conference program is now available for download.

General Description

The 16th Conference on Game Theory and AI for Security (GameSec-25) will take place on October 13-15, 2025, in Athens, Greece.

With the rapid advancement of artificial intelligence, game theory, and security technologies, the resilience and trustworthiness of modern systems are more critical than ever. The 2025 Conference on Game Theory and AI for Security focuses on leveraging strategic decision-making, adversarial reasoning, and computational intelligence to address security challenges in complex and dynamic environments.

The conference invites novel, high-quality theoretical and empirical contributions that apply game theory, AI, and related methodologies to security, privacy, trust, and fairness in emerging systems. The goal is to bring together researchers from academia, industry, and government to explore interdisciplinary connections between game theory, reinforcement learning, adversarial machine learning, mechanism design, risk assessment, behavioral modeling, and cybersecurity. Through rigorous and practically relevant analytical methods, the conference aims to advance the understanding and application of AI-driven strategies for securing critical infrastructures and emerging technologies.

Keynote Speakers

We are happy to announce the following Keynote Speakers:


Marta Kwiatkowska, Professor

University of Oxford, England

Talk Title: Stochastic Games with Neural Perception Mechanisms: A Formal Methods Perspective


Abstract: Strategic reasoning is necessary to ensure stable multi-agent coordination in complex environments, as has been demonstrated in fields such as economics and computer networks. As AI becomes embedded in computing infrastructure, there is a growing need for modelling methodologies to support the development of emerging applications in multi-robot planning or autonomous driving. Stochastic games are a well established model for multi-agent sequential decision making under uncertainty, which has been employed for strategy synthesis as well as formal verification. More recently, however, agents in these models perceive their environment using data-driven approaches such as neural networks trained on continuous data.
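
For readers new to the model, the sketch below is a minimal, hedged illustration of value iteration in a turn-based, zero-sum stochastic game; the states, transitions, and rewards are invented toy data, and it does not implement the neural perception mechanisms the talk addresses.

    import numpy as np

    # Toy turn-based, zero-sum stochastic game: each state is controlled by
    # one player (max or min), transitions are stochastic, and rewards accrue
    # per step. All numbers below are invented for illustration only.
    GAMMA = 0.9
    N_STATES = 3

    # controller[s] = +1 if the maximizing player moves in state s, -1 otherwise.
    controller = np.array([+1, -1, +1])

    # P[s][a]: distribution over next states; R[s][a]: immediate reward.
    P = {
        0: [np.array([0.2, 0.8, 0.0]), np.array([0.6, 0.0, 0.4])],
        1: [np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])],
        2: [np.array([0.3, 0.3, 0.4]), np.array([0.0, 0.2, 0.8])],
    }
    R = {0: [1.0, 0.0], 1: [-1.0, 0.5], 2: [2.0, -0.5]}

    V = np.zeros(N_STATES)
    for _ in range(500):  # value iteration to (approximate) convergence
        V_new = np.empty_like(V)
        for s in range(N_STATES):
            q = [R[s][a] + GAMMA * P[s][a] @ V for a in range(len(P[s]))]
            V_new[s] = max(q) if controller[s] == +1 else min(q)
        if np.max(np.abs(V_new - V)) < 1e-8:
            V = V_new
            break
        V = V_new

    print("state values:", np.round(V, 3))

This shows only the dynamic-programming core; tools such as PRISM-games handle richer concurrent variants and, in recent work, models with neural perception components.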

Biography

Marta Kwiatkowska is a Professor at the University of Oxford and a Fellow of Trinity College. Her expertise lies in probabilistic and quantitative verification techniques and the synthesis of correct-by-construction systems from quantitative specifications. She led the development of the probabilistic model checker PRISM, winner of the 2024 ETAPS Test-of-Time Tool Award, which has been used to model and verify numerous case studies across a variety of application domains. Recently, she has been focusing on safety and trust in artificial intelligence, with an emphasis on robustness guarantees for machine learning. Her research has been supported by two ERC Advanced Grants (VERIWARE and FUN2MODEL), the EPSRC Programme Grant on Mobile Autonomy, and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal, and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, a Fellow of the ACM, a Member of Academia Europaea, and an International Honorary Member of AAAS.




Milind Tambe, Professor

Harvard University and Google DeepMind, USA

Talk Title: Generative AI and Green Security Games for social impact: From conservation to public health


Abstract: For nearly two decades, my team's work on AI for Social Impact (AI4SI) has focused on optimizing limited resources in public health, conservation, and public safety. I will begin by highlighting our work on green security games, which adapts the Stackelberg security game framework to protect natural resources and combat environmental crime. We have used these models in national parks globally, and my talk will focus on our most recent efforts: using generative AI (specifically flow models) to build more accurate models of poacher behavior. We then combine these predictions with game theory to design strategic patrol plans. To address settings with limited data, I will also showcase use of composite flow matching models to aid transfer reinforcement learning. We apply a similar methodology of combining machine learning with resource optimization across our portfolio.
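
As a hedged aside for readers unfamiliar with Stackelberg security games: in the basic model the defender commits to coverage probabilities over targets, and the attacker observes the coverage and best-responds. The sketch below solves a toy instance with the classic multiple-LPs approach; all payoff numbers are invented assumptions, not the green security game models discussed in the talk.

    import numpy as np
    from scipy.optimize import linprog

    # Minimal Stackelberg security game: defender splits m divisible
    # resources as coverage probabilities c over n targets; the attacker
    # attacks a best-response target. Toy payoffs, invented for illustration.
    n, m = 4, 1.5
    Rd = np.array([ 2.0,  1.0,  3.0,  1.5])   # defender reward if attack covered
    Pd = np.array([-3.0, -2.0, -4.0, -1.0])   # defender penalty if uncovered
    Ra = np.array([ 3.0,  2.0,  4.0,  1.0])   # attacker reward if uncovered
    Pa = np.array([-1.0, -1.0, -2.0, -0.5])   # attacker penalty if covered

    best_value, best_cov, best_t = -np.inf, None, None
    for t in range(n):  # assume target t is attacked and solve an LP
        obj = np.zeros(n)
        obj[t] = -(Rd[t] - Pd[t])             # maximize defender utility at t
        A_ub, b_ub = [], []
        for j in range(n):                    # t must be an attacker best response
            if j == t:
                continue
            row = np.zeros(n)
            row[j] = Pa[j] - Ra[j]
            row[t] = -(Pa[t] - Ra[t])
            A_ub.append(row)
            b_ub.append(Ra[t] - Ra[j])
        A_ub.append(np.ones(n))               # resource budget
        b_ub.append(m)
        res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, 1)] * n)
        if res.success:
            value = (Rd[t] - Pd[t]) * res.x[t] + Pd[t]
            if value > best_value:
                best_value, best_cov, best_t = value, res.x, t

    print("attacked target:", best_t)
    print("coverage:", np.round(best_cov, 3))
    print("defender value:", round(best_value, 3))

This enumeration assumes a single attack on one of n independent targets; deployed systems add scheduling constraints and behavioral attacker models on top of this core.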

Biography

Milind Tambe is the Gordon McKay Professor of Computer Science at Harvard University; concurrently, he is also a Principal Scientist at Google DeepMind. Prof. Tambe and his team have developed pioneering AI systems that deliver real-world impact in public health (e.g., maternal and child health), public safety, and wildlife conservation. He is the recipient of the AAAI Award for Artificial Intelligence for the Benefit of Humanity, the AAAI Feigenbaum Prize, the IJCAI John McCarthy Award, the AAAI Robert S. Engelmore Memorial Lecture Award, the ACM/SIGAI Autonomous Agents Research Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Military Operations Research Society Rist Prize, the Columbus Fellowship Foundation Homeland Security Award, and commendations and certificates of appreciation from the US Coast Guard, the Federal Air Marshals Service, and the airport police of the city of Los Angeles. He is a Fellow of AAAI and ACM.




Michael Jordan, Professor

Inria Paris, France and University of California, Berkeley, USA

Talk Title: A Collectivist, Economic Perspective on AI


Abstract: Information technology is in the midst of a revolution in which omnipresent data collection and machine learning are impacting the human world as never before. The word "intelligence" is being used as a North Star for the development of this technology, with human cognition viewed as a baseline. This view neglects the fact that humans are social animals, and that much of our intelligence is social and cultural in origin. Thus, a broader framing is to consider the system level, where the agents in the system, be they computers or humans, are active, they are cooperative, and they wish to obtain value from their participation in learning-based systems. Agents may supply data and other resources to the system only if it is in their interest to do so, and they may be honest and cooperative only if it is in their interest to do so. Critically, intelligence inheres as much in the overall system as it does in individual agents. This is a perspective that is familiar in economics, although without the focus on learning algorithms. A key challenge is thus to bring (micro)economic concepts into contact with foundational issues in the computing and statistical sciences. I'll discuss some concrete examples of problems and solutions at this tripartite interface.
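
To make one sentence of the abstract concrete (agents supply data only if it is in their interest to do so), here is a minimal, invented data-sharing game in which full contribution fails to be an equilibrium; the payoff functions are illustrative assumptions, not anything from the talk.

    import numpy as np

    # Toy data-sharing game: n agents each decide whether to contribute data
    # (cost c to the contributor); all agents enjoy a shared accuracy benefit
    # that grows with total contributions. All numbers are invented.
    n, c = 5, 1.0

    def benefit(k):              # shared benefit from k contributions
        return 2.0 * np.log1p(k)

    def payoff(contribute, k_others):
        k = k_others + (1 if contribute else 0)
        return benefit(k) - (c if contribute else 0.0)

    # Is "everyone contributes" a Nash equilibrium? Check one unilateral deviation.
    k_others = n - 1
    stay = payoff(True, k_others)
    deviate = payoff(False, k_others)
    print(f"contribute: {stay:.3f}  free-ride: {deviate:.3f}")
    print("full contribution is an equilibrium:", stay >= deviate)

With these numbers free-riding pays more, so a mechanism designer must realign incentives (e.g., payments or access rules) before the system-level "intelligence" the abstract describes can emerge.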

Biography

Michael I. Jordan is a researcher at Inria Paris and Professor Emeritus at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He was a winner of a BBVA Foundation Frontiers of Knowledge Award in 2025 and was the inaugural winner of the World Laureates Association (WLA) Prize in 2022. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He has received the Ulf Grenander Prize from the American Mathematical Society, the IEEE John von Neumann Medal, the IJCAI Research Excellence Award, the David E. Rumelhart Prize, and the ACM/AAAI Allen Newell Award. In 2016, Prof. Jordan was named the "most influential computer scientist" worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.




Lorenzo Cavallaro, Professor

University College London (UCL), England

Talk Title: Trustworthy AI... for Systems Security



Abstract: No day goes by without reading about machine learning (ML) success stories in every walk of life. Systems security is no exception, where ML’s tantalizing performance may leave us wondering whether any problems remain unsolved. Yet ML has no clairvoyant abilities, and once the magic wears off, we are left in uncharted territory. Can it truly help us build secure systems? In this talk, I will argue that performance alone is not enough. I will highlight the consequences of adversarial attacks and distribution shifts in realistic settings, and discuss how semantics may provide a path forward. My goal is to foster a deeper understanding of machine learning’s role in systems security and its potential for future advancements.
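
As a hedged illustration of the adversarial-attack phenomenon mentioned above, the numpy sketch below crafts a one-step FGSM-style perturbation against a toy logistic-regression classifier; the model, data, and unconstrained feature space are invented, and real evasion attacks on security classifiers face domain constraints this ignores.

    import numpy as np

    # FGSM-style evasion sketch against a toy logistic-regression classifier.
    # Weights and the input are random toy data; the perturbation nudges the
    # input in the direction that increases the classifier's loss.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=5), 0.1           # toy "trained" model parameters
    x = rng.normal(size=5)                   # an input the model should flag
    y = 1.0                                  # true label (1 = malicious)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Gradient of the logistic loss w.r.t. the input x:
    # d/dx [-y log p - (1-y) log(1-p)] = (p - y) * w
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w

    eps = 0.3
    x_adv = x + eps * np.sign(grad_x)        # one FGSM step in feature space

    print("score before:", round(sigmoid(w @ x + b), 3))
    print("score after: ", round(sigmoid(w @ x_adv + b), 3))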

Biography

Lorenzo Cavallaro grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. He is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab. Lorenzo’s research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. To this end, he and his team investigate the interplay among program analysis abstractions, engineered and learned representations, and grounded models, and their crucial role in creating Trustworthy AI for Systems Security. Lorenzo publishes at and sits on the Program Committees of leading conferences in computer security and ML, and received a Distinguished Paper Award at USENIX Security 2022, an ICML 2024 Spotlight, and the DLSP 2025 Best Paper Award (DLSP is co-located with IEEE S&P). He is also an Associate Editor of ACM TOPS and IEEE TDSC. In addition to his love for food, Lorenzo finds his Flow in science, music, and family.



Conference Topics

An indicative, though not exhaustive, list of topics appears below; the conference welcomes a broad range of contributions exploring the intersection of game theory, AI, and security.

  • Stackelberg and Bayesian games for cybersecurity
  • Mechanism design for secure and resilient systems
  • Multi-agent security games and adversarial interactions
  • Dynamic and repeated games in security applications
  • Coalitional game theory for trust and privacy
  • Evolutionary game theory in cyber defense
  • Game-theoretic models for deception and misinformation detection
  • Auction-based security mechanisms for resource allocation
  • Nash equilibria in adversarial security settings
  • Aggregative games for security
  • Adversarial machine learning and robust AI models
  • Reinforcement learning for cyber defense strategies
  • AI-driven risk assessment and threat intelligence
  • Secure federated learning and privacy-preserving AI
  • AI for zero-trust architectures and intrusion detection
  • Explainable AI in security decision-making
  • Large language models for cybersecurity applications
  • AI-powered malware and phishing detection
  • Automated penetration testing and ethical hacking using AI
  • Game-theoretic approaches for securing IoT and edge computing
  • Security strategies for autonomous systems and UAVs
  • AI-driven attack detection in smart grids and critical infrastructures
  • Secure network protocols and AI-powered anomaly detection
  • Blockchain and game theory for decentralized security
  • Cyber-physical system resilience through game-theoretic modeling
  • Security strategies for smart cities and intelligent transportation systems
  • AI-enhanced situational awareness in cyber-physical environments
  • Incentive mechanisms for cybersecurity investments
  • Human-in-the-loop security and behavioral game theory
  • Trust and reputation models in decentralized systems
  • AI-powered fraud detection in financial systems
  • Privacy-aware mechanism design and data-sharing incentives
  • Economic impact of cyber threats and attack mitigation strategies
  • Psychological and cognitive biases in security decision-making
  • Red teaming and AI-generated attack simulations
  • Robust AI models against adversarial perturbations
  • AI-powered misinformation and propaganda detection
  • Security challenges in generative AI and large language models
  • Ethical AI and fairness in security decision-making
  • AI for detecting and mitigating deepfake threats
  • Secure AI model training and adversarial robustness testing
  • Reinforcement learning under adversarial conditions
  • Game-theoretic approaches to securing blockchain networks
  • AI for decentralized identity and authentication management
  • Security challenges in multi-agent and swarm intelligence systems
  • Incentive-driven security solutions for distributed systems
  • AI-powered smart contract verification and fraud detection
  • Secure consensus mechanisms in blockchain and distributed ledgers
  • AI-driven security in autonomous transportation
  • Game theory for cloud security and access control
  • AI-enhanced cyber resilience in government and military networks
  • AI for misinformation mitigation in social networks
  • AI and game theory applications in healthcare cybersecurity
  • Security in quantum computing and post-quantum cryptography
  • AI-powered cybersecurity solutions for industrial control systems
  • AI in securing 5G/6G and next-generation communication networks

Conference Sponsors and Supporters

We invite you to participate in the sponsor program for GameSec-25. The conference will be held in person on October 13-15, 2025, in Athens, Greece. GameSec is an annual international conference, started in 2010, that focuses on the protection of heterogeneous, large-scale, and dynamic cyber-physical systems, as well as on managing the security risks faced by critical infrastructures, through rigorous and practically relevant analytical methods, especially game-theoretic and decision-theoretic ones. The proceedings of the conference are published by Springer.

The GameSec conference attracts 50-100 students, researchers, and practitioners every year from around the world. Participation in the GameSec sponsor program will give you visibility to this diverse group, which has interest and expertise in security, privacy, game theory, decision theory, and more.

Sponsor benefits include:

  • Sponsor company name and logo displayed on the website and at the venue
  • Opportunity for sponsored awards (best paper and best paper honorable mention)
  • Opportunity to provide a named travel grant
  • Acknowledgment in the opening talk and closing remarks

  • National Technical University of Athens (NTUA), Greece
  • Athena Research Center, Greece
  • The Institute for Systems Research, University of Maryland College Park, USA
  • Springer (Best Paper Award)

Code of Conduct

The GameSec community values Diversity, Equity, and Inclusion (DEI). GameSec's Code of Conduct outlines unacceptable behaviors and the corrective actions that follow them.


GameSec 2025 Proceedings

The GameSec 2025 proceedings will be published by Springer as part of the Lecture Notes in Computer Science (LNCS) series.