GameSec 2025

Conference on Game Theory and AI for Security

October 13-15, 2025, Athens, Greece

Paper Important Dates

  • Submission: June 13, 2025
  • Decision Notification: July 25, 2025
  • Camera-ready: August 29, 2025

General Description

The 16th Conference on Game Theory and AI for Security (GameSec-25) will take place on October 13-15, 2025, in Athens, Greece.

With the rapid advancement of artificial intelligence, game theory, and security technologies, the resilience and trustworthiness of modern systems are more critical than ever. The 2025 Conference on Game Theory and AI for Security focuses on leveraging strategic decision-making, adversarial reasoning, and computational intelligence to address security challenges in complex and dynamic environments.

The conference invites novel, high-quality theoretical and empirical contributions that apply game theory, AI, and related methodologies to security, privacy, trust, and fairness in emerging systems. The goal is to bring together researchers from academia, industry, and government to explore interdisciplinary connections between game theory, reinforcement learning, adversarial machine learning, mechanism design, risk assessment, behavioral modeling, and cybersecurity. Through rigorous and practically relevant analytical methods, the conference aims to advance the understanding and application of AI-driven strategies for securing critical infrastructures and emerging technologies.

Conference Topics

An indicative, non-exhaustive list of topics is given below; the conference welcomes a broad range of contributions exploring the intersection of game theory, AI, and security.
  • Stackelberg and Bayesian games for cybersecurity
  • Mechanism design for secure and resilient systems
  • Multi-agent security games and adversarial interactions
  • Dynamic and repeated games in security applications
  • Coalitional game theory for trust and privacy
  • Evolutionary game theory in cyber defense
  • Game-theoretic models for deception and misinformation detection
  • Auction-based security mechanisms for resource allocation
  • Nash equilibria in adversarial security settings
  • Aggregative games for security
  • Adversarial machine learning and robust AI models
  • Reinforcement learning for cyber defense strategies
  • AI-driven risk assessment and threat intelligence
  • Secure federated learning and privacy-preserving AI
  • AI for zero-trust architectures and intrusion detection
  • Explainable AI in security decision-making
  • Large language models for cybersecurity applications
  • AI-powered malware and phishing detection
  • Automated penetration testing and ethical hacking using AI
  • Game-theoretic approaches for securing IoT and edge computing
  • Security strategies for autonomous systems and UAVs
  • AI-driven attack detection in smart grids and critical infrastructures
  • Secure network protocols and AI-powered anomaly detection
  • Blockchain and game theory for decentralized security
  • Cyber-physical system resilience through game-theoretic modeling
  • Security strategies for smart cities and intelligent transportation systems
  • AI-enhanced situational awareness in cyber-physical environments
  • Incentive mechanisms for cybersecurity investments
  • Human-in-the-loop security and behavioral game theory
  • Trust and reputation models in decentralized systems
  • AI-powered fraud detection in financial systems
  • Privacy-aware mechanism design and data-sharing incentives
  • Economic impact of cyber threats and attack mitigation strategies
  • Psychological and cognitive biases in security decision-making
  • Red teaming and AI-generated attack simulations
  • Robust AI models against adversarial perturbations
  • AI-powered misinformation and propaganda detection
  • Security challenges in generative AI and large language models
  • Ethical AI and fairness in security decision-making
  • AI for detecting and mitigating deepfake threats
  • Secure AI model training and adversarial robustness testing
  • Reinforcement learning under adversarial conditions
  • Game-theoretic approaches to securing blockchain networks
  • AI for decentralized identity and authentication management
  • Security challenges in multi-agent and swarm intelligence systems
  • Incentive-driven security solutions for distributed systems
  • AI-powered smart contract verification and fraud detection
  • Secure consensus mechanisms in blockchain and distributed ledgers
  • AI-driven security in autonomous transportation
  • Game theory for cloud security and access control
  • AI-enhanced cyber resilience in government and military networks
  • AI for misinformation mitigation in social networks
  • AI and game theory applications in healthcare cybersecurity
  • Security in quantum computing and post-quantum cryptography
  • AI-powered cybersecurity solutions for industrial control systems
  • AI in securing 5G/6G and next-generation communication networks

Submission Guidelines

Submission guidelines can be found here.