GameSec 2021

Conference on Decision and Game Theory for Security

October 25-27, 2021, Prague, Czech Republic (Online Conference)


GameSec 2021, the 12th Conference on Decision and Game Theory for Security, will be a fully online conference, October 25-27, 2021.

The conference proceedings will be published by Springer as part of the LNCS series.

GameSec 2021 and Covid-19: Due to the ongoing Covid-19 pandemic, the GameSec 2021 organizing committee has decided to hold the GameSec 2021 Conference as a fully online, interactive event. All matters related to publication and indexing remain unchanged.

Registration is now open.

Registration for the conference is free; please register here by filling out the Google Form.

The program is now online.

Please see the program here. The Zoom link for the conference presentations will be sent to all registered participants during the week of October 18-22.

Description

Modern societies depend on information, automation, and communication technologies more than ever. Managing the security of these emerging systems, many of them safety-critical, poses significant challenges. The 12th Conference on Decision and Game Theory for Security (GameSec 2021) focuses on protecting heterogeneous, large-scale, and dynamic cyber-physical systems, as well as on managing the security risks faced by critical infrastructures, through rigorous and practically relevant analytical methods. GameSec 2021 invites novel, high-quality theoretical and practically relevant contributions that apply decision and game theory, as well as related techniques such as optimization, machine learning, dynamic control, and mechanism design, to build resilient, secure, and dependable networked systems. The goal of GameSec 2021 is to bring together academic and industrial researchers to identify and discuss the major technical challenges and recent results that highlight the connections between game theory, control, distributed optimization, machine learning, economic incentives, and real-world security, reputation, trust, and privacy problems.

Conference Topics include (but are not restricted to):

GameSec solicits research papers that report original results and have neither been published nor submitted for publication elsewhere, on the following and other closely related topics:

  • Game theory, control, and mechanism design for security and privacy
  • Decision making for cybersecurity and security requirements engineering
  • Security and privacy for the Internet-of-Things, cyber-physical systems, cloud computing, resilient control systems, and critical infrastructure
  • Pricing, economic incentives, security investments, and cyber insurance for dependable and secure systems
  • Risk assessment and security risk management
  • Security and privacy of wireless and mobile communications, including user location privacy
  • Socio-technological and behavioral approaches to security
  • Empirical and experimental studies with game, control, or optimization theory-based analysis for security and privacy
  • Adversarial Machine Learning and the role of AI in system security
  • Modeling and analysis of deception and antagonistic intrusion of information flow within a game-theoretic framework

Paper Submission

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.

Keynote Speakers

We are happy to announce two distinguished keynote speakers:
Lorrie Faith Cranor

Bio: Lorrie Faith Cranor is the Director and Bosch Distinguished Professor in Security and Privacy Technologies of CyLab and the FORE Systems Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University. She is also co-director of the Collaboratory Against Hate: Research and Action Center at Carnegie Mellon and the University of Pittsburgh. She directs the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directs the MSIT-Privacy Engineering masters program. In 2016 she served as Chief Technologist at the US Federal Trade Commission, working in the office of Chairwoman Ramirez. She is also a co-founder of Wombat Security Technologies, Inc, a security awareness training company that was acquired by Proofpoint. She has authored over 200 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability (O'Reilly 2005) and founded the Symposium On Usable Privacy and Security (SOUPS). She also co-founded the Conference on Privacy Engineering Practice and Respect (PEPR). She chaired the Platform for Privacy Preferences Project (P3P) Specification Working Group at the W3C and authored the book Web Privacy with P3P (O'Reilly 2002). She has served on a number of boards and working groups, including the Electronic Frontier Foundation Board of Directors, the Computing Research Association Board of Directors, the Aspen Institute Cybersecurity Group, and on the editorial boards of several journals. In her younger days she was honored as one of the top 100 innovators 35 or younger by Technology Review magazine. More recently she was elected to the ACM CHI Academy, named an ACM Fellow for her contributions to usable privacy and security research and education, named an IEEE Fellow for her contributions to privacy engineering, and named a AAAS Fellow. 
She has also received an Alumni Achievement Award from the McKelvey School of Engineering at Washington University in St. Louis, the 2018 ACM CHI Social Impact Award, the 2018 International Association of Privacy Professionals Privacy Leadership Award, and (with colleagues) the 2018 IEEE Cybersecurity Award for Practice. She was previously a researcher at AT&T-Labs Research and taught in the Stern School of Business at New York University. She holds a doctorate in Engineering and Policy from Washington University in St. Louis. In 2012-13 she spent her sabbatical as a fellow in the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University where she worked on fiber arts projects that combined her interests in privacy and security, quilting, computers, and technology. She practices yoga, plays soccer, walks to work, and runs after her three teenagers. Her pandemic pet is a bass flute.


Title: Keeping it real and accounting for risk: usable privacy and security study challenges

User studies are critical to understanding how users perceive and interact with security and privacy software and features, including browser security warnings and private browsing modes, password creation interfaces, ad privacy settings, and website privacy notices. While it is important that users be able to configure and use security and privacy tools when they are not at risk, it is even more important that the tools continue to protect users in situations where their security or privacy may be breached. However, ethically placing users in realistic scenarios in which they are motivated to behave as they would if they actually had something at risk is challenging. In our research at Carnegie Mellon University, we have used a variety of strategies to overcome these challenges and place participants in situations where they believe their security or privacy is at risk, without subjecting them to increased actual harm. For example, in some studies we have recruited participants to perform real tasks not directly related to security so that we can observe how they respond to simulated security-related prompts or cues while they are focused on their primary tasks. In other studies, we have created a hypothetical scenario and tried to get participants sufficiently engaged in it that they are motivated to avoid simulated harm. We have occasionally found (and created) opportunities to observe real, rather than simulated, attacks without researcher intervention. In this talk I will discuss some of these studies and use them to illustrate ways that researchers can conduct usable privacy and security studies that account for risk in a realistic way.




Vincent Conitzer

Bio: Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He is also Head of Technical AI Engagement at the Institute for Ethics in AI, and Professor of Computer Science and Philosophy, at the University of Oxford. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders?
Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).


Title: AI Agents May Cooperate Better If They Don’t Resemble Us

AI systems control an ever-growing part of our world. As a result, they will increasingly interact with each other directly, with little or no potential for human mediation. If each system stubbornly pursues its own objectives, this runs the risk of familiar game-theoretic tragedies -- along the lines of the Tragedy of the Commons, the Prisoner's Dilemma, or even the Traveler's Dilemma -- in which outcomes are reached that are far worse for every party than what could have been achieved cooperatively.
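The tragedy can be made concrete with the standard Prisoner's Dilemma. As a minimal sketch (using the textbook payoff values, which are an illustrative assumption and not taken from the talk), defection is each player's best reply to anything the opponent does, yet mutual defection leaves both players strictly worse off than mutual cooperation:

```python
# Prisoner's Dilemma sketch with the standard textbook payoffs
# (illustrative values, not from the talk). Each player picks
# Cooperate ("C") or Defect ("D"); payoffs[(a, b)] gives
# (row player's payoff, column player's payoff).
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_action):
    """Row player's best reply to a fixed opponent action."""
    return max(["C", "D"], key=lambda a: payoffs[(a, opponent_action)][0])

# Defection dominates: it is the best reply to either opponent action...
assert best_response("C") == "D" and best_response("D") == "D"

# ...so (D, D) is the unique Nash equilibrium, yet both players would
# prefer (C, C) -- the game-theoretic "tragedy".
print(payoffs[("D", "D")], "<", payoffs[("C", "C")])  # (1, 1) < (3, 3)
```

For self-interested strategic agents this outcome is inescapable; the talk's point is that AI agents need not be designed as such agents in the first place.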

However, AI agents can be designed in ways that make them fundamentally unlike strategic human agents. This option is often overlooked, as we are usually inspired by our own human condition in the design of AI agents. But I will argue that this approach has the potential to avoid the above tragedies. The price to pay for this, for us as researchers, is that many of our intuitions about game and decision theory, and even belief formation, start to fall short. I will discuss how foundational research from the philosophy and game theory literatures provides a good starting point for pursuing this approach.

This talk covers joint work with Caspar Oesterheld, Scott Emmons, Andrew Critch, Stuart Russell, and Abram Demski.

Conference Sponsors and Supporters

We thank all our sponsors for their kind support.

GameSec 2021 Proceedings

GameSec 2021 proceedings will be published by Springer as part of the LNCS series. During the conference, the proceedings will be available free of charge online.