7th Deep Learning Security
and Privacy Workshop
co-located with the 45th IEEE Symposium on Security and Privacy
May 23, 2024


Experiences deploying an adversarial ML tool at scale
Ben Zhao, University of Chicago

Abstract: Generative AI has transformed our society in many ways, including significant harms to human creatives in many fields through its misuse. It has been more than a year since the first release of Glaze in March 2023, an adversarial tool designed to protect individual artists against style mimicry attacks commoditized by platforms such as Civitai. Since its release, Glaze has been downloaded more than 2.6 million times by artists around the globe. In this talk, I will describe some of the challenges we faced deploying tools like Glaze, and lessons learned along the way. I will highlight incorrect assumptions about threat models and user priorities, and discuss current and future plans to protect human creatives across a variety of modalities.

Bio: Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. He completed his Ph.D. at U.C. Berkeley (2004) and his B.S. at Yale (1997). He is a Fellow of the ACM, and a recipient of the NSF CAREER award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), the USENIX Internet Defense Prize, ComputerWorld Magazine's Top 40 Technology Innovators award, the IEEE ITC Early Career Award, and faculty awards from Google, Amazon, and Facebook. His work has been covered by many media outlets, including the New York Times, CNN, NBC, ABC, the BBC, MIT Technology Review, the Wall Street Journal, Forbes, and New Scientist. His current research focuses on mitigating the harms of misused AI. He has published over 190 articles in the areas of security and privacy, machine learning, networking, and HCI. He served as TPC (co-)chair for the World Wide Web conference (WWW 2016) and the ACM Internet Measurement Conference (IMC 2018), and serves on the Steering Committee of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML).

Security for Language Models [Slides]
David Wagner, University of California at Berkeley

Abstract: I will discuss the computer security challenges facing the field as a result of the exciting progress in Large Language Models (LLMs). I will give an overview of emerging threats, including both potential attacks on language models and ways that language models might be used by attackers against people and existing systems. Then, I will discuss potential defenses, opportunities for protecting against these threats, and directions we might expect industry and research to take in the coming years. Finally, I will briefly present our group's research on attacks and defenses for LLMs.

Bio: David Wagner is Professor of Computer Science at the University of California at Berkeley, with expertise in the areas of computer security and adversarial machine learning. He has published over 100 peer-reviewed papers in the scientific literature and has co-authored two books on encryption and computer security. His research has analyzed and contributed to the security of cellular networks, 802.11 wireless networks, electronic voting systems, and other widely deployed systems.

We are not prepared
Nicholas Carlini, Google DeepMind

Abstract: It has now been a decade since the first adversarial examples were demonstrated on deep learning models. And yet we still cannot robustly classify MNIST images better than LeNet-5, or ImageNet images better than AlexNet. But now, more than ever, we need robust machine learning models: robust not only to evasion attacks, but also to poisoning, model stealing, and many other attacks. In this talk I survey our current progress on adversarial machine learning. While we have made many significant advances in making attacks practical, we have made considerably less progress on defenses. Worse, top conferences like S&P continue to accept obviously incorrect adversarial example defenses. Making progress on these challenges will be of the highest importance in the coming years.

Bio: Nicholas Carlini is a research scientist at Google DeepMind studying the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security, and IEEE S&P. He obtained his PhD from UC Berkeley in 2018.

Program (Tentative) – May 23, 2024

All times are in the Pacific time zone (PT).
8:20–8:30 Opening and Welcome
8:30–9:30 Keynote I: Ben Zhao (University of Chicago)
Experiences deploying an adversarial ML tool at scale
9:30–10:15 Session I: Federated Learning Security (Session Chair: Binghui Wang)
9:30: A Performance Analysis for Confidential Federated Learning
Bruno Casella (University of Turin), Iacopo Colonnelli (University of Turin), Gianluca Mittone (University of Turin), Robert Renè Maria Birke (University of Turin), Walter Riviera (Intel Corp. and University of Verona), Antonio Sciarappa (Leonardo SpA), Carlo Cavazzoni (Leonardo SpA), Marco Aldinucci (University of Turin)
9:45: The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision
Andreas Müller (Ruhr University Bochum), Erwin Quiring (Ruhr University Bochum, ICSI Berkeley)
10:00: LayerDBA: Circumventing Similarity-Based Defenses in Federated Learning
Javor Nikolov (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)
10:15–10:45 Coffee Break
10:45–11:30 Session II: Reinforcement Learning Security (Session Chair: Yuchen Yang)
10:45: Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks
Junlin Wu (Washington University in St. Louis), Hussein Sibai (Washington University in St. Louis), Yevgeniy Vorobeychik (Washington University in St. Louis)
11:00: Wendigo: Deep Reinforcement Learning for Denial-of-Service Query Discovery in GraphQL
Shae McFadden (King's College London & The Alan Turing Institute), Marcello Maugeri (University of Catania), Chris Hicks (The Alan Turing Institute), Vasilis Mavroudis (The Alan Turing Institute), Fabio Pierazzi (King's College London)
11:15: Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space
Sanyam Vyas (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Chris Hicks (The Alan Turing Institute)
11:30–12:05 Session III: GenAI Security (Session Chair: Hongbin Liu)
11:30: Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers
Sivana Hamer (North Carolina State University), Marcelo D'Amorim (North Carolina State University), Laurie Williams (North Carolina State University)
11:45: Subtoxic Questions: Dive Into Attitude Change of LLM’s Response in Jailbreak Attempts (Extended Abstract)
Tianyu Zhang (Nanjing University), Zixuan Zhao (Nanjing University), Jiaqi Huang (Nanjing University), Jingyu Hua (Nanjing University), Sheng Zhong (Nanjing University)
11:55: Beyond fine-tuning: LoRA modules boost near-OOD detection and LLM security (Extended Abstract)
Etienne Salimbeni (EPFL), Francesco Craighero (EPFL), Renata Khasanova (Oracle), Milos Vasic (Oracle), Pierre Vandergheynst (EPFL)
12:05–13:00 Lunch Break
13:00–14:00 Keynote II: David Wagner (University of California, Berkeley)
Security for Language Models
14:00–14:30 Session IV: Privacy in Machine Learning (Session Chair: Yinzhi Cao)
14:00: Gone but Not Forgotten: Improved Benchmarks for Machine Unlearning (Extended Abstract)
Keltin Grimes (Software Engineering Institute), Collin Abidi (Software Engineering Institute), Cole Frank (Software Engineering Institute), Shannon Gallagher (Software Engineering Institute)
14:10: Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models (Extended Abstract)
Hongbin Liu (Duke University), Lun Wang (Google), Om Thakkar (Google), Abhradeep Guha Thakurta (Google), Arun Narayanan (Google)
14:20: Terms of Deception: Exposing Obscured Financial Obligations in Online Agreements with Deep Learning (Extended Abstract)
Elisa Tsai (University of Michigan - Ann Arbor), Anoop Singhal (National Institute of Standards and Technology), Atul Prakash (University of Michigan - Ann Arbor)
14:30–15:00 Coffee Break
15:00–16:00 Keynote III: Nicholas Carlini (Google DeepMind)
We are not prepared
16:00–16:40 Session V: Attacks (Session Chair: Yuchen Yang)
16:00: NodeGuard: A Highly Efficient Two-Party Computation Framework for Training Large-Scale Gradient Boosting Decision Tree
Tianxiang Dai (Huawei European Research Center), Yufan Jiang (Karlsruhe Institute of Technology), Yong Li (Huawei European Research Center), Fei Mei (Huawei European Research Center)
16:15: Regional Video Style Transfer Attack Using Segment Anything Model
Yuxin Cao (Tsinghua University), Jinghao Li (Shandong University), Xi Xiao (Tsinghua University), Derui Wang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Hao Ge (Pingan Technology), Wei Liu (Shenzhen Institute of Information Technology), Guangwu Hu (Shenzhen Institute of Information Technology)
16:30: On Adaptive Decision-Based Attacks and Defenses (Extended Abstract)
Ilias Tsingenopoulos (DistriNet, KU Leuven), Vera Rimmer (DistriNet, KU Leuven), Davy Preuveneers (DistriNet, KU Leuven), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London), Wouter Joosen (DistriNet, KU Leuven)
16:40 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline (extended): Feb 9, 2024, 11:59 PM (AoE, UTC-12); originally Feb 2, 2024, 11:59 PM (AoE, UTC-12)
  • Acceptance notification: Mar 4, 2024
  • Camera-ready due: Mar 26, 2024
  • Workshop: May 23, 2024


Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a powerful tool for security in academia and industry. On the other hand, the security and privacy of deep learning have gained growing attention, since deep learning has become a new attack surface. The security, privacy, fairness, and interpretability of neural networks have all been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.

Topics of Interest

DLSP seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Generative AI
  • Foundation models
  • Federated learning
  • Recurrent network architectures
  • Graph neural networks
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Causal inference
  • Deep reinforcement learning
  • Recommender systems
  • Poisoning attacks on deep learning and defenses
  • Adversarial examples against deep learning and defenses
  • Privacy attacks on deep learning and defenses
  • Explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

We accept two types of submissions:

  • Track 1: Archival, full-length papers. Submissions in this track can be up to six pages, plus additional references. Accepted papers in this track will be included in the IEEE workshop proceedings.
  • Track 2: Non-archival, extended abstracts. For this track, we encourage submissions that are forward-looking and explore visionary ideas. We allow concurrent submissions for this track, but authors are responsible for ensuring compliance with the policies of other venues. Submissions in this track can be up to three pages, plus additional references. Accepted papers in this track will NOT be included in the IEEE workshop proceedings, but will be publicly available on this workshop website.

Submissions in both tracks should be unpublished work. Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE S&P conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.

For any questions, contact the workshop organizers at dlsp2024@ieee-security.org.

Best Paper/Extended Abstract Awards

One archival, full-length paper in Track 1 will be selected for the Best Paper Award, and one extended abstract in Track 2 will be selected for the Best Extended Abstract Award. The Best Paper Award carries a prize of $1,500, and the Best Extended Abstract Award a prize of $1,000. These prizes are generously sponsored by Ant Research.

Presentation Form

All accepted submissions will be presented at the workshop. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper.

Submission Site



Program Chairs

Steering Committee

Program Committee

  • Sanghyun Hong, Oregon State University
  • Mu Zhang, University of Utah
  • Jinyuan Jia, The Pennsylvania State University
  • Minghong Fang, Duke University
  • Ziqi Yang, Zhejiang University
  • Binghui Wang, Illinois Institute of Technology
  • Edward Raff, Booz Allen Hamilton
  • Yupei Liu, Duke University
  • Min Du, Palo Alto Networks
  • Giovanni Apruzzese, University of Liechtenstein
  • Yang Zhang, CISPA Helmholtz Center for Information Security
  • Chao Zhang, Tsinghua University
  • Sagar Samtani, Indiana University, Bloomington
  • Yacin Nadji, Corelight, Inc.
  • Erwin Quiring, Ruhr University Bochum, ICSI Berkeley
  • Phil Tully, Google
  • Mohammadreza (Reza) Ebrahimi, University of South Florida
  • Kexin Pei, The University of Chicago and Columbia University
  • Pavel Laskov, University of Liechtenstein
  • Scott Coull, Google
  • Tomas Pevny, Czech Technical University in Prague
  • Fnu Suya, University of Maryland College Park
  • Konrad Rieck, TU Berlin
  • Christian Wressnegger, Karlsruhe Institute of Technology (KIT)
  • Fabio Pierazzi, King's College London
  • Davide Maiorca, University of Cagliari, Italy
  • Samuel Marchal, VTT and Aalto University
  • Tummalapalli Reddy, QlikTech
  • Jason Xue, CSIRO’s Data61
  • Yanjiao Chen, Zhejiang University
  • Ming Li, University of Arizona
  • Qi Li, Tsinghua University
  • Peng Gao, Virginia Tech
  • Yuan Hong, University of Connecticut
  • Nirnimesh Ghose, University of Nebraska-Lincoln
  • Shiqing Ma, UMass Amherst
  • Koray Karabina, National Research Council of Canada
  • Ting Wang, Stony Brook University
  • Bimal Viswanath, Virginia Tech
  • Mathias Humbert, University of Lausanne
  • Alexandra Dmitrienko, University of Wuerzburg