08:20–08:30 | Opening and Welcome
08:30–09:30 | Keynote I: Ben Zhao (University of Chicago)
Experiences deploying an adversarial ML tool at scale

09:30–10:15 | Session I: Federated Learning Security (Session Chair: Binghui Wang)
09:30: A Performance Analysis for Confidential Federated Learning
Bruno Casella (University of Turin), Iacopo Colonnelli (University of Turin), Gianluca Mittone (University of Turin), Robert René Maria Birke (University of Turin), Walter Riviera (Intel Corp. and University of Verona), Antonio Sciarappa (Leonardo SpA), Carlo Cavazzoni (Leonardo SpA), Marco Aldinucci (University of Turin)
09:45: The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision
Andreas Müller (Ruhr University Bochum), Erwin Quiring (Ruhr University Bochum, ICSI Berkeley)
10:00: LayerDBA: Circumventing Similarity-Based Defenses in Federated Learning
Javor Nikolov (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)
10:15–10:45 | Coffee Break
10:45–11:30 | Session II: Reinforcement Learning Security (Session Chair: Yuchen Yang)
10:45: Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks
Junlin Wu (Washington University in St. Louis), Hussein Sibai (Washington University in St. Louis), Yevgeniy Vorobeychik (Washington University in St. Louis)
11:00: Wendigo: Deep Reinforcement Learning for Denial-of-Service Query Discovery in GraphQL
Shae McFadden (King's College London & The Alan Turing Institute), Marcello Maugeri (University of Catania), Chris Hicks (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Fabio Pierazzi (King's College London)
11:15: Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space
Sanyam Vyas (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Chris Hicks (The Alan Turing Institute)
11:30–12:05 | Session III: GenAI Security (Session Chair: Hongbin Liu)
11:30: Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers
Sivana Hamer (North Carolina State University), Marcelo D'Amorim (North Carolina State University), Laurie Williams (North Carolina State University)
11:45: Subtoxic Questions: Dive Into Attitude Change of LLM’s Response in Jailbreak Attempts (Extended Abstract)
Tianyu Zhang (Nanjing University), Zixuan Zhao (Nanjing University), Jiaqi Huang (Nanjing University), Jingyu Hua (Nanjing University), Sheng Zhong (Nanjing University)
11:55: Beyond fine-tuning: LoRA modules boost near-OOD detection and LLM security (Extended Abstract)
Etienne Salimbeni (EPFL), Francesco Craighero (EPFL), Renata Khasanova (Oracle), Milos Vasic (Oracle), Pierre Vandergheynst (EPFL)
12:05–13:00 | Lunch Break
13:00–14:00 | Keynote II: David Wagner (University of California, Berkeley)
Security for Language Models

14:00–14:30 | Session IV: Privacy in Machine Learning (Session Chair: Yinzhi Cao)
14:00: Gone but Not Forgotten: Improved Benchmarks for Machine Unlearning (Extended Abstract)
Keltin Grimes (Software Engineering Institute), Collin Abidi (Software Engineering Institute), Cole Frank (Software Engineering Institute), Shannon Gallagher (Software Engineering Institute)
14:10: Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models (Extended Abstract)
Hongbin Liu (Duke University), Lun Wang (Google), Om Thakkar (Google), Abhradeep Guha Thakurta (Google), Arun Narayanan (Google)
14:20: Terms of Deception: Exposing Obscured Financial Obligations in Online Agreements with Deep Learning (Extended Abstract)
Elisa Tsai (University of Michigan - Ann Arbor), Anoop Singhal (National Institute of Standards and Technology), Atul Prakash (University of Michigan - Ann Arbor)
14:30–15:00 | Coffee Break
15:00–16:00 | Keynote III: Nicholas Carlini (Google DeepMind)
We are not prepared

16:00–16:40 | Session V: Attacks (Session Chair: Yuchen Yang)
16:00: NodeGuard: A Highly Efficient Two-Party Computation Framework for Training Large-Scale Gradient Boosting Decision Tree
Tianxiang Dai (Huawei European Research Center), Yufan Jiang (Karlsruhe Institute of Technology), Yong Li (Huawei European Research Center), Fei Mei (Huawei European Research Center)
16:15: Regional Video Style Transfer Attack Using Segment Anything Model
Yuxin Cao (Tsinghua University), Jinghao Li (Shandong University), Xi Xiao (Tsinghua University), Derui Wang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Hao Ge (Pingan Technology), Wei Liu (Shenzhen Institute of Information Technology), Guangwu Hu (Shenzhen Institute of Information Technology)
16:30: On Adaptive Decision-Based Attacks and Defenses (Extended Abstract)
Ilias Tsingenopoulos (DistriNet, KU Leuven), Vera Rimmer (DistriNet, KU Leuven), Davy Preuveneers (DistriNet, KU Leuven), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London), Wouter Joosen (DistriNet, KU Leuven)
16:40 | Closing Remarks
Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a powerful tool for security in academia and industry. On the other hand, the security and privacy of deep learning have gained growing attention, since deep learning itself has become a new attack surface. The security, privacy, fairness, and interpretability of neural networks have all been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.
DLSP seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
We accept two types of submissions: full papers and extended abstracts.
Submissions in both tracks must be unpublished work. Papers must be formatted for US letter (not A4) paper, in a two-column layout with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be set in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE S&P Conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
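For illustration, a preamble along the following lines would produce the required layout. This is a minimal sketch assuming the standard IEEEtran LaTeX class; it is not the official IEEE S&P template, which should be preferred when available.

% Minimal sketch of a conforming setup (assumes the IEEEtran class);
% the official IEEE S&P template takes precedence where they differ.
\documentclass[conference,letterpaper,10pt]{IEEEtran}  % two-column, 10 pt, US letter
\usepackage{times}  % Times text font, as required
\begin{document}
\title{Anonymized Submission Title}
\author{\IEEEauthorblockN{Anonymous Author(s)}}  % submissions must be anonymized
\maketitle
\begin{abstract}
One-paragraph abstract goes here.
\end{abstract}
\section{Introduction}
Body text set in two columns of 10-point Times.
\end{document}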
For any questions, contact the workshop organizers at dlsp2024@ieee-security.org.
All accepted submissions will be presented at the workshop. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster, based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper.