Call for Papers

Compressed large language models (LLMs) are increasingly adopted in practice due to their efficiency advantages. However, these models pose new challenges in robustness and security, raising concerns about their reliable deployment in real-world scenarios. We welcome research contributions related to, but not limited to, the following topics:
  • Robustness of Compressed Foundation Models
  • Strategies for Jointly Enhancing Compression Efficiency and Robustness
  • Compression for Robust Foundation Models
  • Efficient and Robust Architectural Designs
  • Robust and Efficient Inference Strategies
  • Privacy and Ethical Considerations in Compressed Models
  • Real-World Case Studies on Compressed Model Security and Privacy
Submission Format: Submitted papers (.pdf format) must use the IJCAI Article Template, be anonymized, and follow the IJCAI 2025 author instructions. The workshop considers two types of submissions: (1) Long Paper [7 pages]; (2) Extended Abstract [4 pages], including figures, tables, and references.
Important: Our workshop will feature a Best Paper Award, and certificates will be presented during the workshop.
Submission Site: https://openreview.net/group?id=ijcai.org/IJCAI/2025/Workshop/Practical-DL
Submission Due: 25 May 2025 (GMT)

Workshop Schedule (TBD)

Organizers

  • Xingyu Zheng (Beihang University)
  • Haotong Qin (ETH Zürich)
  • Aishan Liu (Beihang University)
  • Jie Zhang (Center for Frontier AI Research (CFAR), A*STAR)
  • Jiakai Wang (Zhongguancun Laboratory)
  • Yulun Zhang (Shanghai Jiao Tong University)
  • Olivera Kotevska (Oak Ridge National Laboratory)
  • Xianglong Liu (Beihang University)
  • Michele Magno (ETH Zürich)
  • Dacheng Tao (Nanyang Technological University)

Organizing Committee

  • Kewei Liao (Beihang University)
  • Shenghao Jin (Beihang University)
  • Xudong Ma (Beihang University)
  • Wei Huang (The University of Hong Kong)
  • Mingyuan Zhang (Nanyang Technological University)
  • Zixiang Zhao (Xi'an Jiaotong University)
  • Renshuai Tao (Beijing Jiaotong University)