Call for Papers

Compressed large language models (LLMs) are increasingly adopted in practice due to their efficiency advantages. However, these models pose new challenges in robustness and security, raising concerns about their reliable deployment in real-world scenarios. We welcome research contributions on topics including, but not limited to:
  • Robustness of Compressed Foundation Models
  • Strategies for Jointly Enhancing Compression Efficiency and Robustness
  • Compression for Robust Foundation Models
  • Efficient and Robust Architectural Designs
  • Robust and Efficient Inference Strategies
  • Privacy and Ethical Considerations in Compressed Models
  • Real-World Case Studies on Compressed Model Security and Privacy
Submission Format: Submitted papers (.pdf format) must use the IJCAI Article Template, be anonymized, and follow the IJCAI 2025 author instructions. The workshop considers two types of submissions: (1) Long Paper [7 pages]; (2) Extended Abstract [4 pages], including figures, tables, and references.

Workshop Schedule (TBD)

Organizers

Xingyu Zheng, Beihang University
Haotong Qin, ETH Zürich
Aishan Liu, Beihang University
Jie Zhang, Center for Frontier AI Research (CFAR), A*STAR
Yulun Zhang, Shanghai Jiao Tong University
Olivera Kotevska, Oak Ridge National Laboratory
Xianglong Liu, Beihang University
Michele Magno, ETH Zürich
Dacheng Tao, Nanyang Technological University