Call for Papers

The explosive success of large language models (LLMs) has captured the attention of the deep learning research community. However, LLMs face significant efficiency and reliability challenges in practice. We welcome research contributions related to (but not limited to) the following topics:
  • Model quantization techniques for LLMs
  • Sparse representation and pruning of LLMs
  • Knowledge distillation methods for LLMs
  • Hardware-friendly design of LLMs
  • Efficient neural architecture search for LLMs
  • Federated learning for compressing LLMs
  • Low-rank approximation and tensor decomposition for LLMs
  • Compressed LLMs for resource-constrained hardware and systems
  • Tiny LLMs on edge processors
  • Ethical and policy considerations of LLMs regarding users’ privacy
  • Case studies on real-world privacy challenges and solutions in deploying LLMs
  • Societal impact and broader implications of privacy in LLMs
Submission Format: Submitted papers (.pdf format) must follow the IEEE CAI author instructions. The workshop considers two types of submissions: (1) full papers (6 pages) and (2) short papers (2 pages), including figures, tables, and references.
Important: Accepted papers will be published in the Springer book series Adaptation, Learning, and Optimization.
Submission Site: https://cmt3.research.microsoft.com/PracticalDL2024/
Submission Due: 30 April 2024 (AoE)

Workshop Schedule

Organizing Committee

Xingyu Zheng, Beihang University
Xudong Ma, Beihang University
Wei Huang, The University of Hong Kong
Mingyuan Zhang, Nanyang Technological University
Zixiang Zhao, Xi’an Jiaotong University
Renshuai Tao, Beijing Jiaotong University

Technical Committee (TBD)