Call for Papers

Recently, the success of deep learning in AI has attracted great attention from academia and industry. However, research shows that model performance in the wild often falls short of practical requirements due to a lack of efficiency and robustness on open-world data and scenarios. We welcome research contributions related to (but not limited to) the following topics:
  • Network quantization and binarization
  • Adversarial attacks on deep learning systems
  • Neural architecture search (NAS)
  • Robust architectures against adversarial attacks
  • Hardware implementation and on-device deployment
  • Benchmark for evaluating model robustness
  • New methodologies and architectures for efficient and robust deep learning
Submission Format: Submitted papers (.pdf format) must use the AAAI Article Template, be anonymized, and follow the AAAI 2023 author instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 7 pages, excluding references; (2) Extended Abstract: limited to 4 pages, including references.
Outstanding papers will be invited to a Special Issue of the Pattern Recognition journal for publication consideration.
Submission Site: https://cmt3.research.microsoft.com/PracticalDL2023/
Submission Due: 15th Nov, 2022 (AoE)

Workshop Schedule

Practical AI Challenge

The challenge is held jointly with the "2nd International Workshop on Practical Deep Learning in the Wild" at AAAI 2023.
  • Evaluating and exploring the challenge of building practical deep-learning models;
  • Encouraging technological innovation for efficient and robust AI algorithms;
  • Emphasizing the size, latency, power, accuracy, safety, and generalization ability of neural networks.
Track I: Efficient and Robust Network for Specific Hardware
  • Recently, many efficient networks have been proposed for mobile vision applications and edge devices. However, most are concerned only with the theoretical speedup of the model (such as FLOPs or memory), ignoring the speedup of the networks on practical hardware devices. Many problems still prevent networks from achieving their theoretical speedup and accuracy improvements on practical hardware.
  • To accelerate the development of practical AI, we organize this challenge track to motivate novel, easier-to-deploy networks with high accuracy. Participants are encouraged to select a specific hardware platform, such as Atlas300/RV1126 for smart city or camera applications, GPU T4 for cloud computing, mobile GPU/DSP for mobile phones, and TDA4VM for autonomous driving, and to develop efficient and robust networks according to its hardware characteristics.
Track II: Efficient and Robust Network across Multiple Hardware
  • Most hardware-aware networks aim to design an efficient network for one specific hardware platform. However, because different hardware platforms have different characteristics, it is hard to achieve the same speedup on all of them. Many challenges remain in finding an efficient and robust network that achieves both high speed and high accuracy across different hardware platforms.
  • To accelerate research on building efficient and robust networks across different hardware, we organize this challenge track. Participants are encouraged to design an efficient network that can be deployed on all hardware platforms we provide.
Challenge Site: https://practical-dl.sensecore.cn
Submission Due: 31st Dec, 2022 (AoE)

Accepted Papers

Accepted Long Paper

  • Automatic Neural Network Pruning that Efficiently Preserves the Model Accuracy [Paper] [Supp]
    Thibault Castells (Nota AI GmbH); Seul-Ki Yeom (Nota AI GmbH)*
  • HesScale: Scalable Computation of Hessian Diagonals [Paper]
    Mohamed Elsayed (University of Alberta)*; Rupam Mahmood (University of Alberta)
  • Contrastive View Design Strategies to Enhance Robustness to Domain Shifts in Downstream Object Detection [Paper]
    Kyle R Buettner (University of Pittsburgh)*; Adriana Kovashka (University of Pittsburgh)
  • Model and Data Agreement for Learning with Noisy Labels [Paper]
    Yuhang Zhang (Inspur Electronic Information Industry Co., Ltd.)*; Weihong Deng (Inspur); Xingchen Cui (Inspur Electronic Information Industry Co., Ltd.); Yunfeng Yin (Inspur); Hongzhi Shi (Inspur Electronic Information Industry Co., Ltd); Dongchao Wen (Inspur Electronic Information Industry Co., Ltd.)
  • Expeditious Saliency-guided Mix-up through Random Gradient Thresholding [Paper] [Supp]
    Long Minh Luu (International University - VNUHCM)*; Zeyi Huang (Carnegie Mellon University); Haohan Wang (University of Illinois Urbana-Champaign); Yong Jae Lee (University of Wisconsin-Madison); Eric Xing (MBZUAI, CMU, and Petuum Inc.)
  • Generalizability of Adversarial Robustness Under Distribution Shifts [Paper] [Supp]
    Kumail Alhamoud (KAUST)*; Hasan Abed Al Kader Hammoud (King Abdullah University of Science and Technology); Motasem Alfarra (KAUST); Bernard Ghanem (KAUST)
  • Explanation-based Adversarial Detection with Noise Reduction [Paper]
    Juntao Su (The George Washington University)*; Zhou Yang (The George Washington University); Zexin Ren (George Washington University); Fang Jin (George Washington University)
  • Efficient Fusion of Image Attributes: A New Approach to Visual Recognition [Paper]
    Dehao Yuan (University of Maryland)*; Minghui Liu (University of Maryland); Cornelia Fermuller (University of Maryland, College Park); Yiannis Aloimonos (University of Maryland, College Park)
  • Systematic Quantization of Vision Models based on MLPs [Paper] [Supp]
    Lingran Zhao (Peking University)*; Zhen Dong (UC Berkeley); Kurt Keutzer (EECS, UC Berkeley)
  • "It’s a Match!" - A Benchmark of Task Affinity Scores for Joint Learning [Paper] [Supp]
    Raphael Azorin (Huawei Technologies)*; Massimo Gallo (Huawei Technologies Co., Ltd.); Alessandro Finamore (Huawei Technologies); Dario Rossi (Huawei); Pietro Michiardi (EURECOM)
  • Simplifying Adversarial Attacks Against Object Detectors: a Fundamental Approach [Paper]
    Thomas Cilloni (University of Mississippi)*; Charles Fleming (Cisco Research); Charles Walter (University of Mississippi)
  • Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [Paper]
    Jihyo Kim (Seoul National University of Science and Technology); Jeonghyeon Kim (Seoul National University of Science and Technology); Sangheum Hwang (Seoul National University of Science and Technology)*
  • Towards Hardware-Specific Automatic Compression of Neural Networks [Paper]
    Torben Krieger (Heidelberg University); Bernhard Klein (Heidelberg University)*; Holger Froening (University of Heidelberg)
  • A-ColViT : Real-time Interactive Colorization by Adaptive Vision Transformer [Paper]
    Gwanghan Lee (Sungkyunkwan University (SKKU))*; Saebyeol Shin (Sungkyunkwan University (SKKU)); Donggeun Ko (Sungkyunkwan University); JiYeon Jung (SK telecom); Simon S Woo (Sungkyunkwan University (SKKU))

Accepted Extended Abstract

  • Output Sensitivity-Aware DETR Quantization [Paper]
    Yafeng Huang (Nanjing University); Huanrui Yang (UC Berkeley)*; Zhen Dong (UC Berkeley); Denis A Gudovskiy (Panasonic); Tomoyuki Okuno (Panasonic); Yohei Nakata (Panasonic Corporation); Yuan Du (Nanjing University); Shanghang Zhang (Peking University); Kurt Keutzer (EECS, UC Berkeley)
  • Addressing distribution shift at test time in pre-trained language models [Paper]
    Ayush Singh (EverNorth Healthcare Services)*; John Ortega (University of Santiago de Compostela)
  • QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection [Paper]
    Yifan Zhang (Nanjing University)*; Zhen Dong (UC Berkeley); Huanrui Yang (UC Berkeley); Ming Lu (Intel Labs China); Cheng-Ching Tseng (Peking University); Yandong Guo (OPPO Research Institute); Kurt Keutzer (EECS, UC Berkeley); Li Du (Nanjing University); Shanghang Zhang (Peking University)
  • Frequency Regularization for Improving Adversarial Robustness [Paper]
    Binxiao Huang (The University of Hong Kong)*; Chaofan Tao (The University of Hong Kong); Rui Lin (The University of Hong Kong); Ngai Wong (The University of Hong Kong)

Technical Committee

  • Linghui Zhu, Tsinghua University
  • Yisong Xiao, Beihang University
  • Yuxuan Wen, Beihang University
  • Hang Yu, Beihang University
  • Ruikui Wang, Beihang University
  • Zixin Yin, Beihang University
  • Jun Guo, Beihang University
  • Simin Li, Beihang University
  • Shunchang Liu, Beihang University
  • Zonglei Jing, Beihang University
  • Feng Zhu, University of Technology Sydney
  • Kui Zhang, University of Science and Technology of China
  • Zhipeng Wei, Fudan University
  • Tianlin Li, Nanyang Technological University
  • Bangyan He, Institute of Information Engineering, Chinese Academy of Sciences
  • Jingzhi Li, Institute of Information Engineering, Chinese Academy of Sciences
  • Xiaojun Jia, Institute of Information Engineering, Chinese Academy of Sciences
  • Bo Liu, National Space Science Center, Chinese Academy of Sciences
  • Tianyuan Zhang, Beihang University
  • Shihao Bai, Beihang University
  • Xiaowei Zhao, Beihang University
  • Huangxinxin Xu, Wuhan University
  • Maura Pintor, University of Cagliari
  • Jiachen Sun, University of Michigan
  • Yulong Cao, University of Michigan, Ann Arbor
  • YiKun Xu, Institute of Information Engineering, Chinese Academy of Sciences
  • Rajkumar Theagarajan, University of California, Riverside

Sponsors