Call for Papers

The success of deep learning in AI has recently attracted great attention from both academia and industry. However, research shows that model performance in the wild often falls short of practical requirements due to a lack of efficiency and robustness on open-world data and scenarios. We welcome research contributions on the following topics (but not limited to them):
  • Network quantization and binarization
  • Adversarial attacks on deep learning systems
  • Neural architecture search (NAS)
  • Robust architectures against adversarial attacks
  • Hardware implementation and on-device deployment
  • Benchmark for evaluating model robustness
  • New methodologies and architectures for efficient and robust deep learning

Paper Submission

Format: Submitted papers (.pdf format) must use the AAAI Article Template, be anonymized, and follow the AAAI 2022 author instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 7 pages, excluding references; (2) Extended Abstract: limited to 4 pages, including references.
Selected outstanding papers will be invited to a Special Issue of the Pattern Recognition journal for publication consideration.

Submission Site: https://cmt3.research.microsoft.com/PracticalDL2022/
Submission Due: Nov 21, 2021 (AoE)


Accepted Long Papers

  • DMSANet: Dual Multi Scale Attention Network [Paper]
    Abhinav Sagar (VIT Vellore)*
  • AASeg: Attention Aware Network for Real Time Semantic Segmentation [Paper]
    Abhinav Sagar (VIT Vellore)*
  • AA-PINN: Attention Augmented Physics Informed Neural Networks [Paper]
    Abhinav Sagar (VIT Vellore)*
  • Disentangling Transfer and Interference in Multi-Domain Learning [Paper]
    Yipeng Zhang (University of Rochester)*; Tyler L Hayes (RIT); Christopher Kanan (Rochester Institute of Technology)
  • Sparse-softmax: A Simpler and Faster Alternative Softmax Transformation [Paper]
    Shaoshi Sun (School of Computer Science and Informatics, Cardiff University); Zhenyuan Zhang (Osaka City University); BoCheng Huang (Beijing Jiaotong University); Pengbin Lei (Shenzhen University); Jianlin Su (Shenzhen Zhuiyi Technology Co., Ltd.); Shengfeng Pan (Shenzhen Zhuiyi Technology Co., Ltd.); Jiarun Cao (Institute of Automation, Chinese Academy of Sciences)*
  • Dynamic Activation Step Size for Post-Training Quantization [Paper]
    Yuanpei Chen (X Lab, The Second Academy of CASIC, Beijing); Yuanyuan Ou (Chongqing University); YingLei Wang (CASIC); Rongli Zhao (The Second Academy of China Aerospace Science and Industry Corporation); Xiaode Liu (X Lab, The Second Academy of China Aerospace Science and Industry Corporation); Yufei Guo (The Second Academy of China Aerospace Science and Industry Corporation)*
  • Enhanced Exploration in Neural Feature Selection for Deep Click-Through Rate Prediction Models via Ensemble of Gating Layers [Paper]
    Lin Guan (Arizona State University)*; Xia Xiao (Bytedance); Ming Chen (ByteDance); Youlong Cheng (Bytedance)
  • GD-REC: Data-Free Learning of Knowledge Distillation for Recommender System [Paper]
    Li-e Wang (Guangxi Normal University); Yutian Zheng (Guangxi Normal University)*; Xianxian Li (Guangxi Normal University); Yange Guo (Guangxi Normal University); Yan Bai (University of Washington Tacoma); Yuan Liang (State Key Laboratory of Software Development Environment, School of Computer Science, Beihang University)
  • Cascaded Video Generation for Videos In-the-Wild [Paper]
    Lluis Castrejon (Mila, Université de Montréal, Facebook AI Research)*; Nicolas Ballas (Facebook FAIR); Aaron Courville (MILA, Université de Montréal)
  • Controlling the quality of distillation in response-based network compression [Paper]
    Vibhas K Vats (Indiana University Bloomington)*; David Crandall (Indiana University)
  • TADSAM: A Time-Aware Dynamic Self-Attention Model for Next Point-of-Interest Recommendation [Paper]
    Peng Liu (Guangxi Normal University); Yange Guo (Guangxi Normal University); Yuan Liang (State Key Laboratory of Software Development Environment, School of Computer Science, Beihang University)*; Yutian Zheng (Guangxi Normal University); Xianxian Li (Guangxi Normal University)
  • Neural Oscillations for Sparsely Activated Deep Spiking Neural Networks [Paper]
    Etienne Mueller (Technical University of Munich)*; Daniel Auge (Technical University of Munich); Simon Klimaschka (Technische Universität München); Alois Knoll (Technical University of Munich)
  • WE-Bee: Weight Estimator for Beehives Using Deep Learning [Paper]
    Omar Anwar (The University of Western Australia)*; Adrian Keating (University of Western Australia); Rachel Cardell-Oliver (University of Western Australia); Amitava Datta (The University of Western Australia); Gino Putrino (The University of Western Australia)
  • ADAPT: An Open Source sUAS Payload for Real-Time Disaster Prediction and Response with AI [Paper]
    Daniel Davila (Kitware Inc)*; Matt S Brown (Kitware, Inc.); Joseph B Van Pelt (Kitware, Inc.); Adam Romlein (Kitware, Inc.); Peter Webley (University of Alaska Fairbanks); Alex Lynch (Kitware, Inc)
  • Federated learning over WiFi: Should we use TCP or UDP? [Paper]
    S Vineeth (Indian Institute of Science)*
  • ActiveGuard: Active Intellectual Property Protection for Deep Neural Networks via Adversarial Examples based User Fingerprinting [Paper]
    Mingfu Xue (Nanjing University of Aeronautics and Astronautics)*; Shichang Sun (Nanjing University of Aeronautics and Astronautics); Can He (Nanjing University of Aeronautics and Astronautics); Dujuan Gu (NSFOCUS Information technology CO., LTD); Yushu Zhang (Nanjing University of Aeronautics and Astronautics); Jian Wang (Nanjing University of Aeronautics and Astronautics); Weiqiang Liu (Nanjing University of Aeronautics and Astronautics)
  • Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization [Paper]
    Bo-Shiuan Chu (National Tsing Hua University); Che-Rung Lee (National Tsing Hua University)*
  • A Robust Steganography-without-Embedding Approach Against Adversarial Attacks [Paper]
    Donghui Hu (Hefei University of Technology)*; Song Yan (Hefei University of Technology); Wenjie Jiang (Hefei University of Technology); Run Wang (Wuhan University)
  • Block-wise Training of Residual Networks via the Minimizing Movement Scheme [Paper]
    Skander Karkar (Criteo, Sorbonne Université)*; Ibrahim Ayed (LIP6, Sorbonne Université); Emmanuel de Bézenac (Sorbonne Université); Patrick Gallinari (Criteo, Sorbonne Université)

Accepted Extended Abstracts

  • Training High-performance Spiking Neural Networks Through Reducing Quantization Error [Paper]
    Yufei Guo (The Second Academy of China Aerospace Science and Industry Corporation); Xinyi Tong (The Second Academy of China Aerospace Science and Industry Corporation); Yuanpei Chen (X Lab, The Second Academy of CASIC, Beijing); Xiashuang Wang (X Lab); Xiuhua Liu (The Second Academy of China Aerospace Science and Industry Corporation); Liwen Zhang (X Lab, The Second Academy of CASIC, Beijing)*
  • Enabling NAS With Automated Super-Network Generation [Paper]
    J. Pablo Munoz (Intel)*; Nikolay Lyalyushkin (Intel Corporation); Yash Akhauri (Intel); Anastasia Senina (Intel Corporation); Alexander Mr Kozlov (Intel Corp); Nilesh Jain (Intel)
  • Revisiting the Rationality of Few-shot Detection Benchmarks [Paper]
    Tianbo Wang (Beihang University)*; Renshuai Tao (Beihang University); Wang Jiaying (Shanghai Aerospace Electronic Technology Institute); Bowei Jin (IFLYTEK (Suzhou) Technology Co., Ltd)
  • A robust learning method for deep neural networks using unsupervised competitive learning and brain-like information processing [Paper]
    Takashi Shinozaki (NICT CiNet)*