
Paper List | AAAI 2021 Model Compression Papers at a Glance

Author: LoBob | Posted on 2021-01-28

[Please do not reproduce without permission]

If you spot any mistakes, I would greatly appreciate you pointing them out; I am still learning as well.

AAAI 2021 – Important Dates

August 15 – August 30, 2020: Authors register on the AAAI web site

September 1, 2020: Electronic abstracts due at 11:59 PM UTC-12 (anywhere on earth)

September 9, 2020: Electronic papers due at 11:59 PM UTC-12 (anywhere on earth)

September 29, 2020: Abstracts AND full papers due for revisions of rejected NeurIPS/EMNLP submissions by 11:59 PM UTC-12 (anywhere on earth)

AAAI-21 Reviewing Process: Two-Phase Reviewing and NeurIPS/EMNLP Fast Track Submissions

November 3-5, 2020: Author Feedback Window (anywhere on earth)

December 1, 2020: Notification of acceptance or rejection

AAAI 2021 Pruning

TransTailor: Pruning the Pre-Trained Model for Improved Transfer Learning

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks

Linearly Replaceable Filters for Deep Network Channel Pruning

Compressing Deep Convolutional Neural Networks by Stacking Low-Dimensional Binary Convolution Filters

Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters

Revisiting Dominance Pruning in Decoupled Search

CAKES: Channel-Wise Automatic Kernel Shrinking for Efficient 3D Networks

DPFPS: Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch

OPQ: Compressing Deep Neural Networks with One-Shot Pruning-Quantization

AutoLR: Layer-Wise Pruning and Auto-Tuning of Learning Rates in Fine-Tuning of Deep Networks

Accurate and Robust Feature Importance Estimation under Distribution Shifts

Winning Lottery Tickets in Deep Generative Models

Slimmable Generative Adversarial Networks

Towards Faster Deep Collaborative Filtering via Hierarchical Decision Networks
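
Before moving on to quantization, here is a minimal, generic sketch of the L1-norm filter-pruning baseline that much of the channel-pruning work above starts from. It is only an illustration of the general idea, not the method of any listed paper; the function name prune_conv_filters and the keep_ratio of 0.5 are arbitrary choices for this example.

import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    # Score each output filter by its L1 norm and keep the highest-scoring ones.
    weight = conv.weight.data                      # shape: (out_ch, in_ch, kH, kW)
    scores = weight.abs().sum(dim=(1, 2, 3))       # one L1 score per filter
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep_idx = torch.argsort(scores, descending=True)[:n_keep]

    # Rebuild a smaller Conv2d containing only the kept filters.
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = weight[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned

# Example: keep half of the 64 filters of a 3x3 convolution.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv_filters(conv, keep_ratio=0.5))    # Conv2d(3, 32, kernel_size=(3, 3), ...)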

Quantization

Optimizing Information Theory Based Bitwise Bottlenecks for Efficient Mixed-Precision Activation Quantization

Scalable Verification of Quantized Neural Networks

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

Weakly Supervised Deep Hyperspherical Quantization for Image Retrieval

FracBits: Mixed Precision Quantization via Fractional Bit-Widths

Distribution Adaptive INT8 Quantization for Training CNNs

TRQ: Ternary Neural Networks with Residual Quantization

Training Binary Neural Network without Batch Normalization for Image Super-Resolution

SA-BNN: State-Aware Binary Neural Network

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision

Any-Precision Deep Neural Networks
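
For context on the quantization papers, the sketch below shows plain symmetric per-tensor INT8 post-training quantization, the basic operation that the mixed-precision and INT8-training work above refines. Again, this is a generic textbook illustration rather than any listed paper's algorithm; quantize_int8 and dequantize_int8 are just names chosen here, and the 127/-128 clipping range is the standard int8 convention.

import torch

def quantize_int8(x: torch.Tensor):
    # One symmetric scale for the whole tensor: the largest |value| maps to 127.
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), min=-128, max=127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

# Round-trip a random tensor and check the quantization error.
x = torch.randn(4, 4)
q, scale = quantize_int8(x)
print((x - dequantize_int8(q, scale)).abs().max())   # small, roughly scale / 2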

Distillation

Show, Attend and Distill: Knowledge Distillation via Attention-Based Feature Matching

PSSM-Distil: Protein Secondary Structure Prediction (PSSP) on Low-Quality PSSM by Knowledge Distillation with Contrastive Learning

Cross-Layer Distillation with Semantic Calibration

Harmonized Dense Knowledge Distillation Training for Multi-Exit Architectures

Universal Trading for Order Execution with Oracle Policy Distillation

Diverse Knowledge Distillation for End-to-End Person Search

Distilling Localization for Self-Supervised Representation Learning

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis

Progressive Network Grafting for Few-Shot Knowledge Distillation

Robust Knowledge Transfer via Hybrid Forward on the Teacher-Student Model

Peer Collaborative Learning for Online Knowledge Distillation
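
As a reference point for the distillation papers, here is the classic soft-target knowledge-distillation loss: a temperature-scaled KL divergence between teacher and student logits combined with the usual cross-entropy. It is the common starting point rather than the contribution of any paper above; the temperature T=4.0 and weight alpha=0.5 are example values, and kd_loss is just a name used for this sketch.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft part: KL divergence between temperature-softened distributions,
    # scaled by T^2 so its gradient magnitude matches the hard loss.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard part: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example with random logits: batch of 8 samples, 10 classes.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student_logits, teacher_logits, labels))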
