- [2014 NeurIPS] Distilling the Knowledge in a Neural Network, [paper], [bibtex], sources: [peterliht/knowledge-distillation-pytorch], [a7b23/Distilling-the-knowledge-in-neural-network], [chengshengchan/model_compression] (see the distillation-loss sketch after this list).
- [2018 NeurIPS] KDGAN: Knowledge Distillation with Generative Adversarial Networks, [paper], [bibtex], [homepage], sources: [xiaojiew1/KDGAN].
- [2018 ArXiv] Dataset Distillation, [paper], [bibtex], [homepage], sources: [SsnL/dataset-distillation] (see the bilevel-optimization sketch after this list).
- [2016 ArXiv] N-ary Error Correcting Coding Scheme, [paper], [bibtex] (see the N-ary ECOC sketch after this list).
- [2018 JIIS] Experimental Validation for N-ary Error Correcting Output Codes for Ensemble Learning of Deep Neural Networks, [paper], [bibtex].
- [2018 ArXiv] Next Item Recommendation with Self-Attention, [paper], [bibtex].
- [2018 ICDM] Self-Attentive Sequential Recommendation, [paper], [bibtex], sources: [kang205/SASRec] (see the self-attention sketch after this list).
- [2013 ICML] Deep Canonical Correlation Analysis, [paper], [bibtex], sources: [VahidooX/DeepCCA], [DTaoo/DCCA], [msamribeiro/deep-cca], [wangxu-scu/DeepCCA] (see the CCA sketch after this list).
- [2014 EACL] Improving Vector Space Word Representations Using Multilingual Correlation (CCA), [paper].
- [2015 ICML] Unsupervised Domain Adaptation by Backpropagation, [paper], [bibtex], sources: [shucunt/domain_adaptation], [pumpikano/tf-dann], [kskdev/DANN], [fungtion/DANN] (see the gradient-reversal sketch after this list).
- [2016 JMLR] Domain-Adversarial Training of Neural Networks, [paper], [bibtex], sources: [shucunt/domain_adaptation], [pumpikano/tf-dann], [kskdev/DANN], [fungtion/DANN].
- [2017 Thesis] Efficient Methods and Hardware for Deep Learning, [Ph.D. Thesis], [Song Han's homepage], [slides].
- [2017 NeurIPS] SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability, [paper], sources: [google/svcca] (see the SVCCA sketch after this list).
- [2017 ArXiv] One Model To Learn Them All, [paper], [blog].
- [2018 ArXiv] Tunneling Neural Perception and Logic Reasoning through Abductive Learning, [paper].
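
For the classic distillation entry above (Hinton et al.), a minimal sketch of the soft-target loss in PyTorch. The function name, the temperature `T`, and the mixing weight `alpha` are illustrative choices, not code from the paper or the linked repos; the `T * T` rescaling follows the paper's observation that soft-target gradients scale as `1/T^2`.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soften both distributions with temperature T, then match them with KL.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft and hard gradients have comparable magnitude
    hard = F.cross_entropy(student_logits, labels)  # standard hard-label term
    return alpha * soft + (1.0 - alpha) * hard
```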
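
The Dataset Distillation entry learns a tiny synthetic training set such that a model trained on it does well on real data. A heavily simplified bilevel-optimization sketch with a linear model and a single differentiable inner SGD step; all shapes, learning rates, and the linear "network" are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

d, n_cls = 64, 10                                   # assumed feature/class sizes
x_syn = torch.randn(n_cls, d, requires_grad=True)   # learnable synthetic inputs
y_syn = torch.arange(n_cls)                         # one synthetic point per class
opt = torch.optim.Adam([x_syn], lr=0.1)
x_real = torch.randn(512, d)                        # stand-in for real data
y_real = torch.randint(0, n_cls, (512,))

for _ in range(100):
    W = torch.zeros(d, n_cls, requires_grad=True)   # fresh model each outer step
    inner = F.cross_entropy(x_syn @ W, y_syn)       # inner loss on synthetic set
    (g,) = torch.autograd.grad(inner, W, create_graph=True)
    W1 = W - 0.1 * g                                # one differentiable SGD step
    outer = F.cross_entropy(x_real @ W1, y_real)    # evaluate trained model on real data
    opt.zero_grad(); outer.backward(); opt.step()   # improve the synthetic set
```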
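
For the two N-ary ECOC entries: instead of binary codewords, each class is encoded as a string over N symbols, each position is solved by an N-class base learner, and decoding picks the class whose codeword is nearest. A small NumPy sketch of the generic construction (random code matrix, Hamming decoding), not the papers' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def nary_code(n_classes, code_len, N):
    # Each class -> a length-code_len codeword over the alphabet {0, ..., N-1}.
    return rng.integers(0, N, size=(n_classes, code_len))

def decode(pred_symbols, code):
    # pred_symbols: (n_samples, code_len) outputs of the per-position classifiers.
    # Assign each sample to the class with the nearest codeword (Hamming distance).
    dist = (pred_symbols[:, None, :] != code[None, :, :]).sum(axis=-1)
    return dist.argmin(axis=1)

code = nary_code(n_classes=10, code_len=15, N=4)
print(decode(code[:3], code))  # decoding exact codewords recovers [0 1 2]
```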
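
For the self-attentive recommendation entries (SASRec in particular), a toy sketch of the core idea: item plus position embeddings, one causally masked self-attention block, and next-item scores via a dot product with the item embedding table. Dimensions, layer counts, and all names here are placeholders; see kang205/SASRec for the real model.

```python
import torch
import torch.nn as nn

class TinySASRec(nn.Module):
    def __init__(self, n_items, d=64, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(n_items + 1, d, padding_idx=0)  # id 0 = padding
        self.pos_emb = nn.Embedding(max_len, d)
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, seq):  # seq: (batch, max_len) of item ids
        L = seq.size(1)
        h = self.item_emb(seq) + self.pos_emb(torch.arange(L, device=seq.device))
        # Causal mask: position t may only attend to positions <= t.
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=seq.device), 1)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        h = h + self.ff(a)
        return h @ self.item_emb.weight.T  # per-position scores over all items

model = TinySASRec(n_items=1000)
scores = model(torch.randint(1, 1001, (2, 50)))  # -> (2, 50, 1001)
```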
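
For the CCA and Deep CCA entries: the canonical correlations of two views can be read off as the singular values of the whitened cross-covariance, and Deep CCA maximizes this quantity over the outputs of two networks. A NumPy sketch of the linear case; the small ridge term `reg` is an assumption for numerical stability.

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    # X: (n, dx), Y: (n, dy). Returns the top-k canonical correlations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):  # S^(-1/2) via eigendecomposition (S is symmetric PD here)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # Singular values of the whitened cross-covariance = canonical correlations.
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False)[:k]
```

Deep CCA's training loss is (minus) the sum of these correlations, i.e. the trace norm of `T`, computed on minibatch activations and backpropagated through both networks.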
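
Both domain-adaptation entries revolve around the gradient reversal layer: an identity map on the forward pass that flips (and scales) the gradient on the backward pass, so the shared features are trained to fool the domain classifier. A PyTorch sketch in the style of the linked fungtion/DANN repo; `lambd` is the adaptation weight from the paper.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing back into the feature extractor;
        # the second return value is the (non-existent) gradient w.r.t. lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features = extractor(x); domain_logits = domain_head(grad_reverse(features))
```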
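
Finally, the SVCCA entry combines the two previous ideas: reduce each layer's activation matrix to its top singular directions, then run CCA between the reduced views. A sketch reusing `linear_cca` from the CCA block above; the variance threshold `keep` is an assumption (the paper keeps directions explaining roughly 99% of variance). The mean of the returned correlations is the usual scalar SVCCA similarity.

```python
import numpy as np

def svcca(A, B, keep=0.99, k=20):
    # A, B: (n_datapoints, n_neurons) activations from two layers or networks.
    def top_subspace(M):
        # Step 1 (SV): keep the top singular directions covering `keep` variance.
        M = M - M.mean(axis=0)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep)) + 1
        return U[:, :r] * s[:r]  # data projected onto the kept directions

    # Step 2 (CCA): canonical correlations between the two reduced subspaces
    # (linear_cca as defined in the CCA sketch above).
    return linear_cca(top_subspace(A), top_subspace(B), k)
```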