LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation
Xu, Ting-Bing1,2; Yang, Peipei1; Zhang, Xu-Yao1,2; Liu, Cheng-Lin1,2,3
Journal: PATTERN RECOGNITION
Publication Date: 2019-04-01
Volume: 88, Issue: 88, Pages: 272-284
Keywords: Deep network acceleration and compression; Architecture distillation; Lightweight network
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2018.10.029
Abstract:

In recent years, deep neural networks have achieved remarkable successes in many pattern recognition tasks. However, their high computational cost and large memory overhead hinder their deployment on resource-limited devices. To address this problem, many deep network acceleration and compression methods have been proposed. One group of methods adopts decomposition and pruning techniques to accelerate and compress a pre-trained model. Another group designs a single compact unit and stacks it to build networks from scratch. These methods either require complicated training processes or lack generality and extensibility. In this paper, we propose a general framework of architecture distillation, namely LightweightNet, to accelerate and compress convolutional neural networks. Rather than compressing a pre-trained model, we directly construct the lightweight network based on a baseline network architecture. The LightweightNet, designed based on a comprehensive analysis of the network architecture, consists of network parameter compression, network structure acceleration, and non-tensor layer improvement. Specifically, we propose the strategy of using low-dimensional features in fully-connected layers for substantial memory saving, and design multiple efficient compact blocks, guided by an accuracy-sensitive distillation rule, to distill the convolutional layers of the baseline network for notable time saving. The framework reduces both the computational cost and the model size by more than 4x with negligible accuracy loss. Benchmarks on the MNIST, CIFAR-10, ImageNet and HCCR (handwritten Chinese character recognition) datasets demonstrate the advantages of the proposed framework in terms of speed, performance, storage and training process. On HCCR, our method even outperforms traditional classifiers based on handcrafted features in terms of speed and storage while maintaining state-of-the-art recognition performance. (C) 2018 Elsevier Ltd. All rights reserved.
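To make the two abstract-level ideas concrete (compact blocks in place of dense convolutions, and low-dimensional fully-connected features), here is a minimal PyTorch sketch. It is an illustration only, assuming a bottleneck-style block; the paper's actual compact block designs and its accuracy-sensitive distillation rule are more elaborate, and the names CompactBlock and low_dim_classifier are hypothetical.

# Hypothetical sketch, not the paper's actual implementation.
import torch
import torch.nn as nn

class CompactBlock(nn.Module):
    # Replaces a dense 3x3 convolution with a cheaper bottleneck:
    # 1x1 reduce -> 3x3 on fewer channels -> 1x1 expand.
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        mid = max(in_ch // reduction, 1)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

def low_dim_classifier(feat_dim, num_classes, hidden=128):
    # Low-dimensional fully-connected features: a small hidden width
    # cuts the parameter count of the classifier head, which often
    # dominates model size.
    return nn.Sequential(
        nn.Linear(feat_dim, hidden),
        nn.ReLU(inplace=True),
        nn.Linear(hidden, num_classes),
    )

if __name__ == "__main__":
    dense = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    compact = CompactBlock(64, 64)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(f"dense conv: {count(dense)} params")       # ~37K
    print(f"compact block: {count(compact)} params")  # ~4.5K
    y = compact(torch.randn(1, 64, 32, 32))           # same spatial size
    print(y.shape)

On a 64-channel layer, this bottleneck uses roughly 4.5K parameters versus about 37K for the dense 3x3 convolution, the kind of per-layer saving that can compound into the >4x overall reduction the abstract reports.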

Funding: National Natural Science Foundation of China (NSFC) [61721004]; National Natural Science Foundation of China (NSFC) [61633021]
WOS Keywords: FEATURE-EXTRACTION; CHARACTER; RECOGNITION; NORMALIZATION; ONLINE
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: ELSEVIER SCI LTD
WOS Record: WOS:000457666900021
Content Type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/25265
Collection: Institute of Automation, Chinese Academy of Sciences
Corresponding Author: Liu, Cheng-Lin
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China
2. UCAS, Sch Artificial Intelligence, Beijing 100190, Peoples R China
3. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, et al. LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation[J]. PATTERN RECOGNITION, 2019, 88(88): 272-284.
APA: Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, & Liu, Cheng-Lin. (2019). LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation. PATTERN RECOGNITION, 88(88), 272-284.
MLA: Xu, Ting-Bing, et al. "LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation". PATTERN RECOGNITION 88.88 (2019): 272-284.