ECBC: Efficient Convolution via Blocked Columnizing
Zhao, Tianli [2]; Hu, Qinghao [1]; He, Xiangyu [2]; Xu, Weixiang [1]; Wang, Jiaxing [1]; Leng, Cong [1]; Cheng, Jian [2]
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
2021-07-16
Pages: 13
Keywords: Convolution; Tensors; Layout; Memory management; Indexes; Transforms; Performance evaluation; Convolutional neural networks (CNNs); direct convolution; high performance computing for mobile devices; im2col convolution; memory-efficient convolution (MEC)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2021.3095276
Corresponding Author: Cheng, Jian (jcheng@nlpr.ia.ac.cn)
Abstract: Direct convolution methods are now drawing increasing attention because they eliminate the additional storage demanded by indirect convolution algorithms (i.e., the transformed matrix generated by the im2col convolution algorithm). Nevertheless, direct methods require special input-output tensor formats, leading to extra time and memory consumption to obtain the desired data layout. In this article, we show that indirect convolution, if implemented properly, can achieve high computation performance with the help of highly optimized matrix-multiplication subroutines while avoiding substantial memory overhead. The proposed algorithm is called efficient convolution via blocked columnizing (ECBC). Inspired by the im2col convolution algorithm and the block algorithm of general matrix-to-matrix multiplication, we propose to perform the convolution computation in a blockwise manner. As a result, the tensor-to-matrix transformation (e.g., the im2col operation) can also be done block by block, so it requires only a working buffer as small as a single data block. Extensive experiments on various platforms and networks validate the effectiveness of ECBC and its superiority over a set of widely used industrial-level convolution algorithms.
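To make the blockwise idea concrete, the following NumPy sketch materializes the im2col buffer for only a small block of output positions at a time and immediately multiplies it by the filter matrix, so the peak extra memory is proportional to the block rather than to the full transformed matrix. This is an illustrative reconstruction under simplifying assumptions (stride 1, no padding, single image); the function name conv2d_blocked_im2col and the block_rows parameter are placeholders and do not reproduce the authors' optimized implementation.

```python
import numpy as np

def conv2d_blocked_im2col(x, w, block_rows=64):
    """Blockwise im2col + GEMM convolution sketch.

    x: input feature map, shape (C, H, W)
    w: filters, shape (M, C, KH, KW)
    Only a (block_rows, C*KH*KW) im2col buffer exists at any time,
    instead of the full (OH*OW, C*KH*KW) transformed matrix.
    """
    C, H, W = x.shape
    M, _, KH, KW = w.shape
    OH, OW = H - KH + 1, W - KW + 1            # stride 1, no padding
    w_mat = w.reshape(M, C * KH * KW)          # filters as an (M, K) matrix
    out = np.empty((M, OH * OW), dtype=x.dtype)

    buf = np.empty((block_rows, C * KH * KW), dtype=x.dtype)  # reusable block buffer
    for start in range(0, OH * OW, block_rows):
        stop = min(start + block_rows, OH * OW)
        # columnize only the output positions belonging to this block
        for r, pos in enumerate(range(start, stop)):
            oy, ox = divmod(pos, OW)
            buf[r] = x[:, oy:oy + KH, ox:ox + KW].reshape(-1)
        # small GEMM on the block: (M, K) x (K, rows) -> (M, rows)
        out[:, start:stop] = w_mat @ buf[:stop - start].T

    return out.reshape(M, OH, OW)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 8, 8)).astype(np.float32)
    w = rng.standard_normal((4, 3, 3, 3)).astype(np.float32)
    y = conv2d_blocked_im2col(x, w, block_rows=16)

    # naive direct convolution for cross-checking the result
    ref = np.zeros_like(y)
    for m in range(4):
        for oy in range(6):
            for ox in range(6):
                ref[m, oy, ox] = np.sum(x[:, oy:oy + 3, ox:ox + 3] * w[m])
    print(np.allclose(y, ref, atol=1e-4))      # True
```

In the spirit of the paper, the block size would be chosen to align with the cache blocking of the underlying GEMM kernel, which is what lets the blockwise transform keep the throughput of optimized matrix-multiplication routines while bounding the workspace.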
Funding Projects: National Natural Science Foundation of China [61972396]; National Key Research and Development Program of China [2020AAA0103402]; Strategic Priority Research Program of the Chinese Academy of Sciences [XDA27040300]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:000732241300001
Funding Organizations: National Natural Science Foundation of China; National Key Research and Development Program of China; Strategic Priority Research Program of the Chinese Academy of Sciences
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/46863
Collection: Brain-Inspired Chips and Systems Research
Author Affiliations:
1. Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Inst Automat, Beijing 100080, Peoples R China
Recommended Citation
GB/T 7714
Zhao, Tianli, Hu, Qinghao, He, Xiangyu, et al. ECBC: Efficient Convolution via Blocked Columnizing[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021: 13.
APA Zhao, Tianli, Hu, Qinghao, He, Xiangyu, Xu, Weixiang, Wang, Jiaxing, ... & Cheng, Jian. (2021). ECBC: Efficient Convolution via Blocked Columnizing. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 13.
MLA Zhao, Tianli, et al. "ECBC: Efficient Convolution via Blocked Columnizing". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021): 13.