A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Chen, JG; Li, KL; Bilal, K; Zhou, X; Li, KQ; Yu, PS
Journal: IEEE Transactions on Parallel & Distributed Systems
Year: 2019
Volume: 30, Issue: 5, Pages: 965-976
Keywords: Training; Computer Architecture; Computational Modeling; Parallel Processing; Task Analysis; Distributed Computing; Acceleration; Big Data; Bi-Layered Parallel Computing; Convolutional Neural Networks; Deep Learning
Content Type: Journal Article
URI: http://www.corc.org.cn/handle/1471x/4612614
Collection: Hunan University
作者单位1.Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410006, Hunan, Peoples R China
2.Natl Supercomp Ctr, Changsha 410082, Hunan, Peoples R China
3.COMSATS Univ Islamabad, Abbottabad 45550, Pakistan
4.Qatar Univ, Doha 2713, Qatar
5.SUNY Coll New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
6.Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
7.Tsinghua Univ, Inst Data Sci, Beijing 100084, Peoples R China
Recommended Citation
GB/T 7714
Chen, JG, Li, KL, Bilal, K, et al. A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks[J]. IEEE Transactions on Parallel & Distributed Systems, 2019, 30(5): 965-976.
APA: Chen, JG, Li, KL, Bilal, K, Zhou, X, Li, KQ, & Yu, PS. (2019). A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks. IEEE Transactions on Parallel & Distributed Systems, 30(5), 965-976.
MLA: Chen, JG, et al. "A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks." IEEE Transactions on Parallel & Distributed Systems 30.5 (2019): 965-976.