Abstract
Deep neural networks have been widely used in many applications, and classification accuracy generally improves as networks grow larger. However, the enormous computation and storage requirements have prevented their deployment on resource-limited devices. In this talk, we will first show that redundancy exists in current CNNs under the PAC framework. Second, we will propose a self-distillation technique that compresses deep neural networks while enabling dynamic inference. Finally, I will introduce some recent work on improving the robustness of DNNs.
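For readers unfamiliar with the general idea, the sketch below illustrates one common way self-distillation with early exits can support dynamic inference: shallow exit heads are trained against both the labels and the deepest classifier's soft predictions, and at inference time computation stops at the first sufficiently confident exit. This is a generic PyTorch illustration under assumed details (the `SelfDistillNet` architecture, stage widths, temperature, and confidence threshold are all placeholders), not the specific method presented in the talk.

```python
# Generic sketch: self-distillation with early exits for dynamic inference.
# Architecture, hyperparameters, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Three hypothetical convolutional stages of a backbone.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # One classifier (exit head) per stage; the deepest acts as the "teacher".
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
        self.exit3 = nn.Sequential(nn.Flatten(), nn.Linear(128, num_classes))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return [self.exit1(f1), self.exit2(f2), self.exit3(f3)]

def self_distillation_loss(logits_list, labels, T=4.0, alpha=0.5):
    """Cross-entropy on every exit plus KL from each shallow exit to the deepest exit."""
    teacher = logits_list[-1]
    loss = F.cross_entropy(teacher, labels)
    for student in logits_list[:-1]:
        loss += (1 - alpha) * F.cross_entropy(student, labels)
        loss += alpha * T * T * F.kl_div(
            F.log_softmax(student / T, dim=1),
            F.softmax(teacher.detach() / T, dim=1),
            reduction="batchmean",
        )
    return loss

@torch.no_grad()
def dynamic_inference(model, x, threshold=0.9):
    """Run backbone stages one at a time; return at the first confident exit.
    Assumes a single sample per batch; per-sample gating is omitted for brevity."""
    stages = [model.stage1, model.stage2, model.stage3]
    heads = [model.exit1, model.exit2, model.exit3]
    f = x
    for stage, head in zip(stages, heads):
        f = stage(f)
        probs = F.softmax(head(f), dim=1)
        if probs.max().item() >= threshold:
            break
    return probs.argmax(dim=1)
```

In this kind of setup, easy inputs exit at a shallow head and skip the remaining stages, which is where the inference-time savings come from; the distillation term is what keeps the shallow heads accurate enough to be trusted.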