Reducing and leveraging redundancy in deep learning – Kilian Weinberger – 9.30.16
Speaker: Prof. Kilian Weinberger, Cornell University
Date: Friday, September 30, 2016
Time/Location: 10:30 AM – 11:30 AM, 366 WVH
Title: Reducing and Leveraging Redundancy in Deep Learning
Abstract
Deep Learning has led to undeniable successes all over the machine learning landscape. In contrast to most machine learning approaches, deep networks use much larger models and often fit far more parameters than training examples. It is now apparent that many of these parameters are redundant and encode similar things. In this talk I will show how to reduce the redundancy in deep neural networks by compressing models through weight hashing. I will also demonstrate how to leverage redundancy to make neural networks deeper and more accurate. Finally, I will introduce a novel neural network architecture, which, by design, incorporates these lessons and leads to compact models with state-of-the-art generalization properties.
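To illustrate the weight-hashing idea mentioned in the abstract, here is a minimal sketch (not the speaker's actual implementation): every entry of a layer's virtual weight matrix is looked up in a small shared parameter vector via a hash of its index pair, so a large layer is backed by far fewer real parameters. The function name `hashed_layer_forward` and the use of seeded random index/sign tables in place of true hash functions are illustrative assumptions.

```python
import numpy as np

def hashed_layer_forward(x, params, n_in, n_out, seed=0):
    # Weight-hashing sketch: each virtual weight W[i, j] is drawn
    # from the small shared vector `params` via a hashed index,
    # optionally flipped by a hashed sign.
    rng = np.random.default_rng(seed)
    # Seeded random tables stand in for the hash functions here
    # (an assumption for illustration, not the talk's method).
    idx = rng.integers(0, len(params), size=(n_in, n_out))
    sign = rng.choice([-1.0, 1.0], size=(n_in, n_out))
    W = sign * params[idx]  # materialize the virtual weight matrix
    return x @ W

# Usage: a 4-input, 3-output layer backed by only 5 real parameters.
params = np.array([0.1, -0.2, 0.3, 0.05, -0.1])
x = np.ones((1, 4))
y = hashed_layer_forward(x, params, 4, 3)
```

Because the index and sign tables are derived deterministically from the seed, they never need to be stored; only the small `params` vector is learned and saved, which is where the compression comes from.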
Host
Professor Ehsan Elhamifar