Penn State CAFE Fireside Chat: Towards Deep, Interpretable, and Robust Spiking Neural Networks: Algorithmic Approaches

Zoom info: Join from PC, Mac, Linux, iOS or Android: https://psu.zoom.us/j/91299037264

Or iPhone one-tap (US Toll):  +16468769923,91299037264#  or +13017158592,91299037264#

Or Telephone:
    Dial:
    +1 646 876 9923 (US Toll)
    +1 301 715 8592 (US Toll)
    +1 312 626 6799 (US Toll)
    +1 669 900 6833 (US Toll)
    +1 253 215 8782 (US Toll)
    +1 346 248 7799 (US Toll)
    Meeting ID: 912 9903 7264
    International numbers available: https://psu.zoom.us/u/adYRPlt5FM

Or an H.323/SIP room system:
    H.323:
        162.255.37.11 (US West)
        162.255.36.11 (US East)
        221.122.88.195 (China)
        115.114.131.7 (India Mumbai)
        115.114.115.7 (India Hyderabad)
        213.19.144.110 (Amsterdam Netherlands)
        213.244.140.110 (Germany)
        103.122.166.55 (Australia Sydney)
        103.122.167.55 (Australia Melbourne)
        209.9.211.110 (Hong Kong SAR)
        64.211.144.160 (Brazil)
        69.174.57.160 (Canada Toronto)
        65.39.152.160 (Canada Vancouver)
        207.226.132.110 (Japan Tokyo)
        149.137.24.110 (Japan Osaka)
    Meeting ID: 912 9903 7264

    SIP: 91299037264@zoomcrc.com

Abstract: Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning due to their large energy-efficiency benefits on neuromorphic hardware. In this presentation, I will talk about important techniques for training SNNs that bring substantial benefits in latency, accuracy, interpretability, and robustness. We will first delve into how training is performed in SNNs. Training SNNs with surrogate gradients offers computational benefits due to short latency. However, because spiking neurons are non-differentiable, training becomes problematic, and surrogate methods have thus been limited to shallow networks compared to the conversion method. To address this training issue with surrogate gradients, we will go over a recently proposed method, Batch Normalization Through Time (BNTT), which allows us to train SNNs from scratch with very low latency and enables us to target interesting applications such as video segmentation, as well as scenarios beyond traditional learning, such as federated training. Another critical limitation of SNNs is their lack of interpretability. While considerable attention has been given to optimizing SNNs, the development of explainability is still in its infancy. I will talk about our recent work on a bio-plausible visualization tool for SNNs, called Spike Activation Map (SAM), which is compatible with BNTT training. The proposed SAM highlights spikes with short inter-spike intervals, which carry discriminative information for classification. Finally, using BNTT and SAM, I will highlight the robustness of SNNs with respect to adversarial attacks. In the end, time permitting, I will talk about interesting prospects of SNNs for non-conventional learning scenarios such as privacy-preserving distributed learning.
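To give a flavor of the surrogate-gradient idea the talk builds on, here is a minimal NumPy sketch (not the speaker's implementation): a leaky integrate-and-fire neuron unrolled over time, whose non-differentiable spike function is paired with a piecewise-linear surrogate derivative for backpropagation. The function names, the leak factor, and the triangular surrogate shape are illustrative assumptions, not details from the talk.

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron unrolled over T timesteps.

    inputs: shape (T,) array of input currents.
    Returns (spikes, membrane potentials), each of shape (T,).
    """
    mem = 0.0
    spikes, mems = [], []
    for x in inputs:
        mem = leak * mem + x                 # leaky integration
        s = 1.0 if mem >= threshold else 0.0 # non-differentiable spike (Heaviside)
        spikes.append(s)
        mems.append(mem)
        mem -= s * threshold                 # soft reset after a spike
    return np.array(spikes), np.array(mems)

def surrogate_grad(mem, threshold=1.0, alpha=1.0):
    """Triangular surrogate for d(spike)/d(membrane): nonzero near threshold.

    Used in place of the Heaviside derivative (zero almost everywhere)
    so that gradients can flow through the spiking nonlinearity.
    """
    return np.maximum(0.0, alpha - np.abs(mem - threshold))

spikes, mems = lif_forward(np.array([0.6, 0.6, 0.6, 0.0]))
grads = surrogate_grad(mems)
```

In a full training loop, BNTT would additionally insert a separate batch-normalization layer at each timestep of this unrolled computation; the sketch above only shows the neuron dynamics and the surrogate derivative.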

Biography: Dr. Priya Panda is an Assistant Professor in the Electrical Engineering Department at Yale University. She received her Ph.D. from Purdue University in 2019 under the supervision of Prof. Kaushik Roy. She received the B.E. degree in Electrical & Electronics Engineering and the M.Sc. degree in Physics from B.I.T.S. Pilani, India, in 2013, and was the recipient of the outstanding student award in physics for academic excellence. From 2013 to 2014, she worked at Intel, India, on RTL design for graphics power management. She has also worked as a research intern with Intel Labs, USA, in 2017 and with Nvidia, India, in 2013.

Priya’s research interests lie in robust and efficient neuromorphic computing. Her goal is to enable energy-aware, energy-efficient, and robust machine intelligence through algorithm-hardware co-design while catering to the resource constraints of Internet of Things (IoT) devices. Her research has been published in journals such as Nature, Nature Communications, IEEE Transactions on VLSI, and Applied Physics Reviews, among others.

Moderator: Sonali Singh, CSE Ph.D. Candidate



Media Contact: Vijay Narayanan


About

The Center for Artificial Intelligence Foundations and Engineered Systems (CAFE), pronounced café, brings together expertise from 75 researchers representing 24 academic units across Penn State with the goal of fostering cross-disciplinary interactions. The center’s focus is on accelerating progress by synergistically advancing AI foundations and the techniques to deploy them efficiently toward applications focused on engineered and defense systems. CAFE provides opportunities for research partnerships, faculty/student recruitment, and technology transition to practice.

Center for Artificial Intelligence Foundations and Engineered Systems

The Pennsylvania State University

W323 Westgate Building

University Park, PA 16802