Classification using event-driven sensors and machine learning deep neural networks
Dr. Shih-Chii Liu
Institute of Neuroinformatics, University of Zurich and ETH Zurich
Abstract: Video and audio processing algorithms are typically based on conventional regular sampling, leading to unnecessary processing of frames even when they carry no new information. Such algorithms also require high sampling rates when fine temporal resolution is needed. This talk will cover alternative low-power, low-latency spiking silicon retina sensors (DVS), which produce asynchronous sparse streams of spikes only when a change in luminance contrast is detected, and silicon cochlea sensors (DAS), which produce asynchronous frequency-specific channel spikes. It will also present the use of event-driven machine learning deep networks together with the asynchronous outputs of these spiking sensors for solving a real-world task, and further discuss conversion methods for producing spiking versions of convolutional deep networks that match the classification accuracy of the original networks.
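The DVS event-generation principle described above (a pixel emits a spike only when its log-intensity changes by more than a contrast threshold) can be illustrated with a minimal simulation. This is an illustrative sketch operating on ordinary frames, not the sensor's actual circuit behavior; the function name and the threshold value are assumptions for the example.

```python
import numpy as np

def dvs_events(frames, threshold=0.2):
    """Emit (t, x, y, polarity) events whenever a pixel's log intensity
    changes by more than `threshold` since its last event.

    Illustrative model of DVS-style change detection: only pixels whose
    luminance contrast changed produce output, so static scenes are silent.
    """
    ref = np.log(frames[0] + 1e-6)  # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + 1e-6)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1  # ON (+1) or OFF (-1) event
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = log_i[y, x]  # reset reference at the firing pixel
    return events

# A static pixel generates no events; a brightening pixel generates one ON event.
frames = [np.full((2, 2), 0.5), np.array([[1.0, 0.5], [0.5, 0.5]])]
print(dvs_events(frames))  # [(1, 0, 0, 1)]
```

The sparse output (one event for the single changed pixel, nothing for the unchanged ones) is what allows downstream event-driven networks to skip computation when the scene is static.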
Biography: Shih-Chii Liu co-leads the Sensors group (http://sensors.ini.uzh.ch) at the Institute of Neuroinformatics, University of Zurich and ETH Zurich. She received the B.S. degree in electrical engineering from MIT and the Ph.D. degree in the Computation and Neural Systems program from the California Institute of Technology. She has worked at various companies including Gould American Microsystems, LSI Logic, and Rockwell International Research Labs. Her research interests include low-power neuromorphic auditory sensors and processors; VLSI event-driven bio-inspired processing circuits; event-driven algorithms; and deep neural networks. Dr. Liu is a past Chair of the IEEE CAS Sensory Systems and Neural Systems and Applications Technical Committees. She is the current Chair of the IEEE Swiss CAS/ED Society and an associate editor of the IEEE Transactions on Biomedical Circuits and Systems and the Neural Networks journal.