
IEEE Philly - DEEP LEARNING AND NEUROMORPHIC COMPUTING – TECHNOLOGY, HARDWARE AND IMPLEMENTATION

When:
Thursday, October 24, 2019, 1:30 PM until 2:30 PM
Where:
Bossone Research Enterprise Center, Room 302, Drexel University
3140 Market St
Philadelphia, PA 19104
Category:
Affiliate Group Event
Registration is required
Payment In Full In Advance Only

Click HERE for more information & to register.

 

Speaker: Hai “Helen” Li

 

Following technological advances in high-performance computing systems and the rapid growth of data acquisition, machine learning, and deep learning in particular, has achieved remarkable success in many research areas and applications. This success is enabled, to a great extent, by large-scale deep neural networks (DNNs) that learn from huge volumes of data. Deploying such large models, however, is both computation-intensive and memory-intensive. Although hardware acceleration for neural networks has been studied extensively, hardware development still falls far behind the upscaling of DNN models at the software level. We envision that hardware/software co-design is necessary to accelerate the performance of deep neural networks. In this talk, I will start with the trends of machine learning research in academia and industry, followed by our study of how to run sparse and low-precision neural networks, as well as our investigation of memristor-based computing engines.
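
To make the "sparse and low-precision" idea concrete, below is a minimal NumPy sketch (a generic illustration, not the speaker's actual method or code): magnitude pruning zeroes out the smallest weights to create sparsity, and symmetric uniform quantization maps the surviving weights to 8-bit integers. The sparsity level and bit width here are arbitrary choices made for the example.

import numpy as np

def prune_by_magnitude(w, sparsity):
    # Zero out the smallest-magnitude weights so that roughly
    # `sparsity` fraction of the entries become zero.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def quantize_uniform_int8(w):
    # Symmetric uniform quantization to signed 8-bit integers.
    qmax = 127
    scale = float(np.max(np.abs(w)))
    scale = scale / qmax if scale > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale  # dequantize as q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_sparse = prune_by_magnitude(w, sparsity=0.5)   # half the weights zeroed
q, scale = quantize_uniform_int8(w_sparse)       # 8-bit integer weights
print("max dequantization error:", np.abs(w_sparse - q * scale).max())

The appeal of memristor-based engines, the talk's other topic, is that the matrix-vector products dominating DNN inference can be carried out in the analog domain, with weights stored as device conductances and outputs read as summed currents along the crossbar bitlines.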