ECE 598 NS - Machine Learning in Silicon

Fall 2017

Title: Machine Learning in Silicon
Rubric: ECE 598
Section: NS
CRN: 66386
Type: LEC
Hours: 4
Times: 1100 - 1220
Days: M W
Location: 2074 Electrical & Computer Eng Bldg
Instructor: Naresh R Shanbhag

Official Description

Subject offerings of new and developing areas of knowledge in electrical and computer engineering intended to augment the existing curriculum. See Class Schedule or departmental course information for topics and prerequisites. Course Information: May be repeated in the same or separate terms if topics vary.

Section Description

Prerequisites: ECE 313 and ECE 482. This course will introduce the design and implementation of robust and energy-efficient machine learning systems on nanoscale CMOS, with applications to emerging sensor-rich, energy-constrained embedded platforms such as wearables, IoTs, autonomous vehicles, and biomedical devices. Algorithm-to-architecture mapping techniques that reduce energy consumption will be studied and applied to machine learning algorithms. Energy, delay, and behavioral models of machine learning kernels in nanoscale silicon operating at the limits of energy efficiency (low-SNR fabrics) will be developed, and the impact of errors caused by low-SNR circuit operation on system behavior will be studied. Statistical, Shannon-inspired error compensation techniques based on estimation and detection will be discussed and compared with conventional fault-tolerance and error-resiliency techniques. Case studies of integrated circuit realizations of machine learning kernels will be presented.
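For readers unfamiliar with statistical error compensation, the toy sketch below illustrates the basic idea in the spirit of algorithmic noise tolerance: a "main" dot-product block that occasionally produces large hardware-style errors, a reduced-precision estimator replica, and a detector that keeps the main output unless it disagrees with the estimate by more than a threshold. This is an assumption-laden illustration, not the course's reference design; the error model, bit width, threshold, and function names are all made up for the example.

```python
# Toy sketch of statistical error compensation (algorithmic-noise-tolerance style).
# The error model, bit width, threshold, and function names are illustrative
# assumptions, not the course's reference design.
import numpy as np

rng = np.random.default_rng(0)

def main_block(x, w, error_rate=0.05):
    """Full-precision dot product whose output occasionally suffers a large
    error, mimicking timing violations on a voltage-overscaled (low-SNR) datapath."""
    y = float(np.dot(w, x))
    if rng.random() < error_rate:
        y += rng.choice([-1.0, 1.0]) * 10.0 * abs(y)   # rare, large-magnitude error
    return y

def estimator_block(x, w, bits=4):
    """Reduced-precision replica: statistically close to the true output,
    but always carries a small quantization error."""
    scale = 2.0 ** (bits - 1)
    wq = np.round(w * scale) / scale
    xq = np.round(x * scale) / scale
    return float(np.dot(wq, xq))

def compensated_output(x, w, threshold=0.5):
    """Detector: keep the main output unless it deviates too far from the estimate."""
    y_main = main_block(x, w)
    y_est = estimator_block(x, w)
    return y_est if abs(y_main - y_est) > threshold else y_main

# Compare mean-squared error with and without compensation on random data.
w = rng.standard_normal(16) / 4.0
raw_err, sec_err = [], []
for _ in range(2000):
    x = rng.standard_normal(16) / 4.0
    y_true = float(np.dot(w, x))
    raw_err.append((main_block(x, w) - y_true) ** 2)
    sec_err.append((compensated_output(x, w) - y_true) ** 2)
print("MSE without compensation:", float(np.mean(raw_err)))
print("MSE with compensation   :", float(np.mean(sec_err)))
```

The comparison highlights why the approach is attractive for low-SNR operation: rare, large errors are cheap to detect and correct statistically, whereas relying on the reduced-precision estimator alone would incur a persistent quantization penalty.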

Description

This course will introduce the design and implementation of machine learning systems on resource-constrained platforms that are beginning to find use in emerging sensor-rich applications such as wearables, IoTs, autonomous vehicles, and biomedical devices. The course will begin with preliminaries, including the motivation and scope of the course; terminology, applications, and platforms; and a taxonomy of inference tasks and learning. Algorithm, architecture, and circuit trade-offs to meet desired system performance metrics such as accuracy, latency, and throughput will be studied under severe constraints on precision, memory, computation, and energy. The least mean squares (LMS) algorithm will be employed as a vehicle to understand the issues involved in mapping learning algorithms to architectures and circuits, including: algorithmic properties (training, convergence); analytical estimation of bit-precision requirements; use of data-flow graph (DFG) descriptions; algorithm-to-architecture mapping using DFG transforms; architectural energy and delay estimation via CMOS circuit models of arithmetic units, memory, and interconnect; and case studies of CMOS prototypes of LMS. This path from algorithms to architectures to circuits will then be taken for single-stage classifiers (support vector machines, decision trees), classifier ensembles (random forests, AdaBoost), and deep neural networks (DNNs/CNNs). Finally, machine learning on silicon operating at the limits of energy efficiency will be studied: properties of low-SNR/low-energy nanoscale fabrics; the intrinsic error tolerance of machine learning algorithms; error-resilient computing; inexact computing; Shannon-inspired computing (statistical error compensation (SEC)); and case studies of CMOS implementations. Advanced topics include emerging cognitive applications, the deep in-memory architecture (DIMA), and systems on beyond-CMOS fabrics.
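As a rough illustration of the LMS-centered portion of this path, the sketch below runs LMS system identification in floating point and again with data and weights quantized to a few bits, so the effect of limited precision on convergence and steady-state error can be seen directly. The filter length, step size, bit widths, quantizer range, and the unknown system are illustrative assumptions chosen only for the example, not course material.

```python
# Minimal LMS system-identification sketch: floating-point vs. fixed-point.
# Filter length, step size, bit widths, quantizer range, and the unknown
# system below are illustrative assumptions, not values from the course.
import numpy as np

rng = np.random.default_rng(1)

def quantize(v, bits, clip=1.0):
    """Uniform quantizer to `bits` bits over [-clip, clip)."""
    step = 2.0 * clip / (2 ** bits)
    return np.clip(np.round(v / step) * step, -clip, clip - step)

def lms_steady_state_mse(num_taps=8, steps=4000, mu=0.2, bits=None):
    """Identify an unknown FIR system with LMS; quantize data and weights
    when `bits` is given, mimicking a fixed-point datapath."""
    h = rng.uniform(-0.25, 0.25, num_taps)   # unknown system to identify
    w = np.zeros(num_taps)                   # adaptive weights
    x_buf = np.zeros(num_taps)               # input delay line
    sq_err = []
    for _ in range(steps):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.uniform(-0.5, 0.5)                            # new input sample
        d = float(np.dot(h, x_buf)) + 0.01 * rng.standard_normal()   # desired + noise
        xq = quantize(x_buf, bits) if bits else x_buf
        y = float(np.dot(w, xq))             # filter output
        e = d - y                            # error signal
        w = w + mu * e * xq                  # LMS weight update
        if bits:
            w = quantize(w, bits)            # finite-precision weight storage
        sq_err.append(e * e)
    return float(np.mean(sq_err[-500:]))     # steady-state mean-squared error

print("steady-state MSE, floating point:", lms_steady_state_mse())
print("steady-state MSE, 8-bit         :", lms_steady_state_mse(bits=8))
print("steady-state MSE, 4-bit         :", lms_steady_state_mse(bits=4))
```

With coarse quantization the LMS weight update can fall below half a quantization step and the adaptation stalls at a high residual error; this is the kind of behavior that analytical bit-precision estimates aim to predict before a design is committed to silicon.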

Detailed Description and Outline

Course Web-Page: http://courses.ece.uiuc.edu/ece598ns/fa2017


Texts

List of papers and instructor notes.

Last updated

8/7/2017