ECE 417
Multimedia Signal Processing

Course information from Spring 2014.

Section Type Times Days Location Instructor
A LEC 0930 - 1050 T R   130 Wohlers Hall  Mark Hasegawa-Johnson
Official Description

Characteristics of speech and image signals; important analysis and synthesis tools for multimedia signal processing including subspace methods, Bayesian networks, hidden Markov models, and factor graphs; applications to biometrics (person identification), human-computer interaction (face and gesture recognition and synthesis), and audio-visual databases (indexing and retrieval). Emphasis on a set of MATLAB machine problems providing hands-on experience. Course Information: 4 undergraduate hours. 4 graduate hours. Prerequisite: ECE 310 and ECE 313.
Course Prerequisites

Credit in ECE 313 or STAT 410
Credit in ECE 310
Course Directors Thomas S Huang
Topical Prerequisites
Prerequisite: ECE 313 and ECE 310.
Course Goals

The goal of the course is to prepare students for industrial positions in the emerging field of multimedia and for further graduate study in signal processing. Through a set of carefully designed machine problems, students learn important tools for audio-visual signal processing, analysis, and synthesis, and their applications to biometrics, human-computer interaction, and multimedia indexing and search.

Instructional Objectives

A. After Machine Problem 1 (MP1), Week 3 of the semester, the students should be able to:

- Understand speech features (esp. cepstrum coefficients) and nearest-neighbor pattern classifiers, and their applications to speech recognition and speaker identification (a,l)
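The course's machine problems are in MATLAB; as a language-agnostic illustration of the nearest-neighbor idea, here is a minimal Python sketch in which toy two-dimensional vectors stand in for real cepstral feature vectors (all data and labels are hypothetical):

```python
import math

def nearest_neighbor(train, query):
    """Label `query` with the label of the closest training example.

    `train` is a list of (feature_vector, label) pairs; the distance
    is Euclidean, as in a basic speaker-identification baseline.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Toy stand-ins for cepstral feature vectors of two speakers
train = [([1.0, 0.2], "spk1"), ([0.9, 0.1], "spk1"),
         ([-0.8, 0.5], "spk2"), ([-1.1, 0.4], "spk2")]
print(nearest_neighbor(train, [0.95, 0.15]))  # closest training example has label spk1
```

In practice each vector would hold a frame's cepstral coefficients, and a k-nearest-neighbor vote over many frames would decide the speaker.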

B. After MP2, Week 5 of the semester, the students should be able to:

- Understand principal component analysis and linear discriminant analysis, and their applications to face recognition (a,l)
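One way to see what PCA computes is a minimal sketch of the leading principal component by power iteration (illustrative Python, not the course's MATLAB code; power iteration assumes the start vector is not orthogonal to the leading eigenvector):

```python
import math

def first_principal_component(X, iters=50):
    """Leading eigenvector of the sample covariance, by power iteration.

    Projecting face images onto the top few such components gives the
    classic "eigenfaces" representation used for face recognition.
    """
    n, d = len(X), len(X[0])
    mean = [sum(x[j] for x in X) / n for j in range(d)]
    Xc = [[x[j] - mean[j] for j in range(d)] for x in X]
    # Sample covariance matrix (d x d)
    C = [[sum(r[a] * r[b] for r in Xc) / n for b in range(d)]
         for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Points lying on the line y = x: the first component is (1,1)/sqrt(2)
v = first_principal_component([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
```

For real face images, d is the number of pixels and one would compute several components, not just the first.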

C. After MP3, Week 7 of the semester, the students should be able to:

- Understand maximum likelihood (ML) classifiers, Bayesian networks, and multimodal fusion, and their applications to audio-visual person identification (a,l)
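A minimal sketch of score-level fusion, assuming the audio and visual observations are conditionally independent given the person (illustrative Python; all names and scores are made up):

```python
import math

def fuse_and_identify(prior, audio_ll, visual_ll):
    """Pick the person maximizing log prior plus the summed
    log-likelihoods of the audio and visual observations.

    Summing the two log-likelihoods is a naive-Bayes-style fusion:
    it assumes the modalities are conditionally independent given
    the person's identity.
    """
    scores = {p: math.log(prior[p]) + audio_ll[p] + visual_ll[p]
              for p in prior}
    return max(scores, key=scores.get)

prior = {"alice": 0.5, "bob": 0.5}
audio_ll = {"alice": -1.0, "bob": -2.0}   # audio slightly favors alice
visual_ll = {"alice": -3.0, "bob": -1.0}  # video strongly favors bob
```

Here the stronger visual evidence outweighs the weaker audio evidence, so the fused decision is "bob".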

D. After MP4, Week 9 of the semester, the students should be able to:

- Understand hidden Markov models (HMMs), including algorithms for learning, inference, and decoding, and their application to audio-visual speech recognition (a,l)
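The forward algorithm, one of the HMM inference algorithms above, can be sketched as follows (illustrative Python with a hypothetical 2-state, 2-symbol model; real recognizers work in the log domain to avoid underflow):

```python
def forward(obs, pi, A, B):
    """Total likelihood P(obs) of an observation sequence under an HMM.

    pi[i]   : probability of starting in state i
    A[i][j] : probability of a transition from state i to state j
    B[i][o] : probability that state i emits observation symbol o
    """
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

# Hypothetical 2-state HMM over two discrete observation symbols
pi = [0.5, 0.5]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
```

Replacing the sum over previous states with a max (and keeping backpointers) turns this into Viterbi decoding.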

E. After MP5, Week 11 of the semester, the students should be able to:

- Understand 3D face modeling and animation, and their applications to speech-driven lip movement in an audio-visual avatar (synthetic talking head) (a,l)
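One common technique for speech-driven lip movement is morph-target (blendshape) interpolation; a minimal sketch, assuming a flattened list of vertex coordinates and a blend weight derived from the speech signal (illustrative Python, hypothetical data):

```python
def morph(neutral, target, w):
    """Linearly interpolate each mesh coordinate from a neutral face
    toward a viseme target; w in [0, 1] is the blend weight that the
    speech signal would drive over time.
    """
    return [n + w * (t - n) for n, t in zip(neutral, target)]

# Toy 1D "mesh" of four vertex coordinates (hypothetical values)
neutral = [0.0, 1.0, 2.0, 3.0]
open_mouth = [0.0, 1.5, 2.5, 3.0]
```

A real avatar blends several viseme targets at once, with weights estimated frame by frame from the audio.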

F. After MP6, Week 13 of the semester, the students should be able to:

- Understand and practice content-based image retrieval with relevance feedback (a,l)
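Relevance feedback can be illustrated with a Rocchio-style query update, a standard technique sketched here in Python (the alpha/beta/gamma weights are conventional defaults, not values from the course):

```python
def rocchio(query, relevant, nonrelevant,
            alpha=1.0, beta=0.75, gamma=0.15):
    """One round of relevance feedback: move the query feature vector
    toward the centroid of images the user marked relevant and away
    from the centroid of those marked non-relevant.
    """
    d = len(query)
    def centroid(vecs):
        return [sum(v[j] for v in vecs) / len(vecs) if vecs else 0.0
                for j in range(d)]
    r, nr = centroid(relevant), centroid(nonrelevant)
    return [alpha * query[j] + beta * r[j] - gamma * nr[j]
            for j in range(d)]
```

The updated query vector is then used to re-rank the image database, and the loop repeats with fresh user feedback.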

G. After MP7, Week 15 of the semester, the students should be able to:

- Understand the structuring and indexing of video data, and algorithms for video shot segmentation based on audio and visual cues (a,l)
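A minimal sketch of visual-cue shot segmentation: declare a cut wherever consecutive frame histograms differ by more than a threshold (illustrative Python; the threshold value is hypothetical, and a real system would tune it and add audio cues such as silence or speaker changes):

```python
def shot_cuts(hists, threshold=0.5):
    """Flag a shot cut at frame t when the L1 distance between the
    normalized color histograms of frames t-1 and t exceeds
    `threshold`.
    """
    cuts = []
    for t in range(1, len(hists)):
        d = sum(abs(a - b) for a, b in zip(hists[t - 1], hists[t]))
        if d > threshold:
            cuts.append(t)
    return cuts
```

Gradual transitions (fades, dissolves) spread the histogram change over many frames, which is why practical systems go beyond this single-threshold rule.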

Last updated: 5/23/2013