The aim of this course is to provide students with knowledge and hands-on experience in developing application software for processors with massively parallel computing resources. In general, we refer to a processor as massively parallel if it can complete more than 64 arithmetic operations per clock cycle. Many commercial processors from NVIDIA, AMD, and Intel already provide this level of concurrency. Effectively programming these processors requires in-depth knowledge of parallel programming principles, as well as of the parallelism models, communication models, and resource limitations of these processors. The target audiences of the course are students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors.
A. After the six machine problems (after approximately 20 seventy-five-minute lectures) the student should be able to:
1. Analyze and implement common parallel algorithm patterns in a parallel programming model such as CUDA. (a)
2. Design experiments to analyze the performance bottlenecks in their parallel code. (b)
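To illustrate objective A.1, the following is a minimal sketch of one common parallel algorithm pattern, the grid-stride loop, applied to vector addition in CUDA. The kernel name, array names, and launch configuration are illustrative assumptions, not course material:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Grid-stride loop: a common CUDA pattern that lets a fixed-size grid
// process an input of any length by having each thread stride across
// the array. (Hypothetical example for illustration.)
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the sketch short; real machine problems may
    // instead use explicit cudaMalloc/cudaMemcpy transfers.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<256, 256>>>(a, b, c, n);   // 256 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The grid-stride form decouples the launch configuration from the problem size, which is one reason it appears so often in introductory parallel-pattern material.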
B. By the final examination (after approximately 29 seventy-five-minute lectures) the student should be able to:
6. Understand and apply common parallel algorithm patterns. (a)
8. Understand and apply common parallel programming interface features. (a)
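As an illustration of objective B.8, the sketch below uses one widely used CUDA runtime-API feature, event-based kernel timing, which also supports the performance experiments named in objective A.2. The kernel and problem size are illustrative assumptions:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel to time (hypothetical example).
__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    // CUDA events are the standard runtime-API mechanism for measuring
    // elapsed device time, rather than timing on the host.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);   // wait until the stop event completes

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    return 0;
}
```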
C. By the end of the final project (with proposal, workshop discussions, presentation, and report) the student should be able to:
14. Identify the design space and explore optimization opportunities for their solutions. (c)