This course will teach attendees how to design AI hardware that is 3-4 orders of magnitude more efficient than GPUs and FPGAs. In essence, it aims to teach how to achieve non-incremental improvements in computational, silicon, and engineering efficiency. The course will focus on widely used artificial neural networks such as CNNs, LSTMs, and SOMs, showing how to analyze their requirements and which architectural and technology options can be traded off to find optimal solutions. It will also cover how to trade off implementation cost against the accuracy of neural networks.
PhD students attending the course are expected to have a good understanding of neural networks and how they are used, as well as a solid grounding in computer architecture and digital design principles.