Neural Network Acceleration on FPGAs

Content

Neural networks are applied in a variety of domains, including safety-critical application scenarios in transportation and medicine. Key aspects of accelerating neural networks across these domains are performance, latency, reliability, and energy footprint. Dedicated hardware can outperform traditional CPU and GPU implementations in all of these respects. In this regard, Field-Programmable Gate Arrays (FPGAs; reconfigurable hardware) have proven to be an efficient and versatile solution for accelerating quantized neural networks, which are compact representations of neural network models. Their practical value is demonstrated by their use in Microsoft Azure ML, Amazon AWS, and other cloud platforms.
This module teaches students how to implement neural networks on reconfigurable hardware using an established framework, and also covers relevant practical details of optimizing a network for hardware deployment.
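To illustrate what "quantized" means here, the sketch below shows symmetric linear quantization of float32 weights to int8, a common compaction step before hardware deployment. This is a minimal NumPy illustration, not the framework used in the module; the function names and the symmetric-scaling scheme are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of a float tensor to int8.

    Returns the quantized integers and the scale factor needed
    to map them back to (approximate) float values.
    """
    # Map the largest magnitude to 127; guard against an all-zero tensor.
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the round-trip error per
# weight is bounded by half a quantization step (scale / 2).
print("max abs error:", float(np.max(np.abs(w - w_hat))))
```

The compactness is the point: int8 weights need a quarter of the memory and bandwidth of float32, which is what makes such networks a good fit for the limited on-chip resources of an FPGA.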

Language of instruction: English
Organisational issues

From 16 April 2024, every two weeks on Tuesdays, 14:00-15:30, Building 07.21, building part B, 2nd floor, laboratory room B.312.4

Since the number of seats is limited, registration for this laboratory in the campus system is required.

Slots are limited and registration is handled on a first-come, first-served basis, so make sure you sign up as early as possible. We can only consider registrations with the correct documents or from the online system (https://campus.studium.kit.edu/exams/index.php).