Z-ENG: Design of a Hardware Accelerator Optimized for a Neural Network Architecture
2024/2025 autumn semester
Not specified
Topic description
Machine learning applications have two main lifecycle phases: the training phase, in which the neural network is optimized and tuned for a given task, and the inference phase, in which the trained network is used to solve the problem. Training typically takes place on high-performance, GPU-equipped computers, while inference usually runs on low-power, inexpensive embedded systems with limited computational capacity. These systems are often equipped with specialized neural network accelerator modules.
Different accelerators have different hardware architectures, which leads to differing characteristics and performance. The individual layers and operations of a neural network can be executed with varying efficiency on different accelerators, so different execution units are the ideal choice for different network architectures.
When designing neural networks for inference on embedded devices, the hardware constraints must be taken into account to achieve optimal inference times (e.g., for convolutional layers: the optimal kernel size and number of channels). The task is to design an FPGA-based inference accelerator that can be specifically optimized for a given neural network through parameters fixed at synthesis time.
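To make the idea of synthesis-time parameterization concrete, here is a minimal sketch of how a single accelerator building block could be specialized to one network configuration. It assumes a C++ high-level synthesis (HLS) style flow such as Vitis HLS; the function conv2d and the template parameters K, C_IN and C_OUT are hypothetical names introduced for illustration only, not part of the task description.

```cpp
#include <vector>

// Hypothetical HLS-style building block: the template parameters play the
// role of synthesis-time generics. K, C_IN and C_OUT are fixed per layer
// before synthesis, so the tool can size on-chip buffers and unroll the
// inner loops for exactly that configuration.
template <int K, int C_IN, int C_OUT>
void conv2d(const std::vector<float>& in,      // C_IN  x H x W, row-major
            const std::vector<float>& weights, // C_OUT x C_IN x K x K
            std::vector<float>& out,           // C_OUT x H x W
            int H, int W) {
    const int pad = K / 2; // "same" zero padding
    for (int co = 0; co < C_OUT; ++co)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                float acc = 0.0f;
                // All trip counts below are compile-time constants, which
                // is what lets the hardware be specialized to the network.
                for (int ci = 0; ci < C_IN; ++ci)
                    for (int ky = 0; ky < K; ++ky)
                        for (int kx = 0; kx < K; ++kx) {
                            const int iy = y + ky - pad;
                            const int ix = x + kx - pad;
                            if (iy < 0 || iy >= H || ix < 0 || ix >= W)
                                continue; // zero-padded border
                            acc += in[(ci * H + iy) * W + ix]
                                 * weights[((co * C_IN + ci) * K + ky) * K + kx];
                        }
                out[(co * H + y) * W + x] = acc;
            }
}
```

Because the kernel size and channel counts are compile-time constants, a synthesis tool can unroll the inner loops into parallel multiply-accumulate units and size on-chip buffers exactly for the target layer: instantiating, say, conv2d<3, 16, 32> commits the hardware to 3x3 kernels with 16 input and 32 output channels, and the same source yields different hardware for each network architecture.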
To solve this task, the student will receive assistance from employees of the Continental AI Development Center.
If you are interested in the topic, please contact Dávid Sik by email before applying, indicating the selected topic, your training level and major, and the planned project subject.
External partner: Continental Autonomous Mobility Hungary
Maximum number of students:
1
Advisors
Sik Dávid
Assistant lecturer
Q.B232.
+36 (1) 463-2886