Z-ENG: Quantization Error Propagation in Convolutional Neural Networks
2024-2025 autumn
Not specified
Topic description
Machine learning applications have two main lifecycle phases: the learning phase, in which the neural network is optimized and tuned for a given task, and the inference phase, in which the trained network is used to solve the problem. Training typically takes place on high-performance computers equipped with GPUs, while inference usually runs on inexpensive, low-power embedded systems with limited computational capacity. These systems are often equipped with specialized neural network accelerator modules.
While training neural networks mostly involves floating-point computation, specialized embedded accelerator cores often support only fixed-point number representations. As a result, the neural network parameters must be quantized between the training and inference phases. The error introduced by quantization depends on the chosen fixed-point number representation and on the accelerator's parameters. The student's task is to analyze how this error propagates in convolutional neural networks across different network architectures, to establish a model of the error propagation, and to devise a strategy for selecting fixed-point number representations and accelerator parameters.
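To illustrate the kind of error the task concerns, the sketch below quantizes floating-point weights to a signed fixed-point format and measures the resulting error. This is a minimal, hypothetical example (the function name, the Qm.n format with round-to-nearest and saturation, and the random weights are all assumptions for illustration, not part of the topic description):

```python
import numpy as np

def quantize_fixed_point(x, frac_bits, total_bits=16):
    """Quantize to a signed fixed-point value with `frac_bits` fractional
    bits (round-to-nearest, saturating), then return the dequantized
    float actually used at inference. Illustrative sketch only."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1))          # most negative representable integer
    hi = 2 ** (total_bits - 1) - 1         # most positive representable integer
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

# Hypothetical layer weights; small values typical of trained CNN kernels.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=1000)

# More fractional bits -> smaller quantization error (bounded by 2^-(n+1)
# per value when no saturation occurs).
for n in (4, 8, 12):
    err = weights - quantize_fixed_point(weights, frac_bits=n)
    print(f"frac_bits={n}: max abs error = {np.abs(err).max():.6f}")
```

How such per-weight errors accumulate through convolution, activation, and pooling layers, and how that depends on the architecture, is exactly what the error-propagation model in the task would need to capture.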
During this task, the student will gain insights into the operation and configuration possibilities of convolutional neural networks and dedicated accelerator devices for their execution. Additionally, they will gain experience in using machine learning frameworks.
To solve this task, the student will receive assistance from the employees of the Continental AI Development Center.
If you are interested in the topic, be sure to contact Dávid Sik by email before applying, indicating the selected topic, your training level, your major, and the planned project subject.
External partner: Continental Autonomous Mobility Hungary
Maximum number of students:
1 student
Advisor
Sik Dávid
Assistant Lecturer
Q.B232.
+36 (1) 463-2886