Always-on Deep Neural Networks
Contact
Rohan Asthana, M.Sc.
E-Mail: rohan.asthana@fau.de
Recent deep neural network architectures are complex and place large demands on computational resources. Consequently, deploying a deep neural network on hardware-constrained devices remains a challenge. To address this problem, network architectures have to be redesigned with storage, floating-point operations, and parameter discretization in mind; this process is known as neural network compression. In addition, edge hardware needs to be investigated and redefined for efficient neural network operation. In particular, specialized integrated-circuit (IC) accelerators offer great adaptability in the memory hierarchy and can exploit medium-precision mixed-signal compute circuitry to drive down power consumption. This DFG project aims to develop new mixed-signal circuits and architectures with runtime-tunable compute precision, as well as to design and train hardware-fitted hybrid-precision neural networks on such hardware. Within this custom-hardware setting, neural network compression will be explored as a co-design and co-training task in which the hardware is part of the optimization.
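To illustrate the parameter-discretization aspect of compression mentioned above, the following is a minimal sketch (not the project's actual method) of symmetric uniform weight quantization with a tunable bit width, loosely analogous to the runtime-tunable compute precision targeted by the hardware. The function name and the per-tensor scaling scheme are illustrative assumptions.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform 'fake' quantization of a weight tensor.

    Illustrative sketch: one scale per tensor, chosen from the
    maximum absolute weight; not the project's actual scheme.
    """
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax     # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                     # dequantized weights

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)

# Lower precision -> larger reconstruction error, smaller storage
for bits in (8, 4, 2):
    mse = np.mean((w - quantize_uniform(w, bits)) ** 2)
    print(f"{bits}-bit quantization, MSE: {mse:.6f}")
```

In a hybrid-precision setting, different layers (or hardware compute blocks) would receive different `bits` values, and the choice of bit widths becomes part of the joint hardware/network optimization described above.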
Collaboration Partners: Prof. V. Belagiannis (FAU), Prof. M. Ortmanns (Universität Ulm)
This project is funded by the DFG, project number 493129587.