Energy-Efficient DNN Inference on Approximate Accelerators Through Formal Property Exploration
2022
Deep neural networks (DNNs) are heavily utilized in modern applications, putting energy-constrained devices to the test. To bypass high energy consumption, approximate computing has been employed in DNN accelerators to balance the accuracy-energy trade-off. However, the approximation-induced accuracy loss can be very high and drastically degrade the performance of the DNN. Therefore, there is a need for a fine-grained mechanism that assigns specific DNN operations to approximation in order to maintain acceptable DNN accuracy while achieving low energy consumption. We present an automated framework for weight-to-approximation mapping through formal property exploration for approximate DNN accelerators. At the MAC unit level, our experimental evaluation surpassed already energy-efficient mappings by more than $2\times$ in terms of energy gains, while supporting fine-grained control over the introduced approximation.
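The idea of fine-grained weight-to-approximation mapping can be illustrated with a minimal sketch. The truncation-based approximate multiplier and the magnitude-threshold heuristic below are illustrative assumptions only; the paper's actual framework derives the mapping via formal property exploration over the accelerator's MAC units.

```python
# Hypothetical sketch: per-weight mapping to exact vs. approximate multipliers.
# The approximate multiplier is modeled by zeroing low-order product bits;
# a real approximate MAC unit would differ.

def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    """Model an approximate multiplier by truncating low-order product bits."""
    return ((a * b) >> drop_bits) << drop_bits

def map_weights(weights, threshold):
    """Illustrative heuristic: route small-magnitude weights to the
    approximate multiplier, keep larger (accuracy-critical) weights exact."""
    return ["approx" if abs(w) < threshold else "exact" for w in weights]

def mac(inputs, weights, mapping):
    """Multiply-accumulate using the per-weight mapping."""
    acc = 0
    for x, w, m in zip(inputs, weights, mapping):
        acc += approx_mul(x, w) if m == "approx" else x * w
    return acc

# Small-magnitude weights (1 and 2) go to the approximate multiplier;
# the dominant weight (10) stays exact, bounding the output error.
mapping = map_weights([1, 10, 2], threshold=5)
result = mac([3, 4, 5], [1, 10, 2], mapping)
```

In this toy run the approximated products are truncated to zero while the exact product of the dominant weight is preserved, showing how per-weight assignment trades a bounded error for the energy saved on the approximated operations.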