Machine Learning Accelerators for Edge Computing
2020/07 - 2022/12
- Developed a hardware-accelerated approach for onsite neural network training. The accelerator achieves 10.2x and 2.0x speedup, and 10.1x and 3.5x energy efficiency, compared with implementations on edge CPUs and GPUs, respectively. Compared with the state-of-the-art (SOTA) FPGA implementation, resource consumption is reduced by up to 41.3%.
- Introduced an edge-FPGA-based accelerator for the BFGS quasi-Newton (BFGS-QN) algorithm, achieving a 4.9x speedup over a typical edge CPU.
- Proposed and open-sourced a series of ML hardware accelerator IP cores.
- Publications:
  - “Edge FPGA-based Onsite Neural Network Training,” in ISCAS, 2023.
  - “Implementation of quasi-Newton algorithm on FPGA for IoT endpoint devices,” Int. J. Secur. Netw., 2022.
  - “MLoF: Machine Learning Accelerators for the Low-Cost FPGA Platforms,” Appl. Sci.-Basel, 2021.