DOI: 10.1109/aicas54282.2022.9869947
OpenAccess: Closed

A low power neural network training processor with 8-bit floating point with a shared exponent bias and fused multiply add trees

Jeongwoo Park, Sunwoo Lee, Dongsuk Jeon

Dataflow
Computer science
Artificial neural network
2022
This demonstration showcases a neural network training processor fabricated in 40 nm LP CMOS technology. Using a custom 8-bit floating-point format and efficient tree-based processing schemes and dataflow, the processor achieves 2.48× higher energy efficiency than a prior low-power neural network training processor.
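To illustrate the idea of an 8-bit floating-point format with a shared exponent bias, here is a minimal sketch in Python. The bit layout (1 sign, 4 exponent, 3 mantissa bits), the bias-selection rule, and all function names are assumptions for illustration; the paper's actual format and hardware implementation may differ.

```python
import math

# Hypothetical FP8 layout: 1 sign bit, 4 exponent bits, 3 mantissa bits.
# A single bias shared by a whole tensor shifts the representable range
# toward the actual magnitudes in that tensor, instead of fixing it per value.
EXP_BITS, MAN_BITS = 4, 3
EXP_MAX = (1 << EXP_BITS) - 1

def choose_shared_bias(values):
    """Pick a bias so the largest magnitude lands at the top of the range."""
    max_mag = max(abs(v) for v in values if v != 0.0)
    top_exp = math.floor(math.log2(max_mag))
    return EXP_MAX - 1 - top_exp

def encode(v, bias):
    """Quantize v to (sign, biased exponent, mantissa) under the shared bias."""
    if v == 0.0:
        return (0, 0, 0)
    sign = 0 if v > 0 else 1
    e = math.floor(math.log2(abs(v)))
    biased = min(max(e + bias, 1), EXP_MAX - 1)   # clamp; no subnormals here
    frac = abs(v) / 2.0 ** (biased - bias) - 1.0  # fractional part of mantissa
    man = min(round(frac * (1 << MAN_BITS)), (1 << MAN_BITS) - 1)
    return (sign, biased, man)

def decode(sign, biased, man, bias):
    """Reconstruct the real value from the packed fields and the shared bias."""
    mag = (1.0 + man / (1 << MAN_BITS)) * 2.0 ** (biased - bias)
    return -mag if sign else mag
```

Because the bias is stored once per tensor rather than per value, all 8 bits of each element remain available for sign, exponent, and mantissa, while the dynamic range still tracks the data.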