Floadia develops memory technology that retains ultra-high-precision analog data for extended periods
Floadia will apply the memory technology to a chip that performs AI (artificial intelligence) inference with overwhelmingly low power consumption. The chip is based on an architecture called Computing in Memory (CiM), which stores neural network weights in non-volatile memory and executes large numbers of multiply-accumulate calculations in parallel by passing current through the memory array. CiM is attracting worldwide attention as an AI accelerator for edge computing environments because it can read large amounts of data from memory at once and consumes much less power than conventional AI accelerators that perform multiply-accumulate calculations on CPUs and GPUs.
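The parallel multiply-accumulate idea behind CiM can be illustrated with a small numerical sketch. This is a generic model of an analog memory array, not Floadia's actual circuit: weights are stored as cell conductances, input activations are applied as row voltages, and each column current is the analog sum of the products.

```python
import numpy as np

# Generic CiM illustration (hypothetical, not Floadia's design):
# each weight is stored as a cell conductance G[i, j]; applying input
# voltages V[i] to the rows makes each column current
#   I[j] = sum_i V[i] * G[i, j]
# (Ohm's law plus Kirchhoff's current law), so one parallel read of the
# array performs an entire matrix-vector multiply.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # cell conductances (weights)
V = rng.uniform(0.0, 1.0, size=4)        # row input voltages (activations)

I = V @ G  # column currents = all multiply-accumulate results at once

# The same result computed sequentially, as a CPU or GPU would:
I_seq = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])
assert np.allclose(I, I_seq)
```

The power advantage comes from the analog array producing all column sums in a single read, rather than fetching each weight and accumulating it in digital logic.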
This memory technology is based on the SONOS-type flash memory that Floadia developed for integration into microcontrollers and other devices. Floadia made numerous refinements, such as optimizing the structure of the charge-trapping layer (the ONO film), to extend the data retention time when storing 7 bits of data. A combination of two cells can store up to 8 bits of neural network weights, and despite its small chip area the array achieves a multiply-accumulate performance of 300 TOPS/W, far exceeding that of existing AI accelerators.
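One way two 7-bit cells can jointly represent a signed 8-bit-range weight is a differential pair, a scheme commonly used in CiM designs. The article does not specify Floadia's actual encoding, so the functions below are a hypothetical sketch of that general technique only.

```python
# Hypothetical differential-pair encoding (not confirmed as Floadia's
# scheme): one cell holds the positive part of the weight and the other
# the negative part, each as a 7-bit level in [0, 127]. Their difference
# spans [-127, 127], roughly an 8-bit signed range.

def encode(weight: int) -> tuple[int, int]:
    """Split a signed weight in [-127, 127] into two 7-bit cell levels."""
    assert -127 <= weight <= 127
    return (weight, 0) if weight >= 0 else (0, -weight)

def decode(pos_cell: int, neg_cell: int) -> int:
    """Reconstruct the weight as the difference of the two cell readings."""
    return pos_cell - neg_cell

# Round-trip check across the representable range:
for w in (-127, -5, 0, 42, 127):
    assert decode(*encode(w)) == w
```

In analog hardware the subtraction is typically done by sinking the two column currents against each other, so the differential pair costs no extra compute step.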