Fast inference engine for 1.58-bit (ternary) neural networks on ESP32.
A lightweight runtime for models compiled with BitNeural32. It enables efficient deep learning on ESP32 by running inference directly over 1.58-bit (ternary) quantized weights, minimizing memory usage and maximizing speed.
| Filename | Release Date | File Size |
|---|---|---|
| BitNeural32-0.0.3.zip | | 34.65 KiB |
| BitNeural32-0.0.2.zip | | 26.61 KiB |
| BitNeural32-0.0.1.zip | | 26.61 KiB |