Is it possible to implement an MNIST inference engine, which can classify handwritten numbers, also on a PMS150C?
Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset? When it comes to absolutely low-end microcontrollers, there is hardly a more compelling target than the Padauk 8-bit microcontrollers. These are microcontrollers optimized for the simplest and lowest-cost applications there are. The smallest device of the portfolio, the PMS150C, sports 1024 13-bit words of one-time-programmable memory and 64 bytes of RAM, more than an order of magnitude smaller than the CH32V003. In addition, it has a proprietary accumulator-based 8-bit architecture, as opposed to the much more powerful RISC-V instruction set.

Is it possible to implement an MNIST inference engine, which can classify handwritten numbers, also on a PMS150C?
…
…
<https://cpldcpu.wordpress.com/2024/05/02/machine-learning-mnist-inference-on-the-3-cent-microcontroller/>
<https://archive.md/DzqzL>
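For a sense of scale (my own back-of-envelope, not from the blog): 1024 x 13-bit words is 13,312 bits of storage in total, so even with weights quantized to 2 bits, at most a few thousand parameters can fit alongside the code. A sketch of how such weights could be packed and read back, assuming a 2-bit encoding with no zero value (the blog's actual scheme may well differ):

#include <stdint.h>

/* Map 2-bit code 0..3 to weight -2, -1, +1, +2 (assumed encoding). */
static const int8_t w_lut[4] = { -2, -1, 1, 2 };

/* Return the i-th weight from a packed table: four 2-bit codes per
 * byte, stored const so they end up in the OTP program memory. */
static int8_t weight_at(const uint8_t *packed, uint16_t i)
{
    uint8_t code = (packed[i >> 2] >> ((i & 3) << 1)) & 3u;
    return w_lut[code];
}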
Test to see if this posts or whether I should dump this paid provider.
On 10/21/2024 3:06 PM, D. Ray wrote:
> Is it possible to implement an MNIST inference engine, which can
> classify handwritten numbers, also on a PMS150C?
Depends on whether you mean
George Neuner <gneuner2@comcast.net> wrote:

> Depends on whether you mean
Perhaps you misunderstood me. I’m not the author; I just posted the beginning of a blog post and provided the link to the rest of it because it seemed interesting. The reason I didn’t post the whole thing is that there are quite a few illustrations.
The blog post ends with:
“It is indeed possible to implement MNIST inference with good accuracy using one of the cheapest and simplest microcontrollers on the market. A lot of memory footprint and processing overhead is usually spent on implementing flexible inference engines that can accommodate a wide range of operators and model structures. Cutting this overhead away and reducing the functionality to its core allows for astonishing simplification at this very low end.

This hack demonstrates that there truly is no fundamental lower limit to applying machine learning and edge inference. However, the feasibility of implementing useful applications at this level is somewhat doubtful.”
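To illustrate what “reducing the functionality to its core” can mean in practice, here is a rough sketch of a fixed-function inference kernel (my own illustration in C, not the blog author’s code; the input size, layer shape and 2-bit weight encoding are all assumptions):

#include <stdint.h>

#define N_IN  64   /* e.g. an 8x8 downscaled input image (assumption) */
#define N_OUT 10   /* ten digit classes */

/* Map 2-bit code 0..3 to weight -2, -1, +1, +2 (assumed encoding). */
static const int8_t w_lut[4] = { -2, -1, 1, 2 };

/* weights[]: N_IN * N_OUT two-bit codes packed four per byte, a const
 * table generated offline by the training script and burned into OTP. */
uint8_t classify(const int8_t in[N_IN], const uint8_t weights[])
{
    int16_t best = INT16_MIN;
    uint8_t best_class = 0;

    for (uint8_t o = 0; o < N_OUT; o++) {
        int16_t sum = 0;
        for (uint8_t i = 0; i < N_IN; i++) {
            uint16_t idx = (uint16_t)o * N_IN + i;
            uint8_t code = (weights[idx >> 2] >> ((idx & 3) << 1)) & 3u;
            sum += (int16_t)w_lut[code] * in[i];
        }
        if (sum > best) { best = sum; best_class = o; }
    }
    return best_class;   /* argmax over the output neurons */
}

Note that with only 64 bytes of RAM on the real part, the input would likely have to be downscaled and consumed on the fly rather than buffered whole; the sketch ignores that to stay readable. There is no operator dispatch, no layer descriptors, no runtime graph: the “engine” collapses into one multiply-accumulate loop plus an argmax.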