Description
Hi @earlephilhower & maintainers,
First of all, thank you for the amazing work on Arduino-pico – it’s become my go-to way to work with RP2040/RP2350 boards with PIO.
I’d like to kindly request support for a TensorFlow Lite Micro + CMSIS-NN integration for RP2350 within Arduino-pico, ideally by adapting or reusing the existing pico-tflmicro work from the Pico SDK ecosystem.
The new Cortex-M33 cores on the RP2350 are much more capable (DSP extensions, FPU, etc.), and having a ready-to-use TFLM + CMSIS-NN stack in Arduino-pico would make it practical to run edge-AI models (e.g. signal-quality estimation, anomaly detection, tiny classifiers) directly from the Arduino environment. I think this could unlock a lot of applications for the maker and research community (wearables, bio-signals, low-power ML, small sensor nodes, etc.).
Motivation / context:
- The Raspberry Pi folks already maintain pico-tflmicro for the Pico SDK, with CMSIS-NN optimized kernels wired into the TFLM operators (e.g. conv, fully_connected).
- On RP2350’s M33 cores, this combination could run significantly larger or more accurate models at reasonable speed and power.
- Having this available from Arduino-pico would let people stay in the Arduino workflow (sketches, libraries, PlatformIO) while still benefiting from TFLM + CMSIS-NN optimizations.
What I’m hoping for is some way to use pico-tflmicro (or an equivalent TFLM + CMSIS-NN setup) from Arduino-pico on RP2350 boards.
It doesn’t have to expose every TFLM feature at first; even a minimal, well-documented example (e.g. a small conv net running on RP2350) would be a great starting point for the community.
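To make the request a bit more concrete, here is a rough sketch of what such a minimal example might look like from the Arduino side, based on the standard TFLM C++ API. This is untested and only illustrative: the include paths depend on how the library would be packaged, `model_data.h` / `g_model` are a hypothetical exported flatbuffer, the arena size is a guess, and older TFLM snapshots (including some pico-tflmicro versions) use a `MicroInterpreter` constructor that also takes an error reporter.

```cpp
// Illustrative only: g_model, model_data.h and kTensorArenaSize are assumptions,
// not something Arduino-pico or pico-tflmicro provides today.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "model_data.h"  // hypothetical: alignas(8) const unsigned char g_model[]

namespace {
constexpr int kTensorArenaSize = 32 * 1024;  // depends on the model
alignas(16) uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
}  // namespace

void setup() {
  Serial.begin(115200);

  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the ops the model needs; with CMSIS-NN compiled in, these
  // should resolve to the optimized kernels on the M33.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddMaxPool2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    Serial.println("AllocateTensors() failed");
    while (true) { delay(1000); }
  }

  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  // Fill input->data.f (or data.int8 for a quantized model) from a sensor,
  // then run one inference and print the first output value.
  if (interpreter->Invoke() == kTfLiteOk) {
    Serial.print("Output[0] = ");
    Serial.println(output->data.f[0]);
  }
  delay(1000);
}
```

If something roughly like this could compile against an Arduino-pico-packaged pico-tflmicro on RP2350, I think that alone would cover most use cases I have in mind.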
I’m still quite new to both the Pico SDK and TensorFlow Lite Micro internals, so I’m not confident enough to propose a full design or send a big PR on my own yet. But I’d be very happy to:
- Test experimental branches or prototypes on RP2350 hardware
- Help with documentation, example sketches, or small, well-scoped tasks if you can point me in the right direction
Thanks again for all the work you’re doing on this core, and for considering this request.