In just a few years, ML technology has become a reality for endpoint devices. The ability to run compact but powerful algorithms on tinyML devices – endpoint systems that run machine learning locally rather than in the cloud – means more intelligence at the edge. It also means lower latency, improved security, better energy efficiency, and faster time-to-results.
But getting there has been an adventure, as myriad approaches to hardware, operating system software, training, data acquisition, and model development and deployment have emerged. This variety often confuses potential adopters, but fortunately, as with most technological transformations, the area is maturing.
Arm, with support from AWS, Raspberry Pi, Arduino, the tinyML Foundation, and Edge Impulse, set out to gauge the current state of tinyML development: what’s worked and what remains a challenge. From June to July 2021, we surveyed global developers and received 667 valid responses.
The results were fascinating.