Earlier we mentioned that many developers are building PoCs without hardware constraints, but what happens deeper into development? How do they handle model and hardware alignment?
The survey results also indicated that half of the teams decide on a hardware direction before choosing their ML model, while another third choose their model at the same time they decide on hardware. Smaller companies tend to be more flexible in the timing of their hardware selection, while larger teams tend to defer the hardware decision until the ML model is chosen, or later.
Public or company-developed datasets are most popular for model development.
As part of that model development, teams use multiple approaches to obtain data. They often favor public datasets (45 percent) or an existing dataset their company has compiled (40 percent), but they also lean toward developing new datasets (55 percent). Teams likewise tend to leverage efficiencies by reusing existing models (40 percent) or pre-trained models (24 percent), although a third of respondents are developing models from scratch, according to the survey.
Approaches to obtaining data and models
When it comes to training those models, nearly two-thirds of developers have a positive attitude toward leveraging the cloud. They can, for example, develop solutions entirely in the cloud, using accurate SoC models to build and test software both before and after silicon and hardware become available.
The attractiveness of cloud training extends to endpoint AI models; however, developers also favor using local GPUs, CPUs, edge devices and MCUs to train their models, and those models are generally updated within the first six months of deployment.
Developers are drawn to the features of devices such as the Arm Cortex-M4, Cortex-M7 and Cortex-M0+ running convolutional neural networks (CNNs), recurrent neural networks (RNNs) and multilayer perceptron (MLP) networks. They tend to use TensorFlow, Keras and PyTorch as training frameworks and TensorFlow Lite as an inference framework.
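As a rough illustration of that workflow, the sketch below trains a small Keras CNN and converts it to an int8-quantized TensorFlow Lite model of the kind that can run on a Cortex-M class device. The dataset, model shape and representative-data generator are placeholders for illustration, not anything prescribed by the survey.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: swap in a real dataset (e.g. keyword-spotting spectrograms).
x_train = np.random.rand(1000, 32, 32, 1).astype(np.float32)
y_train = np.random.randint(0, 4, size=(1000,))

# A deliberately small CNN, sized with microcontroller memory budgets in mind.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Representative samples let the converter calibrate full-integer quantization.
def representative_data():
    for sample in x_train[:100]:
        yield [sample[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Full-integer quantization matters on these devices because Cortex-M parts typically lack floating-point throughput to spare, and an int8 model is roughly a quarter the size of its float32 original.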
Microcontrollers such as the Arm Cortex-M family are an ideal platform for ML because they're already ubiquitous. They perform real-time calculations quickly and efficiently, so they're reliable and responsive, and because they use very little power, they can be deployed in places where replacing the battery is difficult or inconvenient.
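To get a quantized model like the one above onto such a device, the flatbuffer is typically compiled into the firmware image as a byte array. The hypothetical helper below, roughly equivalent to running `xxd -i model.tflite`, writes the model out as a C header that a TensorFlow Lite for Microcontrollers application could include; the file and variable names are illustrative assumptions.

```python
# Hypothetical packaging step: embed the converted .tflite flatbuffer in a C
# header so the firmware build can link the model directly into flash.
def write_c_array(tflite_path="model.tflite",
                  header_path="model_data.h",
                  var_name="g_model"):
    with open(tflite_path, "rb") as f:
        data = f.read()
    with open(header_path, "w") as f:
        f.write(f"const unsigned char {var_name}[] = {{\n")
        for i in range(0, len(data), 12):
            row = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
            f.write(f"  {row},\n")
        f.write("};\n")
        f.write(f"const unsigned int {var_name}_len = {len(data)};\n")

write_c_array()
```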
Perhaps even more importantly, they’re cheap enough to be used just about anywhere. The market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018, with that annual shipment volume growing to 38.2 billion by 2023.
As this technology is pushed to the edge and to endpoints, it will open up new and exciting applications, giving designers an opportunity to breathe new life into their product offerings.