The Role of the AI Ecosystem
The last three decades have been defined by groundbreaking devices that have fundamentally transformed how people communicate, engage with technology, and live their daily lives. However, these devices need a rich supporting ecosystem to truly make a difference to the world.
For the tech industry, the AI revolution is not just about creating new devices capable of processing vast generative AI compute workloads; it also depends on a rich supporting ecosystem that provides the capabilities to create innovative AI-based services and applications.
There is skyrocketing consumer demand for these services and applications. Generative AI has been one of the fastest-growing technology rollouts ever, hitting hundreds of millions of users in its first few months.
But creating these life-changing AI experiences does not happen overnight or through one company alone. It is a complex process requiring significant investment, partnerships, and support from a rich, dynamic ecosystem consisting of diverse technologies and organizations, from small start-ups to the world’s leading tech companies.
As the ecosystem works to realize the true potential of AI, we see several opportunities:
Delivering AI in a sustainable way;
Improving performance at the edge to overcome latency;
Deploying security to protect valuable models;
Ensuring privacy to protect consumer data;
Efficiently moving data around to avoid data ceilings; and
Enabling low-friction deployment for developers, so they can keep pace with AI innovation while working with limited computing resources.
The Arm ecosystem is one of the largest and most diverse in the industry, composed of more than 1,000 partners across the technology stack, from silicon to software and from hardware to cloud.
The breadth of the ecosystem enables a faster time-to-market and scalability for AI applications, as partners can leverage Arm's solutions to deliver products and services that are compatible, interoperable, and optimized for AI. This helps the 15 million developers worldwide who build for Arm-based devices to run more complex AI workloads and get their applications to market faster.
Arm is working across the ecosystem on a range of partnerships to make our AI commitments to developers a reality. We continuously foster innovation and collaboration, so partners can access Arm's resources, tools, and expertise to accelerate their AI development and deployment.
Arm’s continuous software investments mean we are creating the largest AI developer community globally.
Arm provides developers with tools, frameworks, and support to create applications more easily. This includes software development kits (SDKs), libraries, and resources that help developers optimize their applications for Arm-based devices. The consistency and compatibility provided by Arm's ecosystem mean that developers can create applications that work seamlessly across a wide range of devices, reaching billions of users worldwide.
Arm NN SDK: Today, over 100 million users have access to the Arm NN SDK to optimize their AI workloads on Arm CPUs and GPUs. Arm NN enables developers to run neural network models on Arm-based processors, including CPUs, GPUs, and NPUs. The Arm NN SDK also supports popular frameworks, such as TensorFlow, TensorFlow Lite, ONNX, PyTorch and Caffe, and offers optimized performance and power consumption for AI-based applications.
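To give a flavour of how the Arm NN SDK slots into an existing workflow, here is a minimal sketch of running a TensorFlow Lite model through the Arm NN delegate from Python. The delegate library name, backend list, and model file below are assumptions that depend on how Arm NN is installed on the target device.

```python
# Minimal sketch: running a TensorFlow Lite model through the Arm NN delegate.
# Assumes the Arm NN TFLite delegate (libarmnnDelegate.so) and tflite_runtime
# are installed on the target; the model path is a placeholder.
import numpy as np
import tflite_runtime.interpreter as tflite

# Ask Arm NN to try the GPU backend first, then fall back to the CPU backend.
armnn_delegate = tflite.load_delegate(
    library="libarmnnDelegate.so",
    options={"backends": "GpuAcc,CpuAcc", "logging-severity": "warning"},
)

interpreter = tflite.Interpreter(
    model_path="mobilenet_v2.tflite",  # placeholder model file
    experimental_delegates=[armnn_delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the right shape and dtype, then run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```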
Arm Compute Library: A collection of low-level functions optimized for Arm-based devices, covering computer vision, image processing, convolutional neural networks, and general matrix multiplication. It can be used to accelerate AI applications on Arm CPUs, GPUs, and NPUs.
Arm ML Zoo: A repository of pre-trained machine learning models that are optimized for Arm-based devices. The models cover various domains such as image classification, object detection, face recognition, and natural language processing. The models can be downloaded and deployed on devices using Arm NN SDK or other frameworks. The repository also provides tutorials and guides for developers to use the models in their applications.
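As a rough illustration of how a model from the zoo might be pulled down and inspected before deployment, the sketch below fetches a TensorFlow Lite model and prints its input and output details. The exact file path inside the repository is illustrative only and varies by model.

```python
# Minimal sketch: fetching a pre-trained model from the Arm ML Zoo repository
# and inspecting its interface before deployment. The file path inside the
# repository is illustrative and differs per model.
import urllib.request
import tflite_runtime.interpreter as tflite

MODEL_URL = (
    "https://github.com/ARM-software/ML-zoo/raw/master/"
    "models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/"
    "mobilenet_v2_1.0_224_quantized.tflite"  # illustrative path
)
urllib.request.urlretrieve(MODEL_URL, "model.tflite")

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```

From here, the same model can be run through the Arm NN delegate shown earlier, or through any other framework the target device supports.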
Arm Virtual Hardware (AVH): A solution that facilitates software development on Arm-based processors by utilizing virtual targets. It offers virtual simulation models, cloud-native deployments, and integrations with development tools, supporting the software development cycle of embedded systems, IoT applications, and machine learning programs. With AVH, developers can start coding and debugging software before the hardware is available or finalized, reducing time to market and costs.
Open-source support: We are enabling Arm-based hardware with increased AI capabilities through open-source frameworks and libraries in all the places where developers need support, including TensorFlow, PyTorch, Caffe 2, OpenVINO and TVM. This is creating an AI innovation foundation for the open-source community.
Standards are an essential part of technology development for companies, as they provide a common framework that ensures new products and services are compatible with existing ones. This creates better, quicker, more secure user experiences, as well as reducing risk and costs throughout the product development process. The environment to make this possible is established through an ecosystem of organizations and technologies all using the foundational Arm compute platform.
Open standards are critical to driving innovation, consistency, and interoperability in the AI ecosystem. As part of our commitment to industry collaboration on these standards, Arm recently joined the Microscaling Formats (MX) Alliance, which includes AMD, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Technologies, Inc.
The MX Alliance recently collaborated on the specification for a new technology known as microscaling, which builds on years of design space exploration and research. It is a fine-grained scaling method for narrow-bit (8-bit and sub-8-bit) training and inference of AI applications, and the specification standardizes these narrow-bit data formats to remove fragmentation across the industry and enable scalable AI.
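To give a feel for the idea (not the specification itself), the sketch below implements a deliberately simplified block-scaled quantizer: each small block of values shares a single power-of-two scale and stores its elements as narrow integers. The block size, element width, and rounding used here are illustrative assumptions, not the MX formats defined by the alliance.

```python
# Conceptual sketch of block ("micro") scaling: each block of K elements shares
# one power-of-two scale, and elements are stored as narrow integers.
# This is a simplification for illustration, not the OCP MX specification.
import numpy as np

BLOCK = 32        # assumed block size
ELEM_BITS = 8     # assumed narrow element width (signed int8 here)
QMAX = 2 ** (ELEM_BITS - 1) - 1

def mx_quantize(x: np.ndarray):
    """Quantize a 1-D float array (length a multiple of BLOCK) into
    (per-block power-of-two scales, int8 elements)."""
    x = x.reshape(-1, BLOCK)
    # Choose one power-of-two scale per block so the block maximum fits in int8.
    max_abs = np.abs(x).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(np.maximum(max_abs, 1e-38) / QMAX))
    scales = 2.0 ** exp
    elems = np.clip(np.round(x / scales), -QMAX, QMAX).astype(np.int8)
    return scales, elems

def mx_dequantize(scales, elems):
    return (elems.astype(np.float32) * scales).reshape(-1)

data = np.random.randn(128).astype(np.float32)
scales, elems = mx_quantize(data)
error = np.abs(mx_dequantize(scales, elems) - data).max()
print(f"blocks: {scales.shape[0]}, max abs error: {error:.4f}")
```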
Arm's commitment to fostering an interoperable ecosystem is demonstrated through its work on standards like UCIe and Arm's Chiplet System Architecture (CSA). These initiatives aim to enable an ecosystem of chiplets for heterogeneous computing, encompassing CPUs, accelerators, IO, and more. By promoting interoperability, Arm facilitates the creation of diverse and customizable AI solutions that cater to the unique needs of different industries and applications.
As AI continues to extend to the edge, security becomes a critical consideration. Arm addresses this challenge with security technologies and standards that provide robust protection for IP deployed at the edge. This ensures that Arm's ecosystem partners can confidently develop and deploy AI solutions without compromising sensitive data or assets.
Arm’s collaborative ecosystem is on full display across the technology spectrum.
Arm’s ecosystem helps to drive the future of AI in automotive and the development of connected and software-defined vehicles (SDVs) – from in-vehicle infotainment (IVI) to advanced driver assistance systems (ADAS), from autonomous driving to vehicle-to-everything (V2X) communication.
Arm collaborates with leading tech companies, such as Marvell, MediaTek, NVIDIA, NXP, Renesas, Telechips and Texas Instruments, to provide an innovation foundation for AI in the automotive domain. For example, Arm’s latest Cortex-A AE IP cores are being used by MediaTek in their first Dimensity Auto Cockpit SoCs for AI-powered in-cabin experiences.
Arm also works with NXP to enable its BlueBox platform, which is a comprehensive and flexible platform for ADAS and autonomous driving. Finally, Arm's solutions are also used by Renesas to power its R-Car platform, which is a versatile and scalable platform for infotainment, ADAS, and V2X applications.
Arm plays a crucial role in powering technologies like cloud computing and 5G networks, where efficient computing is paramount. The company's collaborative engagement model allows for joint hardware-software optimization between Arm and its partners, tailoring solutions to their unique requirements. This is exemplified by Arm's Neoverse Compute Subsystems (CSS), which provide integrated and validated platforms to accelerate time-to-market for partners building AI-optimized solutions.
Arm's ecosystem extends beyond immediate partners to include a comprehensive network of companies across the technology stack, from EDA and IP to foundries. Arm Total Design brings these entities together to simplify and accelerate the deployment of Neoverse CSS-based designs. This collaborative approach ensures that Arm remains at the center of the AI ecosystem, driving innovation and delivering business value to its partners.
Arm enables intelligent, immersive and secure experiences across the mobile industry, from smartphones to tablets, and from wearables to XR devices. The Arm ecosystem enables developers to create millions of applications used by billions of people worldwide. This is due in large part to Arm's energy-efficient chip designs, which enable longer battery life and improved performance in mobile devices.
Arm-based solutions bring high performance, low power, and rich functionality for AI applications in the mobile domain. Arm collaborates with leading tech companies, such as Google, MediaTek and Samsung, to provide an innovation foundation for AI in the mobile domain.
Arm works with MediaTek to enable its Dimensity platform, which is a family of 5G-integrated SoCs that deliver high performance, power efficiency, and AI capabilities for smartphones. Arm's solutions are also used by Google to power its Pixel Neural Core, which is a dedicated hardware accelerator for on-device machine learning on Pixel smartphones.
The Arm ecosystem is a driving force for AI in the IoT domain, which encompasses a wide range of devices and applications, from smart home appliances to industrial automation. Arm's IoT solutions enable seamless connectivity, security, and intelligence for billions of devices that collect, process, and act on data. Arm also offers a range of microcontrollers, such as the Cortex-M series, that are optimized for low power consumption and high performance for IoT applications.
Additionally, Arm works with partners to provide specialized hardware and software solutions for specific IoT domains, such as computer vision, audio processing, and natural language understanding.
Arm collaborates with Amazon Web Services (AWS) to deliver AWS IoT Greengrass, which is a service that extends AWS cloud capabilities to local devices, enabling them to act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. Arm also partners with NVIDIA to leverage its Jetson platform, which is a family of AI-enabled embedded systems that can run multiple neural networks in parallel for applications such as robotics, smart cities, healthcare, and retail.
Moreover, Arm is working with Meta to bring PyTorch to Arm-based mobile and embedded platforms at the edge with ExecuTorch. ExecuTorch makes it far easier for developers to deploy state-of-the-art neural networks that are needed for advanced AI and ML workloads across mobile and edge devices.
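The general shape of that deployment flow, as described in the public ExecuTorch documentation, is sketched below: a PyTorch module is captured with torch.export, lowered to the edge dialect, and serialized to a .pte file for the on-device runtime. The exact module paths and method names are assumptions against a recent ExecuTorch release and may change between versions.

```python
# Minimal sketch of exporting a PyTorch model for on-device execution with
# ExecuTorch. Module paths and method names are assumptions based on the
# public ExecuTorch documentation and may differ across releases.
import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the model graph with torch.export.
exported = torch.export.export(model, example_inputs)

# 2. Lower to the ExecuTorch edge dialect, then to the ExecuTorch program format.
edge_program = to_edge(exported)
executorch_program = edge_program.to_executorch()

# 3. Serialize the program; the .pte file is what the on-device runtime loads.
with open("tiny_model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```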
Supporting the wide deployment of data formats is crucial for scaling AI at a relatively low cost. Arm has been working with ecosystem partners to support a variety of emerging small data types focused on AI workloads.
Last year, in a joint collaboration, Arm, Intel, and NVIDIA published a new 8-bit floating point specification, 'FP8'. Since then, the format has gained momentum, and the group of companies expanded to AMD, Arm, Google, Intel, Meta, and NVIDIA, which together created the official OCP 8-bit Floating Point Specification (OFP8). In the latest A-profile architecture update, Arm added OFP8 support consistent with this standard to enable its rapid adoption in neural networks across the industry. OFP8 is an 8-bit interchange data format that allows the software ecosystem to share neural network models easily, facilitating the continuous advancement of AI computing capabilities across billions of Arm-based devices.
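To make the format concrete, the sketch below decodes the two commonly used 8-bit floating-point layouts, E4M3 and E5M2, from raw bytes. It illustrates the bit layouts only; special-value handling and other details of the OFP8 specification are deliberately omitted.

```python
# Simplified decoder for the two common 8-bit floating-point layouts:
#   E4M3: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits
#   E5M2: 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits
# Special values (NaN/Inf) are intentionally omitted for clarity, so this is
# an illustration of the layouts rather than a full OFP8 decoder.

def decode_fp8(byte: int, exp_bits: int, man_bits: int, bias: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)
    if exponent == 0:
        # Subnormal: no implicit leading 1.
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    # Normal: implicit leading 1.
    return sign * (1.0 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

def decode_e4m3(byte: int) -> float:
    return decode_fp8(byte, exp_bits=4, man_bits=3, bias=7)

def decode_e5m2(byte: int) -> float:
    return decode_fp8(byte, exp_bits=5, man_bits=2, bias=15)

# 0b0_0111_100 in E4M3: biased exponent 7 (unbiased 0), mantissa 0.5 -> 1.5
print(decode_e4m3(0b00111100))
# 0b0_01111_10 in E5M2: biased exponent 15 (unbiased 0), mantissa 0.5 -> 1.5
print(decode_e5m2(0b00111110))
```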