How audio development platforms can take advantage of accelerated ML processing

Mary Bennion
October 24, 2022
3 minute read time.
Blog post by Matthew Mitschang, DSP Concepts, and Henrik Flodell, Alif Semiconductor

The Arm® Cortex®-M55 processor and Ethos™-U55 microNPU (Neural Processing Unit) have opened up new machine learning (ML) opportunities for edge and endpoint devices. ML is becoming more common in audio applications in the form of voice user interfaces, voice identification and security, and natural language communication systems. This hardware is therefore best paired with a powerful, flexible audio development platform that can take advantage of accelerated ML processing.

The Arm Cortex-M55 processor is the most AI-capable Cortex-M processor, bringing endpoint AI to billions more devices. It features the Helium vector extension, which delivers a significant uplift in digital signal processing (DSP) and ML capability, improving throughput and maximizing the use of processor resources.

The Arm Ethos-U55 is a new class of machine learning processor designed specifically to accelerate ML inference in embedded and IoT devices. Combined with a Cortex-M55 processor, the Ethos-U55 delivers ML performance hundreds of times faster than existing Cortex-M-based systems.

The Alif Semiconductor Ensemble family makes the power of these new chipsets accessible. It is a hardware platform built on the latest generation of embedded processing technology, scaling from a single Arm Cortex-M55 microcontroller unit (MCU) to a quad-core Fusion processor that combines dual Cortex-M55 MCU cores, dual Ethos-U55 cores, and dual Cortex-A32 microprocessor unit (MPU) cores. Designed for power efficiency, long battery life, and strong ML and AI capability, the platform family is ideal for ML processing at the edge.

Audio Weaver® by DSP Concepts is a hardware-independent platform. It offers a comprehensive set of embedded libraries and easy-to-use tools for designing, collaborating on, testing, and deploying a full range of sound and voice features. Designs created in Audio Weaver can be developed with or without target hardware and, when ready, deployed to an MCU, SoC, or DSP without redesign. With algorithms developed in-house and by third parties, Audio Weaver is a powerful, flexible solution that streamlines the entire development workflow.

Audio Weaver helps product makers innovate rapidly while mitigating risk. Designs are created by placing signal-processing building blocks, known as modules, on a virtual canvas, connecting them with virtual wires, and adjusting module properties to tune the design. Designs can then be auditioned from within the AWE Designer application using the host PC's sound card. Multiple team members can work on different portions of a design concurrently, developing features in parallel and later combining them into a final design. This collaboration, together with the ability to quickly and seamlessly test iterations and new designs, streamlines the entire process. For final testing, tuning, and production, designs created in AWE Designer are deployed to a target MCU, dedicated DSP, or SoC running the embedded AWE Core libraries. This dynamic instantiation allows rapid iteration, simple integration, and over-the-air (OTA) product updates.

In addition to audio processing, Audio Weaver features machine learning functionality that augments the ML lifecycle. It does so by simplifying the featurization of data and streamlining the process of embedding models on target hardware. By reducing the engineering resources and expertise required, Audio Weaver accelerates time to market and enables model serving without recoding.
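To make "featurization" concrete: audio ML models rarely consume raw samples; a front end first converts the waveform into compact feature vectors such as log-spectrogram frames. The sketch below is a minimal, illustrative example of that step and is not Audio Weaver's actual API; the function name and parameters are assumptions for the sake of the example.

```python
# Illustrative featurization sketch (not Audio Weaver's API): frame the
# signal, apply a window, take a real FFT per frame, and log-compress the
# magnitudes into feature vectors an embedded model could consume.
import numpy as np

def featurize(audio, frame_len=256, hop=128):
    """Return a (num_frames, frame_len // 2 + 1) log-magnitude spectrogram."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        frame = audio[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))   # magnitude spectrum of one frame
        frames.append(np.log(mag + 1e-6))  # log compression for dynamic range
    return np.stack(frames)

# One second of a 440 Hz tone sampled at 16 kHz
t = np.arange(16000) / 16000.0
features = featurize(np.sin(2 * np.pi * 440.0 * t))
print(features.shape)  # one feature vector per 8 ms hop
```

On a Cortex-M55 target, the per-frame FFT and multiply-accumulate work in this kind of front end is exactly what Helium is designed to accelerate.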

With the solutions offered by Arm, Alif Semiconductor, and DSP Concepts, product makers gain a proven development platform for designs with the power, efficiency, and scalability to perform ML processing at the network edge, while reducing cost, development time, and the expertise required.

Join DSP Concepts and Alif Semiconductor at Arm DevSummit 2022 to learn more about ML techniques commonly used for audio, discover the features and benefits of the Audio Weaver platform, and learn how to build innovative ML designs that harness the power of the Cortex-M55 and Ethos-U55 processors featured on Alif Ensemble MCUs.

In addition, Alif Semiconductor is giving away 25 Ensemble MCU development kits, and all participants can redeem an extended 90-day trial of Audio Weaver.

 Register Free for Arm DevSummit
