• Fast and accurate keyword spotting using Transformers

    Axel Berg
    On-device automatic speech recognition is now becoming feasible and is useful in scenarios without internet connection or when data privacy is a concern.
    • January 10, 2022
  • FixyFPGA: Fully-parallel and fully-pipelined FPGA accelerator for sparse CNNs

    Jae-sun Seo
    Most conventional FPGA-based accelerators use off-chip memory for data transformation, then perform computation for a single layer in a time-multiplexed manner. Throughput is often limited by the memory…
    • September 28, 2021
  • Neural network architectures for deploying TinyML applications on commodity microcontrollers

    Colby Banbury
    TinyML seeks to deploy ML algorithms on ultra-low-power systems, enabling us to intelligently select which data to transmit and improving energy efficiency.
    • June 29, 2021
  • Improving federated learning with dynamic regularization

    Paul Whatmough
    IoT devices collect and transmit data to the cloud, where it is analyzed and used to train ML models. Privacy challenges arise when users are reluctant to share their personal data.
    • June 11, 2021
  • What you missed at the second On-Device Intelligence Workshop

    Paul Whatmough
    The workshop, held in conjunction with MLSys 2021, brought researchers and practitioners together to discuss key issues and share new research results and practical tutorial material.
    • June 3, 2021
  • Arm Research’s collaboration with the Cambridge ELLIS Unit

    Partha Maji
    Arm Research is excited to be part of the Cambridge ELLIS Unit, focusing on two key elements: Bayesian statistics and probabilistic ML.
    • May 21, 2021
  • Ensuring your AI is sure: Any place, anywhere, anytime

    Tiago Azevedo
    It is important in industry to define what we see and how well we see it. This simple yet powerful idea has driven recent developments in the Arm Research ML Lab.
    • April 9, 2021
  • Using multiple labels improves neural network learning

    Axel Berg
    A single label is not enough: label diversity can be introduced by creating several labels for each training example in a way that the ordinal structure of the data allows.
    • February 22, 2021
  • Research for a sustainable future

    René de Jong
    To help companies find the breakthrough innovations needed to support the Global Goals, the UNGC set up the Young SDG Innovator Program, which our colleagues at Arm joined.
    • October 19, 2020
  • Efficient Bug Discovery with Machine Learning for Hardware Verification

    Hongsup Shin
    For present-day microprocessors, it is even more challenging to identify bugs. By using machine learning (ML) to identify bugs efficiently, we've seen a 25% increase in efficiency over the default verification workflow. …
    • September 22, 2020
  • Reducing the Cost of Neural Network Inference with Residue Number Systems

    Matthew Mattina
    The size and computational complexity of neural network models continue to grow exponentially. However, the increase in computational requirements presents a major challenge to their adoption. Could Residue…
    • August 21, 2020
  • Adapting Models to the Real World: On-Device Training for Edge Model Adaptation

    Mark O'Connor
    Neural networks are becoming widely used in computer interaction, but in real-world scenarios we see errors. We’ve recently completed research into edge distillation to solve this problem.
    • July 15, 2020
  • It is time for natively flexible processors

    Emre Ozer
    The story behind our flexible processors paper started with how to make billions of everyday things smart.
    • July 13, 2020
  • Scalable Hyperparameter Tuning for AutoML

    Mohit Aggarwal
    Mango is an open-source Python library for hyperparameter optimization, built for AutoML systems. Developed by Arm Research, Mango offers many useful features.
    • July 7, 2020
  • Even Faster Convolutions: Winograd Convolutions meet Integer Quantization and Architecture Search

    Javier Fernandez-Marques
    The design of deep learning (DL) neural network (NN) models targeting mobile devices has advanced rapidly over the last couple of years. Important computer vision tasks have led a community-wide transition…
    • April 29, 2020
  • SCALE-Sim: A cycle-accurate NPU simulator for your research experiments

    Paul Whatmough
    Architecture simulators are a key tool in the computer architecture toolbox. They provide a convenient model of real hardware at a level of abstraction that makes them faster and more flexible than low…
    • April 21, 2020
  • TinyML Applications Require New Network Architectures

    Urmish Thakker
    Researchers have studied neural network compression for quite some time. However, the need for always-on compute has led to a recent trend towards executing these applications on even smaller IoT devices…
    • February 13, 2020
  • A Year of Discovery: Arm Research 2019

    Rhiannon Burleigh
    2019 was yet another year of incredible technology discovery for Arm Research. Inspiring advancements have been made across the research community, and Arm Research has contributed to this. Our teams have…
    • January 6, 2020