The size and computational complexity of neural network models continue to grow exponentially. The reason for this growth is easy to understand: generally, larger neural networks deliver higher accuracy on many of the image and language tasks that users care about. For example, the recent GPT-3 transformer-based neural network from OpenAI has over 175 billion parameters and generates human-level text. However, the increase in computational requirements when executing (inferencing) these massive networks presents a major challenge to their adoption. This challenge is one of the primary avenues of research pursued by Arm’s Machine Learning Research Lab. Our lab is focused on finding novel ways to efficiently execute advanced machine learning models on Arm-based embedded and mobile platforms. To this end, we have published research ranging from AutoML for deeply embedded devices to novel factorization schemes and hardware designs for executing compressed models.
Our recent paper, which will be presented at ECCV in August, attacks the computational problem from a different angle. It is well established that the use of low-precision numbers, such as INT8 parameters and arithmetic, significantly reduces the power, memory, and execution-time requirements of advanced neural networks. It is also well known that transform techniques, in particular the Winograd transform, can significantly reduce the number of arithmetic operations required to execute these networks.
However, the combination of these two techniques (low-precision representation and the complexity-reducing Winograd transform) has, until now, resulted in an unacceptably high loss in prediction accuracy. The loss arises from numerical problems that occur when performing the transform operations required by the Winograd algorithm. As can be seen in Figure 1, several transform coefficients are either very large or very small, and therefore cannot be represented accurately with INT8 precision.
Figure 1. The 10 × 10 convolution y (in brown, far right) of the 12 × 12 input d (in blue, far left) and the 3 × 3 kernel g (in green, center),
where y = Aᵀ[(G g Gᵀ) ⊙ (Bᵀ d B)] A, with G, B, and A the weight, activation, and output transform matrices, respectively.
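To make the structure of this formula concrete, here is a minimal NumPy sketch of the smallest standard Winograd tile, F(2×2, 3×3), checked against direct convolution. The tile size and helper names are illustrative only; this is not the F(10×10, 3×3) configuration from our paper, whose transform coefficients are far less INT8-friendly.

```python
import numpy as np

# Standard 2D Winograd F(2x2, 3x3) transform matrices (Lavin and Gray).
# y = A^T [ (G g G^T) . (B^T d B) ] A, where . is the element-wise product.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f2x2_3x3(d, g):
    """2x2 output tile of the convolution of a 4x4 input tile d with a 3x3 kernel g."""
    U = G @ g @ G.T              # transform the weights into the Winograd domain
    V = BT @ d @ BT.T            # transform the input tile
    return AT @ (U * V) @ AT.T   # element-wise multiply, then transform back

def direct_conv(d, g):
    """Reference: direct (valid) convolution as computed in CNNs."""
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = np.sum(d[i:i + 3, j:j + 3] * g)
    return out

d = np.random.randn(4, 4)
g = np.random.randn(3, 3)
assert np.allclose(winograd_f2x2_3x3(d, g), direct_conv(d, g))
```

For F(2×2, 3×3) the coefficient magnitudes are only 0, 0.5, and 1, but as the tile grows toward F(10×10, 3×3) the spread of coefficient magnitudes widens dramatically, which is exactly the INT8 representation problem described above.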
We have developed a technique that allows the complexity-reducing Winograd transform to be applied to convolutional neural networks with INT8 parameters. The foundation of our technique is the use of a residue number system (RNS). An RNS represents integers by their residues modulo a set of pairwise co-prime moduli, as shown in Figure 2. This representation enables us to perform the transformations and operations required to execute the network in the Winograd domain without suffering the numerical problems (underflow and overflow) that typically cause a loss of prediction accuracy. As a result, the lower-complexity network incurs no degradation in prediction accuracy compared to the original INT8 network.
Figure 2. RNS representation of integers by their residues modulo pairwise co-prime moduli
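As a concrete illustration of this representation, here is a small Python sketch using the moduli (247, 251, 253) that appear in Figure 3; the function names are ours, not the paper's.

```python
MODULI = (247, 251, 253)  # pairwise co-prime: 247 = 13*19, 251 is prime, 253 = 11*23
M = MODULI[0] * MODULI[1] * MODULI[2]  # any integer in [0, M) has a unique RNS representation

def to_rns(x):
    """Represent an integer by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Multiply two RNS numbers channel by channel; no carries cross between channels."""
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, MODULI))

x, y = 1234, 5678
print(to_rns(x))  # three small residues, each of which fits in 8 bits
assert rns_mul(to_rns(x), to_rns(y)) == to_rns(x * y)  # exact, because x * y < M
```

Additions and multiplications stay inside each small residue channel, which is what lets the Winograd-domain arithmetic avoid the overflow and underflow problems described above.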
Figure 3 shows the same computation of the M × M output y as Figure 1, except that the calculation is performed over RNS(247, 251, 253). The weight, activation, and output transform matrices for the modulus 253 are shown. As can be seen, the transform coefficients (the G, B, and A matrices) can all be represented exactly in INT8, and y, the result of the convolution, can be reconstructed using either the Chinese Remainder Theorem or Mixed Radix Conversion.
Figure 3. The Winograd convolution F(10×10, 3×3) over RNS(247, 251, 253)
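For completeness, here is a hedged sketch of the reconstruction step mentioned above, recovering an integer from its residues with the Chinese Remainder Theorem, under the assumption that the result lies within the RNS dynamic range; the function name is illustrative, and Mixed Radix Conversion would be an alternative.

```python
from math import prod

MODULI = (247, 251, 253)  # pairwise co-prime moduli, as in Figure 3
M = prod(MODULI)          # dynamic range of the RNS

def crt_reconstruct(residues, moduli=MODULI):
    """Recover x in [0, M) from its residues via the Chinese Remainder Theorem."""
    total = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = total // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x = (x + r * Mi * pow(Mi, -1, m)) % total
    return x

# Pretend this value was accumulated channel by channel in the Winograd domain.
y_true = 1_234_567  # must lie within the dynamic range [0, M)
residues = tuple(y_true % m for m in MODULI)
assert crt_reconstruct(residues) == y_true
```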
In Table 1, we present the speedup achieved on different layers of the VGG16 convolutional neural network on the ImageNet dataset using our RNS-based Winograd convolution, compared to baseline INT8 and INT16 approaches. As shown, our residue number system-based Winograd approach achieves around a 2x speedup over the standard im2col+GEMM implementation on an Arm Cortex-A73 platform. We anticipate that speedups of this magnitude will enable the next generation of advanced convolutional neural networks for image, video, and speech applications to execute efficiently on embedded and mobile platforms.
Table 1: Inference performance of 8-bit activation, 8-bit weight quantized CNN layers of VGG16 with the Winograd algorithm F(14×14, 3×3) over RNS(251, 241, 239) and RNS(4001, 4331) on an Arm Cortex-A73, achieving 71.4% top-1 prediction accuracy on the ImageNet dataset. The corresponding transforms are given in the supplementary materials. The speed-ups of RNS(251, 241, 239) and RNS(4001, 4331) are the runtime improvements relative to the standard INT8 and INT16 im2col+GEMM convolution baselines, respectively.
Zhi-Gang Liu from Arm’s ML Research Lab presented the details of this research at ECCV. Take a look at the full paper to learn more.
Discover more about ML Research at Arm
Read the full paper
Take a look at some of the other blogs published recently by our Machine Learning researchers: