This is the first of three articles focusing on the implementation of the Fast Fourier Transform (FFT) using the mixed-radix method on Mobile ARM Mali GPUs by means of OpenCL.
This blog series continues the work of Neil Tan, who analyzed the main strategies for optimizing the radix-2 FFT algorithm in his blog post Optimizing Fast Fourier Transformation on ARM Mali GPUs.
In this first post we build up the minimal background for the 1D complex-to-complex FFT algorithm, starting by pointing out the limits of the directly computed DFT and then introducing the well-known and commonly used radix-2 FFT.
The mathematical perspective of the mixed-radix FFT will be used to explain the key concepts of digit-reverse, twiddle factor multiplication and radix computation. These will be covered extensively in the second article from the point of view of an OpenCL implementation.
Before starting, we'd like to thank Georgios Pinitas and Moritz Pflanzer for their important help in reviewing this article and for their contribution to this implementation.
The Discrete Fourier Transform (DFT) is a domain transform (time -> frequency, image -> frequency,..) widely used in real-time digital signal processing for a variety of tasks such as spectrum analysis and convolution in the frequency domain.
It is defined as follows:

X(k) = Σ_{n=0}^{N-1} x(n) * e^(-j*2*π*k*n/N),    k = 0, 1, ..., N - 1

where:
- x(n) is the n-th complex input sample
- X(k) is the k-th complex output value
- N is the number of samples
- j is the imaginary unit
The inverse function of the DFT, known as the Inverse Discrete Fourier Transform (IDFT), allows us to obtain the original sequence:

x(n) = (1/N) * Σ_{k=0}^{N-1} X(k) * e^(j*2*π*k*n/N),    n = 0, 1, ..., N - 1

Commonly the above expressions are written in the following tightly-packaged forms:

X(k) = Σ_{n=0}^{N-1} x(n) * W_N^(k*n),    x(n) = (1/N) * Σ_{k=0}^{N-1} X(k) * W_N^(-k*n)

with

W_N = e^(-j*2*π/N)

known as the twiddle factor.
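To make these definitions concrete, below is a minimal Python sketch of the direct DFT and IDFT exactly as written above. It is only an illustration of the formulas (the function names dft and idft are our own choices, and this is not the OpenCL implementation discussed in this series).

```python
import cmath

def dft(x):
    """Direct DFT: X(k) = sum_n x(n) * e^(-j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: x(n) = (1/N) * sum_k X(k) * e^(+j*2*pi*k*n/N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Round-trip check: IDFT(DFT(x)) recovers x within numerical precision
x = [1 + 2j, 0.5 - 1j, -3 + 0j, 2 + 2j]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, idft(dft(x))))
```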
Generally, the complexity of an algorithm can be measured in terms of the number of multiplications. We can easily infer the complexity of the direct computation by decomposing the N-point DFT into its real and imaginary parts. Writing x(n) = xr(n) + j*xi(n), we have:

X(k) = Σ_{n=0}^{N-1} (xr(n) + j*xi(n)) * (cos(2*π*k*n/N) - j*sin(2*π*k*n/N))

where in the above relation we have applied the following trigonometric relation (Euler's formula):

e^(-j*θ) = cos(θ) - j*sin(θ)

Equating the real and imaginary parts:

Re{X(k)} = Σ_{n=0}^{N-1} (xr(n)*cos(2*π*k*n/N) + xi(n)*sin(2*π*k*n/N))
Im{X(k)} = Σ_{n=0}^{N-1} (xi(n)*cos(2*π*k*n/N) - xr(n)*sin(2*π*k*n/N))

the direct computation would require, for each output value:
- 2*N real multiplications for the real part
- 2*N real multiplications for the imaginary part

Since we have to compute N output complex values (k = 0, 1, ..., N - 1), we have 2*N*N multiplications for the real parts and 2*N*N for the imaginary parts; therefore the total number of multiplications is 2*N^2 + 2*N^2 = 4*N^2, giving the direct computation a complexity of O(N^2).
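As a quick, hedged sanity check of the 4*N^2 figure, the following Python sketch computes the DFT from the real/imaginary decomposition above and explicitly counts the real multiplications (the helper name dft_real_imag is just illustrative).

```python
import math

def dft_real_imag(xr, xi):
    """Direct DFT on separate real/imaginary arrays, counting real multiplications."""
    N = len(xr)
    Xr, Xi, mults = [0.0] * N, [0.0] * N, 0
    for k in range(N):
        for n in range(N):
            c = math.cos(2 * math.pi * k * n / N)
            s = math.sin(2 * math.pi * k * n / N)
            Xr[k] += xr[n] * c + xi[n] * s   # 2 real multiplications
            Xi[k] += xi[n] * c - xr[n] * s   # 2 real multiplications
            mults += 4
    return Xr, Xi, mults

_, _, mults = dft_real_imag([1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 0.0, 0.0])
assert mults == 4 * 4 ** 2   # 4 * N^2 real multiplications for N = 4
```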
The FFT is an efficient algorithm that produces exactly the same result as the DFT (within numerical precision) but with a complexity of O(N*log2(N)).
This method (and the general idea of FFT) was popularized by a publication of J. W. Cooley and J. W. Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).
The most famous and commonly used FFT algorithm is the radix-2 Cooley–Tukey. It works only when N is a power of 2, and behind this approach we find the "divide et impera" (divide and conquer) strategy, which recursively breaks the DFT down into many smaller DFTs of size 2, called radix-2.
Essentially, it is as if we were expressing N as the product of log2(N) factors of 2 (i.e. N = 2 x 2 x ...).
Looking at the picture we have:
Please note that after the first radix stage, the inputs of each radix basic element will no longer be in consecutive order. In particular, there will be an offset, called "span", between the inputs, which depends on the radix stage.
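For illustration, here is a compact recursive radix-2 Cooley–Tukey sketch in Python. It follows the textbook decimation-in-time formulation rather than the staged, in-place pipeline (with spans) described above, and the function name fft_radix2 is our own.

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    assert N % 2 == 0, "radix-2 FFT requires N to be a power of 2"
    even = fft_radix2(x[0::2])   # FFT of the even-indexed samples
    odd = fft_radix2(x[1::2])    # FFT of the odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # twiddle factor W_N^k
        X[k] = even[k] + t               # butterfly: first half of the outputs
        X[k + N // 2] = even[k] - t      # butterfly: second half of the outputs
    return X

X = fft_radix2([complex(n, 0) for n in range(8)])   # N = 8 = 2 x 2 x 2
```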
Often, for signal processing applications, we design our systems with radix-2 FFTs in mind. However, requiring N to be a power of 2 in order to exactly fit radix-2 FFTs can cause a significant performance drop in some applications.
For instance, in the worst case for a 1D FFT, forcing the use of radix-2 would require padding the input to nearly double its size and consequently nearly doubling the amount of computation. Moreover, this loss of performance is even worse in the 2D case: in image processing applications the dimensions are quite often not powers of 2, and doubling the size of both rows and columns would increase the total FFT size by a factor of 4.
However, even though so far we have factorized N as N = 2 x 2 x 2 x ..., any factorization is generally possible.
Factorizing N as N = N1 x N2 x N3 x ..., the "divide et impera" strategy remains the same, but the new algorithm recursively breaks the DFT down into many smaller DFTs of sizes N1, N2, N3, ..., called respectively radix-N1, radix-N2, radix-N3, ...
This generalization of the basic radix-2 FFT is called mixed-radix.
In terms of pipeline stages we are going to have:
From this first analysis we can see that factorization is the fundamental principle behind the Cooley–Tukey algorithm, and that the radix-2 algorithm is just a special case of mixed-radix.
Virtually, mixed-radix allows us to compute the FFT on input data of any length N. However, since typical implementations use a limited number of optimized radixes such as radix-2, radix-3, radix-5 and radix-7, the computation is restricted to those N that can be expressed as a product of the available factors. In our case we are going to use radix-2, radix-3, radix-4, radix-5 and radix-7, so N must be a product of powers of 2, 3, 5 and 7.
Please note: 4 does not appear in the factorization of N, as it is itself a power of 2. The motivation behind the choice to implement a highly optimized radix-4 will become clear in the second article, where we will compare the performance of the radix-2 and mixed-radix algorithms.
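As an illustration of this restriction, the small Python sketch below greedily decomposes N into the supported radixes (here assumed to be 7, 5, 4, 3 and 2, following the set mentioned above); if a remainder other than 1 is left, N cannot be handled with these radixes. The function name decompose and the greedy order are our own choices, not necessarily those of the actual implementation.

```python
def decompose(N, radixes=(7, 5, 4, 3, 2)):
    """Greedily factorize N into the supported radixes; returns the list of
    radix stages, or None if N contains other prime factors."""
    stages = []
    for r in radixes:
        while N % r == 0:
            stages.append(r)
            N //= r
    return stages if N == 1 else None

print(decompose(504))   # [7, 4, 3, 3, 2] -> 7 * 4 * 3 * 3 * 2 = 504
print(decompose(22))    # None: 11 is not a supported radix
```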
This section details the maths behind the Cooley–Tukey mixed-radix algorithm and explains concepts such as twiddle factor multiplication and digit-reverse.
Since factorization is the key principle of the Cooley–Tukey mixed-radix algorithm, let's start by decomposing N into 2 factors, Nx and Ny:

N = Nx * Ny
Let's arrange the input data x(n) in a matrix with Nx columns and Ny rows, where the n-th element can be addressed in the following manner:

n = nx + ny*Nx

with:
- nx = the column index, nx = 0, 1, ..., Nx - 1
- ny = the row index, ny = 0, 1, ..., Ny - 1

i.e. nx scans the columns whilst ny scans the rows.
If we had N = 21 and we factorized N as N = 7 x 3 (Nx = 7, Ny = 3), the 16-th element of the input array would be addressed as:
address of 16-th element = 2 + 2 * 7 (nx = 2, ny = 2)
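A tiny Python sketch of this input addressing, using the same N = 21 = 7 x 3 example (the helper names to_2d and to_1d are just illustrative):

```python
Nx, Ny = 7, 3                      # N = Nx * Ny = 21

def to_2d(n):
    """Map the linear input index n to (nx, ny) with n = nx + ny * Nx."""
    return n % Nx, n // Nx

def to_1d(nx, ny):
    """Map the (column, row) pair back to the linear index."""
    return nx + ny * Nx

print(to_2d(16))       # (2, 2): the 16-th element sits in column 2, row 2
print(to_1d(2, 2))     # 16 = 2 + 2 * 7
```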
Using this notation, the DFT can be written as follows:

X(k) = Σ_{ny=0}^{Ny-1} Σ_{nx=0}^{Nx-1} x(nx + ny*Nx) * W_N^(k*(nx + ny*Nx))
Now let's place the output elements X(k) in a matrix as well, but with Nx rows and Ny columns. The motivation behind this choice will become clear during the demonstration.
The k-th element will be addressed with the following relation:

k = ky + kx*Ny

For addressing the output elements we use kx and ky:
- kx = the row index, kx = 0, 1, ..., Nx - 1
- ky = the column index, ky = 0, 1, ..., Ny - 1

Addressing the output elements is completely upside-down compared to the input: ky scans the columns whilst kx scans the rows.
and the DFT becomes:

X(ky + kx*Ny) = Σ_{ny=0}^{Ny-1} Σ_{nx=0}^{Nx-1} x(nx + ny*Nx) * W_N^((nx + ny*Nx)*(ky + kx*Ny))

Let's evaluate the complex exponential by expanding the product in the exponent:

W_N^((nx + ny*Nx)*(ky + kx*Ny)) = W_N^(nx*ky) * W_N^(Ny*nx*kx) * W_N^(Nx*ny*ky) * W_N^(Nx*Ny*ny*kx)

The exponential is easy to study thanks to the choice of an output matrix with Nx rows and Ny columns. In fact, since N = Nx*Ny, the 2nd and 3rd terms reduce to twiddle factors of size Nx and Ny:

W_N^(Ny*nx*kx) = W_Nx^(nx*kx),    W_N^(Nx*ny*ky) = W_Ny^(ny*ky)

Writing the 4th term as:

W_N^(Nx*Ny*ny*kx) = e^(-j*2*π*ny*kx) = 1

the final DFT expression becomes:

X(ky + kx*Ny) = Σ_{nx=0}^{Nx-1} [ W_N^(nx*ky) * ( Σ_{ny=0}^{Ny-1} x(nx + ny*Nx) * W_Ny^(ny*ky) ) ] * W_Nx^(nx*kx)
This is the expression behind the computation of the mixed-radix algorithm in the case of 2 factors.
If we had for instance N = 21 and we factorized N as N = 7 x 3, we could graphically summarize this final expression in the following manner:
This can be translated into the following pipeline:
In the end, from the demonstration we can highlight the 3 main computation blocks behind this algorithm:
1. the digit-reverse stage, i.e. the reordering of the elements implied by the row/column addressing (the mixed-radix generalization of the radix-2 bit-reverse)
2. the radix computations, i.e. the DFTs of size Ny and Nx
3. the twiddle factor multiplications by W_N^(nx*ky) between the two radix stages
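To tie the demonstration together, here is a hedged Python sketch of the 2-factor decomposition, written as a direct transcription of the final expression above. It uses small direct DFTs for the two radix stages (rather than the optimized OpenCL radix kernels of the next article), and the function names are our own.

```python
import cmath

def dft(x):
    """Small direct DFT, standing in for an optimized radix computation."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def mixed_radix_2factor(x, Nx, Ny):
    """Compute X(ky + kx*Ny) from x(nx + ny*Nx), with N = Nx * Ny."""
    N = Nx * Ny
    assert len(x) == N
    # 1) Radix-Ny stage: Ny-point DFT of each column nx of the input matrix
    cols = [dft([x[nx + ny * Nx] for ny in range(Ny)]) for nx in range(Nx)]
    # 2) Twiddle factor multiplication: W_N^(nx * ky)
    for nx in range(Nx):
        for ky in range(Ny):
            cols[nx][ky] *= cmath.exp(-2j * cmath.pi * nx * ky / N)
    # 3) Radix-Nx stage: Nx-point DFT across nx, for each ky
    X = [0j] * N
    for ky in range(Ny):
        row = dft([cols[nx][ky] for nx in range(Nx)])
        for kx in range(Nx):
            X[ky + kx * Ny] = row[kx]
    return X
```

For example, with N = 21 and the factorization 7 x 3, mixed_radix_2factor(x, 7, 3) reproduces the direct 21-point DFT of x within numerical precision.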
With this first article we have illustrated the minimal background behind the mixed-radix FFT algorithm, and we are now ready to dive into the next article, where we are going to play with the OpenCL kernels.
Ciao,
Gian Marco
[CTAToken URL = "https://community.arm.com/graphics/b/blog/posts/speeding-up-fast-fourier-transform-mixed-radix-on-mobile-arm-mali-gpu-by-means-of-opencl---part-2" target="_blank" text="Read Part 2 of this blog" class ="green"]