Which parts of CMSIS-DSP are used by CMSIS-NN? I'm asking because I want to disable as many CMSIS-DSP options as possible.
CMSIS-NN is a library of optimized neural network kernels that frameworks such as TFLM and TVM can use to accelerate inference, while CMSIS-DSP is a library of more general mathematical functions. The ARM_MATH_MVEI and ARM_MATH_DSP flags are defined automatically (in arm_math_types.h) depending on the CPU you compile for, based upon compiler #defines. Generally both are defined, since a kernel can have a vector part using the MVE (Helium) extension and a scalar part using the DSP extension. You can disable either path by overriding these flags; the sketch below shows how the kernels typically select a code path from them.
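Here is a minimal, hypothetical sketch (my_vector_add_s8 is not a CMSIS-NN function) of the compile-time dispatch pattern used throughout the CMSIS-NN and CMSIS-DSP sources. ARM_MATH_MVEI and ARM_MATH_DSP are normally set or left unset by arm_math_types.h for the target CPU; defining or undefining them on the compiler command line forces a particular path.

```c
#include <stdint.h>

/* Hypothetical example kernel: element-wise int8 addition. */
void my_vector_add_s8(const int8_t *a, const int8_t *b, int8_t *dst, uint32_t len)
{
#if defined(ARM_MATH_MVEI)
    /* Helium (MVE) path: a real kernel would process 16 int8 lanes per
     * iteration with vld1q/vaddq/vst1q intrinsics here. */
#elif defined(ARM_MATH_DSP)
    /* DSP-extension path: a real kernel would use packed 8-bit SIMD
     * instructions such as __QADD8 here. */
#endif
    /* Plain C loop so this sketch compiles on any target; in the real
     * kernels this role is played by the fallback/tail code. */
    for (uint32_t i = 0; i < len; i++)
    {
        dst[i] = (int8_t)(a[i] + b[i]);
    }
}
```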
The functions supporting the TensorFlow Lite framework are identified by the _s8 suffix and can be invoked from TFLite Micro. They are bit-exact with TensorFlow Lite. Refer to TensorFlow's documentation in [3] on how to run a TensorFlow Lite model using the optimized CMSIS-NN kernels.
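As an illustration, here is a minimal sketch of calling an _s8 kernel directly (i.e. without TFLM). It assumes the arm_relu6_s8() prototype from arm_nnfunctions.h, roughly void arm_relu6_s8(int8_t *data, uint16_t size) (older releases use q7_t); check the header of your CMSIS-NN version for the exact types.

```c
#include <stdint.h>
#include "arm_nnfunctions.h" /* CMSIS-NN public API */

int main(void)
{
    /* In-place ReLU6 activation on a small int8 buffer, similar to what a
     * TFLM kernel wrapper does for the corresponding TensorFlow Lite op. */
    int8_t activations[8] = {-50, -1, 0, 1, 5, 6, 7, 127};
    arm_relu6_s8(activations, 8);
    return 0;
}
```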