I made a simple neural network that recognizes a few words and converted it to run on a microcontroller. Everything works: I convert a file containing one of the words into a spectrogram with the tools that ship with TensorFlow, embed it in the code, and the network recognizes the word. Then, using the audio-to-spectrogram code from TensorFlow Lite for Microcontrollers as a reference, I wrote my own library in C. But the resulting spectrograms look different. What could the problem be? If needed, I can show the code.
The top graph is the output of my library.
Audacity is an excellent audio application that can show a real-time spectrogram of your input audio file, and Sonic Visualiser is another essential audio tool for this purpose. Both will confirm what a proper spectrogram of your audio should look like. To understand how to code one up, I suggest you invest time in understanding the notion of a Fourier transform; just leaning on some library will not give you an appreciation of transforming data from the time domain to the frequency domain.
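To make the time-to-frequency idea concrete, here is a minimal sketch of a magnitude spectrogram built from a short-time Fourier transform. The frame length, hop size, and Hann window are illustrative choices, not the parameters of the TensorFlow frontend or of the asker's library:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: window the signal into overlapping
    frames, FFT each frame, keep the magnitude of each bin."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-redundant half of the spectrum (real input)
    return np.abs(np.fft.rfft(frames, axis=1))

# Sanity check: a 1 kHz tone sampled at 16 kHz should peak in the
# FFT bin nearest 1 kHz (bin spacing = fs / frame_len = 62.5 Hz).
fs = 16000
t = np.arange(fs) / fs
spec = stft_spectrogram(np.sin(2 * np.pi * 1000 * t))
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin)  # prints 16, i.e. 16 * 62.5 Hz = 1000 Hz
```

Differences in any of these choices (window function, frame length, overlap, linear vs. log magnitude) will make two spectrograms of the same audio look different, which is a common source of the mismatch described above.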
By the way, if you have time, check this SciPy documentation; it may help: https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.signal.spectrogram.html.
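The SciPy routine linked above can serve as a reference implementation to compare a hand-rolled C spectrogram against. A small usage sketch (the test tone and the `nperseg`/`noverlap` values are illustrative):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

# f: frequency bins in Hz, times: frame centres in s,
# Sxx: power spectral density per (bin, frame)
f, times, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
peak_hz = f[Sxx.mean(axis=1).argmax()]
print(peak_hz)  # prints 1000.0
```

Feeding the same known signal into both implementations and comparing frame-by-frame is a quick way to localize where they diverge.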