I’ve just got to have a little boast about my latest PCB. I managed to draw up the schematic and lay out the PCB in just 3 hours start to finish. And anyone who knows me will tell you that I’m quite thorough when designing boards, double-checking measurements and footprints and so on.
Admittedly, I was doing a lot of recycling: about 75% of it was cut and pasted from other boards I have designed in the past. But hey, if it gets the board out the door quicker…
In case you’re wondering what it does, it’s a custom RS422 to RS232 Bit Rate Converter I did for a client and of course, the board works perfectly with no design errors.
What a mouthful of acronyms in that title! This is going to be a very nerdy post.
Today I have been having fun processing audio with the NXP LPC1758 Arm Cortex-M3 microcontroller. It’s really quite easy with the DSP library from NXP. Above is a screenshot of it showing the spectrum of a single note from a synthesizer.
The DSP Library
To get the library, download AN10913 from NXP’s website. There is no source code for the FFT; you need to link the static library NXP_M3_DSPLibFft.a into your project. The documentation doesn’t say much about the FFT function, which is why I’m writing this blog post. Here are some things they don’t mention:
- The FFT functions use complex numbers for both the input and output data
- It won’t do an in-place FFT – you need two separate data buffers
- The number of output data points is always equal to the number of input data points
Processing the data
I’m doing a 1024 point FFT, so I’ll need a 4096 byte input buffer and a 4096 byte output buffer for a total RAM usage of 8k. The buffers are twice as large as you’d think because they need to store complex numbers. Each complex number consists of two 16-bit signed values: the first is the real component and the second is the imaginary component, which can safely be set to zero for real-valued audio input.
Interpreting the result
The result data contains a bunch of complex numbers which represent the amplitude and phase of each “frequency bin” in the spectrum. A frequency bin is a slice out of the total spectrum; being a discrete transform, the FFT cannot deliver a continuous spectrum.
The bandwidth of each frequency bin is the sampling frequency divided by the number of FFT points. So for example if you are sampling at 22050Hz and using 1024 FFT points, each bin will cover a 21.53Hz slice of the total spectrum. Each bin has a centre frequency of (N*SAMPLERATE)/FFTSIZE where N is the bin number. So in my example above, bin 1 will be centred at 21.53Hz, bin 2 will be centred at 43.06Hz and so on. Bin 0 is centred at 0Hz and represents the DC component of the signal.
The results are complex numbers, so you can work out not only the amplitude of each bin but also the phase. I am not interested in the phase, so my example code below only computes the amplitude.
// Returns an array of 512 values containing the amplitude of each frequency
// bin from DC up to the sampling frequency / 2.
#include <stdint.h>
#include <string.h>
#include <math.h>

static int16_t fft_in[2048];   // 1024 complex points, interleaved re/im
static int16_t fft_out[2048];  // the FFT is not in-place: separate output buffer
static int16_t spectrum[512];

int16_t *processaudio(const int16_t *audiobuffer) {
  uint32_t i;
  int32_t a, b;

  // First copy the audio data to the real component of the FFT input buffer.
  // Set the imaginary component to zero for all samples.
  memset(fft_in, 0, sizeof(fft_in));
  for(i = 0; i < 1024; i++)
    fft_in[i*2] = audiobuffer[i];

  // Now I can run the FFT. This is the 1024-point entry point from the
  // AN10913 library; check the app note for the exact name and argument order.
  vF_dspl_fftR4b16N1024(fft_out, fft_in);

  // Convert the output data back into real numbers because I am not interested
  // in the phase component. The second half of the FFT output mirrors the
  // first half so it is only necessary to process 512 data points. Widening
  // a and b to 32 bits before squaring avoids 16-bit overflow.
  for(i = 0; i < 512; i++) {
    a = fft_out[i*2];
    b = fft_out[i*2+1];
    spectrum[i] = (int16_t)sqrt((double)(a*a) + (double)(b*b));
  }
  return spectrum;
}