As a hobbyist working on an audio recording project, I’ve recently switched to using the nRF5340 development board. I’ve successfully implemented the I2S master functionality using the SPH0465 MEMS microphone as a slave. The data flow seems straightforward, but I’m encountering some challenges with the audio playback. Let me walk you through my setup and the issues I’m facing.
The SPH0465 microphone delivers audio as one four-byte word per channel. From what I understand, the data arrives in big-endian order. After capturing it, I push the bytes into a ring buffer (Zephyr's built-in ring_buf library) and then stream them asynchronously over UART to my PC for processing. However, when I try to play back the recorded audio in Audacity, the results are far from what I expect.
Here’s what I’ve tried so far:
- No changes to the ring buffer data: sent all four bytes as-is over UART and loaded them in Audacity as 32-bit signed PCM, big-endian. The result? Just a single tone instead of the actual audio.
- Logical shift two bits to the right, then sent three bytes: loaded as 16-bit signed PCM, big-endian. No sound at all.
- Logical shift two bits to the right, then sent two bytes: again, no sound.
My ring buffer data looks reasonable, and I don't see any errors in the I2S configuration or the data capture process, so I suspect the issue lies elsewhere. Could my understanding of the data format be incorrect? Or is there something about the endianness that I'm missing?
I’m also curious about how others have handled similar situations. Have you worked with audio data from MEMS microphones? How did you ensure proper endianness handling and successful playback? I’d love to hear your insights or any tricks you might have up your sleeve!
Cheers,
[Your Name]