
c++ - How to eliminate random discontinuities in a raw audio signal in C++ using Win32?


I want to continuously and seamlessly feed raw audio data into a circular buffer at short time intervals in C++ using Win32. The header.lpData buffer of the WAVEHDR contains the raw audio data and is filled by calling waveInAddBuffer(wi, &header, sizeof(WAVEHDR)); this buffer is overwritten cyclically at very short intervals. The image below shows the problem:
While the buffer is repeatedly overwritten in small chunks (from left to right; the current offset is shown by the magenta line and is visible as a discontinuity in the wave at that position), there are additional discontinuities in the wave at random positions (yellow lightning bolts). A few years ago I wrote the same thing in Java and it worked perfectly, with no interruptions in the audio input.
Am I doing something wrong, or is this a bug in the Win32 audio library?
[screenshot: recorded waveform with the magenta offset line and extra discontinuities marked by yellow lightning bolts]
Here is the relevant part of my C++ code:

#define VC_EXTRALEAN
#pragma comment(lib,"winmm.lib")
#include <Windows.h>

const int sample_rate = 4*4096; // must be supported by microphone
const int sample_size = 4096; // must be a power of 2

const int buffer_size = 2*sample_size;
char* buffer = new char[buffer_size];
float* wave = new float[sample_size];
int offset = 0;

void convert(float* const wave, const char* const buffer, int offset) {
    const float scale = 4.0f/65536.0f;
    for(int i=0; i<sample_size; i++) {
        const uint p = (offset-1+sample_size-i)%(buffer_size/2);
        wave[i] = scale*(float)((buffer[2*p+1]<<8)|(buffer[2*p]&0xFF));
    }
}

int main() {
    for(uint i=0; i<buffer_size; i++) buffer[i] = 0;
    for(uint i=0; i<sample_size; i++) wave[i] = 0.0f;

    WAVEFORMATEX wfx = {};
    wfx.wFormatTag = WAVE_FORMAT_PCM; // PCM is standard
    wfx.nChannels = 1; // 1 channel (mono)
    wfx.nSamplesPerSec = sample_rate; // sample_rate
    wfx.wBitsPerSample = 16; // 16 bit samples
    wfx.nBlockAlign = wfx.wBitsPerSample*wfx.nChannels/8;
    wfx.nAvgBytesPerSec = wfx.nBlockAlign*wfx.nSamplesPerSec*wfx.nChannels;
    wfx.cbSize = 0;
    HWAVEIN wi; // open recording device
    WAVEHDR header = {}; // initialize header empty
    header.dwFlags = 0; // clear the 'done' flag
    header.dwBytesRecorded = 0; // tell it no bytes have been recorded
    header.lpData = buffer; // give it a pointer to our buffer
    header.dwBufferLength = buffer_size; // tell it the size of that buffer in bytes
    waveInOpen(&wi, WAVE_MAPPER, &wfx, NULL, NULL, CALLBACK_NULL|WAVE_FORMAT_DIRECT);
    waveInStart(wi); // start recording
    waveInPrepareHeader(wi, &header, sizeof(WAVEHDR)); // prepare header

    while(true) {
        waveInAddBuffer(wi, &header, sizeof(WAVEHDR)); // read in new audio data into buffer
        offset = header.dwBytesRecorded; // get offset up to which point the buffer has been overwritten

        convert(wave, buffer, offset);
        // plot wave and offset

        sleep(1.0/120.0); // time in seconds
    }
    waveInUnprepareHeader(wi, &header, sizeof(WAVEHDR));
    waveInStop(wi); // once the user hits escape, stop recording, and clean up
    waveInClose(wi);
}
EDIT: I tried @Adrian McCarthy's solution, but it does not work as intended, as pointed out in the comments. The modified code is:
#define VC_EXTRALEAN
#pragma comment(lib,"winmm.lib")
#include <Windows.h>

const int sample_rate = 4*4096; // must be supported by microphone
const int sample_size = 4096; // must be a power of 2

const uint buffer_size = 2*sample_size/8; // make buffers 1/8 the size of the total wave buffer
char* buffer1 = new char[buffer_size];
char* buffer2 = new char[buffer_size];
float* wave = new float[sample_size];
int offset = 0;

void convert(float* const wave, const char* const buffer, int offset) {
    const float scale = 4.0f/65536.0f;
    for(int i=sample_size-1; i>=offset/2; i--) {
        wave[i] = wave[i-offset/2];
    }
    for(int i=0; i<offset/2; i++) {
        const uint p = offset/2-1-i;
        wave[i] = scale*(float)((buffer[2*p+1]<<8)|(buffer[2*p]&0xFF));
    }
}

int main() {
    for(uint i=0; i<buffer_size; i++) buffer1[i] = 0;
    for(uint i=0; i<buffer_size; i++) buffer2[i] = 0;
    for(uint i=0; i<sample_size; i++) wave[i] = 0.0f;

    WAVEFORMATEX wfx = {};
    wfx.wFormatTag = WAVE_FORMAT_PCM; // PCM is standard
    wfx.nChannels = 1; // 1 channel (mono)
    wfx.nSamplesPerSec = sample_rate; // sample_rate
    wfx.wBitsPerSample = 16; // 16 bit samples
    wfx.nBlockAlign = wfx.wBitsPerSample*wfx.nChannels/8;
    wfx.nAvgBytesPerSec = wfx.nBlockAlign*wfx.nSamplesPerSec*wfx.nChannels;
    wfx.cbSize = 0;
    HWAVEIN wi; // open recording device
    WAVEHDR* pCurrent = new WAVEHDR(); // initialize header empty
    pCurrent->dwFlags = 0; // clear the 'done' flag
    pCurrent->dwBytesRecorded = 0; // tell it no bytes have been recorded
    pCurrent->lpData = buffer1; // give it a pointer to our buffer
    pCurrent->dwBufferLength = buffer_size; // tell it the size of that buffer in bytes
    WAVEHDR* pNext = new WAVEHDR(); // initialize header empty
    pNext->dwFlags = 0; // clear the 'done' flag
    pNext->dwBytesRecorded = 0; // tell it no bytes have been recorded
    pNext->lpData = buffer2; // give it a pointer to our buffer
    pNext->dwBufferLength = buffer_size; // tell it the size of that buffer in bytes
    waveInOpen(&wi, WAVE_MAPPER, &wfx, NULL, NULL, CALLBACK_NULL|WAVE_FORMAT_DIRECT);
    waveInStart(wi); // start recording
    waveInPrepareHeader(wi, pCurrent, sizeof(WAVEHDR)); // prepare header
    waveInPrepareHeader(wi, pNext   , sizeof(WAVEHDR)); // prepare header

    while(true) {
        do {
            waveInAddBuffer(wi, pCurrent, sizeof(WAVEHDR));
            sleep(0.001);
        } while((pCurrent->dwFlags&WHDR_DONE)==0);
        pCurrent->dwFlags &= ~WHDR_DONE;
        swap(pCurrent, pNext);

        offset = pCurrent->dwBytesRecorded; // get offset up to which point the buffer has been overwritten

        convert(wave, buffer1, offset);
        // plot wave and offset

        sleep(1.0/120.0); // time in seconds
    }
    waveInUnprepareHeader(wi, pCurrent, sizeof(WAVEHDR));
    waveInUnprepareHeader(wi, pNext   , sizeof(WAVEHDR));
    waveInStop(wi); // once the user hits escape, stop recording, and clean up
    waveInClose(wi);
}
The result:
[screenshot of the resulting waveform]

Best answer

Problems:

  • Your thread is racing against the system thread that fills the buffer and updates the fields in the header. When you read the dwBytesRecorded field, you can get a value smaller than the number of bytes actually in the buffer. The thread filling the buffer updates dwBytesRecorded from time to time, but as recording continues, that number is already stale an instant later. And that is optimistically assuming it is even safe to read a DWORD while another thread might be writing it.
  • When you add the buffer again, the audio system treats it as a new buffer and switches to it once the current one is full. You are passing in the same buffer and hoping it will simply be filled from the beginning again, but the system may also use the reserved fields in the header and end up in an inconsistent state.
  • I am not sure which sleep function you are using, but most of them cannot and do not wait for an exact amount of time. The Win32 Sleep waits at least the specified number of milliseconds and then marks the thread as ready to run, but the thread does not actually run until the scheduler gets around to it. In practice this is probably not the issue, since your buffer is 500 ms, which is an order of magnitude larger than the uncertainty of the sleep.

  • The typical way to make this work is to ping-pong between two (or more) buffers. You add two very short buffers and wait for the first one to have the WHDR_DONE flag set in its header [see the note below]. Then you process the entire first buffer in one go while the system keeps recording into the second one. When you have finished processing a buffer, you re-add it and then wait for the other buffer to be ready.
    // Given two buffers `ping` and `pong` with corresponding WAVEHDRs
    // `ping_header` and `pong_header`...
    WAVEHDR *pCurrent = ping_header;
    WAVEHDR *pNext = pong_header;
    waveInAddBuffer(wi, pCurrent, sizeof(WAVEHDR));
    waveInAddBuffer(wi, pNext, sizeof(WAVEHDR));

    for (;;) {
        // wait for the current buffer to fill
        while ((pCurrent->dwFlags & WHDR_DONE) == 0) {} // SEE NOTE

        // As recording continues with *pNext, process and draw
        // the data from pCurrent->lpData.

        // Now that we're done processing pCurrent, we can re-add it so
        // the system has a place to record when pNext is full.
        waveInAddBuffer(wi, pCurrent, sizeof(WAVEHDR));
        // What was next becomes current, and the new next is the old current.
        swap(pCurrent, pNext);
    }
    Note that your two buffers can be quite short. I recommend 16-20 milliseconds: larger than the default 15.6 ms timer tick on Windows, but still in the ballpark of how much data you are trying to process per loop iteration.
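    As a rough sizing example (these numbers are mine, not part of the original answer), with the question's format of 16384 samples per second, mono, 16-bit:

    // Approximate size of a ~20 ms capture buffer at 16384 Hz, 16-bit mono PCM.
    const int sample_rate_hz   = 4*4096;    // 16384 samples per second, as in the question
    const int bytes_per_sample = 2;         // 16-bit mono
    const double buffer_ms     = 20.0;      // target buffer duration in milliseconds
    const int buffer_bytes     = (int)(sample_rate_hz*buffer_ms/1000.0)*bytes_per_sample;
    // buffer_bytes comes out to about 654 bytes, i.e. roughly 327 samples per buffer.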
    The busy-wait loop here is not great: it can drive a core to 100% without doing any useful work. But if the processing time is close to the time it takes to record the next buffer, it will not spin for long. (Technically you still have the same data-race problem of reading a variable while another thread may be updating it, but since we are only waiting for that bit to go high, it is probably fine in practice.)
    The wave audio APIs are not designed for very high-speed processing; they are meant for ordinary Windows programs. Rather than busy-waiting on the flag, you should handle the MM_WIM_DATA message in your window's window procedure, which avoids the busy wait and the data race but adds a little message-passing overhead each time a buffer completes.
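    A minimal sketch of that message-driven variant (my illustration, not part of the original answer), assuming the program already has a window whose handle is hwnd and whose window procedure is WndProc:

    // During setup: ask the audio system to post MM_WIM_DATA to our window
    // whenever a buffer completes (hwnd is the application's window).
    waveInOpen(&wi, WAVE_MAPPER, &wfx, (DWORD_PTR)hwnd, 0, CALLBACK_WINDOW);
    // ... prepare and add the two short buffers as in the outline above ...

    // In the window procedure: each completed buffer arrives as a message.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        if(msg == MM_WIM_DATA) {
            WAVEHDR* pDone = (WAVEHDR*)lParam;             // the buffer that just filled
            // process/draw pDone->dwBytesRecorded bytes from pDone->lpData here,
            // then hand the buffer back so recording can continue into it later
            waveInAddBuffer((HWAVEIN)wParam, pDone, sizeof(WAVEHDR));
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    Because the work happens when the message arrives, there is no polling loop and no unsynchronized read of dwFlags.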
    Update (2020-07-19): @ProjectPhysX points out that the busy-wait loop on WHDR_DONE in my code outline does not work. The compiler is free to assume the value never changes and may optimize the code into testing the flag once and then spinning forever. It is allowed to do that because the data race between our waiting thread and the thread that sets the flag means the code has undefined behavior. If we controlled both threads, we could use any kind of synchronization scheme to eliminate the data race and it would work, but we have no access to the thread running inside the audio system.
    The waveform audio API is designed to notify the client that a buffer has completed by sending it a window message. That works well for continuous recording, but it means taking an event-driven approach, and the message-passing overhead may limit how fast the program can process samples. XAudio2 or Windows Core Audio are better suited to high-speed audio work. The idea of using a pair (or a chain) of small buffers is very common; it is similar to graphics programs using back buffers or swap chains.
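    For a console program without a window, one commonly used alternative (again my sketch, not part of the original answer) is CALLBACK_EVENT: the driver signals an event object when a buffer completes, so the waiting is done by the operating system rather than by a spin loop:

    // Sketch: event-based waiting instead of busy-waiting on WHDR_DONE.
    HANDLE hBufferDone = CreateEvent(NULL, FALSE, FALSE, NULL);      // auto-reset event
    waveInOpen(&wi, WAVE_MAPPER, &wfx, (DWORD_PTR)hBufferDone, 0, CALLBACK_EVENT);
    // ... prepare and add pCurrent and pNext as in the ping-pong outline ...
    for(;;) {
        WaitForSingleObject(hBufferDone, INFINITE);   // wakes when the device signals the event
        if(pCurrent->dwFlags & WHDR_DONE) {           // checked once after the wait, not in a spin
            // process pCurrent->lpData / pCurrent->dwBytesRecorded here
            waveInAddBuffer(wi, pCurrent, sizeof(WAVEHDR));
            std::swap(pCurrent, pNext);
        }
    }

    The read of dwFlags still formally races with the driver's thread, but it happens once after the wait returns instead of inside the unsynchronized spin loop that the note above warns about.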

    Regarding "c++ - How to eliminate random discontinuities in a raw audio signal in C++ using Win32?", the corresponding question can be found on Stack Overflow: https://stackoverflow.com/questions/62968248/
