
signal-processing - What could cause FFT data to spike at the wrong frequencies?

Reposted · Author: 行者123 · Updated: 2023-12-01 17:16:31

I'm implementing FFT pitch detection on the iPhone using Apple's Accelerate framework, as discussed many times here before.

I understand phase offsets and bin frequencies, and I've investigated several open-source tuners that detect pitch using FFT techniques (simple pitch detection, autocorrelation, cepstrum, etc.). Here's my problem:

My FFT results are consistently off by 5-10 Hz (+/-), even when the bins are only 1-2 Hz apart. I've tried different algorithms, and even a simple FFT sampled at high resolution shows the magnitude spike in a seemingly wrong place. It's not a consistent offset; some readings are too high, some too low.
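For reference, the bin spacing of a length-N real FFT is simply the sample rate divided by N. The post doesn't state the sample rate, so 44.1 kHz is assumed here purely for illustration:

```c
#include <math.h>

/* Bin spacing for a length-n real FFT is sample_rate / n. The post doesn't
   state the sample rate; 44.1 kHz is assumed here for illustration only. */
static double bin_width_hz(double sample_rate, int n) {
    return sample_rate / (double)n;
}
```

At 44100 Hz, a 1-2 Hz bin spacing implies a fairly large FFT: 44100 / 32768 ≈ 1.35 Hz per bin, versus 44100 / 4096 ≈ 10.77 Hz.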

For example, using an audio generator, a 440 Hz tone reads as 445.2 Hz; 220 Hz comes out as 214 Hz; 880 Hz as 874 Hz; 1174 Hz as 1183 Hz. A similar open-source tuner for the Mac, using almost exactly the same algorithm, detects pitch perfectly without any trouble. (The discrepancies differ on the device versus the simulator, but they're still off.)

I don't think the problem is bin resolution, because there are often several bins between the actual tone and the detected magnitude spike. It's as if the input is simply hearing the wrong pitch.

I've pasted my code below. The general flow is simple:

Push one step onto the FFT buffer -> Hann window -> FFT -> phase/magnitude -> take the max, which comes out wrong.

enum {
    kOversample = 4,
    kSamples = MAX_FRAME_LENGTH,
    kSamples2 = kSamples / 2,
    kRange = kSamples * 5 / 16,
    kStep = kSamples / kOversample
};

const int PENDING_LEN = kSamples * 5;
static float pendingAudio[PENDING_LEN * sizeof(float)];
static int pendingAudioLength = 0;

- (void)processBuffer {
    static float window[kSamples];
    static float phase[kRange];
    static float lastPhase[kRange];
    static float phaseDeltas[kRange];
    static float frequencies[kRange];
    static float slidingFFTBuffer[kSamples];
    static float buffer[kSamples];

    static BOOL initialized = NO;
    if (!initialized) {
        memset(lastPhase, 0, kRange * sizeof(float));
        vDSP_hann_window(window, kSamples, 0);
        initialized = YES;
    }

    BOOL canProcessNewStep = YES;
    while (canProcessNewStep) {

        @synchronized (self) {
            if (pendingAudioLength < kStep) {
                break; // not enough data
            }
            // Rotate one step's worth of pendingAudio onto the end of slidingFFTBuffer
            memmove(slidingFFTBuffer, slidingFFTBuffer + kStep, (kSamples - kStep) * sizeof(float));
            memmove(slidingFFTBuffer + (kSamples - kStep), pendingAudio, kStep * sizeof(float));
            memmove(pendingAudio, pendingAudio + kStep, (PENDING_LEN - kStep) * sizeof(float));
            pendingAudioLength -= kStep;
            canProcessNewStep = (pendingAudioLength >= kStep);
        }

        // Hann windowing
        vDSP_vmul(slidingFFTBuffer, 1, window, 1, buffer, 1, kSamples);
        vDSP_ctoz((COMPLEX *)buffer, 2, &splitComplex, 1, kSamples2);

        // Carry out a forward FFT transform.
        vDSP_fft_zrip(fftSetup, &splitComplex, 1, log2f(kSamples), FFT_FORWARD);

        // Magnitude to decibels
        static float magnitudes[kRange];
        vDSP_zvmags(&splitComplex, 1, magnitudes, 1, kRange);
        float zero = 1.0;
        vDSP_vdbcon(magnitudes, 1, &zero, magnitudes, 1, kRange, 0); // to decibels

        // Phase
        vDSP_zvphas(&splitComplex, 1, phase, 1, kRange);           // compute phase
        vDSP_vsub(lastPhase, 1, phase, 1, phaseDeltas, 1, kRange); // compute phase difference
        memcpy(lastPhase, phase, kRange * sizeof(float));          // save old phase

        double freqPerBin = sampleRate / (double)kSamples;
        double phaseStep = 2.0 * M_PI * (float)kStep / (float)kSamples;

        // Process phase difference (via https://stackoverflow.com/questions/4633203)
        for (int k = 1; k < kRange; k++) {
            double delta = phaseDeltas[k];
            delta -= k * phaseStep;               // subtract expected phase difference
            delta = remainder(delta, 2.0 * M_PI); // map delta phase into +/- M_PI interval
            delta /= phaseStep;                   // calculate diff from bin center frequency
            frequencies[k] = (k + delta) * freqPerBin; // calculate the true frequency
        }

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        MCTunerData *tunerData = [[[MCTunerData alloc] initWithSize:MAX_FRAME_LENGTH] autorelease];

        double maxMag = -INFINITY;
        float maxFreq = 0;
        for (int i = 0; i < kRange; i++) {
            [tunerData addFrequency:frequencies[i] withMagnitude:magnitudes[i]];
            if (magnitudes[i] > maxMag) {
                maxFreq = frequencies[i];
                maxMag = magnitudes[i];
            }
        }

        NSLog(@"Max Frequency: %.1f", maxFreq);

        [tunerData calculate];

        // Update the UI with our newly acquired frequency value.
        [self.delegate frequencyChangedWithValue:[tunerData mainFrequency] data:tunerData];

        [pool drain];
    }
}

OSStatus renderCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                        UInt32 inNumberFrames, AudioBufferList *ioData)
{
    MCTuner *tuner = (MCTuner *)inRefCon;

    OSStatus err = AudioUnitRender(tuner->audioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, tuner->bufferList);
    if (err < 0) {
        return err;
    }

    SInt16 *inputInts = (SInt16 *)tuner->bufferList->mBuffers[0].mData;

    @synchronized (tuner) {
        if (pendingAudioLength + inNumberFrames < PENDING_LEN) {
            // Append the audio that just came in onto the pending audio buffer,
            // converting SInt16 to float because iOS doesn't support recording floats directly
            for (int i = 0; i < inNumberFrames; i++) {
                pendingAudio[pendingAudioLength + i] = (inputInts[i] + 0.5) / 32767.5;
            }
            pendingAudioLength += inNumberFrames;
        } else {
            // the buffer got too far behind. Don't give any more audio data.
            NSLog(@"Dropping frames...");
        }
        if (pendingAudioLength >= kStep) {
            [tuner performSelectorOnMainThread:@selector(processBuffer) withObject:nil waitUntilDone:NO];
        }
    }

    return noErr;
}
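The integer-to-float conversion in the callback maps the full SInt16 range symmetrically onto [-1.0, 1.0]: adding 0.5 centers the 65536 integer levels before scaling. The same arithmetic in plain C:

```c
#include <stdint.h>
#include <math.h>

/* Same sample conversion as the render callback above: the half-step offset
   makes -32768 land exactly on -1.0 and 32767 exactly on +1.0. */
static float sample_to_float(int16_t s) {
    return (s + 0.5f) / 32767.5f;
}
```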

Best Answer

I haven't gone through your code in detail, but this jumped out at me right away:

vDSP_zvmags(&splitComplex, 1, magnitudes, 1, kRange);

It's important to remember that the result of a real-to-complex FFT is packed in a somewhat odd layout. If the real and imaginary parts of the j-th Fourier coefficient are denoted R(j) and I(j), then the real and imag components of the splitComplex object hold the following:

.real = {  R(0) , R(1), R(2), ... , R(n/2 - 1) } 
.imag = { R(n/2), I(1), I(2), ... , I(n/2 - 1) }

So your magnitude calculation is doing something a little strange: the first entry in your magnitude vector is sqrt(R(0)^2 + R(n/2)^2), where it should simply be |R(0)|. I didn't work through all the constants carefully, but it seems likely that this is causing an off-by-one error whereby you either lose the Nyquist band (R(n/2)) or something similar. An off-by-one of this sort could cause the bands to be treated as slightly wider or narrower than they really are, which would scale detected pitches up or down across the whole range — matching what you're seeing.

Regarding signal-processing - What could cause FFT data to spike at the wrong frequencies?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5503501/
