
java - Getting different results every time I run an FFT with Processing and Beads


I'm using Processing 3 with the Beads library to analyse a large number of samples, but every time I run the analysis on the same data I get wildly different results. Here is the sample and the analysis setup:

import beads.*;
import org.jaudiolibs.beads.*;

AudioContext ac;
GranularSamplePlayer sample;
Gain gain;

ShortFrameSegmenter sfs;
FFT fft;
PowerSpectrum ps;
Frequency f;
SpectralPeaks sp;
float[][] meanHarmonics;

int numPeaks = 6;

void setup() {
  size(1600, 900);
  ac = new AudioContext();
  ac.start();
  println(dataPath("") + "1.wav");
  sample = new GranularSamplePlayer(ac, SampleManager.sample(dataPath("") + "\\1.wav"));

  gain = new Gain(ac, 1, 1);

  // input chaining
  gain.addInput(sample);
  ac.out.addInput(gain);

  // setup analysis
  // break audio into more manageable chunks
  sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(sample);

  // fast fourier transform to analyse the harmonic spectrum
  fft = new FFT();
  sfs.addListener(fft);

  // PowerSpectrum turns the raw FFT output into proper audio data.
  ps = new PowerSpectrum();
  fft.addListener(ps);

  // Frequency tries to determine the strongest frequency in the wave
  // which is the fundamental that determines the pitch of the sound
  f = new Frequency(44100.0f);
  ps.addListener(f);

  // Listens for harmonics
  sp = new SpectralPeaks(ac, numPeaks);
  ps.addListener(sp);

  meanHarmonics = new float[numPeaks][2];

  // initialise meanHarmonics
  for(int i = 0; i < numPeaks; i++) {
    for(int j = 0; j < 2; j++) {
      meanHarmonics[i][j] = 0;
    }
  }

  ac.out.addDependent(sfs);

  int startTime = millis();
  int loops = 0;
  float meanFrequency = 0.0;
  while(millis() - startTime < 1500) {
    loops++;
    if(loops == 1) {
      sample.start(0);
    }
    Float inputFrequency = f.getFeatures();
    if(inputFrequency != null) {
      meanFrequency += inputFrequency;
    }
    float[][] harmonics = sp.getFeatures();
    if(harmonics != null) {
      for(int feature = 0; feature < numPeaks; feature++) {
        // harmonic must be in human audible range
        // and its amplitude must be large enough to be audible
        if(harmonics[feature][0] < 20000.0 && harmonics[feature][1] > 0.01) {
          // average out the frequencies
          meanHarmonics[feature][0] += harmonics[feature][0];
          // average out the amplitudes
          meanHarmonics[feature][1] += harmonics[feature][1];
        }
      }
    }
  }
  float maxAmp = 0.0;
  float freq = 0.0;
  sample.pause(true);
  meanFrequency /= loops;
  println(meanFrequency);
  for(int feature = 0; feature < numPeaks; feature++) {
    meanHarmonics[feature][0] /= loops;
    meanHarmonics[feature][1] /= loops;
    if(meanHarmonics[feature][1] > maxAmp) {
      freq = meanHarmonics[feature][0];
      maxAmp = meanHarmonics[feature][1];
    }
    println(meanHarmonics[feature][0] + " " + meanHarmonics[feature][1]);
  }
  println(freq + " " + meanFrequency);
  println();
}

I run the FFT for a set amount of time, during which I sum up the frequencies returned by the Frequency object and the SpectralPeaks features. At the end I divide the accumulated frequencies and amplitudes to get their means. I also try to find the fundamental in the SpectralPeaks array by looking for the frequency with the largest amplitude. But every time I run the program I get different results, from both SpectralPeaks and Frequency (and their values also differ from each other). Here are some example values:

First run:

Spectral Peaks features:

914.84863 0.040409338

844.96295 0.033234257

816.0808 0.027509697

664.9141 0.022158746

633.3232 0.019597264

501.93716 0.01606628

Spectral Peaks fundamental: 914.84863

Frequency: 1028.1572

Second run, same sample:

Spectral Peaks features:

1023.4123 0.03913592

1109.2562 0.031178929

967.0786 0.026673868

721.2698 0.021666735

629.9294 0.018046249

480.82416 0.014858524

Spectral Peaks fundamental: 1023.4123

Frequency: 1069.3387

Also, the value returned by Frequency is often NaN, and I don't understand why.
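(Whatever the cause of those NaN readings, note that in Java adding a single NaN into a running sum makes the whole sum NaN, so the accumulated mean only stays meaningful if such frames are skipped. A minimal guard, reusing the variables from the sketch above:)

  Float inputFrequency = f.getFeatures();
  // skip frames where no feature was produced or the extractor returned NaN,
  // otherwise one bad reading turns meanFrequency into NaN for good
  if(inputFrequency != null && !inputFrequency.isNaN()) {
    meanFrequency += inputFrequency;
  }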

Best answer

The reason your code returns different values is that it samples and analyses the audio at different moments each run. Once the audio starts playing, you have no control over when Float inputFrequency = f.getFeatures(); actually executes. A better approach is to drop millis(), replace the while loop with a for loop, and use ac.runForNMillisecondsNonRealTime(). That way you know you have analysed exactly 1500 milliseconds of audio.

  //while(millis() - startTime < 1500) {
  for(int i = 0; i < numPeaks; i++) {
    ac.runForNMillisecondsNonRealTime(1500/numPeaks);
    Float inputFrequency = f.getFeatures();
    if(inputFrequency != null) {
      meanFrequency += inputFrequency;
    }
    float[][] harmonics = sp.getFeatures();
    if(harmonics != null) {
      for(int feature = 0; feature < numPeaks; feature++) {
        // harmonic must be in human audible range
        // and its amplitude must be large enough to be audible
        if(harmonics[feature][0] < 20000.0 && harmonics[feature][1] > 0.01) {
          // average out the frequencies
          meanHarmonics[feature][0] += harmonics[feature][0];
          // average out the amplitudes
          meanHarmonics[feature][1] += harmonics[feature][1];
        }
      }
    }
  }
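With this loop in place, the averaging further down in setup() should no longer divide by the old loops counter, since that variable is never incremented any more. A minimal sketch of the adjustment, assuming the sums are divided by the number of analysis windows the for loop above actually runs (windows is a hypothetical name for that count):

  int windows = numPeaks; // the for loop above runs numPeaks times, 1500/numPeaks ms each
  meanFrequency /= windows;
  for(int feature = 0; feature < numPeaks; feature++) {
    meanHarmonics[feature][0] /= windows;
    meanHarmonics[feature][1] /= windows;
  }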

Regarding java - getting different results every time an FFT is run with Processing and Beads, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48769181/
