
Android AudioRecord: filtering a frequency range

Reposted · Author: 搜寻专家 · Updated: 2023-11-01 09:07:22

I am working on the Android platform. From the question referenced below I know that, using the AudioRecord class (which returns raw audio data), I can filter an audio range according to my needs, but for that I need an algorithm. Can someone help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?

I have tried "JTransforms", but I don't know whether I can achieve this goal with it. Currently I am using "jfftpack" to display visuals, which works well, but I could not implement an audio filter with it.

Reference here

Any help is appreciated, thanks in advance. Below is the code I mentioned above. I am using the "jfftpack" library to display the visuals, so you may find references to that library in the code; please don't let that confuse you.

private class RecordAudio extends AsyncTask<Void, double[], Void> {

    @Override
    protected Void doInBackground(Void... params) {
        try {
            final AudioRecord audioRecord = findAudioRecord();
            if (audioRecord == null) {
                return null;
            }

            final short[] buffer = new short[blockSize];
            final double[] toTransform = new double[blockSize];

            audioRecord.startRecording();

            while (started) {
                final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);

                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                }

                transformer.ft(toTransform);
                publishProgress(toTransform);
            }

            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed", t);
        }
        return null;
    }

    /**
     * @param toTransform the transformed block to draw
     */
    @Override
    protected void onProgressUpdate(double[]... toTransform) {
        canvas.drawColor(Color.BLACK);
        for (int i = 0; i < toTransform[0].length; i++) {
            int x = i;
            int downy = (int) (100 - (toTransform[0][i] * 10));
            int upy = 100;
            canvas.drawLine(x, downy, x, upy, paint);
        }
        imageView.invalidate();
    }
}

Best Answer

There are a lot of tiny details in this process that can trip you up. This code is untested, and I don't do audio filtering very often, so you should be very suspicious here. This is the basic process for filtering audio:

  1. Capture an audio buffer
  2. Possibly convert the audio buffer (bytes to floats)
  3. (Optional) Apply a window function, e.g. Hanning
  4. Take the FFT
  5. Filter the frequencies
  6. Take the inverse FFT

I'll assume you have some basic knowledge of Android and audio recording, so I'll cover steps 4-6 here.
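As a sketch of steps 2-3 (my own untested addition, not part of the original answer; the class and method names are made up, and the tiny sample array is just for illustration), converting signed 16-bit PCM samples to floats and applying a Hann window might look like:

```java
import java.util.Arrays;

public class WindowDemo {

    // Convert signed 16-bit PCM samples to floats in [-1, 1)
    static float[] toFloat(short[] pcm) {
        float[] out = new float[pcm.length];
        for (int i = 0; i < pcm.length; i++) {
            out[i] = pcm[i] / 32768f;
        }
        return out;
    }

    // Apply a Hann window in place to reduce spectral leakage
    // before taking the FFT: w[i] = 0.5 * (1 - cos(2*pi*i / (n - 1)))
    static void hannWindow(float[] samples) {
        int n = samples.length;
        for (int i = 0; i < n; i++) {
            samples[i] *= 0.5f * (1f - (float) Math.cos(2.0 * Math.PI * i / (n - 1)));
        }
    }

    public static void main(String[] args) {
        float[] f = toFloat(new short[]{0, 16384, -16384, 32767});
        hannWindow(f);
        System.out.println(Arrays.toString(f));
    }
}
```

The window tapers each block toward zero at its edges; whether you need it depends on how objectionable the leakage is for your signal.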

//it is assumed that a float array audioBuffer exists with even length equal to
//the capture size of your audio buffer

//The size of the FFT will be the size of your audioBuffer / 2
int FFT_SIZE = bufferSize / 2;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type

//Take the FFT
mFFT.realForward(audioBuffer);

//The first 1/2 of audioBuffer now contains bins that represent the frequency
//of your wave, in a way. To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE

//assuming the length of audioBuffer is even, the real and imaginary parts will be
//stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2

//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;

//Loop through the fft bins and filter frequencies
for (int fftBin = 0; fftBin < FFT_SIZE; fftBin++) {
    //Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
    float frequency = (float) fftBin * 44100F / (float) FFT_SIZE;

    //Now filter the audio, I'm assuming you wanted to keep the
    //frequencies of interest rather than discard them.
    if (frequency < freqMin || frequency > freqMax) {
        //Calculate the indices where the real and imaginary parts are stored
        int real = 2 * fftBin;
        int imaginary = 2 * fftBin + 1;

        //zero out this frequency
        audioBuffer[real] = 0;
        audioBuffer[imaginary] = 0;
    }
}

//Take the inverse FFT to convert signal from frequency to time domain
mFFT.realInverse(audioBuffer, false);
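After realInverse, audioBuffer is back in the time domain. If you want to play or save the filtered signal, the floats need to be converted back to 16-bit PCM. This helper is my own addition, not part of the original answer; the clamping guards against values that the filtering may have pushed outside [-1, 1], which would otherwise wrap around when cast to short:

```java
import java.util.Arrays;

public class PcmUtil {

    // Convert floats in roughly [-1, 1] back to signed 16-bit PCM,
    // clamping out-of-range values to avoid integer wrap-around.
    static short[] toPcm16(float[] samples) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            float v = Math.max(-1f, Math.min(1f, samples[i]));
            out[i] = (short) Math.round(v * 32767f);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] pcm = toPcm16(new float[]{0f, 0.5f, 1.5f, -2f});
        System.out.println(Arrays.toString(pcm));
    }
}
```

The resulting short array can then be written to an android.media.AudioTrack for playback, or to a WAV file.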

Regarding "Android AudioRecord: filtering a frequency range", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10911189/
