
android - Adding fragments and simple UI elements drastically slows down an audio processing algorithm

I am new to Android and I am trying to build an app that records audio and runs an FFT on it to obtain the frequency spectrum.

The buffer for the full recording is 155 * 2048 samples, i.e. 155 * AudioRecord.getMinBufferSize(44100, mono_channel, PCM_16bit).

Each chunk read from the recorder is 2048 shorts. I convert the shorts to doubles and pass them to the FFT library, which returns the real and imaginary parts that I use to build the spectrum. I then append each chunk to an array.

Now here is the problem:

In App 1 there are no UI elements or fragments, just a single basic button wired to a listener that runs an AsyncTask to read chunks from AudioRecord and FFT each chunk (each chunk = 2048 shorts). Recording and FFT-ing 155 chunks at a 44100 Hz sample rate should take about 7.2 seconds (2048 * 155 / 44100), but the task takes roughly 9 seconds, a delay of about 2 seconds, which is acceptable.

In App 2 there are 7 fragments plus login and sign-up screens, where each fragment is independent and attached to the main Activity. The exact same code takes 40-45 seconds to record and FFT the 155 * 2048 chunks, which means a lag of 33-37 seconds. That lag is far too large for my purposes. What could cause such a large delay in App 2, and how can I reduce it?

Below are the FFT library and the complex number class: FFT.java and Complex.java.

My application code:

private boolean is_recording = false;
private AudioRecord recorder = null;

private static final int SAMPLE_RATE = 44100;

// Size of one recorded chunk (in shorts), as returned by AudioRecord.
int minimum_buffer_size = AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

// Full recording buffer: 155 chunks.
int bufferSize = 155 * AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

private Thread recordingThread = null;
short[] audioBuffer = new short[bufferSize];

MainTask recordTask;
double[] finalData;
Complex[] fftArray;
boolean recieved = false;

int data_trigger_point = 10;
int trigger_count = 0;
double previous_level_1;
double previous_level_2;
double previous_level_3;

int no_of_chunks_to_be_send = 30;
int count = 0;
short[] sendingBuffer = new short[minimum_buffer_size * no_of_chunks_to_be_send];
public static final int RequestPermissionCode = 1;

mButton = (ImageButton) view.findViewById(R.id.submit);
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        if (is_recording) {
            // Stop: restore the "play" icon and finish recording.
            mButton.setBackgroundResource(R.drawable.play);
            stopRecodringWithoutTone();
        } else {
            // Start: show the "wait" icon and launch the recording task.
            mButton.setBackgroundResource(R.drawable.wait);
            is_recording = true;
            recordTask = new MainTask();
            recordTask.execute();
        }
    }
});

public class MainTask extends AsyncTask<Void, int[], Void> {

    @Override
    protected Void doInBackground(Void... arg0) {
        try {
            recorder = new AudioRecord(
                    MediaRecorder.AudioSource.DEFAULT,
                    SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minimum_buffer_size);

            recorder.startRecording();

            short[] buffer_recording = new short[minimum_buffer_size];

            int recieve_counter = 0;
            while (is_recording) {
                if (count < bufferSize) {
                    // Read one chunk, append it to the full recording buffer,
                    // then convert it to double and FFT it in the same loop.
                    int bufferReadResult = recorder.read(buffer_recording, 0, minimum_buffer_size);
                    System.arraycopy(buffer_recording, 0, audioBuffer, count, bufferReadResult);
                    count += bufferReadResult;
                    System.out.println(count);
                    finalData = convert_to_double(buffer_recording);
                    int[] magnitudes = processFFT(finalData);
                } else {
                    stopRecording();
                }
            }
        } catch (Throwable t) {
            t.printStackTrace();
            Log.e("V1", "Recording Failed");
        }
        return null;
    }

    @Override
    protected void onProgressUpdate(int[]... magnitudes) {

    }

}
private int[] processFFT(double[] data) {
    // Wrap the samples as complex numbers (imaginary part = 0).
    Complex[] fftTempArray = new Complex[data.length];
    for (int i = 0; i < data.length; i++) {
        fftTempArray[i] = new Complex(data[i], 0);
    }
    fftArray = FFT.fft(fftTempArray);

    // Keep the magnitude of the first half of the spectrum (up to Nyquist).
    int[] magnitude = new int[fftArray.length / 2];
    for (int i = 0; i < fftArray.length / 2; i++) {
        magnitude[i] = (int) fftArray[i].abs();
    }
    return magnitude;
}
private double[] convert_to_double(short[] data) {
    double[] transformed = new double[data.length];
    for (int j = 0; j < data.length; j++) {
        transformed[j] = (double) data[j];
    }
    return transformed;
}
private void stopRecording() {
    if (null != recorder) {
        recorder.stop();
        postAudio(audioBuffer);
        recorder.release();
        is_recording = false;
        recorder = null;
        recordingThread = null;
        count = 0;
        recieved = false;
    }
}

Best answer

I am not sure why the delay occurs, but you can work around it: run two async tasks. Task 1 records the data and stores it in an array; the second async task takes chunks from that array and performs the FFT.
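For illustration, here is a minimal sketch of that producer/consumer split. It uses two plain threads and a BlockingQueue instead of two AsyncTasks (the idea is the same, just a different threading primitive), it reuses the field and method names from the question, and it is an untested outline rather than code from the original post.

// Producer/consumer sketch: one thread only records, the other only runs the FFT.
// Requires java.util.concurrent.{BlockingQueue, LinkedBlockingQueue, TimeUnit}.
// Assumes recorder has already been created, and that recorder, minimum_buffer_size,
// bufferSize, is_recording, convert_to_double() and processFFT() exist as in the question.
// is_recording should be declared volatile once it is shared between threads.
private final BlockingQueue<short[]> chunkQueue = new LinkedBlockingQueue<>();

private void startPipelinedRecording() {
    // Task 1: read chunks from AudioRecord and hand them off immediately,
    // so the recording loop never waits on the FFT.
    Thread captureThread = new Thread(new Runnable() {
        @Override
        public void run() {
            recorder.startRecording();
            int total = 0;
            while (is_recording && total < bufferSize) {
                short[] chunk = new short[minimum_buffer_size];
                int read = recorder.read(chunk, 0, minimum_buffer_size);
                if (read > 0) {
                    total += read;
                    chunkQueue.offer(chunk);   // hand the chunk to the FFT thread
                }
            }
            is_recording = false;
        }
    });

    // Task 2: drain the queue and FFT each chunk, independently of the recorder.
    Thread fftThread = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                while (is_recording || !chunkQueue.isEmpty()) {
                    short[] chunk = chunkQueue.poll(100, TimeUnit.MILLISECONDS);
                    if (chunk != null) {
                        int[] magnitudes = processFFT(convert_to_double(chunk));
                        // ... use magnitudes (e.g. post them to the UI thread) ...
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });

    is_recording = true;
    captureThread.start();
    fftThread.start();
}

With this split the recorder is no longer blocked by the FFT, so the capture itself should stay close to real time even if processing in the larger app falls behind; the queue simply grows until the FFT thread catches up.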

Regarding android - adding fragments and simple UI elements drastically slows down an audio processing algorithm, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43168755/
