android - Android NDK audio callback


I'm developing an Android app that does real-time synthesis. I'm using the NDK to generate the waveform and Java for all of the UI. I have the following:

private Thread audioThread; 

@Override
protected void onCreate(Bundle savedInstanceState) {
    // UI Initializations here
    // Audio Thread creation:
    if (audioThread == null) {
        audioThread = new Thread() {
            public void run() {
                setPriority(Thread.MAX_PRIORITY);
                JNIWrapper.runProcess();
            }
        };
        audioThread.start();
    }
}

In my C++ file:
extern "C" void Java_com_rfoo_JNIWrapper_runProcess(JNIEnv *env, jobject obj) {
    OPENSL_STREAM *p = android_OpenAudioDevice(SAMPLE_RATE, 0, 2, FRAME_SIZE);
    double outBuffer[FRAME_SIZE];

    while (true) {
        // Audio processing code happens HERE
        // Write to output buffer here
        android_AudioOut(p, outBuffer, FRAME_SIZE);
    }
    android_CloseAudioDevice(p);  // never reached because of the loop above
}

This all works fine as long as I don't do heavy signal-processing work in my runProcess code. Because there is a lot of work going on, my UI latency is really high, and changing parameters of the signal-processing code (frequency, ADSR envelope, filter cutoff, etc.) produces clicks.

What are some ways to reduce this latency? On iOS and in PortAudio, there is usually an audio callback that gets invoked whenever an interval/buffer needs to be filled. I tried searching for a similar audio callback on Android but couldn't find one. Should I program my own timer to call my processing code?
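(For context, this is roughly the kind of pull-model callback I mean; a PortAudio sketch that just writes stereo silence, not code from my project:)

#include <portaudio.h>

// PortAudio-style pull callback: the library calls this whenever it needs
// another buffer. Here it only fills the interleaved stereo output with silence.
static int paCallback(const void *input, void *output,
                      unsigned long frameCount,
                      const PaStreamCallbackTimeInfo *timeInfo,
                      PaStreamCallbackFlags statusFlags,
                      void *userData) {
    (void) input; (void) timeInfo; (void) statusFlags; (void) userData;
    float *out = (float *) output;
    for (unsigned long i = 0; i < frameCount * 2; ++i) {
        out[i] = 0.0f;
    }
    return paContinue;  // keep the stream running
}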

Thanks!

Best answer

Yes, so I had set the callback up completely wrong... in fact, I hadn't set up a callback at all.

To fix this, I followed some tips I found online and created a processing callback:

// Define the callback type:
typedef void (*opensl_callback)(void *context, int buffer_frames,
                                int output_channels, short *output_buffer);

// Declare callback:
static opensl_callback myAudioCallback;

// Define custom callback:
static void audioCallback(void *context, int buffer_frames,
                          int output_channels, short *output_buffer) {
    // Get my object's data
    AudioData *data = (AudioData *) context;
    // Process here! Then, for each frame/channel:
    //   output_buffer[i] = final_sample;
}
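To make the "Process here!" part concrete, here is what the callback body could look like for a bare sine oscillator. This is only a sketch: the AudioData fields used here (phase, frequency, sampleRate) are assumptions for illustration, not my actual struct.

#include <math.h>

// Illustrative only: assumed AudioData layout for this sketch.
typedef struct {
    double phase;       // current oscillator phase, in radians
    double frequency;   // oscillator frequency, in Hz
    int sampleRate;     // stream sample rate
} AudioData;

static void audioCallback(void *context, int buffer_frames,
                          int output_channels, short *output_buffer) {
    AudioData *data = (AudioData *) context;
    const double twoPi = 6.283185307179586;
    double phaseInc = twoPi * data->frequency / data->sampleRate;

    for (int i = 0; i < buffer_frames; ++i) {
        short sample = (short) (sin(data->phase) * 32767.0);
        data->phase += phaseInc;
        if (data->phase >= twoPi) data->phase -= twoPi;

        // The buffer is interleaved: frame i occupies output_channels slots.
        for (int ch = 0; ch < output_channels; ++ch) {
            output_buffer[i * output_channels + ch] = sample;
        }
    }
}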

How I declare and initialize the OpenSL stream:
jboolean Java_com_rfoo_AudioProcessor_runProcess(JNIEnv *env, jobject obj,
                                                 jint srate, jint numFrames) {
    myAudioCallback = audioCallback;
    // audioData is my AudioData instance, allocated elsewhere.
    OPENSL_STREAM *p = opensl_openDevice(srate, 2, numFrames, myAudioCallback, audioData);
    // Check for successful initialization
    if (!p) return JNI_FALSE;
    // Start our process:
    opensl_startProcess(p);
    return JNI_TRUE;
}

Essentially, here is what opensl_openDevice() and opensl_startProcess() do:
OPENSL_STREAM *opensl_openDevice(int sampleRate, int outChans, int numFrames,
                                 opensl_callback cb, void *data) {
    if (!cb) {
        return NULL;
    }
    if (outChans == 0) {
        return NULL;
    }

    SLuint32 srmillihz = convertSampleRate(sampleRate);
    if (srmillihz == 0) {  // unsupported sample rate
        return NULL;
    }

    OPENSL_STREAM *p = (OPENSL_STREAM *) calloc(1, sizeof(OPENSL_STREAM));
    if (!p) {
        return NULL;
    }

    p->callback = cb;
    p->data = data;
    p->isRunning = 0;

    p->outputChannels = outChans;
    p->sampleRate = sampleRate;

    p->thresholdMillis = 750.0 * numFrames / sampleRate;

    p->outputBuffer = NULL;
    p->dummyBuffer = NULL;

    p->numFrames = numFrames;
    p->outputBufferFrames = OUTPUT_BUFFERS * numFrames;

    if (openSLCreateEngine(p) != SL_RESULT_SUCCESS) {
        opensl_close(p);
        return NULL;
    }

    if (outChans) {
        int outBufSize = p->outputBufferFrames * outChans;
        if (!(openSLPlayOpen(p, srmillihz) == SL_RESULT_SUCCESS &&
              (p->outputBuffer = (short *) calloc(outBufSize, sizeof(short))))) {
            opensl_close(p);
            return NULL;
        }
    }

    LOGI("OpenSL_Stream", "Created OPENSL_STREAM(%d, %d, %d)",
         sampleRate, outChans, numFrames);
    LOGI("OpenSL_Stream", "numBuffers: %d", OUTPUT_BUFFERS);
    return p;
}
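convertSampleRate() isn't shown above; a minimal sketch of what it might look like, mapping a rate in Hz to the OpenSL ES millihertz constants and returning 0 for rates it doesn't handle (which is what the check above expects):

#include <SLES/OpenSLES.h>

// Sketch: map a sample rate in Hz to the SL_SAMPLINGRATE_* constants
// (millihertz). Returns 0 for unsupported rates.
static SLuint32 convertSampleRate(int sampleRate) {
    switch (sampleRate) {
        case 8000:  return SL_SAMPLINGRATE_8;
        case 11025: return SL_SAMPLINGRATE_11_025;
        case 16000: return SL_SAMPLINGRATE_16;
        case 22050: return SL_SAMPLINGRATE_22_05;
        case 24000: return SL_SAMPLINGRATE_24;
        case 32000: return SL_SAMPLINGRATE_32;
        case 44100: return SL_SAMPLINGRATE_44_1;
        case 48000: return SL_SAMPLINGRATE_48;
        default:    return 0;  // unsupported
    }
}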

The stream-start code:
int opensl_startProcess(OPENSL_STREAM *p) {
    if (p->isRunning) {
        return 0;  // Already running.
    }

    p->outputIndex = 0;
    p->readIndex = -1;

    p->outputTime.tv_sec = 0;
    p->outputTime.tv_nsec = 0;
    p->outputIntervals = 0;
    p->previousOutputIndex = 0;
    p->outputOffset = 0;

    p->lowestMargin = p->outputBufferFrames;

    if (p->playerPlay) {
        LOGI("OpenSL_Stream", "Starting player queue.");
        int i;
        // Prime the buffer queue before starting playback.
        for (i = 0; i < OUTPUT_BUFFERS; ++i) {
            playerCallback(p->playerBufferQueue, p);
        }
        if ((*p->playerPlay)->SetPlayState(p->playerPlay,
                SL_PLAYSTATE_PLAYING) != SL_RESULT_SUCCESS) {
            opensl_pause(p);
            return -1;
        }
    }
    p->isRunning = 1;
    return 0;
}
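opensl_pause() (called above on failure) isn't shown here; presumably it just puts the OpenSL player into the paused state, roughly like this sketch (an assumption on my part):

// Sketch (assumption): stop playback by pausing the OpenSL player.
void opensl_pause(OPENSL_STREAM *p) {
    if (p->playerPlay) {
        (*p->playerPlay)->SetPlayState(p->playerPlay, SL_PLAYSTATE_PAUSED);
    }
    p->isRunning = 0;
}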

And my audio player callback:
static void playerCallback(SLAndroidSimpleBufferQueueItf bq, void *context) {
    OPENSL_STREAM *p = (OPENSL_STREAM *) context;

    short *currentOutputBuffer = p->outputBuffer +
        (p->outputIndex % p->outputBufferFrames) * p->outputChannels;

    memset(currentOutputBuffer, 0,
           p->numFrames * p->outputChannels * sizeof(short));

    // Pull the next block of audio from my processing callback.
    p->callback(p->data, p->numFrames, p->outputChannels, currentOutputBuffer);

    (*bq)->Enqueue(bq, currentOutputBuffer,
                   p->numFrames * p->outputChannels * sizeof(short));
    p->outputIndex = nextIndex(p->outputIndex, p->numFrames);
}

Once I've tidied it up, I'll link my opensl_stream code example on GitHub so that other newbies like me can easily find a working example. Cheers! :)

About android - Android NDK audio callback, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39029183/
