
iphone - RemoteIO audio problem - simulator = good - device = bad

Reposted · Author: 行者123 · Updated: 2023-12-03 20:29:03

OK, so I'm using Core Audio to pull audio from 10 different sample sources and then mix them together in my callback function.

It runs perfectly in the simulator, and everything is fine. However, when I try to run it on a 4.2 iPhone device, I run into trouble.

If I mix 2 audio files in the callback, everything works. If I mix 5 or 6 audio files, the audio plays, but after a short while it degrades and eventually no audio reaches the speaker (the callback keeps running).

If I try to mix 10 audio files, the callback runs but no audio comes out at all.

It's almost as if the callback is timing out, which might explain the 5-or-6-source case, but not the last case, where mixing 10 audio sources plays no audio at all.

I'm not sure whether the following is relevant, but this message is always printed to the console when I debug. Could it point to the problem?

mem 0x1000 0x3fffffff cache
mem 0x40000000 0xffffffff none
mem 0x00000000 0x0fff none
run
Running…
[Switching to thread 11523]
[Switching to thread 11523]
Re-enabling shared library breakpoint 1
continue
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/usr/lib/info/dns.so (file not found).

**Setting up my callback**

#pragma mark -
#pragma mark Callback setup & control

- (void)setupCallback
{
    OSStatus status;

    // Describe the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get the component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get the audio unit
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    // Enable IO for playback
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));

    // Apply the stream format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &stereoStreamFormat,
                                  sizeof(stereoStreamFormat));

    // Set up the playback callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback; //!!****assignment from incompatible pointer warning here *****!!!!!!
    // Set the reference to "self"; this becomes *inRefCon in the playback callback
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit); // error check this status
}

**The callback**

static OSStatus playbackCallback (
    void                        *inRefCon,       // A pointer to a struct containing the complete audio data
                                                 // to play, as well as state information such as the
                                                 // first sample to play on this invocation of the callback.
    AudioUnitRenderActionFlags  *ioActionFlags,  // Unused here. When generating audio, use ioActionFlags to
                                                 // indicate silence between sounds; for silence, also memset
                                                 // the ioData buffers to 0.
    AudioTimeStamp              *inTimeStamp,    // Unused here.
    UInt32                      inBusNumber,     // The mixer unit input bus that is requesting some new
                                                 // frames of audio data to play.
    UInt32                      inNumberFrames,  // The number of frames of audio to provide to the buffer(s)
                                                 // pointed to by the ioData parameter.
    AudioBufferList             *ioData          // On output, the audio data to play. The callback's primary
                                                 // responsibility is to fill the buffer(s) in the
                                                 // AudioBufferList.
) {
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft  = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outSamplesChannelRight = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    int thetime = remoteIOplayer.sampletime;

    for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        // getNonInterleavedSample returns a 32 bit value, one frame.
        AudioUnitSampleType suml = 0;
        AudioUnitSampleType sumr = 0;

        //NSLog (@"frame number - %i", frameNumber);
        for (int j = 0; j < 10; j++)
        {
            AudioUnitSampleType valuetoaddl = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:0];
            AudioUnitSampleType valuetoaddr = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:1];

            suml += valuetoaddl / 10;
            sumr += valuetoaddr / 10;
        }

        outSamplesChannelLeft[frameNumber]  = suml;
        outSamplesChannelRight[frameNumber] = sumr;

        remoteIOplayer.sampletime += 1;
    }

    return noErr;
}

**My audio fetching function**

- (AudioUnitSampleType)getNonInterleavedSample:(int)index currenttime:(int)time channel:(int)ch
{
    AudioUnitSampleType returnvalue = 0;

    soundStruct snd = soundStructArray[index];
    UInt64 sn = snd.frameCount;
    UInt64 st = sampletime;
    UInt64 read = (UInt64)(st % sn);

    if (ch == 0)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataLeft[read];
        } else {
            returnvalue = 0;
        }
    }
    else if (ch == 1)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataRight[read];
        } else {
            returnvalue = 0;
        }

        soundStructArray[index].sampleNumber = read;
    }

    if (soundStructArray[index].sampleNumber > soundStructArray[index].frameCount)
    {
        soundStructArray[index].sampleNumber = 0;
    }

    return returnvalue;
}

**Edit 1**

In response to @andre, I changed the callback to the following, but it still didn't help.

static OSStatus playbackCallback (
    void                        *inRefCon,       // Parameters are the same as in the version above.
    AudioUnitRenderActionFlags  *ioActionFlags,
    AudioTimeStamp              *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
) {
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft  = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outSamplesChannelRight = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    int thetime = remoteIOplayer.sampletime;

    for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        AudioUnitSampleType suml = 0;
        AudioUnitSampleType sumr = 0;

        //NSLog (@"frame number - %i", frameNumber);
        for (int j = 0; j < 16; j++)
        {
            soundStruct snd = remoteIOplayer->soundStructArray[j];
            UInt64 sn = snd.frameCount;
            UInt64 st = remoteIOplayer.sampletime;
            UInt64 read = (UInt64)(st % sn);

            suml += snd.audioDataLeft[read];
            sumr += snd.audioDataRight[read];
        }

        outSamplesChannelLeft[frameNumber]  = suml;
        outSamplesChannelRight[frameNumber] = sumr;

        remoteIOplayer.sampletime += 1;
    }

    return noErr;
}

**Best Answer**

  1. As Andre said, it's best not to make any Objective-C method calls inside the callback. You should also change `inputProcRefCon` to a C struct rather than an Objective-C object.

  2. It also looks like you may be copying data into the buffers "manually", frame by frame. Instead, use `memcpy` to copy a large block of data at once.

  3. Also, I'm fairly sure you aren't doing disk I/O in the callback, but if you are, you shouldn't do that either.

Regarding iphone - RemoteIO audio problem - simulator = good - device = bad, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/4325248/
