
ios - NewTimePitch with a mixer


My graph works very much like the sample app Apple provides:

https://developer.apple.com/library/ios/samplecode/MixerHost/Listings/Classes_MixerHostAudio_m.html#//apple_ref/doc/uid/DTS40010210-Classes_MixerHostAudio_m-DontLinkElementID_6

My mixerNode is fed by custom data (rather than the guitar/beat files), but the setup is similar. Both buses into the mixer are stereo.

I'm trying to time-shift the content, but so far without success. I tried adding a kAudioUnitSubType_NewTimePitch node to the graph, but wherever I add it the graph fails to initialize. Are there any source examples showing how to do time shifting together with a mixer unit (shifting all buses)?

Here is some working code:

// Describe audio component
AudioComponentDescription output_desc;
bzero(&output_desc, sizeof(output_desc));
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = self.componentSubType;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;


// multichannel mixer unit
AudioComponentDescription mixer_desc;
bzero(&mixer_desc, sizeof(mixer_desc));
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Describe NewTimePitch component
AudioComponentDescription speed_desc;
bzero(&speed_desc, sizeof(speed_desc));
speed_desc.componentType = kAudioUnitType_FormatConverter;
speed_desc.componentSubType = kAudioUnitSubType_NewTimePitch;
speed_desc.componentFlags = 0;
speed_desc.componentFlagsMask = 0;
speed_desc.componentManufacturer = kAudioUnitManufacturer_Apple;


result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
if (result) { printf("AUGraphNewNode 1 result %ld %4.4s\n", (long)result, (char*)&result); return; }

result = AUGraphAddNode(mGraph, &speed_desc, &timeNode );
if (result) { printf("AUGraphNewNode 2 result %ld %4.4s\n", (long)result, (char*)&result); return; }

result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode );
if (result) { printf("AUGraphNewNode 3 result %ld %4.4s\n", (long)result, (char*)&result); return; }

result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0);
if (result) { printf("AUGraphConnectNodeInput mixer-> time result %ld %4.4s\n", (long)result, (char*)&result); return; }

// open the graph; the AudioUnits are opened but not initialized (no resource allocation occurs here)

result = AUGraphOpen(mGraph);
if (result) { printf("AUGraphOpen result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
if (result) { printf("AUGraphNodeInfo mixer result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

result = AUGraphNodeInfo(mGraph, timeNode, NULL, &mTime);
if (result) { printf("AUGraphNodeInfo time result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

result = AUGraphNodeInfo(mGraph, outputNode, NULL, &mOutput);
if (result) { printf("AUGraphNodeInfo output result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }


UInt32 numbuses = 1;

result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, sizeof(numbuses));
if (result) { printf("AudioUnitSetProperty bus result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }


for (int i = 0; i < numbuses; ++i) {
// setup render callback struct
AURenderCallbackStruct rcbs;
rcbs.inputProc = &mixerInput;
rcbs.inputProcRefCon = (__bridge void *)(outputStream);

printf("set kAudioUnitProperty_SetRenderCallback for mixer input bus %d\n", i);

// Set a callback for the specified node's specified input
result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &rcbs);
// equivalent to AudioUnitSetProperty(mMixer, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, i, &rcbs, sizeof(rcbs));
if (result) { printf("AUGraphSetNodeInputCallback result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

// set input stream format to what we want
printf("set mixer input kAudioUnitProperty_StreamFormat for bus %d\n", i);

result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}

result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamInAudioFormat, sizeof(streamInAudioFormat));
if (result) { printf("AudioUnitSetProperty mixer result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

result = AudioUnitSetProperty(mOutput, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamInAudioFormat, sizeof(streamInAudioFormat));
if (result) { printf("AudioUnitSetProperty output result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

CAShow(mGraph);
// now that we've set everything up we can initialize the graph, this will also validate the connections
result = AUGraphInitialize(mGraph);
if (result) { printf("AUGraphInitialize result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

This code works: I have a mixer and I pump data into it through the callbacks. You can see that I create the time node, but wherever I insert it into the graph it kills the graph, and I can't set a stream format on it or anything else either.

Ideally I'd like to do something like this:
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, timeNode, 0);
result = AUGraphConnectNodeInput(mGraph, timeNode, 0, outputNode, 0);

But that doesn't work.

Here is the output of that setup:

AudioUnitGraph 0x385003:
  Member Nodes:
    node 1: 'auou' 'vpio' 'appl', instance 0x134f40b10 O
    node 2: 'aufc' 'nutp' 'appl', instance 0x134e733b0 O
    node 3: 'aumx' 'mcmx' 'appl', instance 0x134ea71d0 O
  Connections:
    node 3 bus 0 => node 2 bus 0 [2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
    node 2 bus 0 => node 1 bus 0 [1 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
  Input Callbacks:
    {0x100038ea0, 0x134f7f900} => node 3 bus 0 [2 ch, 44100 Hz]
  CurrentState:
    mLastUpdateError=0, eventsToProcess=F, isInitialized=F, isRunning=F
2016-01-07 23:21:32.230 R5ProTestbed[901:503908] 23:21:32.229 ERROR: [0x19ff25000] 2776: ConnectAudioUnit failed with error -10868
2016-01-07 23:21:32.230 R5ProTestbed[901:503908] 23:21:32.230 ERROR: [0x19ff25000] 1682: Initialize failed with error -10868

Best Answer

According to the CAShow output, your current graph is:
Mixer -> TimePitch -> VoiceProcess
(your output node is not in the graph)

You can't additionally connect the mixer output somewhere else.
In your code you have

result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, timeNode, 0);

so you can't also add
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0);

Having both of those lines confuses the graph; it doesn't know where you want the mixer's output to go.

Likewise, if you connect the mixer output to the output node
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0);

then you can't also connect the time node to the outputNode
result = AUGraphConnectNodeInput(mGraph, timeNode, 0, outputNode, 0);

because the output node would then have two sources feeding one input, and it can only have one, so having both confuses it.
It's as if you were trying to make a "Y" connection, which you can't do as far as connections go.

An output can only drive one input, so those two lines conflict with each other. Figure out where you want each unit in the chain, then connect each output to exactly one input.
Then set the render callback on the first node in the chain.
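
As a minimal illustration of that last point (reusing the variable names from the question's code, mGraph, mixerNode, mixerInput, outputStream and result, so treat it as a sketch rather than drop-in code), the render callback attaches only to the mixer, which is the head of the chain:

// The render callback feeds the FIRST node in the chain (the mixer).
// Downstream nodes (time pitch, output) are fed through graph connections, not callbacks.
AURenderCallbackStruct rcbs;
rcbs.inputProc = &mixerInput;
rcbs.inputProcRefCon = (__bridge void *)(outputStream);

result = AUGraphSetNodeInputCallback(mGraph, mixerNode, 0, &rcbs);
if (result) { printf("AUGraphSetNodeInputCallback result %ld %4.4s\n", (long)result, (char*)&result); return; }
// Do NOT set render callbacks on timeNode or outputNode.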

From your comment, "I am trying to do mixer -> newtimepitch -> io":
You need to make the three nodes,
  • connect the mixer output to the time pitch
  • connect the time pitch to the RemoteIO

  • You need 3 nodes and two AUGraphConnectNodeInput() calls.
    Attach the render callback to the mixer. Like this:
    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, timeNode, 0);
    result = AUGraphConnectNodeInput(mGraph, timeNode, 0, outputNode, 0);

    And, as you said, make sure the other node connections are removed from your code. I can't tell whether you removed the old connections or left them in and just added more; a sketch of the full rewiring follows below.
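
    Putting the answer together with the question's setup code, a minimal sketch of the corrected wiring might look like the following. It reuses the question's variables (mGraph, mixerNode, timeNode, outputNode, mTime, streamInAudioFormat); the stream-format call on the time pitch unit is my assumption, not part of the accepted answer, prompted by -10868 being kAudioUnitErr_FormatNotSupported.

    // Linear chain: mixer (bus 0) -> NewTimePitch (bus 0) -> output (bus 0).
    // The old AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0) call must be removed,
    // since each output can feed exactly one input.
    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, timeNode, 0);
    if (result) { printf("connect mixer -> time result %ld %4.4s\n", (long)result, (char*)&result); return; }

    result = AUGraphConnectNodeInput(mGraph, timeNode, 0, outputNode, 0);
    if (result) { printf("connect time -> output result %ld %4.4s\n", (long)result, (char*)&result); return; }

    // ASSUMPTION (not from the accepted answer): -10868 is kAudioUnitErr_FormatNotSupported,
    // so the time pitch unit may also need an explicit input format matching the mixer's output.
    result = AudioUnitSetProperty(mTime, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamInAudioFormat, sizeof(streamInAudioFormat));
    if (result) { printf("set time input format result %ld %4.4s\n", (long)result, (char*)&result); return; }

    CAShow(mGraph); // should now print a single chain: node 3 -> node 2 -> node 1
    result = AUGraphInitialize(mGraph);
    if (result) { printf("AUGraphInitialize result %ld %4.4s\n", (long)result, (char*)&result); return; }

    If the explicit format is rejected, drop that call and let AUGraphInitialize propagate formats along the connections; the essential point of the answer is the single linear chain with no duplicate connections.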

    For ios - NewTimePitch with a mixer, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/34668685/
