
macos - How can I exclude input or output channels from an aggregate CoreAudio device?


I have a CoreAudio-based MacOS/X program that lets the user select an input audio device and an output audio device, and (if the user doesn't choose the same device for both input and output) my program creates a private aggregate audio device and uses it to receive audio, process it, and then send it out for playback.

This all works well, but there is one minor problem: if the selected input device also has some outputs associated with its hardware, those outputs show up as part of the aggregate device's output channels, which is not the behavior I want. Similarly, if the selected output device also has some inputs associated with its hardware, those inputs show up as input channels of the aggregate device, which I also don't want.

My question is, is there any way to tell CoreAudio not to include a sub-device's inputs or outputs in the aggregate device I'm constructing? (My fallback solution would be to modify my audio render callback to ignore the unwanted audio channels, but that seems less than elegant, so I'm curious whether there is a better way to handle it.)

My function for creating the aggregate device is below, just in case it's relevant:

// This code was adapted from the example code at :  https://web.archive.org/web/20140716012404/http://daveaddey.com/?p=51
ConstCoreAudioDeviceRef CoreAudioDevice::CreateAggregateDevice(const ConstCoreAudioDeviceInfoRef & inputCadi, const ConstCoreAudioDeviceInfoRef & outputCadi, bool require96kHz, int32 optRequiredBufferSizeFrames)
{
    OSStatus osErr = noErr;
    UInt32 outSize;
    Boolean outWritable;

    //-----------------------
    // Start to create a new aggregate by getting the base audio hardware plugin
    //-----------------------

    osErr = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyPlugInForBundleID, &outSize, &outWritable);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    AudioValueTranslation pluginAVT;

    CFStringRef inBundleRef = CFSTR("com.apple.audio.CoreAudio");
    AudioObjectID pluginID;

    pluginAVT.mInputData = &inBundleRef;
    pluginAVT.mInputDataSize = sizeof(inBundleRef);
    pluginAVT.mOutputData = &pluginID;
    pluginAVT.mOutputDataSize = sizeof(pluginID);

    osErr = AudioHardwareGetProperty(kAudioHardwarePropertyPlugInForBundleID, &outSize, &pluginAVT);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Create a CFDictionary for our aggregate device
    //-----------------------

    CFMutableDictionaryRef aggDeviceDict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CFStringRef aggregateDeviceNameRef = CFSTR("My Aggregate Device");
    CFStringRef aggregateDeviceUIDRef = CFSTR("com.mycomapany.myaggregatedevice");

    // add the name of the device to the dictionary
    CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceNameKey), aggregateDeviceNameRef);

    // add our choice of UID for the aggregate device to the dictionary
    CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceUIDKey), aggregateDeviceUIDRef);

    if (IsDebugFlagEnabled("public_cad_device") == false)
    {
        // make it private so that we don't have the user messing with it
        int value = 1;
        CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceIsPrivateKey), CFNumberCreate(NULL, kCFNumberIntType, &value));
    }

    //-----------------------
    // Create a CFMutableArray for our sub-device list
    //-----------------------

    // we need to append the UID for each device to a CFMutableArray, so create one here
    CFMutableArrayRef subDevicesArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);

    // add the sub-devices to our aggregate device
    const CFStringRef inputDeviceUID = inputCadi()->GetPersistentUID().ToCFStringRef();
    const CFStringRef outputDeviceUID = outputCadi()->GetPersistentUID().ToCFStringRef();
    CFArrayAppendValue(subDevicesArray, inputDeviceUID);
    CFArrayAppendValue(subDevicesArray, outputDeviceUID);

    //-----------------------
    // Feed the dictionary to the plugin, to create a blank aggregate device
    //-----------------------

    AudioObjectPropertyAddress pluginAOPA;
    pluginAOPA.mSelector = kAudioPlugInCreateAggregateDevice;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
    UInt32 outDataSize;

    osErr = AudioObjectGetPropertyDataSize(pluginID, &pluginAOPA, 0, NULL, &outDataSize);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    AudioDeviceID outAggregateDevice;
    osErr = AudioObjectGetPropertyData(pluginID, &pluginAOPA, sizeof(aggDeviceDict), &aggDeviceDict, &outDataSize, &outAggregateDevice);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Set the sub-device list
    //-----------------------

    pluginAOPA.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
    outDataSize = sizeof(CFMutableArrayRef);
    osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &subDevicesArray);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Set the master device
    //-----------------------

    // set the master device manually (this is the device which will act as the master clock for the aggregate device)
    // pass in the UID of the device you want to use
    pluginAOPA.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;

    outDataSize = sizeof(outputDeviceUID);
    osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &outputDeviceUID);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Clean up
    //-----------------------

    // release the CF objects we have created - we don't need them any more
    CFRelease(aggDeviceDict);
    CFRelease(subDevicesArray);

    // release the device UID CFStringRefs
    CFRelease(inputDeviceUID);
    CFRelease(outputDeviceUID);

    ConstCoreAudioDeviceInfoRef infoRef = CoreAudioDeviceInfo::GetAudioDeviceInfo(outAggregateDevice);
    if (infoRef())
    {
        ConstCoreAudioDeviceRef ret(new CoreAudioDevice(infoRef, true));
        return ((ret())&&(SetupSimpleCoreAudioDeviceAux(ret()->GetDeviceInfo(), require96kHz, optRequiredBufferSizeFrames, false).IsOK())) ? ret : ConstCoreAudioDeviceRef();
    }
    else return ConstCoreAudioDeviceRef();
}

Best Answer

There are ways to handle channel mapping (which is essentially what you are describing), but I doubt it would qualify as a "better" approach in your case.
That kind of functionality is covered in the AudioToolbox framework through the use of Audio Units. In particular, the kAudioUnitSubType_HALOutput AudioUnit (AUComponent.h) is of interest here.
Using an AudioUnit of that type, you can send and receive audio to and from a specific audio device in a specified channel format. When the desired channel layout doesn't match the device's channel layout, you can do channel mapping.
For some technical details, have a look at:
https://developer.apple.com/library/archive/technotes/tn2091/_index.html
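To illustrate the idea, here is a minimal, hypothetical sketch (not code from the answer) of opening an AUHAL unit for an input device and installing a channel map. The helper name, the device ID parameter, and the two-channel map are assumptions for illustration only; consult TN2091 for the exact scope/element conventions, which differ between the input and output sides of the unit.

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical sketch: bind an AUHAL unit to a given input device and install a
// channel map so only the device channels we care about are delivered.
// 'inputDeviceID' and the map contents are placeholders, not values from the question.
static OSStatus SetupInputUnitWithChannelMap(AudioDeviceID inputDeviceID)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_HALOutput;   // the AUHAL unit
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    if (comp == NULL) return -1;

    AudioUnit auhal;
    OSStatus err = AudioComponentInstanceNew(comp, &auhal);
    if (err != noErr) return err;

    // Enable input (element 1) and disable output (element 0) on this unit
    UInt32 enable = 1, disable = 0;
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input,  1, &enable,  sizeof(enable));
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &disable, sizeof(disable));

    // Bind the unit to the chosen hardware device
    err = AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0,
                               &inputDeviceID, sizeof(inputDeviceID));
    if (err != noErr) return err;

    // Example map: expose two client channels, fed from device channels 0 and 1
    // (-1 would mean "no device channel / silence"). See TN2091 for the exact
    // meaning of the entries and for which scope/element applies to input vs. output.
    SInt32 channelMap[2] = { 0, 1 };
    err = AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_ChannelMap,
                               kAudioUnitScope_Output, 1,   // input side of the AUHAL
                               channelMap, sizeof(channelMap));
    if (err != noErr) return err;

    return AudioUnitInitialize(auhal);
}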
Note that a lot of the AudioToolbox functionality is in the process of being replaced by AVAudioEngine.
So, in your case I think it would be easier to do manual channel mapping by simply ignoring the unwanted samples.
Also, I'm not sure whether CoreAudio hands you pre-silenced output buffers, so be sure to consider silencing the unused channels yourself.
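As a concrete illustration of that fallback, here is a small hypothetical sketch that silences unwanted output channels in an AudioBufferList. It assumes non-interleaved buffers (one channel per AudioBuffer) and a caller-supplied "wanted" mask, neither of which comes from the question.

#include <CoreAudio/CoreAudio.h>
#include <string.h>

// Hypothetical sketch: zero out the output channels we don't want to drive.
// Assumes non-interleaved buffers; an interleaved layout would need
// per-frame, per-channel addressing instead.
static void SilenceUnwantedOutputChannels(AudioBufferList * outOutputData,
                                          const bool * channelIsWanted /* one flag per buffer */)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; i++)
    {
        if (channelIsWanted[i] == false)
        {
            AudioBuffer * buf = &outOutputData->mBuffers[i];
            memset(buf->mData, 0, buf->mDataByteSize);
        }
    }
}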
EDIT
Looking at the documentation in AudioHardware.h, there appears to be a way to enable and disable particular streams for a given IOProc.
When OS X creates an aggregate, it puts all the channels of the different sub-devices in separate streams, so in your case you should be able to disable the stream containing the output device's inputs and, vice versa, disable the stream containing the input device's outputs.
To do that, have a look at AudioHardwareIOProcStreamUsage and kAudioDevicePropertyIOProcStreamUsage, both in AudioHardware.h.
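A rough sketch of how that could look, assuming an aggregate AudioDeviceID and a registered AudioDeviceIOProcID are already on hand. Which stream index gets disabled is a placeholder, since the real stream layout has to be inspected first (HALLab, mentioned below, helps with that).

#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

// Hypothetical sketch: turn off one input-side stream of an aggregate device for a
// particular IOProc, using kAudioDevicePropertyIOProcStreamUsage.
// 'aggregateDevice', 'ioProcID' and 'streamIndexToDisable' are assumed inputs.
static OSStatus DisableInputStream(AudioObjectID aggregateDevice,
                                   AudioDeviceIOProcID ioProcID,
                                   UInt32 streamIndexToDisable)
{
    AudioObjectPropertyAddress addr;
    addr.mSelector = kAudioDevicePropertyIOProcStreamUsage;
    addr.mScope    = kAudioDevicePropertyScopeInput;   // use ...ScopeOutput for the output side
    addr.mElement  = kAudioObjectPropertyElementMaster;

    UInt32 size = 0;
    OSStatus err = AudioObjectGetPropertyDataSize(aggregateDevice, &addr, 0, NULL, &size);
    if (err != noErr) return err;

    AudioHardwareIOProcStreamUsage * usage = (AudioHardwareIOProcStreamUsage *) malloc(size);
    usage->mIOProc = (void *) ioProcID;   // the property is queried per-IOProc

    err = AudioObjectGetPropertyData(aggregateDevice, &addr, 0, NULL, &size, usage);
    if (err == noErr)
    {
        if (streamIndexToDisable < usage->mNumberStreams)
            usage->mStreamIsOn[streamIndexToDisable] = 0;   // 0 = exclude this stream from the IOProc

        err = AudioObjectSetPropertyData(aggregateDevice, &addr, 0, NULL, size, usage);
    }

    free(usage);
    return err;
}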
I found Apple's HALLab utility quite useful for figuring out the actual streams.
(https://developer.apple.com/download/more/ and search for "Audio Tools for Xcode")

Regarding "macos - How can I exclude input or output channels from an aggregate CoreAudio device?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60445512/
