
ios - Calling MusicDeviceMIDIEvent from the audio unit's render thread


There's one thing about MusicDeviceMIDIEvent I don't understand. In every example I've come across (I've searched GitHub and Apple's sample code), it is always used on the main thread. Now, in order to use the sample-offset parameter, the documentation states:

inOffsetSampleFrame: If you are scheduling the MIDI Event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule to the sample, the time when a MIDI command is applied and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0
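So for a call made anywhere other than the render thread, the last argument is simply 0. A minimal sketch, assuming an already-initialized samplerUnit (the variable name is illustrative; 0x90, decimal 144, is the MIDI note-on status byte):

// Called from the main thread, so the sample offset must be 0:
// status 0x90 (note-on), note 60 (middle C), velocity 100, offset 0.
MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 100, 0);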



That said, how is one supposed to schedule a MIDI event from the audio unit's render thread when, even in the simplest case, say a sampler audio unit connected to an io unit, the sampler doesn't allow render callbacks? And even if it did (or if you tapped in via the io unit's render callback), wouldn't that be hacky, since a render callback isn't intended to be used for scheduling MIDI events?

How does one properly call this function from the audio unit's render thread?

Best Answer

A renderNotify callback is the perfect place to do scheduling from the render thread. You can even set the renderNotify on the MusicDevice itself. Here's what it might look like on an AUSampler.

// Pass the sampler itself as the refCon so the callback can reach it.
OSStatus status = AudioUnitAddRenderNotify(sampler, renderNotify, sampler);
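When the unit is torn down, the callback can be removed again with the matching call, passing the same proc and refCon that were registered:

AudioUnitRemoveRenderNotify(sampler, renderNotify, sampler);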

In this example I pass the sampler itself in as the reference via the inRefCon argument, and I just send a note-on (144) for note 64 every 44100 samples, but in an app you would pass a struct through inRefCon containing your MusicDevice plus whatever values you need to do the scheduling (see the sketch after the callback). Note the check of the render flag for pre-render.
#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

static OSStatus renderNotify(void                       * inRefCon,
                             AudioUnitRenderActionFlags * ioActionFlags,
                             const AudioTimeStamp       * inTimeStamp,
                             UInt32                       inBusNumber,
                             UInt32                       inNumberFrames,
                             AudioBufferList            * ioData) {

    // The sampler was passed as the refCon when the notify was registered.
    AudioUnit sampler = (AudioUnit)inRefCon;

    // Schedule only on the pre-render pass. Note that ioActionFlags is a
    // pointer, so it must be dereferenced before testing the flag.
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            // Fire once every 44100 samples, i.e. once per second at 44.1 kHz.
            if (fmod(inTimeStamp->mSampleTime + i, 44100) == 0) {
                // i is the offset from the start of this render cycle,
                // so it is used as the sample-offset argument.
                MusicDeviceMIDIEvent(sampler, 144, 64, 127, i);
            }
        }
    }

    return noErr;
}
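As noted above, in a real app the refCon would be a small struct rather than the bare unit. A minimal sketch of that idea, with purely hypothetical names and fields (nothing here comes from the answer's code):

// Hypothetical refCon type. Everything in it must be safe to touch from
// the render thread: plain C data, no locks, no Objective-C objects.
typedef struct {
    AudioUnit sampler;      // the MusicDevice that receives the events
    Float64   sampleRate;   // for converting times into sample offsets
    // ... e.g. a lock-free FIFO of pending MIDI events to schedule
} MIDIScheduler;

// Registration then passes a pointer to the struct instead of the unit:
// AudioUnitAddRenderNotify(sampler, renderNotify, &scheduler);
// and inside renderNotify the cast becomes:
// MIDIScheduler *scheduler = (MIDIScheduler *)inRefCon;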

Regarding ios - Calling MusicDeviceMIDIEvent from the audio unit's render thread, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46868416/
