
iOS - Generating and playing an indefinite, simple audio (sine wave)


I'm looking to build a very simple iOS application with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it's going to check my model (an instance variable for the volume) throughout its playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; however, when it comes to having something continue indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!

Thanks for reading.

Best Answer

Here's a bare-bones application that will play a generated frequency on demand. You didn't specify whether to do iOS or OSX, so I went with OSX since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to figure out the missing bits by looking into the Audio Session category basics and swapping the default output audio unit for the RemoteIO audio unit.
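If you do end up on iOS, here is a minimal sketch of the parts that change relative to the OSX code below. It assumes you use AVAudioSession for the session setup; only the component subtype differs from the code shown later, so treat this as a starting point rather than a drop-in replacement:

#import <AVFoundation/AVFoundation.h>

// Activate an audio session before starting the output unit (iOS only).
NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
[[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

// Same component description as in the code below, but asking for the
// RemoteIO unit instead of the default output unit.
AudioComponentDescription outputUnitDescription = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};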

Note that this is purely meant to demonstrate some Core Audio / Audio Unit basics. If you want to start getting more complex than this, you should look into the AUGraph API. (Also, just to keep the example clean, I'm not doing any error checking. Always do error checking when dealing with Core Audio.)
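For what that error checking might look like, a small sketch of a check macro in the style many Core Audio examples use (CheckError is a hypothetical name, and abort() is just a placeholder for real handling):

#define CheckError(expr, operation)                                    \
    do {                                                               \
        OSStatus __err = (expr);                                       \
        if (__err != noErr) {                                          \
            NSLog(@"%s failed (OSStatus %d)", operation, (int)__err);  \
            abort();                                                   \
        }                                                              \
    } while (0)

// Usage: CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize");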

You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.

#import <Cocoa/Cocoa.h>            // for NSApplicationDelegate
#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double    renderPhase;
}
@end

@implementation SWAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // First, we need to establish which Audio Unit we want.

    // We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    // Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

    // Now we can create an instance of that component, which will create an
    // instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

    // Next we'll tell the output unit what format our generated audio will
    // be in. Generally speaking, you'll want to stick to sane formats, since
    // the output unit won't accept every single possible stream format.
    // Here, we're specifying floating point samples with a sample rate of
    // 44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

    // Next step is to tell our output unit which function we'd like it
    // to call to get audio samples. We'll also pass in a context pointer,
    // which can be a pointer to anything you need to maintain state between
    // render callbacks. We only need to point to a double which represents
    // the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

    // Here we're telling the output unit to start requesting audio samples
    // from our render callback. This is the line of code that starts actually
    // sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}

// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}

- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}

@end

You can call AudioOutputUnitStart() and AudioOutputUnitStop() as you like to start/stop producing audio. If you want to change the frequency dynamically, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
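For instance, a minimal sketch of that idea, using a hypothetical SineWaveState struct whose address you would pass as inputProcRefCon instead of &renderPhase:

// Hypothetical state shared between the main thread and the render callback.
typedef struct {
    double phase;      // current phase of the sine wave
    double frequency;  // frequency in Hz, updated from the main thread
} SineWaveState;

// Inside the render callback, derive the phase step from the struct instead
// of hard-coding 440 Hz:
SineWaveState *state = (SineWaveState *)inRefCon;
const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);
for(UInt32 i = 0; i < inNumberFrames; i++) {
    outputBuffer[i] = sin(state->phase);
    state->phase += phaseStep;
}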

Be careful in your render callback. It is called from a realtime thread (not from the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means there are many things you should not do in your callback, for example (a lock-free way of feeding your volume value into the callback is sketched after this list):

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (yes, seriously.)
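
Which circles back to the volume instance variable in your question: rather than messaging your model object from the callback, one lock-free option is to mirror the volume into a plain C atomic that the callback reads. A minimal sketch, assuming C11 <stdatomic.h> and extending the hypothetical state struct from above:

#include <stdatomic.h>

typedef struct {
    double         phase;
    double         frequency;
    _Atomic double volume;   // written by the main thread, read by the callback
} SineWaveState;

// Main thread, e.g. whenever your model's volume changes (model is hypothetical):
//   atomic_store(&state.volume, self.model.volume);

// Render callback, per sample:
//   double volume = atomic_load(&state->volume);
//   outputBuffer[i] = volume * sin(state->phase);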

Note that this isn't the only way to do it. I only demonstrated it this way since you tagged this core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a premade sound file containing your sine wave.
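A minimal sketch of that route, assuming a hypothetical sine440.caf bundled with the app (any format AVAudioPlayer understands will do):

#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine440" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
// Keep a strong reference to the player (e.g. in a property) so it isn't deallocated.
player.numberOfLoops = -1;   // loop indefinitely until told to stop
player.volume = 0.5;         // can be updated from your model at any time
[player play];               // ...and [player stop] when the button is tapped again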

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio example I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).

Regarding iOS - generating and playing an indefinite, simple audio (sine wave), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14466371/
