
ios - Playing audio from an AVAudioPCMBuffer with AVAudioEngine


I have two classes, MicrophoneHandler and AudioPlayer. I have managed to tap the microphone data with AVCaptureSession using the approved answer here, and to convert the CMSampleBuffer to NSData with this function:

func sendDataToDelegate(buffer: CMSampleBuffer!)
{
    let block = CMSampleBufferGetDataBuffer(buffer)
    var length = 0
    var data: UnsafeMutablePointer<Int8> = nil

    var status = CMBlockBufferGetDataPointer(block!, 0, nil, &length, &data) // TODO: check for errors

    let result = NSData(bytesNoCopy: data, length: length, freeWhenDone: false)

    self.delegate.handleBuffer(result)
}
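
Worth noting as an aside (my observation, not from the original question): because the NSData is created with bytesNoCopy and freeWhenDone: false, it merely wraps memory owned by the CMBlockBuffer and is only valid while that sample buffer stays alive. If the delegate retains the data, a copying variant is a safer sketch:

    // Safer alternative (an assumption, not in the original code): copy the
    // bytes so the NSData owns its storage independently of the CMSampleBuffer.
    let result = NSData(bytes: data, length: length)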

I now want to play the audio through the speaker by converting the NSData produced above to an AVAudioPCMBuffer and playing it with AVAudioEngine. My AudioPlayer class is as follows:

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!

override init()
{
    super.init()

    self.setup()
    self.start()
}

func handleBuffer(data: NSData)
{
    let newBuffer = self.toPCMBuffer(data)
    print(newBuffer)

    self.playerNode.scheduleBuffer(newBuffer, completionHandler: nil)
}

func setup()
{
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()

    self.engine.attachNode(self.playerNode)
    self.mixer = engine.mainMixerNode

    engine.connect(self.playerNode, to: self.mixer, format: self.mixer.outputFormatForBus(0))
}

func start()
{
    do {
        try self.engine.start()
    }
    catch {
        print("error couldn't start engine")
    }

    self.playerNode.play()
}

func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer
{
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 2, interleaved: false) // given NSData audio format
    let PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.memory.mBytesPerFrame)

    PCMBuffer.frameLength = PCMBuffer.frameCapacity

    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))

    data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)

    return PCMBuffer
}

When self.delegate.handleBuffer(result) is called in the first snippet above, the buffer reaches the handleBuffer:buffer function.

I am able to print(newBuffer) and see the memory location of the converted buffer, but nothing comes out of the speaker. I can only imagine the conversions to and from NSData are inconsistent somewhere. Any ideas? Thanks in advance.
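
For reference, the hard-coded format in toPCMBuffer (8 kHz, 2-channel, non-interleaved Float32) only matches the raw bytes if the capture side actually delivers exactly that format. A minimal diagnostic sketch in the same Swift 2-era style as the snippets above (the function name logFormat is illustrative, not from the original question):

import CoreMedia

// Illustrative check: print the actual format of the incoming CMSampleBuffer.
// If it differs from the AVAudioFormat hard-coded in toPCMBuffer, the copied
// bytes will be misinterpreted and playback will be silent or garbled.
func logFormat(buffer: CMSampleBuffer)
{
    guard let description = CMSampleBufferGetFormatDescription(buffer) else { return }

    let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description)
    if asbd != nil {
        print("rate: \(asbd.memory.mSampleRate), channels: \(asbd.memory.mChannelsPerFrame), bits: \(asbd.memory.mBitsPerChannel)")
    }
}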

Best Answer

Skip the raw NSData format

Why not use AVAudioPlayer all the way? If you positively need NSData, you can always load such data from the soundURL below. In this example, the disk buffer is something like:

let soundURL = documentDirectory.URLByAppendingPathComponent("sound.m4a")

In any case, recording directly to a file makes sense for memory and resource management. You can get NSData from your recording this way:

let data = NSFileManager.defaultManager().contentsAtPath(soundURL.path!)

The code below is all you need:

Record

if !audioRecorder.recording {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(true)
        audioRecorder.record()
    } catch {}
}

Play

if (!audioRecorder.recording){
    do {
        try audioPlayer = AVAudioPlayer(contentsOfURL: audioRecorder.url)
        audioPlayer.play()
    } catch {}
}

Setup

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try audioRecorder = AVAudioRecorder(URL: self.directoryURL()!,
                                        settings: recordSettings)
    audioRecorder.prepareToRecord()
} catch {}

Settings

let recordSettings = [AVSampleRateKey : NSNumber(float: Float(44100.0)),
                      AVFormatIDKey : NSNumber(int: Int32(kAudioFormatMPEG4AAC)),
                      AVNumberOfChannelsKey : NSNumber(int: 1),
                      AVEncoderAudioQualityKey : NSNumber(int: Int32(AVAudioQuality.Medium.rawValue))]
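
The setup snippet above calls a directoryURL() helper that the answer never shows. A minimal sketch of what it might look like, assuming it should return the same sound.m4a location used earlier (the implementation is my assumption, not part of the original answer):

// Assumed helper (not shown in the original answer): a file URL for the
// recording inside the app's documents directory.
func directoryURL() -> NSURL?
{
    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)

    guard let documentDirectory = urls.first else { return nil }

    return documentDirectory.URLByAppendingPathComponent("sound.m4a")
}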

Download the Xcode project:

You can find this example here. Download the complete project, which records and plays back on both the simulator and a device, from Swift Recipes.

Regarding ios - Playing audio from an AVAudioPCMBuffer with AVAudioEngine, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33906649/
