
ios - swift 3 : Using AVCaptureAudioDataOutput to analyze audio input

Reposted. Author: 塔克拉玛干. Updated: 2023-11-02 20:34:17

I am trying to use AVCaptureAudioDataOutput to analyze audio input, as described here. This is not something I could have come up with on my own, so I am copying the example, but I am having difficulty.

Xcode in Swift 3 prompted me to make several changes. I get a compile error on the line that assigns samples. Xcode says, "Cannot invoke initializer for type 'UnsafeMutablePointer<_>' with an argument list of type '(UnsafeMutableRawPointer?)'".

Here is my modified code:

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        var buffer: CMBlockBuffer? = nil
        var audioBufferList = AudioBufferList(mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sampleBuffer,
            nil,
            &audioBufferList,
            MemoryLayout<AudioBufferList>.size, // changed for Swift 3
            nil,
            nil,
            UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
            &buffer
        )
        let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
        var sum: Int64 = 0
        var count: Int = 0
        var bufs: Int = 0
        for buf in abl {
            let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(buf.mData), // Error here
                                                            count: Int(buf.mDataByteSize) / sizeof(Int16))
            for sample in samples {
                let s = Int64(sample)
                sum = (sum + s * s)
                count += 1
            }
            bufs += 1
        }
        print("found \(count) samples in \(bufs) buffers, sum is \(sum)")
    }

Can anyone tell me how to fix this code?

Best Answer

The answer was that I needed to wrap buf.mData in an OpaquePointer. That is, in the call to UnsafeMutableBufferPointer<Int16>, change

start: UnsafeMutablePointer(buff.mData)

to

start: UnsafeMutablePointer(OpaquePointer(buff.mData))
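The effect of that wrapping can be illustrated in isolation. The sketch below uses a made-up Int16 array in place of real capture data, and uses bindMemory(to:capacity:), which in later Swift plays the same role as the UnsafeMutablePointer(OpaquePointer(...)) round-trip (an assumption about equivalent idioms, not the original poster's code):

```swift
import Foundation

// Hypothetical stand-in for buff.mData: a raw buffer holding Int16 samples.
let source: [Int16] = [0, 100, -100, 200]
let byteCount = source.count * MemoryLayout<Int16>.size
let raw = UnsafeMutableRawPointer.allocate(byteCount: byteCount,
                                           alignment: MemoryLayout<Int16>.alignment)
raw.copyMemory(from: source, byteCount: byteCount)

// Bind the raw, untyped memory to Int16 so it can be read as samples;
// this is what the OpaquePointer wrapping accomplishes in the answer above.
let typed = raw.bindMemory(to: Int16.self, capacity: source.count)
let samples = UnsafeMutableBufferPointer<Int16>(start: typed, count: source.count)

var sum: Int64 = 0
for sample in samples {
    let s = Int64(sample)
    sum += s * s
}
raw.deallocate()
print(sum) // 0 + 10000 + 10000 + 40000
```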

Here is the complete code, updated for Swift 3:

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        var buffer: CMBlockBuffer? = nil
        var audioBufferList = AudioBufferList(mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sampleBuffer,
            nil,
            &audioBufferList,
            MemoryLayout<AudioBufferList>.size,
            nil,
            nil,
            UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
            &buffer
        )
        let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
        var sum: Int64 = 0
        var count: Int = 0
        var bufs: Int = 0
        for buff in abl {
            let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(OpaquePointer(buff.mData)),
                                                            count: Int(buff.mDataByteSize) / MemoryLayout<Int16>.size)
            for sample in samples {
                let s = Int64(sample)
                sum = (sum + s * s)
                count += 1
            }
            bufs += 1
        }
        print("found \(count) samples in \(bufs) buffers, RMS is \(sqrt(Float(sum)/Float(count)))")
    }

This satisfies the compiler and seems to produce reasonable numbers.
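The RMS value that the print statement reports can be sanity-checked by hand: for a buffer of identical samples, the RMS equals the sample value itself. A minimal check with a made-up four-sample buffer (not real capture data):

```swift
import Foundation

// Four identical made-up samples: the RMS should equal the sample value.
let samples: [Int16] = [100, 100, 100, 100]
var sum: Int64 = 0
var count = 0
for sample in samples {
    let s = Int64(sample)
    sum += s * s
    count += 1
}
// Same formula as in the answer's print statement.
let rms = sqrt(Float(sum) / Float(count))
print(rms) // 100.0
```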

Regarding "ios - swift 3: Using AVCaptureAudioDataOutput to analyze audio input", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/41818883/
