
swift - Decomposing a Data byte buffer of UInt32


I am capturing audio with an AVCaptureSession. In the callback that handles the captured data, I put the stream into a Data structure (a byte buffer). The data appears to be UInt8 (which makes sense for a byte buffer), but I believe the stream data is actually UInt32.
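
For reference, the actual sample format can be read from the sample buffer itself. This is a minimal sketch (my own addition, not from the original question) that inspects the AudioStreamBasicDescription to see the real sample width and layout:

    import AVFoundation

    // Inspect the sample buffer's format description to see the true sample layout.
    func logFormat(of sampleBuffer: CMSampleBuffer) {
        guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee
        else { return }
        // For linear PCM, mBitsPerChannel is the sample width in bits.
        print("bits per channel:", asbd.mBitsPerChannel)
        print("float samples:", asbd.mFormatFlags & kAudioFormatFlagIsFloat != 0)
        print("signed integer samples:", asbd.mFormatFlags & kAudioFormatFlagIsSignedInteger != 0)
    }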

I am not sure which of the following I should do, and I could not get any of them to work. Should I:

  1. Convert the data to UInt32 instead of UInt8?
  2. When reading the data, take 4 bytes at a time to form a UInt32? (see the sketch after this list)
  3. Change my capture session to UInt8?
  4. Abandon the Data structure and roll my own?
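
Here is what option 2 might look like - a minimal sketch (my own illustration, assuming little-endian sample data) that packs every run of 4 bytes in a Data buffer into one UInt32:

    import Foundation

    // Combine each run of 4 bytes into one little-endian UInt32; trailing bytes are dropped.
    func uint32Values(from data: Data) -> [UInt32] {
        stride(from: 0, to: data.count - data.count % 4, by: 4).map { offset in
            (0..<4).reduce(UInt32(0)) { value, byteIndex in
                value | UInt32(data[data.startIndex + offset + byteIndex]) << (8 * byteIndex)
            }
        }
    }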

My callback function is:

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var audioBufferList = AudioBufferList()
        var data = Data()
        var blockBuffer: CMBlockBuffer?

        // Put the sample buffer in to a list of audio buffers (audioBufferList)
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &blockBuffer)
        // Extract the BufferList in to an array of buffers
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
        // For each buffer, extract the frame. There should only be one buffer as we are recording in mono!
        for audioBuffer in buffers {
            assert(audioBuffer.mNumberChannels == 1) // it should always be 1 for mono channel
            let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
            data.append(frame!, count: Int(audioBuffer.mDataByteSize) / 8)
        }

        // limit how much of the sample we pass through.
        viewDelegate?.gotSoundData(data.prefix(MAX_POINTS))
    }

gotSoundData passes the data from the view on to multiple subviews for processing:

    func addSamples(samples: Data) {
        //if (isHidden) { return }

        samples.forEach { sample in
            [...process each byte...]
        }
    }

I can see that Data.append has the definition:

    mutating func append(_ bytes: UnsafePointer<UInt8>, count: Int)
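
As a usage note (my own sketch, not from the original post): the count parameter is a number of bytes, so an AudioBuffer's mDataByteSize can be passed through unchanged:

    import AVFoundation

    // Append an AudioBuffer's contents to a Data value; mDataByteSize is already in bytes.
    func appendBytes(of audioBuffer: AudioBuffer, to data: inout Data) {
        guard let rawPointer = audioBuffer.mData else { return }
        data.append(rawPointer.assumingMemoryBound(to: UInt8.self), count: Int(audioBuffer.mDataByteSize))
    }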

Best Answer

Meggar helped me focus on option 4 - using my own structure, [Int16]. If anyone is interested in option 1, check out this link I found later, which extends Data to handle more data types: round trip Swift number type to/from Data
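
A minimal sketch of that option-1 approach (my own paraphrase of the idea, assuming fixed-width integer types; not the linked answer's exact code):

    import Foundation

    extension Data {
        // Encode any fixed-width integer as its raw bytes.
        init<T: FixedWidthInteger>(from value: T) {
            self = Swift.withUnsafeBytes(of: value) { Data($0) }
        }

        // Decode the leading bytes back into a fixed-width integer.
        func to<T: FixedWidthInteger>(_ type: T.Type) -> T? {
            guard count >= MemoryLayout<T>.size else { return nil }
            var value: T = 0
            Swift.withUnsafeMutableBytes(of: &value) { dest in
                dest.copyBytes(from: prefix(MemoryLayout<T>.size))
            }
            return value
        }
    }

    // Usage: Data(from: UInt32(42)).to(UInt32.self) == 42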

The callback function:

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var audioBufferList = AudioBufferList()
        var blockBuffer: CMBlockBuffer?

        // Put the sample buffer in to a list of audio buffers (audioBufferList)
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &blockBuffer)
        // Extract the BufferList in to an array of buffers
        let audioBuffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
        // For each buffer, extract the samples
        for audioBuffer in audioBuffers {
            let samplesCount = Int(audioBuffer.mDataByteSize) / MemoryLayout<Int16>.size
            // Bind the current buffer's memory (not audioBufferList.mBuffers, which is always the first buffer)
            let samplesPointer = audioBuffer.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
            let samples = UnsafeMutableBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
            // Convert to a "safe" array for ease of use in the delegate.
            let samplesArray = Array(samples)
            viewDelegate?.gotSample(samplesArray)
        }
    }

And the consuming function stays almost the same:

    func addSamples(samples: [Int16]) {
        samples.forEach { sample in
            [...process each Int16...]
        }
    }
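
As a hypothetical example of what the per-sample processing might do (the rmsLevel name and the computation are my own illustration, not from the original post):

    import Foundation

    // Root-mean-square level of a buffer of 16-bit samples, normalized to 0...1.
    func rmsLevel(of samples: [Int16]) -> Double {
        guard !samples.isEmpty else { return 0 }
        let sumOfSquares = samples.reduce(0.0) { sum, sample in
            let normalized = Double(sample) / Double(Int16.max)
            return sum + normalized * normalized
        }
        return (sumOfSquares / Double(samples.count)).squareRoot()
    }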

Regarding swift - decomposing a Data byte buffer of UInt32, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52874916/
