
ios - Video buffer output with Swift

Reposted. Author: 行者123. Updated: 2023-11-28 07:07:06

My goal is to grab the video buffer and eventually convert it to NSData, but I don't understand how to access the buffer correctly. I have the captureOutput function, but I haven't succeeded in converting the buffer, and I'm not even sure I'm actually collecting anything in it. This is all Swift code; I found some examples in Objective-C, but I can't read Obj-C well enough to work it out from those.

var captureDevice : AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()
var bounds: CGRect = UIScreen.mainScreen().bounds
let captureSession = AVCaptureSession()
var captureConnection: AVCaptureMovieFileOutput?


override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()

    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                }
            }
        }
    }
}

func beginSession() {
    var err: NSError? = nil
    let videoInput = AVCaptureDeviceInput(device: captureDevice, error: &err)

    if err != nil {
        println("Error: \(err?.localizedDescription)")
        return
    }
    captureSession.addInput(videoInput)

    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
    videoCaptureOutput.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))

    // Add the output exactly once, after checking the session can accept it
    if captureSession.canAddOutput(self.videoCaptureOutput) {
        captureSession.addOutput(self.videoCaptureOutput)
    }

    // Without this the delegate is never called
    captureSession.startRunning()
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // I think this is where I can get the buffer info.
}

Accepted Answer

In the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!), you can get the buffer info:

let formatDescription: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)
let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

CVPixelBufferLockBaseAddress(imageBuffer, 0)
var imagePointer: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddress(imageBuffer)

let bufferSize: (width: Int, height: Int) = (CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer))

println("Buffer Size: \(bufferSize.width):\(bufferSize.height)")

CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
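Since the original goal was to get the buffer into NSData, here is a minimal sketch of how that might look once you have the image buffer (the helper name dataFromSampleBuffer is hypothetical, Swift 1.x-era style to match the code above; untested on-device):

```swift
// Hypothetical helper: copies the pixel buffer's raw bytes into an NSData.
// Assumes a single-plane 32BGRA buffer, as configured in videoSettings.
func dataFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> NSData {
    let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

    // Lock before touching the base address, unlock when done
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)

    // bytesPerRow * height covers the whole buffer, including any row padding
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let data = NSData(bytes: baseAddress, length: bytesPerRow * height)

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
    return data
}
```

You would call this from inside captureOutput for each frame; note that bytesPerRow can be wider than width * 4 because of alignment padding, so consumers of the NSData should use bytesPerRow, not width, as the stride.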

Regarding "ios - Video buffer output with Swift", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/30083206/
