
ios - Recording video using AVCaptureVideoDataOutput in Swift 3

Reposted · Author: Taklamakan · Updated: 2023-11-02 20:34:18

After we spent a considerable amount of time on this problem without results, I decided to ask here.

We are using AVCaptureVideoDataOutput to get the pixel data of the camera's live video and process it in the captureOutput function. But we also want to record video from that same data. In addition, we want to know whether such a recording would be compressed like a recording made with AVCaptureMovieFileOutput.

I should mention that with AVCaptureMovieFileOutput we can record without any problem. But AVCaptureMovieFileOutput and AVCaptureVideoDataOutput cannot work at the same time on the same session.

You can find our captureOutput function below:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

        // Lock the pixel buffer before touching its base address.
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))

        let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        videoWidth = CVPixelBufferGetWidth(imageBuffer)
        videoHeight = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

        // Wrap the raw buffer in a CGContext so we can render it to a CGImage.
        let context = CGContext(data: baseAddress, width: videoWidth, height: videoHeight, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)

        let imageRef = context!.makeImage()

        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))

        let data = imageRef!.dataProvider!.data as! NSData
        let pixels = data.bytes.assumingMemoryBound(to: UInt8.self)

        /* Because what we do with the pixel data is irrelevant to the question, we omitted the rest of the code to keep it simple. */
    }
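For context, the question does not show how the capture session that feeds this delegate is configured. A minimal setup sketch, in the same Swift 3 era API, might look like the following (the session preset, queue label, and BGRA pixel format are assumptions, chosen so the CGContext code above can read the buffer directly):

```swift
import AVFoundation

// Assumed session setup (not shown in the original question).
// `delegate` is the object implementing captureOutput above.
func makeCaptureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPresetHigh

    guard let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCaptureVideoDataOutput()
    // BGRA so each frame arrives as a single RGB-style plane.
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable:
                            Int(kCVPixelFormatType_32BGRA)]
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "videoQueue"))
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)

    session.startRunning()
    return session
}
```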

Best Answer

After struggling with this for a while, I figured out how to record video while also grabbing the pixel data to do some basic analysis of the live video.

First, I set up the AVAssetWriter and call this function before giving the actual record order.

    var sampleBufferGlobal: CMSampleBuffer?
    let writerFileName = "tempVideoAsset.mov"
    var presentationTime: CMTime!
    var outputSettings = [String: Any]()
    var videoWriterInput: AVAssetWriterInput!
    var assetWriter: AVAssetWriter!

    func setupAssetWriter() {

        // Remove any leftover file from a previous recording;
        // AVAssetWriter cannot write to an existing file.
        eraseFile(fileToErase: writerFileName)

        presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBufferGlobal!)

        // H.264 output, so the file is compressed much like an
        // AVCaptureMovieFileOutput recording would be.
        outputSettings = [AVVideoCodecKey: AVVideoCodecH264,
                          AVVideoWidthKey: NSNumber(value: Float(videoWidth)),
                          AVVideoHeightKey: NSNumber(value: Float(videoHeight))] as [String: Any]

        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)

        assetWriter = try? AVAssetWriter(outputURL: createFileURL(writerFileName), fileType: AVFileTypeQuickTimeMovie)
        assetWriter.add(videoWriterInput)
    }
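The helpers eraseFile(fileToErase:) and createFileURL(_:) are referenced above but never shown in the answer. A plausible implementation, purely an assumption on my part, writes into the app's temporary directory:

```swift
import Foundation

// Hypothetical implementations of the helpers used by setupAssetWriter().
// The original answer does not show them; the temporary directory is one
// reasonable choice for a scratch recording.
func createFileURL(_ fileName: String) -> URL {
    let tempDir = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true)
    return tempDir.appendingPathComponent(fileName)
}

func eraseFile(fileToErase fileName: String) {
    // Delete any old copy so AVAssetWriter can create the file fresh.
    try? FileManager.default.removeItem(at: createFileURL(fileName))
}
```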

I wrote another function to do the recording. In captureOutput I first copy the sample buffer into the global variable (sampleBufferGlobal = sampleBuffer) and then call this function to record.

    func writeVideoFromData() {

        if assetWriter?.status == AVAssetWriterStatus.unknown {

            // First buffer: start writing and anchor the session
            // at that buffer's presentation timestamp.
            assetWriter?.startWriting()
            assetWriter?.startSession(atSourceTime: presentationTime)
        }

        if assetWriter?.status == AVAssetWriterStatus.writing {

            if videoWriterInput.isReadyForMoreMediaData {

                if videoWriterInput.append(sampleBufferGlobal!) == false {

                    print("we have a problem writing video")
                }
            }
        }
    }

Then, to stop the recording, I used the following function.

    func stopAssetWriter() {

        // Signal that no more buffers are coming, then finalize the file.
        videoWriterInput.markAsFinished()

        assetWriter?.finishWriting(completionHandler: {

            if self.assetWriter?.status == AVAssetWriterStatus.failed {

                print("creating movie file failed")

            } else {

                print("creating movie file was a success")

                DispatchQueue.main.async(execute: { () -> Void in
                    // Update the UI here (body omitted in the original answer).
                })
            }
        })
    }

Regarding ios - Recording video using AVCaptureVideoDataOutput in Swift 3, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41802195/
