
swift - Capturing video in Swift using AVCaptureVideoDataOutput or AVCaptureMovieFileOutput

I need some guidance on how to capture video without using UIImagePicker. The video needs to start and stop on a button click, and the data then needs to be saved to NSDocumentDirectory. I am new to Swift, so any help would be useful.

The part of the code I need help with is starting and stopping the video session and turning it into data. I made a still-photo version that runs captureStillImageAsynchronouslyFromConnection and saves the data to NSDocumentDirectory. I have the video capture session set up and the code ready to save the data, but I don't know how to get the data out of the session.

import UIKit
import AVFoundation

var previewLayer : AVCaptureVideoPreviewLayer?
var captureDevice : AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()

let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()

    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()

    // Find the back camera and start the session once we have it.
    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                }
            }
        }
    }
}

func beginSession() {
    var err : NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))

    if err != nil {
        println("Error: \(err?.localizedDescription)")
    }

    // Deliver frames as 32-bit BGRA and drop frames that arrive late.
    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey:kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true

    captureSession.addOutput(videoCaptureOutput)

    // Full-screen preview; screenWidth/screenHeight are defined elsewhere.
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.view.layer.addSublayer(previewLayer)
    previewLayer?.frame = CGRectMake(0, 0, screenWidth, screenHeight)
    captureSession.startRunning()

    // Bottom half of the screen starts recording, top half stops it.
    var startVideoBtn = UIButton(frame: CGRectMake(0, screenHeight/2, screenWidth, screenHeight/2))
    startVideoBtn.addTarget(self, action: "startVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(startVideoBtn)

    var stopVideoBtn = UIButton(frame: CGRectMake(0, 0, screenWidth, screenHeight/2))
    stopVideoBtn.addTarget(self, action: "stopVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(stopVideoBtn)
}

I can provide more code or explanation if needed.

Best Answer

For best results, read the Still and Video Media Capture section of the AV Foundation Programming Guide.

To process frames from an AVCaptureVideoDataOutput, you need a delegate adopting the AVCaptureVideoDataOutputSampleBufferDelegate protocol. The delegate's captureOutput method is called whenever a new frame is written. When you set the output's delegate, you must also supply the queue on which the callbacks should be invoked. It looks something like this:

let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(myDelegate, queue: cameraQueue)
captureSession.addOutput(videoCaptureOutput)
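
A minimal sketch of such a delegate, written in the Swift 1.x style of the code above (the class name FrameDelegate and the processing inside captureOutput are only illustrations):

import AVFoundation

class FrameDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       fromConnection connection: AVCaptureConnection!) {
        // Called on cameraQueue for every captured frame.
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        // ... process or encode pixelBuffer here ...
    }
}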

Note: If you just want to save the movie to a file, you will probably prefer the AVCaptureMovieFileOutput class over AVCaptureVideoDataOutput. In that case you won't need a queue, but you will still need a delegate, this time adopting the AVCaptureFileOutputRecordingDelegate protocol instead. (The relevant method is still called captureOutput.)
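
Hooked up to the question's startVideo/stopVideo buttons, that could look roughly like the sketch below (Swift 1.x style to match the question; it assumes your view controller declares AVCaptureFileOutputRecordingDelegate conformance, and the property name movieOutput is illustrative):

import AVFoundation

let movieOutput = AVCaptureMovieFileOutput()
// In beginSession(), add this output to the session:
// captureSession.addOutput(movieOutput)

func startVideo() {
    // Record into NSDocumentDirectory; the target file must not already exist,
    // so remove any leftover from a previous run first.
    let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
    let filePath = documentsPath + "/output.mov"
    NSFileManager.defaultManager().removeItemAtPath(filePath, error: nil)
    movieOutput.startRecordingToOutputFileURL(NSURL(fileURLWithPath: filePath)!, recordingDelegate: self)
}

func stopVideo() {
    movieOutput.stopRecording()
}

// AVCaptureFileOutputRecordingDelegate
func captureOutput(captureOutput: AVCaptureFileOutput!,
                   didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!,
                   fromConnections connections: [AnyObject]!,
                   error: NSError!) {
    if error != nil {
        println("Recording failed: \(error.localizedDescription)")
    } else {
        println("Movie saved to \(outputFileURL)")
    }
}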

Here is the excerpt on AVCaptureMovieFileOutput from the guide linked above:

Starting a Recording

You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.
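
For illustration, that last step inside the delegate method could look something like this with the AssetsLibrary framework of that era (the sketch below is not part of the quoted guide):

import AssetsLibrary

// Inside captureOutput(_:didFinishRecordingToOutputFileAtURL:fromConnections:error:),
// after checking that `error` is nil:
let library = ALAssetsLibrary()
library.writeVideoAtPathToSavedPhotosAlbum(outputFileURL) { savedURL, saveError in
    if saveError != nil {
        println("Could not save to the Camera Roll: \(saveError.localizedDescription)")
    }
}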

Regarding swift - Capturing video in Swift using AVCaptureVideoDataOutput or AVCaptureMovieFileOutput, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27605913/
