ios - Processing video frames from AVFoundation in multithreaded Swift


I have an app in which I use AVFoundation to get every frame from the camera and process it with the code below. I would like to know whether there is a way to make this part multithreaded so that it runs faster. Perhaps one thread puts each frame into a queue, another thread processes the queue, and a third displays the output for each frame? I don't know whether this can be done, but the reason I ask is that processing a frame can take too long, and the resulting image gets stuck in the output.
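
In other words, something shaped roughly like the sketch below (illustrative only; processingQueue and heavyProcessing are hypothetical names, not part of the project): frames are converted in the capture callback, handed to a serial processing queue, and the result is pushed back to the main queue for display.

let processingQueue = DispatchQueue(label: "frame-processing") // hypothetical serial queue

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let image = getImageFromSampleBuffer(sampleBuffer: sampleBuffer) else { return }
    processingQueue.async { [weak self] in
        let processed = heavyProcessing(image) // hypothetical per-frame processing step
        DispatchQueue.main.async {
            self?.delegate?.processCapturedImage(image: processed)
        }
    }
}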

Here is the code of the CaptureManager class:

class CaptureManager: NSObject {
    internal static let shared = CaptureManager()
    weak var delegate: CaptureManagerDelegate?
    var session: AVCaptureSession?
    var isBackCamera = true

    override init() {
        super.init()

        session = AVCaptureSession()
        session?.sessionPreset = .high
        // setup input
        var device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
        let defaults = UserDefaults.standard
        if let stringOne = defaults.string(forKey: defaultsKeys.rememberCamera) {
            if stringOne != "back" {
                device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
            }
        } else {
            defaults.set("back", forKey: defaultsKeys.rememberCamera)
        }
        if device != nil {
            device?.set(frameRate: 30)
            let input = try! AVCaptureDeviceInput(device: device!)
            session?.addInput(input)
            // setup output
            let output = AVCaptureVideoDataOutput()
            output.alwaysDiscardsLateVideoFrames = true
            output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            output.setSampleBufferDelegate(self, queue: DispatchQueue.main)
            session?.addOutput(output)
        } else {
            print("no camera")
        }
    }

    func startSession() {
        session?.startRunning()
    }

    func stopSession() {
        session?.stopRunning()
    }

    func switchCamera() {
        // Remove existing input
        guard let currentCameraInput: AVCaptureInput = session?.inputs.first else {
            return
        }

        // Indicate that some changes will be made to the session
        session?.beginConfiguration()
        session?.removeInput(currentCameraInput)
        let defaults = UserDefaults.standard
        if let stringOne = defaults.string(forKey: defaultsKeys.rememberCamera) {
            if stringOne == "back" {
                defaults.set("front", forKey: defaultsKeys.rememberCamera)
            } else {
                defaults.set("back", forKey: defaultsKeys.rememberCamera)
            }
        }
        // Get new input
        var newCamera: AVCaptureDevice! = nil
        if let input = currentCameraInput as? AVCaptureDeviceInput {
            if input.device.position == .back {
                newCamera = cameraWithPosition(position: .front)
            } else {
                newCamera = cameraWithPosition(position: .back)
            }
        }
        newCamera.set(frameRate: 30)

        // Add input to session
        var err: NSError?
        var newVideoInput: AVCaptureDeviceInput!
        do {
            newVideoInput = try AVCaptureDeviceInput(device: newCamera)
        } catch let err1 as NSError {
            err = err1
            newVideoInput = nil
        }

        if newVideoInput == nil || err != nil {
            print("Error creating capture device input: \(err?.localizedDescription ?? "")")
        } else {
            session?.addInput(newVideoInput)
        }
        isBackCamera.toggle()
        // Commit all the configuration changes at once
        session?.commitConfiguration()
    }

    func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice? {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
        for device in discoverySession.devices {
            if device.position == position {
                return device
            }
        }

        return nil
    }


    func getImageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return nil
        }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        guard let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
            // Unlock before bailing out so the pixel buffer is not left locked
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        guard let cgImage = context.makeImage() else {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        var image: UIImage
        let defaults = UserDefaults.standard
        if let stringOne = defaults.string(forKey: defaultsKeys.rememberCamera) {
            if stringOne == "back" {
                image = UIImage(cgImage: cgImage, scale: 1, orientation: .right)
            } else {
                image = UIImage(cgImage: cgImage, scale: 1, orientation: .leftMirrored)
            }
        } else {
            image = UIImage(cgImage: cgImage, scale: 1, orientation: .right)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        return image
    }
}

Here is the extension that processes each frame:

extension CaptureManager: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let outputImage = getImageFromSampleBuffer(sampleBuffer: sampleBuffer) else {
            return
        }
        delegate?.processCapturedImage(image: outputImage)
    }
}

The processing function:

extension ViewController: CaptureManagerDelegate {
    func processCapturedImage(image: UIImage) {
        self.imageView.image = ...
        //process image
    }
}

And this is how it is called from the ViewController:

        CaptureManager.shared.startSession()

Best Answer

I'm afraid you have more problems than just the queue mentioned in the code sample. But no need to worry any longer, we've got this!

Before we change any code, let's agree on one thing: the camera should have its own thread. Not DispatchQueue.main, ever.

Let's create a queue for our camera, for example:

var ourCameraQueue = DispatchQueue(label: "our-camera-queue-label")
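
For instance, in the init from the question the sample-buffer delegate is currently attached to DispatchQueue.main; the most direct use of the new queue (a sketch, using the same output variable as in the question's init) is to hand it to the delegate instead:

// Sketch: deliver frames on the camera queue instead of the main queue.
output.setSampleBufferDelegate(self, queue: ourCameraQueue)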

Then use this queue throughout the code you shared, wrapping the body of every function in it:

func oneOfTheFuncs() {
    ourCameraQueue.async {
        ...
    }
}
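
Applied to the functions from the question, that pattern might look like this (a sketch; startRunning() and stopRunning() are blocking calls, so keeping them off the main thread is exactly what this buys you):

func startSession() {
    ourCameraQueue.async { [weak self] in
        self?.session?.startRunning()
    }
}

func stopSession() {
    ourCameraQueue.async { [weak self] in
        self?.session?.stopRunning()
    }
}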

This alone should make things a bit faster.

One note: you probably want to initialize (or better, inject, but we may come to that later...) ourCameraQueue as the first thing in the init method. Once it is initialized, make sure all the remaining code in init is also wrapped in ourCameraQueue.async {}.
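
A sketch of that, assuming the original init body moves into a hypothetical configureSession() helper (declaring the queue as a stored property has the same effect as initializing it first in init):

let ourCameraQueue = DispatchQueue(label: "our-camera-queue-label")

override init() {
    super.init()
    // Everything that used to run directly in init now runs on the camera queue.
    ourCameraQueue.async { [weak self] in
        self?.configureSession() // hypothetical helper holding the original setup code
    }
}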

Also, skip the wrapping in the ViewController, and read up on dependency injection; it will serve you well on your future implementation journey.
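
To give a taste of both points (a sketch only, not spelled out in the original answer): injection means the queue is handed to CaptureManager from outside rather than hard-coded, and skipping the ViewController wrapping works because a delegate callback arriving on the camera queue only needs a hop back to the main queue for the UI update itself.

// Injecting the queue (hypothetical initializer; the default keeps existing call sites working).
init(cameraQueue: DispatchQueue = DispatchQueue(label: "our-camera-queue-label")) {
    self.ourCameraQueue = cameraQueue
    super.init()
}

// In the ViewController: no wrapping of the whole delegate method,
// just the UI assignment dispatched back to the main queue.
func processCapturedImage(image: UIImage) {
    DispatchQueue.main.async { [weak self] in
        self?.imageView.image = image
    }
}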

Regarding "ios - Processing video frames from AVFoundation in multithreaded Swift", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/69997053/
