
avfoundation - captureOutput not being called


I have been working on this for far too long.

I am trying to grab the macOS webcam feed and run a CIDetector on the frames the webcam outputs.

I know I need to:

  • Connect an AVCaptureDevice (as an input) to an AVCaptureSession
  • Connect an AVCaptureVideoDataOutput (as an output) to the AVCaptureSession
  • Call .setSampleBufferDelegate(AVCaptureVideoDataOutputSampleBufferDelegate, DelegateQueue) (see the sketch after this list)
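For reference, a minimal sketch of that wiring, written against the same Swift 3-era AVFoundation API as the code further down (session, output, queue, and delegate are placeholder names; MyDelegate is the class defined below; the usual canAddInput/canAddOutput checks are omitted for brevity):

    import AVFoundation

    let session = AVCaptureSession()

    // 1. Camera in: wrap the default video device in an input.
    if let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
       let input = try? AVCaptureDeviceInput(device: device) {
        session.addInput(input)
    }

    // 2. Frames out: a video data output attached to the same session.
    let output = AVCaptureVideoDataOutput()
    session.addOutput(output)

    // 3. Delegate plus a serial queue; the delegate instance must be kept alive.
    let queue = DispatchQueue(label: "VideoDataOutputQueue")
    let delegate = MyDelegate()
    output.setSampleBufferDelegate(delegate, queue: queue)

    session.startRunning()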

For some reason, after calling .setSampleBufferDelegate(...) (and, of course, after calling .startRunning() on the AVCaptureSession instance), my AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput is never called.

I have found many people online running into this problem, but I could not find any solution.

It seems to me that it has something to do with the DispatchQueue.

MyDelegate.swift:
    class MyDelegate : NSObject {

        var context: CIContext?;
        var detector : CIDetector?;

        override init() {
            context = CIContext();
            detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: nil);
            print("set up!");
        }
    }
    extension MyDelegate : AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection) {
            print("success?");
            var pixelBuffer : CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!;
            var image : CIImage = CIImage(cvPixelBuffer: pixelBuffer);
            var features : [CIFeature] = detector!.features(in: image);
            for feature in features {
                print(feature.type);
                print(feature.bounds);
            }
        }

        func captureOutput(_ : AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection) {
            print("fail?");
        }
    }
ViewController.swift:
    var captureSession : AVCaptureSession;
    var captureDevice : AVCaptureDevice?
    var previewLayer : AVCaptureVideoPreviewLayer?

    var vdo : AVCaptureVideoDataOutput;

    var videoDataOutputQueue : DispatchQueue;

    override func viewDidLoad() {
        super.viewDidLoad()

        camera.layer = CALayer()

        // Do any additional setup after loading the view, typically from a nib.
        captureSession.sessionPreset = AVCaptureSessionPresetLow

        // Get all audio and video devices on this machine
        let devices = AVCaptureDevice.devices()

        // Find the FaceTime HD camera object
        for device in devices! {
            print(device)

            // Camera object found and assign it to captureDevice
            if ((device as AnyObject).hasMediaType(AVMediaTypeVideo)) {
                print(device)
                captureDevice = device as? AVCaptureDevice
            }
        }

        if captureDevice != nil {
            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
                // vdo : AVCaptureVideoDataOutput;
                vdo.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: NSNumber(value: kCVPixelFormatType_32BGRA)]

                try captureDevice!.lockForConfiguration()
                captureDevice!.activeVideoMinFrameDuration = CMTimeMake(1, 30)
                captureDevice!.unlockForConfiguration()

                videoDataOutputQueue.sync {
                    vdo.setSampleBufferDelegate(
                        MyDelegate,
                        queue: videoDataOutputQueue
                    );
                    vdo.alwaysDiscardsLateVideoFrames = true
                    captureSession.addOutput(vdo)
                    captureSession.startRunning();
                }
            } catch {
                print(AVCaptureSessionErrorKey.description)
            }
        }
    }
All the necessary AVFoundation-related variables used in viewDidLoad are instantiated in the ViewController's init(). I have omitted that for clarity.
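For what it's worth, such an init has to assign every non-optional stored property (captureSession, vdo, videoDataOutputQueue) before calling super.init. A hypothetical reconstruction of the omitted code, not the poster's actual initializer:

    required init?(coder aDecoder: NSCoder) {
        // Hypothetical: non-optional stored properties must be set before super.init.
        captureSession = AVCaptureSession()
        vdo = AVCaptureVideoDataOutput()
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        super.init(coder: aDecoder)
    }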

Any ideas?

Thanks, SO!

kovek

Edit:
- Fixed setting the delegate from self to MyDelegate.
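(A note in passing: MyDelegate as written is a class name, not an instance, which would not compile; presumably an instance was passed. It is unclear whether the data output keeps a strong reference to its sampleBufferDelegate, so the safe pattern is to hold the instance in a stored property, along these lines, with myDelegate being a hypothetical property on the ViewController:)

    // Hypothetical stored property: keeps the delegate alive beyond viewDidLoad.
    let myDelegate = MyDelegate()

    // Then, inside viewDidLoad:
    vdo.setSampleBufferDelegate(myDelegate, queue: videoDataOutputQueue)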

This is how I initialize videoDataOutputQueue:
    videoDataOutputQueue = DispatchQueue(
        label: "VideoDataOutputQueue"
    );
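For reference, DispatchQueue(label:) creates a serial queue by default, and a serial queue is what setSampleBufferDelegate(_:queue:) is documented to require so that frames are delivered in order, so this initialization itself looks fine.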

Best Answer

You made a mistake when declaring the required sample buffer delegate method:
captureOutput(_:didOutputSampleBuffer:from:).

Please check and make sure it is:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

PS: Note how the method's parameters are declared. They all carry '!', which means they are implicitly unwrapped optionals.
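Applied to the code in the question, the extension would look something like this under a Swift 3-era SDK (a sketch of the fix, not the answerer's exact code; the body is reduced to a print for brevity):

    extension MyDelegate : AVCaptureVideoDataOutputSampleBufferDelegate {
        // The selector must match exactly; otherwise AVFoundation sees no
        // conforming method and simply never delivers frames.
        func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
            print("success?")
        }
    }

A note for later readers: under Swift 4 and newer SDKs the method was renamed to captureOutput(_:didOutput:from:) with non-optional parameters, so the correct spelling depends on which SDK you compile against.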

Regarding avfoundation - captureOutput not being called, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44999250/
