
swift - Getting depth data from a custom camera


I followed Capturing Photos with Depth and carefully read all the suggestions in the similar question, but I still can't get any depth data from my custom camera. Here is my latest revision of the code. Do you have any ideas about this problem?

When I tap the camera button, I get:

libc++abi.dylib: terminating with uncaught exception of type NSException

I also looked at existing solutions for this exception. They are mostly related to segues, but I double-checked that part of the code and the storyboard, and they look fine. (I had no problems before adding the depth code!)

class CameraViewController: UIViewController {

    @IBOutlet weak var cameraButton: UIButton!

    var captureSession = AVCaptureSession()
    var captureDevice: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    var image: UIImage?

    var depthDataMap: CVPixelBuffer?
    var depthData: AVDepthData?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupDevice()
        setupIO()
        setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupDevice() {
        self.captureDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
    }

    func setupIO() {
        guard let captureInputDevice = try? AVCaptureDeviceInput(device: self.captureDevice!),
              self.captureSession.canAddInput(captureInputDevice)
        else { fatalError("Can't add video input.") }
        self.captureSession.beginConfiguration()
        self.captureSession.addInput(captureInputDevice)

        self.photoOutput = AVCapturePhotoOutput()
        self.photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported
        guard self.captureSession.canAddOutput(photoOutput!)
        else { fatalError("Can't add photo output.") }
        self.captureSession.addOutput(photoOutput!)
        self.captureSession.sessionPreset = .photo
        self.captureSession.commitConfiguration()
    }

    func setupPreviewLayer() {
        self.cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        self.cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        self.cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        self.cameraPreviewLayer?.frame = self.view.frame
        self.view.layer.insertSublayer(self.cameraPreviewLayer!, at: 0)
    }

    func startRunningCaptureSession() {
        self.captureSession.startRunning()
    }

    @IBAction func cameraButtonDidTap(_ sender: Any) {
        let setting = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        setting.isDepthDataDeliveryEnabled = self.photoOutput!.isDepthDataDeliverySupported
        self.photoOutput?.capturePhoto(with: setting, delegate: self)
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "showPhoto" {
            let nav = segue.destination as! UINavigationController
            let previewVC = nav.topViewController as! PhotoViewController

            previewVC.image = self.image
            previewVC.depthData = self.depthData
            previewVC.depthDataMap = self.depthDataMap
        }
    }
}

extension CameraViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let imageData = photo.fileDataRepresentation() {
            image = UIImage(data: imageData)
            let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
            let auxiliaryData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource!, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]

            let depthData = try? AVDepthData(fromDictionaryRepresentation: auxiliaryData!)
            self.depthDataMap = depthData?.depthDataMap

            self.performSegue(withIdentifier: "showPhoto", sender: self)
        }
    }
}
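As an aside, AVCapturePhoto already exposes the embedded depth map directly through its depthData property, so the CGImageSource round-trip in the delegate above is not strictly necessary. A minimal sketch of the delegate method using that property (assuming depth delivery was enabled on the capture settings, and the same properties and segue identifier as the code above):

```swift
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let imageData = photo.fileDataRepresentation() {
        self.image = UIImage(data: imageData)
    }
    // AVCapturePhoto carries the depth data directly; no need to
    // re-parse the file container with CGImageSource.
    self.depthData = photo.depthData
    self.depthDataMap = photo.depthData?.depthDataMap
    self.performSegue(withIdentifier: "showPhoto", sender: self)
}
```

This also populates self.depthData, which the prepare(for:sender:) method passes on but the original delegate never assigned.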

Best Answer

This is the problem with my code:

DepthDataDelivery will not be supported unless the photo output has been added to the session and the session's input is properly configured to deliver depth.

  1. Set the session preset first:

    self.captureSession.sessionPreset = .photo

  2. After adding the dual-camera input, add the photo output:

    guard self.captureSession.canAddOutput(photoOutput!)

  3. Only then enable depth delivery:

    self.photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported
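Putting the three steps together, a corrected setupIO() might look like this (a sketch assuming the same property names as the question's code; depth delivery is only reported as supported on devices with a dual or TrueDepth camera):

```swift
func setupIO() {
    guard let device = self.captureDevice,
          let input = try? AVCaptureDeviceInput(device: device),
          self.captureSession.canAddInput(input)
    else { fatalError("Can't add video input.") }

    self.captureSession.beginConfiguration()
    // 1. Set the preset before anything that depends on depth support.
    self.captureSession.sessionPreset = .photo
    self.captureSession.addInput(input)

    // 2. Add the photo output to the configured session first.
    self.photoOutput = AVCapturePhotoOutput()
    guard self.captureSession.canAddOutput(self.photoOutput!)
    else { fatalError("Can't add photo output.") }
    self.captureSession.addOutput(self.photoOutput!)

    // 3. Only now does isDepthDataDeliverySupported report correctly,
    //    so enable depth delivery last.
    self.photoOutput!.isDepthDataDeliveryEnabled =
        self.photoOutput!.isDepthDataDeliverySupported

    self.captureSession.commitConfiguration()
}
```

In the original code the preset was set and depth delivery enabled before the output was attached, so isDepthDataDeliverySupported was false and enabling depth on the capture settings later raised the NSException.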

Regarding "swift - Getting depth data from a custom camera", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52543110/
