
swift - How to capture depth data from the camera in iOS 11 and Swift 4?


I'm trying to get depth data from the camera in iOS 11 with AVDepthData, but when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.

So I tried setting up an AVCaptureDepthDataOutput with the AVCaptureDepthDataOutputDelegate, but I don't know how to capture a depth photo.

Has anyone ever gotten an image out of AVDepthData?

Edit:

Here's the code I tried:

// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate

@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!

var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

@IBAction func capture(_ sender: Any) {
    self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    self.previewLayer?.removeFromSuperlayer()
    self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)

    let depth_map = photo.depthData?.depthDataMap
    print("depth_map:", depth_map) // is nil
}

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
    print("depth data") // never called
}

override func viewDidLoad() {
    super.viewDidLoad()

    self.captureSession = AVCaptureSession()
    self.captureSession?.sessionPreset = .photo

    self.sessionOutput = AVCapturePhotoOutput()
    self.depthOutput = AVCaptureDepthDataOutput()
    self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))

    do {
        let device = AVCaptureDevice.default(for: .video)
        let input = try AVCaptureDeviceInput(device: device!)
        if (self.captureSession?.canAddInput(input))! {
            self.captureSession?.addInput(input)

            if (self.captureSession?.canAddOutput(self.sessionOutput!))! {
                self.captureSession?.addOutput(self.sessionOutput!)

                if (self.captureSession?.canAddOutput(self.depthOutput!))! {
                    self.captureSession?.addOutput(self.depthOutput!)

                    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
                    self.previewLayer?.frame = self.image_view.bounds
                    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
                    self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                    self.image_view.layer.addSublayer(self.previewLayer!)
                }
            }
        }
    } catch {}

    self.captureSession?.startRunning()
}

I'm trying two approaches: in one the depth data is nil (the photo capture), and in the other the depth delegate method is never called.

Does anyone know what I'm missing?

Best Answer

First, you need to use the dual camera, otherwise you won't get any depth data.

let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
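Note that AVCaptureDevice.default(_:for:position:) returns nil on hardware without a dual camera, so it's worth guarding the lookup; a minimal sketch, with placeholder error handling:

// Depth requires the dual camera; default(...) returns nil on other hardware.
guard let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back) else {
    fatalError("Dual camera not available - depth capture unsupported on this device") // placeholder handling
}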

And keep a reference to the queue:

let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)

You'll probably also want to synchronize the video and depth data:

var outputSynchronizer: AVCaptureDataOutputSynchronizer?

Then you can synchronize the two outputs in your viewDidLoad() method like this:

if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true
    depthDataOutput?.connection(with: .depthData)!.isEnabled = true
    depthDataOutput?.isFilteringEnabled = true
    outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
    outputSynchronizer!.setDelegate(self, queue: self.dataOutputQueue)
}
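The synchronized frames then arrive through the AVCaptureDataOutputSynchronizerDelegate callback, which the answer doesn't show. Here is a minimal sketch of that method; it assumes depthDataOutput is the AVCaptureDepthDataOutput added to the session (Apple's sample pairs the depth output with an AVCaptureVideoDataOutput in the synchronizer):

// Minimal sketch of the synchronizer callback, assuming `depthDataOutput`
// is the AVCaptureDepthDataOutput that was added to the session.
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Pull the depth data captured in sync with this frame, if it wasn't dropped.
    guard let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!)
            as? AVCaptureSynchronizedDepthData,
          !syncedDepth.depthDataWasDropped else { return }

    let depthMap: CVPixelBuffer = syncedDepth.depthData.depthDataMap
    print("got depth map:", depthMap)
}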

I recommend watching WWDC session 507 - they also provide a full sample app that does exactly what you want.

https://developer.apple.com/videos/play/wwdc2017/507/
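For the still-photo path from the question, photo.depthData stays nil unless depth delivery is enabled on both the photo output and the per-capture settings. A minimal sketch, assuming sessionOutput is the AVCapturePhotoOutput from the question running on a dual-camera session:

// Minimal sketch: requesting depth with a still capture.
// Assumes `sessionOutput` is an AVCapturePhotoOutput on a dual-camera session.
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true // must be enabled on the output first
    settings.isDepthDataDeliveryEnabled = true       // and on each capture's settings
}
sessionOutput?.capturePhoto(with: settings, delegate: self)
// photo.depthData should then be non-nil in photoOutput(_:didFinishProcessingPhoto:error:).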

Regarding "swift - How to capture depth data from the camera in iOS 11 and Swift 4?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44506934/
