
ios - SceneKit Metal depth buffer

Reposted · Author: 行者123 · Updated: 2023-12-02 04:23:36

I'm trying to write an augmented reality app using SceneKit, and I need an accurate 3D point from the current rendered frame, given a 2D pixel and a depth, using SCNSceneRenderer's unprojectPoint method. That method takes x, y, and z, where x and y are pixel coordinates and z is normally a value read from that frame's depth buffer.

The SCNView's delegate has this method, which I use to render the depth frame:

func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    renderDepthFrame()
}

func renderDepthFrame() {

    // Set up our viewport.
    let viewport = CGRect(x: 0, y: 0, width: Double(SettingsModel.model.width), height: Double(SettingsModel.model.height))

    // Depth pass descriptor.
    let renderPassDescriptor = MTLRenderPassDescriptor()

    let depthDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: Int(SettingsModel.model.width), height: Int(SettingsModel.model.height), mipmapped: false)
    let depthTex = scnView!.device!.makeTexture(descriptor: depthDescriptor)!
    depthTex.label = "Depth Texture"
    renderPassDescriptor.depthAttachment.texture = depthTex
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    renderPassDescriptor.depthAttachment.storeAction = .store

    let commandBuffer = commandQueue.makeCommandBuffer()!

    scnRenderer.scene = scene
    scnRenderer.pointOfView = scnView.pointOfView!

    scnRenderer.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)

    // Set up our depth buffer so the CPU can access it.
    let depthImageBuffer = scnView!.device!.makeBuffer(length: depthTex.width * depthTex.height * 4, options: .storageModeShared)!
    depthImageBuffer.label = "Depth Buffer"

    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.copy(from: depthTex,
                            sourceSlice: 0,
                            sourceLevel: 0,
                            sourceOrigin: MTLOriginMake(0, 0, 0),
                            sourceSize: MTLSizeMake(Int(SettingsModel.model.width), Int(SettingsModel.model.height), 1),
                            to: depthImageBuffer,
                            destinationOffset: 0,
                            destinationBytesPerRow: 4 * Int(SettingsModel.model.width),
                            destinationBytesPerImage: 4 * Int(SettingsModel.model.width) * Int(SettingsModel.model.height))
    blitCommandEncoder.endEncoding()

    commandBuffer.addCompletedHandler { _ in
        let typedPointer = depthImageBuffer.contents().assumingMemoryBound(to: Float.self)
        self.currentMap = Array(UnsafeBufferPointer(start: typedPointer, count: Int(SettingsModel.model.width) * Int(SettingsModel.model.height)))
    }

    commandBuffer.commit()

}

This works. I get depth values between 0 and 1. The problem is that I can't use them in unprojectPoint, because they don't appear to be scaled the same way as in the initial pass, even though I'm using the same SCNScene and SCNCamera.

My questions:

  1. Is there a way to get the depth values from SceneKit's main SCNView pass directly, without doing an extra pass with a separate SCNRenderer?

  2. Why don't the depth values from my pass match the values I get from doing a hit test and then unprojecting? The depth values from my pass go from 0.78 to 0.94, while the depth values from the hit test range from 0.89 to 0.97 and, curiously, match the scene's OpenGL depth values when I render the scene in Python.

My hunch is that this is a difference in viewports, and that SceneKit is doing something to scale the depth values from -1 to 1, just like OpenGL does.

Edit: in case you're wondering, I can't use the hitTest method directly. It's too slow for what I'm trying to achieve.

Best answer

By default, SceneKit uses a reversed Z buffer with a logarithmic scale. You can disable the reversed Z buffer easily enough (scnView.usesReverseZ = false), but taking the log depth to the [0, 1] range with a linear distribution requires access to the depth buffer as well as the values of the far and near clipping ranges. Here is the procedure for taking a non-reversed z-log depth to a linearly distributed depth in the [0, 1] range:

float delogDepth(float depth, float nearClip, float farClip) {
    // The depth buffer is in log format. Probably a 24-bit float depth with 8 for stencil.
    // https://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
    // We need to undo the log format.
    // https://stackoverflow.com/questions/18182139/logarithmic-depth-buffer-linearization
    float logTuneConstant = nearClip / farClip;
    float deloggedDepth = ((pow(logTuneConstant * farClip + 1.0, depth) - 1.0) / logTuneConstant) / farClip;
    // The values are going to hover around a particular range. Linearize that distribution.
    // This part may not be necessary, depending on how you will use the depth.
    // http://glampert.com/2014/01-26/visualizing-the-depth-buffer/
    float negativeOneOneDepth = deloggedDepth * 2.0 - 1.0;
    float zeroOneDepth = ((2.0 * nearClip) / (farClip + nearClip - negativeOneOneDepth * (farClip - nearClip)));
    return zeroOneDepth;
}

Regarding ios - SceneKit Metal depth buffer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40476426/
