
swift - How to apply a 3D model on a face detected by Apple Vision "NO AR"


With the iPhone X's TrueDepth camera it is possible to get the 3D coordinates of any object and use that information to position and scale it, but on older iPhones we don't have access to AR on the front-facing camera. What we have done so far is detect the face with Apple's Vision framework and draw some 2D paths around the face and its landmarks. I made a SceneView and applied it as the front layer of my view with a clear background; beneath it is the AVCaptureVideoPreviewLayer. After a face is detected, my 3D object appears on screen, but positioning and scaling it correctly according to the face's boundingBox requires unprojection and other steps where I got stuck. I also tried converting the 2D bounding box to 3D using CATransform3D, but I failed! I'm wondering whether what I want to achieve is even possible. If I remember correctly, Snapchat was doing this before ARKit was available on the iPhone!

(screenshot: Imgur)

override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)

    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face",
                                              recursively: true)!
}

fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }

    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)

    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)

    let worldPoint = sceneView.unprojectPoint(unprojectedBox)

    self.node.position = worldPoint
    /* Here I have to unproject to convert the value from a 2D point
       to a 3D point; this is also where the issue is. */
}
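For context, here is a minimal sketch of the preview-layer setup the question describes but does not show, assuming it lives in the same view controller; the session configuration below (front wide-angle camera, resizeAspectFill) is an assumption, not code from the original post:

import AVFoundation

fileprivate func setupCameraPreview() {
    let captureSession = AVCaptureSession()
    guard
        let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                             for: .video,
                                             position: .front),
        let input = try? AVCaptureDeviceInput(device: device),
        captureSession.canAddInput(input)
    else { return }
    captureSession.addInput(input)

    // The preview layer goes behind the transparent sceneView so the
    // 3D content renders on top of the camera feed.
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = view.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.insertSublayer(previewLayer, at: 0)

    captureSession.startRunning()
}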

Best Answer

The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh. First, you need a mesh with the same number of vertices as Vision provides (66 to 77, depending on which Vision revision you are on). You can create one with a tool such as Blender.
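For reference, a minimal sketch of that orthographic setup, assuming sceneView is the SCNView from the question; the orthographicScale and camera position are placeholder values you would tune against your capture resolution:

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
// Orthographic projection keeps on-screen size independent of depth,
// which makes matching 2D landmarks to 3D vertices tractable.
cameraNode.camera?.usesOrthographicProjection = true
cameraNode.camera?.orthographicScale = 1.0
cameraNode.position = SCNVector3(0, 0, 10)
sceneView.scene?.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode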

(image: the mesh in Blender)

Then, in code, every time you process the landmarks you go through the following steps:

1 - Get the mesh vertices:

func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        let vertices = vectors.enumerated().map { (index: Int, element: SCNVector3) -> SCNVector3 in
            // Read one vertex's float components out of the raw geometry data.
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        }

        result = vertices
    }

    return result
}

2 - Unproject each landmark captured by Vision and save them in an array of SCNVector3:

let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
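Spelled out as a loop, that step might look like the sketch below; landmarks is assumed to already hold the Vision landmark points mapped into the SCNView's coordinate space (Vision reports normalized coordinates, so that mapping is up to you and is not shown here):

// Collect one unprojected scene-space vector per 2D landmark.
var unprojectedLandmarks = [SCNVector3]()
for i in 0..<landmarks.count {
    let unprojected = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
    unprojectedLandmarks.append(unprojected)
}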

3 - Modify the geometry with the new vertices:

func reshapeGeometry(_ vertices: [SCNVector3]) {

    let source = SCNGeometrySource(vertices: vertices)

    var newSources = [SCNGeometrySource]()
    newSources.append(source)

    // Keep every non-vertex source (normals, texture coordinates, ...)
    // from the existing geometry.
    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }

    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)

    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
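Putting the three steps together, a per-frame update might look like this sketch; updateMesh(with:) and landmarkPointsInView are illustrative names, not from the original answer:

func updateMesh(with landmarkPointsInView: [CGPoint]) {
    // Step 2: unproject every landmark into the scene.
    var newVertices = [SCNVector3]()
    for point in landmarkPointsInView {
        newVertices.append(sceneView.unprojectPoint(SCNVector3(point.x, point.y, 0)))
    }
    // Step 3: reshape only when the landmark count matches the mesh's vertex count.
    if newVertices.count == getVertices().count {
        reshapeGeometry(newVertices)
    }
}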

I was able to do this, and that is the method I used. Hope this helps!

Regarding swift - How to apply a 3D model on a face detected by Apple Vision "NO AR", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56272702/
