
java - Augmented Faces API – How are facial landmarks generated?

Repost · Author: IT老高 · Updated: 2023-10-28 13:42:14

I am an IT student and would like to learn more about the Augmented Faces API in ARCore.

I just saw the ARCore v1.7 release and the new Augmented Faces API. I can see the huge potential of this API, but I haven't found any questions or articles on the subject. So I've been asking myself, and here are some assumptions and questions about this release.

Assumptions

  • The ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks over the whole face – probably HOG Face Detection.

Questions

  • How does ARCore generate 468 points on a user's face on a smartphone? I can't find any answer to this, even in the source code.
  • How do they get depth from a single smartphone camera?
  • How could face detection/tracking be rejected in favor of a custom object or another part of the body, such as a hand?

So if you have any advice or remarks on this subject, let's share!

Best Answer

  1. ARCore's new Augmented Faces API, which works with the front-facing camera without a depth sensor, offers a high-quality, 468-point 3D canonical mesh that allows users to attach effects to their faces, such as animated masks, glasses, skin retouching, etc. The mesh provides coordinates and region-specific anchors that make it possible to add these effects.

I firmly believe that facial landmark detection is generated with the help of computer vision algorithms under the hood of ARCore 1.7. It's also important to note that you can get started in Unity or in Sceneform by creating an ARCore session with the front-facing camera and the Augmented Faces "mesh" mode enabled. Note that other AR features, such as plane detection, aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other Trackables.


As you know, 2+ years ago Google released the Face API, which performs face detection: it locates faces in pictures, along with their position (where they are in the picture) and orientation (which way they're facing, relative to the camera). The Face API allows you to detect landmarks (points of interest on a face) and perform classifications to determine whether the eyes are open or closed, and whether or not a face is smiling. The Face API also detects and follows faces in moving images, which is known as face tracking.

So, ARCore 1.7 simply borrowed some architectural elements from the Face API, and now it not only detects facial landmarks and generates 468 points for them, but also tracks them in real time at 60 fps and sticks 3D facial geometry onto them.

See Google's Face Detection Concepts Overview.


  2. Computing a depth channel from video shot with a moving RGB camera is not rocket science. You just need to apply a parallax formula to the tracked features: if the translation magnitude of a feature on a static object is very high, the tracked object is closer to the camera, and if the magnitude is very low, the object is farther from the camera. These approaches to computing a depth channel have been common in compositing applications such as The Foundry NUKE and Blackmagic Fusion for more than 10 years. Now the same principle is available in ARCore.
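The parallax idea above can be sketched numerically. Under a pinhole stereo model, depth is inversely proportional to disparity (how far a tracked feature shifts between two camera positions). This is a minimal illustration, not ARCore's actual implementation; the focal length and baseline values below are made-up numbers for the example:

```java
public class DisparityDepth {

    /**
     * Triangulated depth from a pinhole stereo model:
     * depth = focalLengthPx * baselineMeters / disparityPx.
     * A larger disparity (the feature moved more between views)
     * means the object is closer to the camera.
     */
    static double depthFromDisparity(double focalLengthPx,
                                     double baselineMeters,
                                     double disparityPx) {
        if (disparityPx <= 0) {
            throw new IllegalArgumentException("disparity must be positive");
        }
        return focalLengthPx * baselineMeters / disparityPx;
    }

    public static void main(String[] args) {
        double f = 1000.0; // hypothetical focal length, in pixels
        double b = 0.05;   // hypothetical camera translation, in meters
        // A feature that shifted 50 px is closer than one that shifted 5 px.
        System.out.println(depthFromDisparity(f, b, 50.0)); // 1.0 (meter)
        System.out.println(depthFromDisparity(f, b, 5.0));  // 10.0 (meters)
    }
}
```

This is exactly the "high translation magnitude = close, low magnitude = far" relationship described above, made explicit as an inverse proportion.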

  3. You can't make the face detection/tracking algorithm accept custom objects or other parts of the body, such as a hand. The Augmented Faces API was developed for faces only.

Here is what the Java code for activating the Augmented Faces feature looks like:

// Create an ARCore session that supports Augmented Faces
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {

    // Use the selfie (front-facing) camera
    Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));

    // Enable Augmented Faces
    Config config = session.getConfig();
    config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
    session.configure(config);
    return session;
}

Then get the list of detected faces:

Collection<AugmentedFace> faceList = session.getAllTrackables(AugmentedFace.class);

And finally render the effects:

for (AugmentedFace face : faceList) {

    // Create a face node and add it to the scene.
    AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
    faceNode.setParent(scene);

    // Overlay the 3D assets on the face
    faceNode.setFaceRegionsRenderable(faceRegionsRenderable);

    // Overlay a texture on the face
    faceNode.setFaceMeshTexture(faceMeshTexture);

    // .......
}
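Each AugmentedFace also exposes the 468-point mesh itself via getMeshVertices(), which returns a flat buffer of x, y, z coordinates. Below is a sketch of unpacking that layout; since the real buffer only exists inside a live ARCore session, this example fills a synthetic FloatBuffer as a stand-in:

```java
import java.nio.FloatBuffer;

public class MeshVertices {
    // The Augmented Faces canonical mesh has 468 vertices.
    static final int VERTEX_COUNT = 468;

    /** Unpack a flat (x, y, z, x, y, z, ...) buffer into a [468][3] array. */
    static float[][] unpack(FloatBuffer vertices) {
        float[][] points = new float[VERTEX_COUNT][3];
        for (int i = 0; i < VERTEX_COUNT; i++) {
            points[i][0] = vertices.get(3 * i);     // x
            points[i][1] = vertices.get(3 * i + 1); // y
            points[i][2] = vertices.get(3 * i + 2); // z
        }
        return points;
    }

    public static void main(String[] args) {
        // Synthetic stand-in for face.getMeshVertices()
        FloatBuffer buffer = FloatBuffer.allocate(VERTEX_COUNT * 3);
        for (int i = 0; i < VERTEX_COUNT * 3; i++) {
            buffer.put(i, i * 0.001f);
        }
        float[][] points = unpack(buffer);
        System.out.println(points.length); // 468
        System.out.println(points[1][0]);  // x of the second vertex
    }
}
```

These per-vertex positions (in the face's local coordinate space) are what let you pin custom geometry to specific spots on the mesh, beyond the predefined region anchors.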

Regarding "java - Augmented Faces API – How are facial landmarks generated?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54869965/
