
ios - Converting ARFrame#capturedImage to view size


When processing the raw camera image in ARKit via an ARSessionDelegate...

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Use the frame delivered to the delegate instead of re-reading session.currentFrame
    let capturedImage = frame.capturedImage

    debugPrint("Display size", UIScreen.main.bounds.size)
    debugPrint("Camera frame resolution", CVPixelBufferGetWidth(capturedImage), CVPixelBufferGetHeight(capturedImage))

    // ...
}

...as the documentation states, the camera image data does not match the screen size; for example, on an iPhone X I get (the two aspect ratios are compared in the quick check below):

  • Display size: 375×812 pt
  • Camera resolution: 1920×1440 px
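
For illustration, here is a quick check of the two aspect ratios using the numbers above (this snippet is not part of the original question):

import CoreGraphics

let displaySize = CGSize(width: 375, height: 812)  // points, portrait UI
let cameraSize = CGSize(width: 1920, height: 1440) // pixels, landscape sensor orientation

debugPrint("Display aspect", displaySize.width / displaySize.height) // ≈0.46
debugPrint("Camera aspect", cameraSize.height / cameraSize.width)    // 0.75 once rotated to portrait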

Now there is the displayTransform(for:viewportSize:) API for converting camera coordinates to view coordinates. When I use it like this:

let ciimage = CIImage(cvImageBuffer: capturedImage)
let transform = frame.displayTransform(for: .portrait, viewportSize: UIScreen.main.bounds.size)
let transformedImage = ciimage.transformed(by: transform)
debugPrint("Transformed size", transformedImage.extent.size)

I get a size of 2340×1920, which seems wrong: the result should have the aspect ratio 375:812 (≈0.46). What am I missing here, and what is the correct way to use this API to transform the camera image into the image displayed by ARSCNView?

(Sample project: ARKitCameraImage)

Best Answer

This turned out to be surprisingly complicated, because displayTransform(for:viewportSize:) expects normalized image coordinates, the coordinates apparently only need to be flipped in portrait mode, and the image needs not only to be transformed but also cropped. The code below works for me; suggestions for improving it are welcome.

guard let frame = session.currentFrame else { return }
let imageBuffer = frame.capturedImage

let imageSize = CGSize(width: CVPixelBufferGetWidth(imageBuffer), height: CVPixelBufferGetHeight(imageBuffer))
let viewPort = sceneView.bounds
let viewPortSize = sceneView.bounds.size

// Determine the current interface orientation
// (UIApplication.statusBarOrientation is deprecated as of iOS 13)
let interfaceOrientation: UIInterfaceOrientation
if #available(iOS 13.0, *) {
    interfaceOrientation = self.sceneView.window!.windowScene!.interfaceOrientation
} else {
    interfaceOrientation = UIApplication.shared.statusBarOrientation
}

let image = CIImage(cvImageBuffer: imageBuffer)

// The camera image doesn't match the view rotation and aspect ratio
// Transform the image:

// 1) Convert to "normalized image coordinates"
let normalizeTransform = CGAffineTransform(scaleX: 1.0/imageSize.width, y: 1.0/imageSize.height)

// 2) Flip the Y axis (for some mysterious reason this is only necessary in portrait mode)
let flipTransform = (interfaceOrientation.isPortrait) ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1) : .identity

// 3) Apply the transformation provided by ARFrame
// This transformation converts:
// - From Normalized image coordinates (Normalized image coordinates range from (0,0) in the upper left corner of the image to (1,1) in the lower right corner)
// - To view coordinates ("a coordinate space appropriate for rendering the camera image onscreen")
// See also: https://developer.apple.com/documentation/arkit/arframe/2923543-displaytransform

let displayTransform = frame.displayTransform(for: interfaceOrientation, viewportSize: viewPortSize)

// 4) Convert to view size
let toViewPortTransform = CGAffineTransform(scaleX: viewPortSize.width, y: viewPortSize.height)

// Transform the image and crop it to the viewport
let transformedImage = image
    .transformed(by: normalizeTransform
        .concatenating(flipTransform)
        .concatenating(displayTransform)
        .concatenating(toViewPortTransform))
    .cropped(to: viewPort)
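
To actually put the result on screen, the cropped CIImage still has to be rendered. Here is a minimal sketch, assuming a reusable CIContext and a UIImageView named imageView (both are assumptions, not part of the original answer):

import UIKit
import CoreImage

// Reuse one CIContext; creating a new context per frame is expensive.
let ciContext = CIContext()

if let cgImage = ciContext.createCGImage(transformedImage, from: transformedImage.extent) {
    // Wrap the rendered CGImage and hand it to the (assumed) image view.
    imageView.image = UIImage(cgImage: cgImage)
}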

Regarding "ios - Converting ARFrame#capturedImage to view size", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58809070/
