
ios - Load a large 3D Object .scn file in ARSCNView and aspect-fit it to the screen (ARKit, Swift, iOS)


I am developing an ARKit application that works with a 3D model. I load the 3D model and have added gestures for moving, rotating, and scaling it.

Now I am facing just one issue, and I am not sure whether the problem lies in the 3D model itself or whether something is missing in my code.

The problem is that the 3D model I am using appears very large and extends beyond the screen. I am trying to reduce its size, but it is still huge.

Here is my code:

@IBOutlet var mySceneView: ARSCNView!
var selectedNode = SCNNode()
var prevLoc = CGPoint()
var touchCount : Int = 0

override func viewDidLoad() {
    super.viewDidLoad()
    self.lblTitle.text = self.sceneTitle

    // Load the .scn model and display it in the AR scene view.
    let mySCN = SCNScene(named: "art.scnassets/\(self.sceneImagename).scn")!
    self.mySceneView.scene = mySCN

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3Make(0, 0, 0)
    self.mySceneView.scene.rootNode.addChildNode(cameraNode)
    self.mySceneView.allowsCameraControl = true
    self.mySceneView.autoenablesDefaultLighting = true

    // Add tap and pan recognizers alongside the view's existing ones.
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(detailPage.doHandleTap(_:)))
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(detailPage.doHandlePan(_:)))
    let gesturesArray = NSMutableArray()
    gesturesArray.add(tapGesture)
    gesturesArray.add(panGesture)
    gesturesArray.addObjects(from: self.mySceneView.gestureRecognizers!)
    self.mySceneView.gestureRecognizers = (gesturesArray as! [UIGestureRecognizer])
}

// MARK: - Handle Gesture
@objc func doHandlePan(_ sender: UIPanGestureRecognizer) {
    var delta = sender.translation(in: self.view)
    let loc = sender.location(in: self.view)
    if sender.state == .began {
        self.prevLoc = loc
        self.touchCount = sender.numberOfTouches
    } else if sender.state == .changed {
        delta = CGPoint(x: loc.x - prevLoc.x, y: loc.y - prevLoc.y)
        prevLoc = loc
        // Ignore the update if the number of touches changed mid-gesture.
        if self.touchCount != sender.numberOfTouches {
            return
        }

        // Two-finger pan translates the node; one-finger pan rotates it.
        var rotMat = SCNMatrix4()
        if touchCount == 2 {
            rotMat = SCNMatrix4MakeTranslation(Float(delta.x * 0.025), Float(delta.y * -0.025), 0)
        } else {
            let rotMatX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0/100) * delta.y), 1, 0, 0)
            let rotMatY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0/100) * delta.x), 0, 1, 0)
            rotMat = SCNMatrix4Mult(rotMatX, rotMatY)
        }

        // Apply rotMat about the node's own position, expressed in the camera's
        // (pointOfView) orientation frame: strip the node's translation, move into
        // the parent's and camera's rotation frames, apply the delta, then undo
        // those changes in reverse order.
        let transMat = SCNMatrix4MakeTranslation(selectedNode.position.x, selectedNode.position.y, selectedNode.position.z)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(transMat))

        let parentNodeTransMat = SCNMatrix4MakeTranslation((selectedNode.parent?.worldPosition.x)!, (selectedNode.parent?.worldPosition.y)!, (selectedNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult(selectedNode.parent!.worldTransform, SCNMatrix4Invert(parentNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, parentNodeMatWOTrans)

        let camorbitNodeTransMat = SCNMatrix4MakeTranslation((self.mySceneView.pointOfView?.worldPosition.x)!, (self.mySceneView.pointOfView?.worldPosition.y)!, (self.mySceneView.pointOfView?.worldPosition.z)!)
        let camorbitNodeMatWOTrans = SCNMatrix4Mult(self.mySceneView.pointOfView!.worldTransform, SCNMatrix4Invert(camorbitNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(camorbitNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, rotMat)

        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, camorbitNodeMatWOTrans)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, transMat)
    }
}

@objc func doHandleTap(_ sender: UITapGestureRecognizer) {
    let p = sender.location(in: self.mySceneView)
    let hitResults = self.mySceneView.hitTest(p, options: nil)

    // Tapping near the top or right edge toggles the built-in camera control.
    if (p.x > self.mySceneView.frame.size.width - 100 || p.y < 100) {
        self.mySceneView.allowsCameraControl = !self.mySceneView.allowsCameraControl
    }

    if hitResults.count > 0 {
        let result = hitResults[0]
        let material = result.node.geometry?.firstMaterial
        selectedNode = result.node

        // Highlight the tapped node by animating its emission color.
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0.3

        SCNTransaction.completionBlock = {
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.3
            SCNTransaction.commit()
        }
        material?.emission.contents = UIColor.white
        SCNTransaction.commit()
    }
}

My question is:

Can we aspect-fit a 3D object model of any size so that it fits the screen and appears centered on it? Please suggest whether there is a way to do this.

Any guidance or suggestions would be greatly appreciated.

Best answer

What you need is to get the bounding sphere size using getBoundingSphereCenter; you can then project it onto the screen, or take the ratio of that radius to the distance between the SceneKit camera and the object's position. That way you will know how large the object appears on screen. To scale it down, you just set the object's scale property.
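
As an illustration, here is a minimal sketch (not part of the original answer) of scaling a loaded node so that its bounding sphere fits a chosen world-space size. In Swift, the Objective-C getBoundingSphereCenter call corresponds to the boundingSphere property; modelNode and targetSize are hypothetical names used only for this example.

import SceneKit

/// Scales `modelNode` so that the diameter of its bounding sphere roughly
/// matches `targetSize` (in scene units, i.e. meters in ARKit).
func scaleToFit(_ modelNode: SCNNode, targetSize: Float = 0.5) {
    // boundingSphere encloses the node's content, expressed in its local space.
    let (_, radius) = modelNode.boundingSphere
    guard radius > 0 else { return }

    let scale = targetSize / (radius * 2)
    modelNode.scale = SCNVector3(scale, scale, scale)
}

// Usage, assuming the model is the first child of the loaded scene's root node:
// scaleToFit(mySCN.rootNode.childNodes.first!, targetSize: 0.5)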

For the second part, you can use projectPoint.
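
A rough sketch of that idea, reusing mySceneView from the question and the hypothetical modelNode from above: project the bounding-sphere center and a point one radius away from it to estimate how large the model currently appears on screen (this assumes the node's ancestors are not scaled).

let (localCenter, radius) = modelNode.boundingSphere

// Convert the bounding-sphere center to world space, then to screen space.
let worldCenter = modelNode.convertPosition(localCenter, to: nil)
let screenCenter = mySceneView.projectPoint(worldCenter)

// Project a world-space point one (scaled) radius to the side of the center.
let worldEdge = SCNVector3(worldCenter.x + radius * modelNode.scale.x,
                           worldCenter.y,
                           worldCenter.z)
let screenEdge = mySceneView.projectPoint(worldEdge)

// Approximate on-screen radius; compare it with the view's bounds to decide
// whether the model still needs to be scaled down.
let onScreenRadius = abs(screenEdge.x - screenCenter.x)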

Regarding "ios - Load a large 3D Object .scn file in ARSCNView and aspect-fit it to the screen (ARKit, Swift, iOS)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57054418/
