
swift - Saving an ARKit screenshot in an AR Resource Group


I'm trying to save a UIImage into an AR resource group at runtime so that it can be detected afterwards. You can read the group with referenceImagesInGroupNamed:bundle:, but there is no documented way to write to it.

I'm trying the following:

//step 1: list all reference images
print("BEFORE = ", ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? "None")

//step 2: take snapshot
guard let snapshot = self.viewController?.sceneView.snapshot(),
      let cgImage = snapshot.cgImage else { return }
//optional crops and image processing here

//step 3: add the snapshot to the group
let referenceImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: someComputedPhysicalWidth)
// ?? there is no API to write referenceImage back into the group

//step 4: list all reference images and ensure the new one is added
print("AFTER = ", ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? "None")

//step 5: relaunch the session so the new image can be detected right away
sceneView.session.pause()
configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil)!
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

Will whatever is stored in the group persist after the app is closed? Can I delete its contents? If write access isn't possible, I suppose I could save the parameters of the created referenceImage somewhere and rebuild the referenceImage when needed?

Best Answer

You can't modify the contents of the default resource group at runtime, but you can create reference images on the fly and access them later.

To create the images dynamically, you can use the following ARReferenceImage initializer:

ARReferenceImage.init(_ image: CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
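
For example, a minimal usage sketch (the 0.1 m physicalWidth is only a placeholder, not a measured value, and the name "runtimeSnapshot" is arbitrary):

// Minimal sketch: turn an ARSCNView snapshot into an ARReferenceImage at runtime.
// Assumption: the 0.1 m physicalWidth is a placeholder and must be replaced with a real measurement.
func makeReferenceImage(from sceneView: ARSCNView) -> ARReferenceImage? {
    let snapshot = sceneView.snapshot()
    guard let cgImage = snapshot.cgImage else { return nil }
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
    reference.name = "runtimeSnapshot"
    return reference
}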

To allow for persistence, e.g. accessing any saved snapshots later, you need to save them to the device and then load them from there.

Here is a basic implementation in which you take a snapshot of an ARSCNView and then load the saved snapshots dynamically.

The issue here (which you will need to work out) is determining the physicalSize of the ARReferenceImage, which has to be supplied in metres:

extension ViewController {

    //------------------------------------------------
    //MARK: Get CGImagePropertyOrientation From UIImage
    //------------------------------------------------


    /// Converts A UIImageOrientation To A CGImagePropertyOrientation
    ///
    /// - Parameter orientation: UIImageOrientation
    /// - Returns: CGImagePropertyOrientation
    func cgImagePropertyOrientation(_ orientation: UIImageOrientation) -> CGImagePropertyOrientation {
        switch orientation {
        case .up:
            return .up
        case .upMirrored:
            return .upMirrored
        case .down:
            return .down
        case .downMirrored:
            return .downMirrored
        case .leftMirrored:
            return .leftMirrored
        case .right:
            return .right
        case .rightMirrored:
            return .rightMirrored
        case .left:
            return .left
        }
    }

    //---------------------
    //MARK: File Management
    //---------------------

    /// Returns The Documents Directory
    ///
    /// - Returns: URL
    func getDocumentsDirectory() -> URL {

        let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
        let documentsDirectory = paths[0]
        return documentsDirectory

    }

}

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
        guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

        //2. Get The Target's Name
        let name = currentImageAnchor.referenceImage.name!

        //3. Get The Target's Width & Height
        let width = currentImageAnchor.referenceImage.physicalSize.width
        let height = currentImageAnchor.referenceImage.physicalSize.height

        //4. Log The Reference Image's Information
        print("""
        Image Name = \(name)
        Image Width = \(width)
        Image Height = \(height)
        """)

        //5. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.25
        planeNode.geometry = planeGeometry

        //6. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //7. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //8. Create An SCNBox
        let boxNode = SCNNode()
        let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

        //9. Create A Different Colour For Each Face
        let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
        var faceMaterials = [SCNMaterial]()

        //10. Apply One To Each Of The Box's Six Faces
        for face in 0 ..< 6 {
            let material = SCNMaterial()
            material.diffuse.contents = faceColours[face]
            faceMaterials.append(material)
        }
        boxGeometry.materials = faceMaterials
        boxNode.geometry = boxGeometry

        //11. Set The Box's Position So It Rests On The Plane (y = box height / 2)
        boxNode.position = SCNVector3(0, 0.05, 0)

        //12. Add The Box To The Node
        node.addChildNode(boxNode)
    }
}

class ViewController: UIViewController {

    //1. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //2. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()

    //3. Create Our Session
    let augmentedRealitySession = ARSession()

    //4. Create An Array To Store Our Reference Images
    var customReferenceImages = [ARReferenceImage]()

    //5. Create An Identifier So We Can Create A Unique Name For Each Image
    var identifier = 0

    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {

        setupARSession()

        super.viewDidLoad()

    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()

    }

    //--------------------------------
    //MARK: Creation Of Dynamic Images
    //--------------------------------

    /// Saves A Snapshot Of The ARSCNView
    @IBAction func saveScreenShot(){

        //1. Create A Snapshot Of The ARView
        let screenShot = self.augmentedRealityView.snapshot()

        //2. Convert It To A PNG
        guard let imageData = UIImagePNGRepresentation(screenShot) else { return }

        //3. Store The File In The Documents Directory
        let fileURL = getDocumentsDirectory().appendingPathComponent("custom\(identifier).png")

        //4. Write It To The Documents Directory & Increase The Identifier
        do {
            try imageData.write(to: fileURL)
            identifier += 1
        } catch {
            print("Error Saving File")
        }

        //5. Load The Custom Images
        loadCustomImages()
    }


    /// Loads Any Custom Images From The Documents Directory & Appends Them To A Custom [ARReferenceImage]
    func loadCustomImages(){

        //1. Get A Reference To The FileManager
        let fileManager = FileManager.default

        //2. Get The URL Of The Documents Directory
        let documentsDirectory = getDocumentsDirectory()

        do {

            //a. Get All Files In The Documents Directory
            let fileURLs = try fileManager.contentsOfDirectory(at: documentsDirectory, includingPropertiesForKeys: nil)

            //b. Loop Through Them And If The Path Contains Our Custom Prefix Then Convert To CGImage & Then ARReferenceImage
            for file in fileURLs {

                if file.lastPathComponent.hasPrefix("custom") {

                    if let arImage = UIImage(contentsOfFile: file.path), let arCGImage = arImage.cgImage {

                        /* Here You Will Need To Work Out The Physical Width Of The Image In Metres */

                        let widthInCM: CGFloat = CGFloat(arCGImage.width) / CGFloat(47)
                        let widthInMetres: CGFloat = widthInCM * 0.01

                        let arReferenceImage = ARReferenceImage(arCGImage,
                                                                orientation: cgImagePropertyOrientation(arImage.imageOrientation),
                                                                physicalWidth: widthInMetres)

                        arReferenceImage.name = file.lastPathComponent

                        customReferenceImages.append(arReferenceImage)
                    }
                }
            }

        } catch {

            print("Error Listing Files \(documentsDirectory.path): \(error.localizedDescription)")
        }


        //3. Set Our ARSession Configuration Detection Images
        configuration.detectionImages = Set(customReferenceImages)
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])


    }

    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The ARSession
    func setupARSession(){

        //1. Set The AR Session
        augmentedRealityView.session = augmentedRealitySession

        //2. Configure The Type Of Plane Detection
        configuration.planeDetection = []

        //3. If In Debug Mode Show Statistics
        #if DEBUG
        augmentedRealityView.showsStatistics = true
        #endif

        //4. Run The Session
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        augmentedRealityView.delegate = self

    }

}
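
If you also want the images already bundled in the "AR Resources" group to remain detectable, one option (a sketch, not part of the implementation above) is to merge the bundled set with the runtime images; and since the snapshots are ordinary files in the Documents directory, deleting them again is a plain FileManager call. A small sketch, assuming the same "custom" filename prefix used above:

extension ViewController {

    /// Sketch: combines the bundled "AR Resources" group (if any) with the runtime images.
    func allDetectionImages() -> Set<ARReferenceImage> {
        let bundled = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? []
        return bundled.union(customReferenceImages)
    }

    /// Sketch: deletes any previously saved "custom*.png" snapshots from the Documents directory.
    func removeCustomImages() {
        let fileManager = FileManager.default
        guard let files = try? fileManager.contentsOfDirectory(at: getDocumentsDirectory(), includingPropertiesForKeys: nil) else { return }
        for file in files where file.lastPathComponent.hasPrefix("custom") {
            try? fileManager.removeItem(at: file)
        }
        customReferenceImages.removeAll()
    }
}

You could then use configuration.detectionImages = allDetectionImages() in place of Set(customReferenceImages) in loadCustomImages().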

This example works well from a practical standpoint and as a quick proof of concept, although as I noted, you will need to look into how to correctly determine the size of the dynamically created reference images.
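
One rough heuristic for that (purely an assumption, not something ARKit gives you): if the snapshot roughly fills the camera image and you know the distance to the surface you captured (for example from a hit test against a detected plane), the real-world width it covers is approximately 2 * distance * tan(horizontalFOV / 2):

// Rough sketch: estimate the real-world width covered by a full-frame snapshot.
// Assumptions: the snapshot spans the full camera image, and distanceToSurface
// (in metres) is known, e.g. from a hit test against a detected plane.
func estimatedPhysicalWidth(distanceToSurface: Float, in sceneView: ARSCNView) -> CGFloat? {
    guard let camera = sceneView.session.currentFrame?.camera else { return nil }
    let focalLengthX = camera.intrinsics[0][0]              // focal length in pixels
    let imageWidth = Float(camera.imageResolution.width)    // capture width in pixels
    let halfFOV = atan(imageWidth / (2 * focalLengthX))     // half the horizontal field of view
    return CGFloat(2 * distanceToSurface * tan(halfFOV))
}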

Hope it helps.

Regarding "swift - Saving an ARKit screenshot in an AR Resource Group", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50236422/
