
ios - AVFoundation, DataMatrix, Swift 4


Ah. My struggle with the changes Swift 4 made to AVFoundation continues.

I have a DataMatrix "QR" code that I need to read.

Compiled with Swift 3 it reads just fine, but with the changes made to the code for Swift 4 it no longer does.

Note also that the sample Apple provides, which is supposed to work with Swift 4, does not read DataMatrix codes either.

When I print out the available types, DataMatrix is among them.

print("types available:\n \(metadataOutput.availableMetadataObjectTypes)")

yields:
types available:
[__ObjC.AVMetadataObject.ObjectType(_rawValue: face), ...
__ObjC.AVMetadataObject.ObjectType(_rawValue: org.iso.DataMatrix), ...
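
For reference, the same availability check can be done in code rather than by reading the printed list. A minimal sketch, assuming the metadataOutput variable from the setup code below and that the output has already been added to the session:

// Only subscribe to DataMatrix if the output actually reports it as available.
if metadataOutput.availableMetadataObjectTypes.contains(.dataMatrix) {
    metadataOutput.metadataObjectTypes = [.dataMatrix]
} else {
    print("DataMatrix is not reported as available on this output")
}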

However, when I run the code, didOutput metadataObjects: never gets called for the DataMatrix. It does get called for other code types, though.

Also, explicitly adding the type:
metadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.dataMatrix]

makes no difference.

Does anyone have experience scanning DataMatrix codes in Swift 4?

Code:
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var videoCaptureDevice: AVCaptureDevice = AVCaptureDevice.default(for: AVMediaType.video)!
    var device = AVCaptureDevice.default(for: AVMediaType.video)
    var output = AVCaptureMetadataOutput()
    var previewLayer: AVCaptureVideoPreviewLayer?

    var captureSession = AVCaptureSession()
    var code: String?

    var scannedCode = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()

        self.setupCamera()
        self.addLabelForDisplayingCode()
    }

    private func setupCamera() {

        let input = try? AVCaptureDeviceInput(device: videoCaptureDevice)

        if self.captureSession.canAddInput(input!) {
            self.captureSession.addInput(input!)
        }

        self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

        if let videoPreviewLayer = self.previewLayer {
            videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer.frame = self.view.bounds
            view.layer.addSublayer(videoPreviewLayer)
        }

        let metadataOutput = AVCaptureMetadataOutput()
        if self.captureSession.canAddOutput(metadataOutput) {
            self.captureSession.addOutput(metadataOutput)

            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)

            print("types available:\n \(metadataOutput.availableMetadataObjectTypes)")

            metadataOutput.metadataObjectTypes = metadataOutput.availableMetadataObjectTypes
            // metadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.dataMatrix]
        } else {
            print("Could not add metadata output")
        }
    }

    private func addLabelForDisplayingCode() {
        view.addSubview(scannedCode)
        scannedCode.translatesAutoresizingMaskIntoConstraints = false
        scannedCode.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -20.0).isActive = true
        scannedCode.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 20.0).isActive = true
        scannedCode.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -20.0).isActive = true
        scannedCode.heightAnchor.constraint(equalToConstant: 50).isActive = true
        scannedCode.font = UIFont.preferredFont(forTextStyle: .title2)
        scannedCode.backgroundColor = UIColor.black.withAlphaComponent(0.5)
        scannedCode.textAlignment = .center
        scannedCode.textColor = UIColor.white
        scannedCode.text = "Scanning...."
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        if captureSession.isRunning == false {
            captureSession.startRunning()
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        if captureSession.isRunning == true {
            captureSession.stopRunning()
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        print(metadataObjects)
        for metadata in metadataObjects {
            let readableObject = metadata as! AVMetadataMachineReadableCodeObject
            let code = readableObject.stringValue
            scannedCode.text = code
        }
    }
}

Many thanks.

Best Answer

You have to make sure the connection is not mirrored.
A DataMatrix code needs to be read in its original, un-mirrored form.

https://developer.apple.com/documentation/avfoundation/avcaptureconnection/1389172-isvideomirrored
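
A minimal sketch of what that can look like in the asker's setupCamera(), after the metadata output has been added to the session; it assumes the metadataOutput and previewLayer names from the question, and which connection actually needs un-mirroring may depend on the camera configuration:

// Disable mirroring on the connection(s) feeding the metadata output.
// automaticallyAdjustsVideoMirroring must be switched off before
// isVideoMirrored can be set explicitly.
for connection in metadataOutput.connections where connection.isVideoMirroringSupported {
    connection.automaticallyAdjustsVideoMirroring = false
    connection.isVideoMirrored = false
}

// The preview layer has its own connection, which can be un-mirrored the
// same way if the on-screen image is flipped.
if let previewConnection = previewLayer?.connection, previewConnection.isVideoMirroringSupported {
    previewConnection.automaticallyAdjustsVideoMirroring = false
    previewConnection.isVideoMirrored = false
}

Setting isVideoMirrored while automaticallyAdjustsVideoMirroring is still enabled raises an exception, hence the order above.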

Regarding ios - AVFoundation, DataMatrix, Swift 4, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46737504/
