ios - How do I position a CALayer over a video?


I have a UIView (size: W: 375, H: 667) in which an image can be placed anywhere. Later that image is overlaid on a video and saved. My problem is that when I play the exported video, the image is not at the position I chose in the UIView, because my video's size is 720 x 1280. How can I map the image's position in the UIView to the corresponding position inside the 720 x 1280 video? Here is the code I'm using:

private func watermark(video videoAsset: AVAsset,
                       modelView: MyViewModel,
                       watermarkText text: String!,
                       imageName name: String!,
                       saveToLibrary flag: Bool,
                       watermarkPosition position: QUWatermarkPosition,
                       completion: ((_ status: AVAssetExportSession.Status?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {

    DispatchQueue.global(qos: .default).async {

        let mixComposition = AVMutableComposition()

        let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                                                   preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        let clipVideoTrack: AVAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
        do {
            try compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration),
                                                       of: clipVideoTrack,
                                                       at: CMTime.zero)
        } catch {
            print(error.localizedDescription)
        }

        let videoSize = self.resolutionSizeForLocalVideo(asset: clipVideoTrack)
        print("VIDEO SIZE W: \(videoSize.width) H: \(videoSize.height)")

        let parentLayer = CALayer()
        let videoLayer = CALayer()

        parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
        videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)

        parentLayer.addSublayer(videoLayer)

        // My image layer
        let layerTest = CALayer()

        layerTest.frame = modelView.frame
        layerTest.contents = modelView.image.cgImage

        print("A: \(modelView.frame.origin.y) - \(modelView.frame.origin.x)")
        print("B: \(layerTest.frame.origin.y) - \(layerTest.frame.origin.x)")
        parentLayer.addSublayer(layerTest)

        print("PARENT: \(parentLayer.frame.origin.y) - \(parentLayer.frame.origin.x)")
        //------------------------

        let videoComp = AVMutableVideoComposition()
        videoComp.renderSize = videoSize
        videoComp.frameDuration = CMTimeMake(value: 1, timescale: 30)
        videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: mixComposition.duration)

        let layerInstruction = self.videoCompositionInstructionForTrack(track: compositionVideoTrack!, asset: videoAsset)
        layerInstruction.setTransform(clipVideoTrack.preferredTransform, at: CMTime.zero)

        instruction.layerInstructions = [layerInstruction]
        videoComp.instructions = [instruction]

        let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let dateFormatter = DateFormatter()
        dateFormatter.dateStyle = .long
        dateFormatter.timeStyle = .short
        let date = dateFormatter.string(from: Date())
        let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("watermarkVideo-\(date).mp4")

        let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
        exporter?.outputURL = url
        exporter?.outputFileType = AVFileType.mp4
        exporter?.shouldOptimizeForNetworkUse = true
        exporter?.videoComposition = videoComp

        exporter?.exportAsynchronously() {
            DispatchQueue.main.async {
                if exporter?.status == AVAssetExportSession.Status.completed {
                    let outputURL = exporter?.outputURL
                    if flag {
                        // Save to library
                        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL!.path) {
                            PHPhotoLibrary.shared().performChanges({
                                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
                            }) { saved, error in
                                if saved {
                                    completion!(AVAssetExportSession.Status.completed, exporter, outputURL)
                                }
                            }
                        }
                    } else {
                        completion!(AVAssetExportSession.Status.completed, exporter, outputURL)
                    }
                } else {
                    // Error
                    completion!(exporter?.status, exporter, nil)
                }
            }
        }
    }
}

private func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let assetTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
    let scale: CGAffineTransform = CGAffineTransform(scaleX: 1, y: 1)
    instruction.setTransform(assetTrack.preferredTransform.concatenating(scale), at: CMTime.zero)
    return instruction
}
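
For reference, the resolutionSizeForLocalVideo(asset:) helper called above is not shown in the question. A minimal sketch of what it presumably does (an assumption, not the asker's actual code) is to read the track's naturalSize and apply its preferredTransform so that rotated (portrait) videos report e.g. 720 x 1280 rather than 1280 x 720:

// Hypothetical reconstruction of the missing helper: returns the display
// resolution of a video track, with rotation from the preferred transform
// taken into account.
private func resolutionSizeForLocalVideo(asset track: AVAssetTrack) -> CGSize {
    let size = track.naturalSize.applying(track.preferredTransform)
    // The transform can produce negative components; use absolute values.
    return CGSize(width: abs(size.width), height: abs(size.height))
}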

This is what I'm trying to achieve: [screenshot from the original question omitted]

Best Answer

The answer to this question may help. I ran into a similar problem when trying to place user-generated text over a video. Here's what worked for me:

First, I added a helper method to convert a CGPoint from one rect to another:

func convertPoint(point: CGPoint, fromRect: CGRect, toRect: CGRect) -> CGPoint {
    return CGPoint(x: (toRect.size.width / fromRect.size.width) * point.x,
                   y: (toRect.size.height / fromRect.size.height) * point.y)
}
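
As a quick sanity check with the sizes from the question (the sample point is mine, purely illustrative), a point in the 375 x 667 view scales by 720/375 = 1.92 horizontally and 1280/667 ≈ 1.919 vertically:

// Worked example: (100, 200) in the 375 x 667 view lands at
// roughly (192, 383.8) in the 720 x 1280 render space.
let p = convertPoint(point: CGPoint(x: 100, y: 200),
                     fromRect: CGRect(x: 0, y: 0, width: 375, height: 667),
                     toRect: CGRect(x: 0, y: 0, width: 720, height: 1280))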

I positioned the text view (in your case, the image view) by its center point. Here's how to compute the adjusted center point using the helper method:

let adjustedCenter = convertPoint(point: imageView.center, fromRect: view.frame, toRect: CGRect(x: 0, y: 0, width: 720.0, height: 1280.0))

After that I had to do some extra positioning, because the CALayer coordinate system is flipped, so this is what the final point might look like:

let finalCenter = CGPoint(x: adjustedCenter.x, y: (1280.0 - adjustedCenter.y) - (imageView.bounds.height / 2.0))

Then you can set the CALayer's position property to that point:

layerTest.position = finalCenter
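
Applied to the code in the question, the idea looks roughly like this (a sketch, assuming the containing UIView is 375 x 667, videoSize is 720 x 1280, and layerTest keeps its default anchorPoint of (0.5, 0.5), so position refers to the layer's center). Scaling x and y independently is safe here because 375 x 667 and 720 x 1280 have nearly identical aspect ratios (≈ 0.562 vs. 0.5625):

// Sketch: replace `layerTest.frame = modelView.frame` with a scaled
// size plus a converted, y-flipped center position.
let viewRect = CGRect(x: 0, y: 0, width: 375, height: 667)
let renderRect = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)

// Scale the layer's size by the same factors used for the position.
layerTest.bounds = CGRect(x: 0, y: 0,
                          width: modelView.frame.width * (renderRect.width / viewRect.width),
                          height: modelView.frame.height * (renderRect.height / viewRect.height))

// Convert the view-space center into render space, then flip the
// y-axis because the composition's layer coordinates run bottom-up.
let viewCenter = CGPoint(x: modelView.frame.midX, y: modelView.frame.midY)
let adjustedCenter = convertPoint(point: viewCenter, fromRect: viewRect, toRect: renderRect)
layerTest.position = CGPoint(x: adjustedCenter.x,
                             y: renderRect.height - adjustedCenter.y)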

Hope this helps!

Regarding "ios - How do I position a CALayer over a video?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/55633586/
