ios - Adding an overlay to a video in Swift 3


I'm learning AVFoundation and I'm having trouble saving a video with an overlay image in Swift 3. Using AVMutableComposition I'm able to add the image to the video, but the video is zoomed in and not constrained to the portrait dimensions the video was taken in. I've tried:

  • Setting the natural size through the AVAssetTrack.
  • Constraining the video to portrait dimensions in the AVMutableVideoComposition render frame.
  • Locking the new video's bounds to the recorded video's width and height.
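A common source of the zoomed/mis-sized result is that a track's naturalSize reports the pre-rotation (landscape) sensor dimensions even for portrait recordings; the displayed size has to be derived by applying the track's preferredTransform. As a minimal sketch (orientedSize is a hypothetical helper, not part of the question's code):

```swift
import Foundation

// Hypothetical helper: the size a track actually displays at once its
// preferredTransform (e.g. a 90° rotation for portrait capture) is
// applied. naturalSize alone reports landscape sensor dimensions.
func orientedSize(naturalSize: CGSize, transform: CGAffineTransform) -> CGSize {
    // Transform the track's bounding rect and take the absolute
    // extents, since a rotation can produce negative coordinates.
    let rect = CGRect(origin: .zero, size: naturalSize).applying(transform)
    return CGSize(width: abs(rect.width), height: abs(rect.height))
}
```

For a 1920×1080 track rotated 90°, this yields a 1080×1920 portrait size, which is what the render size and layer frames should be based on.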

Nothing else in the code below is related to the issue I need help with. The image I'm adding covers the entire portrait view and has a border around all the edges. The app also only allows portrait orientation.

func processVideoWithWatermark(video: AVURLAsset, watermark: UIImage, completion: @escaping (Bool) -> Void) {

    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: video.url, options: nil)

    let track = asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = track[0] as AVAssetTrack
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    // let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
    //
    // for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
    //     do {
    //         try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
    //     } catch {
    //         print(error)
    //     }
    // }

    let size = videoTrack.naturalSize

    let watermark = watermark.cgImage
    let watermarklayer = CALayer()
    watermarklayer.contents = watermark
    watermarklayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)
    watermarklayer.opacity = 1

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(watermarklayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = CGSize(width: screenWidth, height: screenHeight)
    layercomposition.renderScale = 1.0
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)

    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)

    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    let filePath = NSTemporaryDirectory() + self.fileName()
    let movieUrl = URL(fileURLWithPath: filePath)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = movieUrl

    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case .completed:
            print("success")
            print(video.url)
            self.saveVideoToUserLibrary(fileURL: movieUrl, completion: { (success, error) in
                if success {
                    completion(true)
                } else {
                    completion(false)
                }
            })
        case .cancelled:
            print("cancelled")
        case .exporting:
            print("exporting")
        case .failed:
            print(video.url)
            print("failed: \(assetExport.error!)")
        case .unknown:
            print("unknown")
        case .waiting:
            print("waiting")
        }
    })

}

Best Answer

Your videoLayer's frame is incorrect if the video layer is supposed to fill the parent layer. You need to set its size to size (the track's naturalSize) rather than the screen size.
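Applied to the question's code, that means sizing every layer and the render size in video pixels rather than UI points (a minimal sketch reusing the names size, videolayer, watermarklayer, and layercomposition from the question):

```swift
// The export pipeline works in video pixels, not screen points, so
// drive all frames from the track's naturalSize instead of the screen.
videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
watermarklayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

// Match the render size to the same dimensions so the exported video
// is not scaled to the screen's dimensions.
layercomposition.renderSize = size
```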

For "ios - Adding an overlay to a video in Swift 3", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45535896/
