
ios - AVMutableComposition is not orienting the video correctly


I've spent most of my day digging around StackOverflow, and while there are plenty of great posts on this topic, I haven't found a solution to my problem.

I'm writing a video file with AVAssetWriter without any problems. If I save the video file to my camera roll, it plays correctly and in the expected orientation. Here is how I set it up:

init(fileUrl: URL!, height: Int, width: Int) {

    // Setup the file writer instance
    fileWriter = try? AVAssetWriter(outputURL: fileUrl, fileType: AVFileType.mov)

    // Setup the video settings
    let videoOutputSettings: Dictionary<String, AnyObject> = [
        AVVideoCodecKey : AVVideoCodecType.hevc as AnyObject,
        AVVideoWidthKey : width as AnyObject,
        AVVideoHeightKey : height as AnyObject
    ]

    // Setup the attributes dictionary
    let sourcePixelBufferAttributesDictionary = [
        String(kCVPixelBufferPixelFormatTypeKey) : Int(kCVPixelFormatType_32BGRA),
        String(kCVPixelBufferWidthKey) : Int(width),
        String(kCVPixelBufferHeightKey) : Int(height),
        String(kCVPixelFormatOpenGLESCompatibility) : kCFBooleanTrue
    ] as [String : Any]

    // Setup the video input
    videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)

    // Data should be expected in real time
    videoInput.expectsMediaDataInRealTime = true

    // Perform transform
    videoInput.transform = CGAffineTransform(rotationAngle: CGFloat(CGFloat.pi / 2.0))

    // Setup pixel buffer input
    assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput,
                                                                       sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)

    // Add the input
    fileWriter.add(videoInput)
}

Then I want to use AVMutableComposition to save the video with an image overlay applied. That works fine, except that the video orientation is not correct:

func postProcessVideo(toFPS: Double, sourceVideo: URL, destination: URL, filterImage: UIImage?, completionHandler: @escaping (_ response: Bool) -> ()) {

    // Log
    print("Received call to begin post-processing video at:", sourceVideo)

    // Instantiate the AVMutableComposition
    let composition = AVMutableComposition()

    // Setup the video asset
    let vidAsset = AVURLAsset(url: sourceVideo, options: [:])

    // Get video track
    let vtrack = vidAsset.tracks(withMediaType: AVMediaType.video)

    // Setup the first video track as asset track
    let videoTrack: AVAssetTrack = vtrack[0]

    // Setup the video timerange
    let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)

    // Setup the composition video track
    let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID())!

    // Insert expected time range
    do {
        try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: kCMTimeZero)
    } catch {

    }

    // Setup the preferred transform
    compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

    // Update time scale
    let finalTimeScale: Int64 = vidAsset.duration.value * 3

    // Adjust video track duration
    compositionvideoTrack.scaleTimeRange(CMTimeRangeMake(kCMTimeZero, vidAsset.duration), toDuration: CMTimeMake(finalTimeScale, vidAsset.duration.timescale))

    // Setup effect size
    let size = videoTrack.naturalSize

    // Setup the image
    let imglogo = UIImage(named: "gif1.png")
    let imglayer = CALayer()
    imglayer.contents = imglogo?.cgImage
    imglayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    imglayer.opacity = 0.0

    // Setup the video layer
    let videolayer = CALayer()

    // Setup the video layer frame
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    // Setup the parent layer
    let parentlayer = CALayer()

    // Setup the parent layer frame
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    // Add video layer
    parentlayer.addSublayer(videolayer)

    // Add filter layer
    parentlayer.addSublayer(imglayer)

    // Setup the layer composition
    let layercomposition = AVMutableVideoComposition()

    // Setup the desired frame rate
    layercomposition.frameDuration = CMTimeMake(1, Int32(toFPS))

    // Setup the render size
    layercomposition.renderSize = size

    // Setup the animation tool
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    // Setup instruction for filter overlay
    let instruction = AVMutableVideoCompositionInstruction()

    // Setup the desired time range
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)

    // Setup video track
    let videotrack = composition.tracks(withMediaType: AVMediaType.video)[0]

    // Setup layer instruction
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    // Setup layer instructions
    instruction.layerInstructions = [layerinstruction]

    // Setup layer composition instructions
    layercomposition.instructions = [instruction]

    // Instantiate the asset export
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)

    // Setup the video composition
    assetExport?.videoComposition = layercomposition

    // Setup the output file type
    assetExport?.outputFileType = AVFileType.mov

    // Setup the destination
    assetExport?.outputURL = destination

    // Export video
    assetExport?.exportAsynchronously(completionHandler: {
        switch assetExport?.status {
        case .failed?:
            print("failed \(assetExport!.error)")
        case .cancelled?:
            print("cancelled \(assetExport!.error)")
        default:
            print("Movie complete")
            completionHandler(true)
        }
    })
}

Apologies for the length, but does anything stand out that could explain the orientation change during export?

Thanks!

Best answer

I ran into the same orientation problem and solved it like this:

AVMutableCompositionTrack *a_compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[a_compositionVideoTrack setPreferredTransform:CGAffineTransformRotate(CGAffineTransformMakeScale(-1, 1), M_PI)];

by rotating and scaling it. It's in Objective-C, but you can convert it easily. You only need to change this:

// Setup the preferred transform
compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

and instead of copying preferredTransform, set the transform manually.
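
For reference, a minimal Swift sketch of the Objective-C snippet above, assuming it replaces the composition-track setup and preferredTransform line inside postProcessVideo (whether this exact mirror-plus-rotation is the right transform depends on how the source video was captured):

// Create the composition video track with an explicit track ID
let compositionvideoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!

// Swift equivalent of CGAffineTransformRotate(CGAffineTransformMakeScale(-1, 1), M_PI):
// mirror horizontally, then rotate by pi radians, instead of copying videoTrack.preferredTransform
compositionvideoTrack.preferredTransform = CGAffineTransform(scaleX: -1, y: 1).rotated(by: .pi)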

Regarding ios - AVMutableComposition is not orienting the video correctly, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47745814/
