
ios - Getting the first video track multiple times when trying to combine three different video tracks into the same frame with AVFoundation


I want to combine multiple videos, together with their audio, into a single video frame, and for this I am using the AVFoundation framework.

To do this I created a method that accepts an array of assets; at the moment I am passing in the assets of three different videos.

So far I have merged their audio successfully, but the problem is the video frame: only the video of the first asset shows up, repeated in every slot of the frame.

[Image: current output video]

I am using the code below to combine the videos. It merges the audio of all three videos perfectly, but the first video in the input array is repeated three times, which is the main problem:

I want all three different videos in the frame.

func merge(Videos aArrAssets: [AVAsset]) {

    let mixComposition = AVMutableComposition()

    func setup(asset aAsset: AVAsset, WithComposition aComposition: AVMutableComposition) -> AVAssetTrack {

        let aMutableCompositionVideoTrack = aComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
        let aMutableCompositionAudioTrack = aComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

        let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]
        let aAudioAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .audio)[0]

        do {
            try aMutableCompositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aVideoAssetTrack, at: .zero)
            try aMutableCompositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aAudioAssetTrack, at: .zero)
        } catch {}

        return aVideoAssetTrack
    }

    let aArrVideoTracks = aArrAssets.map { setup(asset: $0, WithComposition: mixComposition) }

    var aArrLayerInstructions: [AVMutableVideoCompositionLayerInstruction] = []

    // Transform every video
    var aNewHeight: CGFloat = 0
    for (aIndex, aTrack) in aArrVideoTracks.enumerated() {

        aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0

        let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
        let aFristTransform = CGAffineTransform(translationX: 0, y: aNewHeight)

        aLayerInstruction.setTransform(aFristTransform, at: .zero)
        aArrLayerInstructions.append(aLayerInstruction)
    }

    let aTotalTime = aArrVideoTracks.map { $0.timeRange.duration }.max()

    let aInstruction = AVMutableVideoCompositionInstruction()
    aInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: aTotalTime!)
    aInstruction.layerInstructions = aArrLayerInstructions

    let aVideoComposition = AVMutableVideoComposition()
    aVideoComposition.instructions = [aInstruction]
    aVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)

    let aTotalWidth = aArrVideoTracks.map { $0.naturalSize.width }.max()!
    let aTotalHeight = aArrVideoTracks.map { $0.naturalSize.height }.reduce(0) { $0 + $1 }
    aVideoComposition.renderSize = CGSize(width: aTotalWidth, height: aTotalHeight)

    saveVideo(WithAsset: mixComposition, videoComp: aVideoComposition) { (aError, aUrl) in
        print("Location : \(String(describing: aUrl))")
    }
}

private func saveVideo(WithAsset aAsset: AVAsset, videoComp: AVVideoComposition, completion: @escaping (_ error: Error?, _ url: URL?) -> Void) {

    let dateFormatter = DateFormatter()
    dateFormatter.dateFormat = "ddMMyyyy_HHmm"
    let date = dateFormatter.string(from: NSDate() as Date)

    // Exporting
    let savePathUrl: URL = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/newVideo_\(date).mov")
    do { // delete old video
        try FileManager.default.removeItem(at: savePathUrl)
    } catch { print(error.localizedDescription) }

    let assetExport: AVAssetExportSession = AVAssetExportSession(asset: aAsset, presetName: AVAssetExportPresetMediumQuality)!
    assetExport.outputFileType = .mov
    assetExport.outputURL = savePathUrl
    // assetExport.shouldOptimizeForNetworkUse = true
    assetExport.videoComposition = videoComp

    assetExport.exportAsynchronously { () -> Void in
        switch assetExport.status {
        case .completed:
            print("success")
            completion(nil, savePathUrl)
        case .failed:
            print("failed \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        case .cancelled:
            print("cancelled \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        default:
            print("complete")
            completion(assetExport.error, nil)
        }
    }
}

I know I am doing something wrong in this code but I cannot figure out where, so I need some help spotting it.

Thanks in advance.

Best Answer

Your problem is that when you build the AVMutableVideoCompositionLayerInstruction, the aTrack reference refers to the original asset track that you set up here:

let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]

Its trackID is 1 because it is the first track in its source AVAsset. So if you inspect your aArrLayerInstructions, you will see that the trackID of every instruction is 1. That is why you get the first video three times:

(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R8 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R10 = 1
...
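
If you want to confirm this without dropping into lldb, a minimal sketch that prints the same information from inside merge(Videos:), placed right after the transform loop, could be:

// With the original loop over aArrVideoTracks this prints trackID 1 three times,
// because every layer instruction points at the first track of its source asset.
for aLayerInstruction in aArrLayerInstructions {
    print("layer instruction targets trackID \(aLayerInstruction.trackID)")
}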

The solution is to enumerate the composition's tracks, not the source tracks, when you build the composition layer instructions:

let tracks = mixComposition.tracks(withMediaType: .video)
for (aIndex, aTrack) in tracks.enumerated() {
    ...

If you do that, you get the correct trackIDs for your layer instructions:

(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R2 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R4 = 3
...
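
Putting the fix together, one way the corrected transform loop could look (a sketch using the question's variable names; the sizes are still read from the source tracks returned by setup, only the track passed to each layer instruction changes):

// Build the layer instructions from the composition's own video tracks so that each
// instruction carries that composition track's trackID (the source asset track is always ID 1).
let aCompositionVideoTracks = mixComposition.tracks(withMediaType: .video)

var aArrLayerInstructions: [AVMutableVideoCompositionLayerInstruction] = []
var aNewHeight: CGFloat = 0

for (aIndex, aTrack) in aCompositionVideoTracks.enumerated() {
    // Shift each video down by the accumulated height of the videos above it,
    // still using the source tracks (aArrVideoTracks) for the sizes.
    aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0

    let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
    aLayerInstruction.setTransform(CGAffineTransform(translationX: 0, y: aNewHeight), at: .zero)
    aArrLayerInstructions.append(aLayerInstruction)
}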

A similar question about "ios - Getting the first video track multiple times when trying to combine three different video tracks into the same frame with AVFoundation" can be found on Stack Overflow: https://stackoverflow.com/questions/53679676/
