
iphone - How to combine video clips with different orientations using AVFoundation

Reposted. Author: IT王子. Updated: 2023-10-29 07:51:47

I'm trying to combine multiple video clips into one using AVFoundation. I can create a single video with AVMutableComposition using the code below:

AVMutableComposition *composition = [AVMutableComposition composition];

AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

CMTime startTime = kCMTimeZero;
BOOL ok = NO;

/* videoClipPaths is an array of paths to the recorded video clips */

// Loop to combine the clips into a single video
for (NSInteger i = 0; i < [videoClipPaths count]; i++) {
    NSString *path = (NSString *)[videoClipPaths objectAtIndex:i];
    NSURL *url = [[NSURL alloc] initFileURLWithPath:path];

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    [url release];

    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    // Set the orientation from the first clip
    if (i == 0) {
        [compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
    }

    ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:videoTrack atTime:startTime error:nil];
    ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:audioTrack atTime:startTime error:nil];

    startTime = CMTimeAdd(startTime, [asset duration]);
}

// Export the combined video
NSString *combinedPath = /* path of the combined video */;
NSURL *url = [[NSURL alloc] initFileURLWithPath:combinedPath];

AVAssetExportSession *exporter = [[[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPreset640x480] autorelease];
exporter.outputURL = url;
[url release];

exporter.outputFileType = [[exporter supportedFileTypes] objectAtIndex:0];

[exporter exportAsynchronouslyWithCompletionHandler:^(void) {
    [self combineVideoFinished:exporter.outputURL status:exporter.status error:exporter.error];
}];

The code above works fine if all the video clips were recorded in the same orientation (portrait or landscape). However, if the clips have mixed orientations, part of the final video is rotated 90 degrees to the right (or left).

I'm wondering whether there is a way to transform all the clips to the same orientation (e.g. the orientation of the first clip) while composing. From what I've read in the Xcode documentation, AVMutableVideoCompositionLayerInstruction seems usable for transforming an AVAsset, but I'm not sure how to create and apply several different layer instructions to the corresponding clips and then use them in the composition (AVMutableComposition *).
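(For context: a clip's recorded orientation lives in its video track's preferredTransform, not in the pixels themselves. A rough sketch of reading it, assuming an `asset` variable already loaded as an AVURLAsset; this snippet is illustrative and not part of the original question:)

```objectivec
// Infer a clip's display size by applying its preferredTransform
// to the track's naturalSize; a taller-than-wide result means portrait.
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
CGSize rendered = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
BOOL isPortrait = fabsf(rendered.height) > fabsf(rendered.width);
```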

Any help would be greatly appreciated!

Best Answer

This is what I do. I then use an AVAssetExportSession to create the actual file. But be warned: the CGAffineTransforms are sometimes applied late, so you'll see a second or two of the original before the video transforms. I have no idea why this happens; some combinations of videos yield the expected result, but sometimes it's off.

AVMutableComposition *composition = [AVMutableComposition composition];    
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1,30);
videoComposition.renderScale = 1.0;

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTrack];

// Get only the paths the user selected
NSMutableArray *array = [NSMutableArray array];
for (NSString *string in videoPathArray) {
    if (![string isEqualToString:@""]) {
        [array addObject:string];
    }
}

self.videoPathArray = array;

float time = 0;

for (int i = 0; i < self.videoPathArray.count; i++) {

    AVURLAsset *sourceAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:[videoPathArray objectAtIndex:i]]
                                                  options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                                                      forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];

    NSError *error = nil;
    BOOL ok = NO;
    AVAssetTrack *sourceVideoTrack = [[sourceAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // The clip's displayed size is its natural size with the preferred transform applied
    CGSize temp = CGSizeApplyAffineTransform(sourceVideoTrack.naturalSize, sourceVideoTrack.preferredTransform);
    CGSize size = CGSizeMake(fabsf(temp.width), fabsf(temp.height));
    CGAffineTransform transform = sourceVideoTrack.preferredTransform;

    videoComposition.renderSize = sourceVideoTrack.naturalSize;
    if (size.width > size.height) {
        // Landscape clip: apply the preferred transform as-is
        [layerInstruction setTransform:transform atTime:CMTimeMakeWithSeconds(time, 30)];
    } else {
        // Portrait clip: scale it to fit the landscape frame and center it horizontally
        float s = size.width / size.height;

        CGAffineTransform scaled = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(s, s));

        float x = (size.height - size.width * s) / 2;

        CGAffineTransform centered = CGAffineTransformConcat(scaled, CGAffineTransformMakeTranslation(x, 0));

        [layerInstruction setTransform:centered atTime:CMTimeMakeWithSeconds(time, 30)];
    }

    ok = [compositionVideoTrack insertTimeRange:sourceVideoTrack.timeRange ofTrack:sourceVideoTrack atTime:[composition duration] error:&error];

    if (!ok) {
        // Deal with the error.
        NSLog(@"something went wrong");
    }

    NSLog(@"\n source asset duration is %f \n source vid track timerange is %f %f \n composition duration is %f \n composition vid track time range is %f %f",
          CMTimeGetSeconds([sourceAsset duration]),
          CMTimeGetSeconds(sourceVideoTrack.timeRange.start), CMTimeGetSeconds(sourceVideoTrack.timeRange.duration),
          CMTimeGetSeconds([composition duration]),
          CMTimeGetSeconds(compositionVideoTrack.timeRange.start), CMTimeGetSeconds(compositionVideoTrack.timeRange.duration));

    time += CMTimeGetSeconds(sourceVideoTrack.timeRange.duration);
}

instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
instruction.timeRange = compositionVideoTrack.timeRange;
videoComposition.instructions = [NSArray arrayWithObject:instruction];
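The answer mentions using AVAssetExportSession to create the actual file but doesn't show that step. A minimal sketch of the export, assuming MRC memory management as in the rest of the answer; the `outputPath` variable is a hypothetical placeholder, and the key point is that the transforms only take effect if `videoComposition` is attached to the export session:

```objectivec
// Sketch: wire the video composition into an export session.
// outputPath is illustrative, not from the original answer.
AVAssetExportSession *exporter =
    [[[AVAssetExportSession alloc] initWithAsset:composition
                                      presetName:AVAssetExportPresetHighestQuality] autorelease];
exporter.videoComposition = videoComposition; // without this, the layer instructions are ignored
exporter.outputURL = [NSURL fileURLWithPath:outputPath];
exporter.outputFileType = AVFileTypeQuickTimeMovie;

[exporter exportAsynchronouslyWithCompletionHandler:^{
    if (exporter.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"export finished: %@", exporter.outputURL);
    } else {
        NSLog(@"export failed: %@", exporter.error);
    }
}];
```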


Regarding "iphone - How to combine video clips with different orientations using AVFoundation", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/6575128/
