
AVFoundation - only the first merged video is displayed


I am trying a different approach to combining videos: I create a new track for each transformation.

The problem with this code is that the first video is displayed and every other video is black.

The audio overlay is correct for the entire clip. It looks as though the video is not being brought into the composition, because the output file is about 5 MB when it should be around 25 MB; 5 MB matches the size of the first clip plus the audio track. All of the AVAssets appear to be valid, and the files do exist on the file system. Here is the code:

- (void)mergeVideos:(NSMutableArray *)assets withCompletion:(void (^)(NSString *))completion {

    // NSMutableArray *instructions = [NSMutableArray new];
    CGSize size = CGSizeZero;
    CMTime currentstarttime = kCMTimeZero;

    int tracknumber = 1;
    int32_t commontimescale = 600;
    CMTime time = kCMTimeZero;

    AVMutableComposition *mutableComposition = [AVMutableComposition composition];
    NSMutableArray *instructions = [[NSMutableArray alloc] init];

    for (NSURL *assetUrl in assets) {

        AVAsset *asset = [AVAsset assetWithURL:assetUrl];

        NSLog(@"Number of tracks: %lu Incremental track number %i", (unsigned long)[[asset tracks] count], tracknumber);

        // make sure the timescales are correct for these tracks
        CMTime cliptime = CMTimeConvertScale(asset.duration, commontimescale, kCMTimeRoundingMethod_QuickTime);

        // a separate video track is added to the composition for every clip
        AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                                           preferredTrackID:kCMPersistentTrackID_Invalid];

        AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;

        NSLog(@"Running time: value = %lld timescale = %d", time.value, time.timescale);
        NSLog(@"Asset length: value = %lld timescale = %d", asset.duration.value, asset.duration.timescale);
        NSLog(@"Converted Scale: value = %lld timescale = %d", cliptime.value, cliptime.timescale);

        NSError *error;

        [videoCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(kCMTimeZero, time)];
        [videoCompositionTrack insertTimeRange:CMTimeRangeMake(time, cliptime)
                                       ofTrack:assetTrack
                                        atTime:time
                                         error:&error];
        if (error) {
            NSLog(@"Error - %@", error.debugDescription);
        }

        // this flips the video temporarily for the front facing camera
        AVMutableVideoCompositionLayerInstruction *inst = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];

        // set the flipping transform on the correct tracks
        if ((tracknumber == 2) || (tracknumber == 4) || (tracknumber == 6) || (tracknumber == 8) || (tracknumber == 10)) {
            CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI);
            [inst setTransform:transform atTime:time];
        } else {
            CGAffineTransform transform = assetTrack.preferredTransform;
            [inst setTransform:transform atTime:time];
        }

        // don't block the other videos with your black - needs to be the incremental time
        [inst setOpacity:0.0 atTime:time];

        // add the instructions to the overall array
        [instructions addObject:inst];

        // increment the total time after we use it for this iteration
        time = CMTimeAdd(time, cliptime);

        if (CGSizeEqualToSize(size, CGSizeZero)) {
            size = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject.naturalSize;
        }

        // increment the track counter
        tracknumber++;
    }

    AVMutableVideoCompositionInstruction *mainVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, time);
    mainVideoCompositionInstruction.layerInstructions = instructions;

    // bring all of the video together in the main composition
    AVMutableVideoComposition *mainVideoComposition = [AVMutableVideoComposition videoComposition];
    mainVideoComposition.instructions = [NSArray arrayWithObject:mainVideoCompositionInstruction];

    // set up the audio
    AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                                       preferredTrackID:kCMPersistentTrackID_Invalid];

    // Grab the path, make sure to add it to your project!
    NSURL *soundURL = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"bink-bink-lexus-3" ofType:@"aif"]];
    AVURLAsset *soundAsset = [AVURLAsset assetWithURL:soundURL];

    NSError *error;

    // add audio to the entire track
    [audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)
                                   ofTrack:[soundAsset tracksWithMediaType:AVMediaTypeAudio][0]
                                    atTime:kCMTimeZero
                                     error:&error];

    // Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
    // mainVideoComposition.frameDuration = CMTimeMake(1, 30);
    mainVideoComposition.renderSize = size;

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths firstObject];
    int number = arc4random_uniform(10000);
    self.outputFile = [documentsDirectory stringByAppendingFormat:@"/export_%i.mov", number];

    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                                                      presetName:AVAssetExportPreset1280x720];
    exporter.outputURL = [NSURL fileURLWithPath:self.outputFile];
    // Set the output file type
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;

    dispatch_group_t group = dispatch_group_create();

    dispatch_group_enter(group);

    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_group_leave(group);
    }];

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"Export File (Final) - %@", self.outputFile);
        completion(self.outputFile);
    });
}

Best Answer

Your problem is that by using multiple AVMutableCompositionTracks and inserting one time range at a time after kCMTimeZero, you cause the media of every subsequent track to appear in the composition at kCMTimeZero. If you want to go this route, you need to use insertEmptyTimeRange:, which moves that particular track's media forward in time by the duration of the empty range you insert.
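If you keep the one-track-per-clip approach, a minimal sketch of that fix (my own illustration, reusing the asker's variables time, cliptime and assetTrack) is to pad each new track with an empty range equal to the running total before inserting the clip's media, and to take the clip's media from the start of its own timeline:

    // Hypothetical per-clip loop body for the multi-track route
    AVMutableCompositionTrack *videoCompositionTrack =
        [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                        preferredTrackID:kCMPersistentTrackID_Invalid];

    if (CMTimeCompare(time, kCMTimeZero) > 0) {
        // push this track's media forward by the total duration of the clips before it
        [videoCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(kCMTimeZero, time)];
    }

    NSError *error = nil;
    // the source range starts at kCMTimeZero in the clip's own timeline;
    // atTime: places it at the running composition time
    [videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, cliptime)
                                   ofTrack:assetTrack
                                    atTime:time
                                     error:&error];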

Alternatively, the simpler approach is to use a single AVMutableCompositionTrack.
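As a rough sketch of that simpler route (again my own illustration, not the answerer's code), every clip is appended back-to-back into one video track, so nothing needs empty-range padding and every clip stays visible:

    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *videoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    CMTime time = kCMTimeZero;
    for (NSURL *assetUrl in assets) {
        AVAsset *asset = [AVAsset assetWithURL:assetUrl];
        AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;

        NSError *error = nil;
        // append the whole clip after everything inserted so far
        [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                            ofTrack:assetTrack
                             atTime:time
                              error:&error];
        if (error) {
            NSLog(@"Insert failed: %@", error);
        }
        time = CMTimeAdd(time, asset.duration);
    }
    // Per-clip transforms can still be applied through a single
    // AVMutableVideoCompositionLayerInstruction on this one track,
    // calling setTransform:atTime: at each clip boundary.

With a single track there is also no need for the setOpacity: calls, since no upper track is covering the clips below it.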

Regarding "AVFoundation - only the first merged video is displayed", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/32977879/
