ios - AVAssetExportSession combining video files and freezing the frame between videos


I have an app that combines video files together to make one long video. There could be a delay between the videos (for example, V1 starts at t=0s and runs for 5 seconds, and V2 starts at t=10s). In that case, I want the video to freeze on the last frame of V1 until V2 starts.

I am using the code below, but in between the videos, the whole video goes white.

Any ideas on how I can get the effect I'm looking for?

Thanks!

@interface VideoJoins : NSObject

- (instancetype)initWithURL:(NSURL*)url
                   andDelay:(NSTimeInterval)delay;

@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;

@end
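
The implementation of this initializer is not shown in the question; a minimal sketch, assuming it simply stores its arguments, would be:

@implementation VideoJoins

// Minimal sketch of the (unshown) initializer -- just stores the URL and delay.
- (instancetype)initWithURL:(NSURL*)url andDelay:(NSTimeInterval)delay
{
    self = [super init];
    if (self)
    {
        _url = url;
        _delay = delay;
    }
    return self;
}

@end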

+ (void)joinVideosSequentially:(NSArray*)videoJoins
                  withFileType:(NSString*)fileType
                      toOutput:(NSURL*)outputVideoURL
                  onCompletion:(dispatch_block_t)onCompletion
                       onError:(ErrorBlock)onError
                      onCancel:(dispatch_block_t)onCancel
{
    // From the original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
    // Didn't add support for portrait+landscape.
    AVMutableComposition *composition = [AVMutableComposition composition];

    AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

    CMTime startTime = kCMTimeZero;

    // videoJoins is an array of VideoJoins objects describing the recorded clips.

    // Loop to combine the clips into a single video.
    for (NSInteger i = 0; i < [videoJoins count]; i++)
    {
        VideoJoins *vj = videoJoins[i];
        NSURL *url = vj.url;
        NSTimeInterval nextDelayTI = 0;
        if (i + 1 < [videoJoins count])
        {
            VideoJoins *vjNext = videoJoins[i + 1];
            nextDelayTI = vjNext.delay;
        }

        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

        CMTime assetDuration = [asset duration];
        CMTime assetDurationWithNextDelay = assetDuration;
        if (nextDelayTI != 0)
        {
            CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
            assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
        }

        AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

        // Set the orientation from the first clip.
        if (i == 0)
        {
            [compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
        }

        // Note: inserting a range longer than the source track leaves a segment
        // with no media in the composition -- that gap is where the blank frames come from.
        BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
        ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];

        startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
    }

    // Delete the output video if it already exists.
    NSString *outputVideoPath = [outputVideoURL path]; // use -path, not -absoluteString, for file-system checks
    if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoPath])
    {
        [[NSFileManager defaultManager] removeItemAtPath:outputVideoPath error:nil];
    }

    // Export the combined video.
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                      presetName:AVAssetExportPresetHighestQuality];

    exporter.outputURL = outputVideoURL;
    exporter.outputFileType = fileType;
    exporter.shouldOptimizeForNetworkUse = YES;

    [exporter exportAsynchronouslyWithCompletionHandler:^(void)
    {
        switch (exporter.status)
        {
            case AVAssetExportSessionStatusCompleted:
            {
                onCompletion();
                break;
            }
            case AVAssetExportSessionStatusFailed:
            {
                NSLog(@"Export Failed");
                NSError *err = exporter.error;
                NSLog(@"ExportSessionError: %@", [err localizedDescription]);
                onError(err);
                break;
            }
            case AVAssetExportSessionStatusCancelled:
            {
                NSLog(@"Export Cancelled");
                NSLog(@"ExportSessionError: %@", [exporter.error localizedDescription]);
                onCancel();
                break;
            }
            default:
                break;
        }
    }];
}
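
For reference, a hypothetical call site for this method, reproducing the question's V1/V2 timing (the VideoCreator class name and the block typedefs are assumptions, not part of the original code):

// Hypothetical call site -- V1 plays for its own duration, then a 5 s gap before V2.
VideoJoins *v1 = [[VideoJoins alloc] initWithURL:video1URL andDelay:0];
VideoJoins *v2 = [[VideoJoins alloc] initWithURL:video2URL andDelay:5.0];

[VideoCreator joinVideosSequentially:@[v1, v2]
                        withFileType:AVFileTypeQuickTimeMovie
                            toOutput:outputURL
                        onCompletion:^{ NSLog(@"Join finished"); }
                             onError:^(NSError *error) { NSLog(@"Join failed: %@", error); }
                            onCancel:^{ NSLog(@"Join cancelled"); }];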

EDIT: Got it working. Here is how I extract the last frame as an image and generate a video from that image:

+ (void)writeImageAsMovie:(UIImage*)image
                   toPath:(NSURL*)url
                 fileType:(NSString*)fileType
                 duration:(NSTimeInterval)duration
               completion:(VoidBlock)completion
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
                                                           fileType:fileType
                                                              error:&error];
    NSParameterAssert(videoWriter);

    CGSize size = image.size;

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput
                                       assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    // Write the same frame at the start, middle and end so the clip holds the image:
    CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
    CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
    CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
    [adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
    [adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
    CVPixelBufferRelease(buffer); // the helper returns an owned (+1) buffer

    // Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:endTime];
    [videoWriter finishWritingWithCompletionHandler:^{
        if (videoWriter.error)
        {
            NSLog(@"Error:%@", [videoWriter.error localizedDescription]);
        }
        if (completion)
        {
            completion();
        }
    }];
}
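
The pixelBufferFromCGImage: helper referenced above is not shown in the question. A minimal sketch of such a helper, using the common CVPixelBufferCreate/CGBitmapContext pattern (the original implementation may differ):

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pixelBuffer);
    if (status != kCVReturnSuccess) return NULL;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *data = CVPixelBufferGetBaseAddress(pixelBuffer);

    // Draw the CGImage into the pixel buffer's backing memory.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer; // caller is responsible for CVPixelBufferRelease
}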

+ (void)generateVideoImageFromURL:(NSURL*)url
                           atTime:(CMTime)thumbTime
                      withMaxSize:(CGSize)maxSize
                       completion:(ImageBlock)handler
{
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];

    if (!asset)
    {
        if (handler)
        {
            handler(nil);
        }
        return;
    }
    if (CMTIME_IS_POSITIVE_INFINITY(thumbTime))
    {
        thumbTime = asset.duration;
    }
    else if (CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
    {
        thumbTime = CMTimeMake(0, 30);
    }

    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = TRUE;
    generator.maximumSize = maxSize;

    CMTime actualTime;
    NSError *error;
    CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
    UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
    CGImageRelease(image);

    if (handler)
    {
        handler(thumb);
    }
}
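
Putting those two helpers together, building the freeze-frame clip might look like the following sketch (video1URL, freezeClipURL and the 5-second duration are placeholders):

// Hypothetical glue code: grab V1's last frame, then render it as a short still clip.
[VideoCreator generateVideoImageFromURL:video1URL
                                 atTime:kCMTimePositiveInfinity // clamped to the asset's duration above
                            withMaxSize:CGSizeZero              // CGSizeZero = no scaling limit
                             completion:^(UIImage *lastFrame) {
    if (!lastFrame) return;
    [VideoCreator writeImageAsMovie:lastFrame
                             toPath:freezeClipURL
                           fileType:AVFileTypeQuickTimeMovie
                           duration:5.0 // length of the gap between V1 and V2
                         completion:^{
        // freezeClipURL is now ready to be joined between V1 and V2.
    }];
}];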

Best Answer

AVMutableComposition can only stitch videos together. I achieved this by doing two things:

  • Extract the last frame of the first video as an image.
  • Make a video from that image (the duration depends on your requirements).

Then you can combine those three videos (V1, V2 and your single-image video), as sketched below. Both tasks are very easy to do.
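
For example, the final join could reuse the question's own joinVideosSequentially: method with zero delays (class and variable names here are assumptions):

// Hypothetical: once freezeClipURL exists, join the three clips back to back.
// With the still clip filling the gap, every delay can now be zero.
VideoJoins *clip1 = [[VideoJoins alloc] initWithURL:video1URL andDelay:0];
VideoJoins *still = [[VideoJoins alloc] initWithURL:freezeClipURL andDelay:0];
VideoJoins *clip2 = [[VideoJoins alloc] initWithURL:video2URL andDelay:0];

[VideoCreator joinVideosSequentially:@[clip1, still, clip2]
                        withFileType:AVFileTypeQuickTimeMovie
                            toOutput:finalVideoURL
                        onCompletion:^{ NSLog(@"Done"); }
                             onError:^(NSError *e) { NSLog(@"Failed: %@", e); }
                            onCancel:^{ NSLog(@"Cancelled"); }];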

To extract an image from a video, have a look at this link. If you don't want to use MPMoviePlayerController, which the accepted answer there uses, then look at the other answer by Steve.

To make a video from an image, check out this link. That question is about audio, but I don't think you need audio here, so just look at the method mentioned in the question itself.

Update: There is an easier way, but it comes with a drawback. You can have two AVPlayers. The first plays your exported video, blank frames and all. The other sits behind it, paused on the last frame of video 1. So when the blank middle section plays, you see the second AVPlayer showing that last frame, and as a whole it looks like video 1 is paused. Believe me, the naked eye can't tell when the players are swapped. But the obvious drawback is that your exported video still contains the blank frames. So if you only need to play the video back inside your app, you can go with this approach.
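
A minimal sketch of that two-player idea, assuming it runs inside a view controller and that the gap falls between t=5s and t=10s as in the question (all names here are assumptions):

// Hypothetical setup: two stacked player layers inside a view controller.
// backPlayer shows V1's last frame; frontPlayer plays the exported video with gaps.
AVPlayer *frontPlayer = [AVPlayer playerWithURL:exportedVideoURL];
AVPlayer *backPlayer = [AVPlayer playerWithURL:video1URL];

AVPlayerLayer *backLayer = [AVPlayerLayer playerLayerWithPlayer:backPlayer];
AVPlayerLayer *frontLayer = [AVPlayerLayer playerLayerWithPlayer:frontPlayer];
backLayer.frame = self.view.bounds;
frontLayer.frame = self.view.bounds;
[self.view.layer addSublayer:backLayer];  // behind
[self.view.layer addSublayer:frontLayer]; // in front

// Park the back player on V1's last frame.
AVAsset *asset1 = [AVAsset assetWithURL:video1URL];
[backPlayer seekToTime:asset1.duration
       toleranceBefore:kCMTimeZero
        toleranceAfter:kCMTimeZero];

// Hide the front layer while the gap plays, so the paused last frame shows
// through, and show it again when V2 begins. The returned observer token must
// be retained for as long as the observation should stay active.
NSValue *gapStart = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(5.0, 600)];
NSValue *gapEnd = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(10.0, 600)];
id timeObserver = [frontPlayer addBoundaryTimeObserverForTimes:@[gapStart, gapEnd]
                                                         queue:dispatch_get_main_queue()
                                                    usingBlock:^{
    frontLayer.hidden = !frontLayer.hidden; // toggle at each gap boundary
}];
[frontPlayer play];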

For more on iOS - AVAssetExportSession combining video files and freezing the frame between videos, see the original question on Stack Overflow: https://stackoverflow.com/questions/29846663/
