
ios - iOS video merging in the background


Task: merge a flyer image into the flyer video.

Scenario:

  • Create the flyer [add emoji images/text, etc.]
  • Make a video

Case 1

  • Press the back button [the user goes to the app's flyer-list screen]; during this time we merge the flyerSnapShoot into the flyerVideo. It works perfectly.
  • Go to the phone gallery, where we see the updated video.

Case 2

  • Press the iPhone home button; I do the same thing as above, but I get the following error:

FAIL = Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x17266d40 {NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x172b3920 "The operation couldn’t be completed. (OSStatus error -16980.)", NSLocalizedFailureReason=An unknown error occurred (-16980)}
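For anyone debugging this, the -16980 OSStatus that actually fails is carried in the NSUnderlyingError of the AVFoundation error. A minimal diagnostic sketch, assuming `error` is the NSError handed to the export completion callback shown in the code below:

// Surface the underlying OSStatus error buried inside the AVFoundation error.
NSError *underlying = error.userInfo[NSUnderlyingErrorKey];
NSLog(@"Export failed: %@ (underlying: %@)", error.localizedDescription, underlying);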

Code:

- (void)modifyVideo:(NSURL *)src destination:(NSURL *)dest crop:(CGRect)crop
              scale:(CGFloat)scale overlay:(UIImage *)image
         completion:(void (^)(NSInteger, NSError *))callback {

    // Get a pointer to the asset.
    AVURLAsset *firstAsset = [AVURLAsset URLAssetWithURL:src options:nil];

    // Make an instance of AVMutableComposition so that we can edit this asset.
    AVMutableComposition *mixComposition = [AVMutableComposition composition];

    // Add a video track to this composition.
    AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    // Audio track.
    AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

    // The image video is always 30 seconds (MAX_VIDEO_LENGTH), so use that
    // unless the source video is shorter.
    CMTime inTime = CMTimeMake( MAX_VIDEO_LENGTH * VIDEOFRAME, VIDEOFRAME );
    if ( CMTimeCompare( firstAsset.duration, inTime ) < 0 ) {
        inTime = firstAsset.duration;
    }

    // Add to the video track. Default to the identity transform so that
    // `transform` is never read uninitialized when the asset has no video track.
    NSArray *videos = [firstAsset tracksWithMediaType:AVMediaTypeVideo];
    CGAffineTransform transform = CGAffineTransformIdentity;
    if ( videos.count > 0 ) {
        AVAssetTrack *track = [videos objectAtIndex:0];
        [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, inTime) ofTrack:track atTime:kCMTimeZero error:nil];
        transform = track.preferredTransform;
        videoTrack.preferredTransform = transform;
    }

    // Add the audio track.
    NSArray *audios = [firstAsset tracksWithMediaType:AVMediaTypeAudio];
    if ( audios.count > 0 ) {
        [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, inTime) ofTrack:[audios objectAtIndex:0] atTime:kCMTimeZero error:nil];
    }

    NSLog(@"Natural size: %.2f x %.2f", videoTrack.naturalSize.width, videoTrack.naturalSize.height);

    // Set the mix composition size.
    mixComposition.naturalSize = crop.size;

    // Set up the composition parameters.
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.frameDuration = CMTimeMake(1, VIDEOFRAME );
    videoComposition.renderSize = crop.size;
    videoComposition.renderScale = 1.0;

    // Pass-through instruction for the animation.
    AVMutableVideoCompositionInstruction *passThroughInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    passThroughInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, inTime);

    // Layer instruction for the composed video track.
    AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

    // Set the transform to maintain orientation.
    if ( scale != 1.0 ) {
        CGAffineTransform scaleTransform = CGAffineTransformMakeScale( scale, scale);
        CGAffineTransform translateTransform = CGAffineTransformTranslate( CGAffineTransformIdentity,
                                                                           -crop.origin.x,
                                                                           -crop.origin.y);
        transform = CGAffineTransformConcat( transform, scaleTransform );
        transform = CGAffineTransformConcat( transform, translateTransform);
    }

    [passThroughLayer setTransform:transform atTime:kCMTimeZero];

    passThroughInstruction.layerInstructions = @[ passThroughLayer ];
    videoComposition.instructions = @[ passThroughInstruction ];

    // If an image is given, put it in the animation.
    if ( image != nil ) {

        // Layer that merges the video and image.
        CALayer *parentLayer = [CALayer layer];
        parentLayer.frame = CGRectMake( 0, 0, crop.size.width, crop.size.height);

        // Layer that renders the video.
        CALayer *videoLayer = [CALayer layer];
        videoLayer.frame = CGRectMake(0, 0, crop.size.width, crop.size.height );
        [parentLayer addSublayer:videoLayer];

        // Layer that renders the flyer image.
        CALayer *imageLayer = [CALayer layer];
        imageLayer.frame = CGRectMake(0, 0, crop.size.width, crop.size.height );
        imageLayer.contents = (id)image.CGImage;
        [imageLayer setMasksToBounds:YES];

        [parentLayer addSublayer:imageLayer];

        // Set up the animation tool.
        videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
    }

    // Now export the movie.
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
    exportSession.videoComposition = videoComposition;

    // Export to the destination URL.
    exportSession.outputURL = dest;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    exportSession.shouldOptimizeForNetworkUse = YES;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        callback( exportSession.status, exportSession.error );
    }];
}
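For reference, a call site might look like the following. This is a minimal sketch; the file names, the 640x640 crop, and the scale of 1.0 are illustrative assumptions, not values from the question:

// Hypothetical call site: overlay flyer.png on flyer.mov, writing flyer-out.mov.
NSURL *src = [[NSBundle mainBundle] URLForResource:@"flyer" withExtension:@"mov"];
NSURL *dest = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"flyer-out.mov"]];
UIImage *overlay = [UIImage imageNamed:@"flyer"];

[self modifyVideo:src destination:dest crop:CGRectMake(0, 0, 640, 640)
            scale:1.0 overlay:overlay
       completion:^(NSInteger status, NSError *error) {
    if (status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished: %@", dest.path);
    } else {
        NSLog(@"Export failed: %@", error);
    }
}];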

I call this function from AppDelegate.m:

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    bgTask = [application beginBackgroundTaskWithName:@"MyTask" expirationHandler:^{
        // Clean up any unfinished task business by marking where you
        // stopped or ending the task outright.
        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Start the long-running task and return immediately.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

        // Do the work associated with the task, preferably in chunks.
        [self goingToBg];

        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    });

    NSLog(@"backgroundTimeRemaining: %f", [[UIApplication sharedApplication] backgroundTimeRemaining]);
}
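One pitfall in this handler, separate from the GPU limitation discussed in the answer below: `modifyVideo:...` exports asynchronously, so `[self goingToBg]` returns before the export finishes and the background task is ended almost immediately. A safer shape is to end the task from the export's completion callback. A minimal sketch, where `goingToBgWithCompletion:` is a hypothetical variant of `goingToBg` that forwards the completion block passed to `modifyVideo:...`:

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    bgTask = [application beginBackgroundTaskWithName:@"MyTask" expirationHandler:^{
        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Hypothetical wrapper: kicks off the export and calls back only when
    // the export session actually completes or fails.
    [self goingToBgWithCompletion:^(NSInteger status, NSError *error) {
        // The long-running work is done; hand the background task back.
        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];
}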

Best Answer

After a lot of R&D on this problem, I did not find a solution.

I want to share a few links, hoping they will help the Stack community if anyone faces the same problem [requirement].

Link 1: AVExportSession to run in background

Quote relevant to the question [copied from Link 1 above]:

Sadly, since AVAssetExportSession uses the GPU to do some of its work, it cannot run in the background if you are using an AVVideoComposition.

Link 2: Starting AVAssetExportSession in the Background

Quote relevant to the question [copied from Link 2 above]:

You can start AVAssetExportSession in background. The only limitations in AVFoundation to performing work in the background, are using AVVideoCompositions or AVMutableVideoCompositions. AVVideoCompositions are using the GPU, and the GPU cannot be used in the background
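In practice this means the export must either run without an AVVideoComposition while backgrounded, or be deferred until the app is back in the foreground. A minimal sketch of the deferral approach (the `pendingExport` flag, the `runExport` method, and the observer wiring are illustrative assumptions, not code from the question):

// Sketch: skip GPU-backed exports in the background and retry on foreground.
// Register once, e.g. in -application:didFinishLaunchingWithOptions::
//   [[NSNotificationCenter defaultCenter] addObserver:self
//       selector:@selector(willEnterForeground:)
//       name:UIApplicationWillEnterForegroundNotification object:nil];

- (void)startExport {
    if ([UIApplication sharedApplication].applicationState == UIApplicationStateBackground) {
        self.pendingExport = YES; // GPU unavailable; wait for foreground
        return;
    }
    [self runExport];
}

- (void)willEnterForeground:(NSNotification *)note {
    if (self.pendingExport) {
        self.pendingExport = NO;
        [self runExport]; // the AVVideoComposition can use the GPU again
    }
}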

URLs on background tasks:

APPLE DEV URL

RAYWENDERLICH URL

Stack question

Regarding ios - iOS video merging in the background, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28694975/
