iOS - Read a video file frame by frame, process the images, then save as a new video file

Reposted. Author: 可可西里. Updated: 2023-11-01 05:41:48

I am trying to read a video from the iPhone photo album frame by frame. After processing the images, I want to save them as a new video. My code runs without any errors, but no new video appears in the album.

Here is my code.

// Required imports (at the top of the implementation file):
// #import <AVFoundation/AVFoundation.h>
// #import <AssetsLibrary/AssetsLibrary.h>
// #import <MobileCoreServices/MobileCoreServices.h>  // kUTTypeQuickTimeMovie
// #import <mach/mach_time.h>                         // mach_absolute_time()

// Video writer init
- (BOOL)setupAssetWriterForURL:(CMFormatDescriptionRef)formatDescription
{
    float bitsPerPixel;
    CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
    int numPixels = dimensions.width * dimensions.height;
    int bitsPerSecond;

    // Use a lower bits-per-pixel factor for sub-VGA frames, a higher one for larger frames.
    if ( numPixels < (640 * 480) )
        bitsPerPixel = 4.05;
    else
        bitsPerPixel = 11.4;

    bitsPerSecond = numPixels * bitsPerPixel;

    NSDictionary *videoCompressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              AVVideoCodecH264, AVVideoCodecKey,
                                              [NSNumber numberWithInteger:dimensions.width], AVVideoWidthKey,
                                              [NSNumber numberWithInteger:dimensions.height], AVVideoHeightKey,
                                              [NSDictionary dictionaryWithObjectsAndKeys:
                                               [NSNumber numberWithInteger:bitsPerSecond], AVVideoAverageBitRateKey,
                                               [NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey,
                                               nil], AVVideoCompressionPropertiesKey,
                                              nil];

    if ([assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
        assetWriterVideoIn = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
        assetWriterVideoIn.expectsMediaDataInRealTime = YES;
        assetWriterVideoIn.transform = [self transformFromCurrentVideoOrientationToOrientation:self.referenceOrientation];
        if ([assetWriter canAddInput:assetWriterVideoIn])
            [assetWriter addInput:assetWriterVideoIn];
        else {
            NSLog(@"Couldn't add asset writer video input.");
            return NO;
        }
    }
    else {
        NSLog(@"Couldn't apply video output settings.");
        return NO;
    }

    return YES;
}

Reading the video:

- (void)readMovie:(NSURL *)url
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            AVAssetTrack *videoTrack = nil;
            NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
            if ([tracks count] == 1)
            {
                videoTrack = [tracks objectAtIndex:0];

                NSError *error = nil;
                AVAssetReader *movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                if (error)
                    NSLog(@"movieReader fail!\n");

                NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
                NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
                NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

                [movieReader addOutput:[AVAssetReaderTrackOutput
                                        assetReaderTrackOutputWithTrack:videoTrack
                                        outputSettings:videoSettings]];
                [movieReader startReading];

                while ([movieReader status] == AVAssetReaderStatusReading)
                {
                    AVAssetReaderTrackOutput *output = [movieReader.outputs objectAtIndex:0];
                    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
                    if (sampleBuffer)
                    {
                        // Lazily create the writer from the first sample's format description.
                        if ( !assetWriter ) {
                            outputURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%llu.mov", NSTemporaryDirectory(), mach_absolute_time()]];

                            NSError *writerError = nil;
                            assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:(NSString *)kUTTypeQuickTimeMovie error:&writerError];
                            if (writerError)
                                [self showError:writerError];

                            if (assetWriter) {
                                CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
                                [self setupAssetWriterForURL:formatDescription];
                            }
                        }

                        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

                        CVPixelBufferLockBaseAddress(imageBuffer, 0);

                        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
                        size_t bufferWidth = CVPixelBufferGetWidth(imageBuffer);
                        size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);
                        unsigned char *base = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

                        // Grayscale pass: compute the average once per pixel before
                        // overwriting the channels, and re-derive each row's start
                        // from bytesPerRow, which may include padding beyond width * 4.
                        for (size_t row = 0; row < bufferHeight; row++) {
                            unsigned char *pixel = base + row * bytesPerRow;
                            for (size_t column = 0; column < bufferWidth; column++) {
                                unsigned char gray = (pixel[0] + pixel[1] + pixel[2]) / 3;
                                pixel[0] = gray;
                                pixel[1] = gray;
                                pixel[2] = gray;
                                pixel += 4;
                            }
                        }

                        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

                        if ( assetWriter ) {
                            [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
                        }

                        CFRelease(sampleBuffer);
                    }
                }

                if (assetWriter) {
                    [assetWriterVideoIn markAsFinished];
                    assetWriter = nil;
                    [assetWriter finishWriting];
                    assetWriterVideoIn = nil;
                    assetWriter = nil;

                    [self saveMovieToCameraRoll];
                }
                else {
                    [self showError:[assetWriter error]];
                }
            }
        });
    }];
}
- (void)writeSampleBuffer:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType
{
    if ( assetWriter.status == AVAssetWriterStatusUnknown ) {
        if ([assetWriter startWriting]) {
            [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        }
        else {
            [self showError:[assetWriter error]];
        }
    }

    if ( assetWriter.status == AVAssetWriterStatusWriting ) {
        if (mediaType == AVMediaTypeVideo) {
            if (assetWriterVideoIn.readyForMoreMediaData) {
                if (![assetWriterVideoIn appendSampleBuffer:sampleBuffer]) {
                    [self showError:[assetWriter error]];
                }
            }
        }
    }
}
- (void)saveMovieToCameraRoll
{
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:outputURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            [self showError:error];
            NSLog(@"save fail");
        }
        else {
            [self removeFile:outputURL];
            NSLog(@"!!!");
        }
    }];
}
- (void)removeFile:(NSURL *)fileURL
{
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *filePath = [fileURL path];
    if ([fileManager fileExistsAtPath:filePath]) {
        NSError *error;
        BOOL success = [fileManager removeItemAtPath:filePath error:&error];
        if (!success)
            [self showError:error];
    }
}

Any suggestions?

Best Answer

I'm a bit late, but this may help others. The code is almost correct; you just need to comment out one line in the readMovie: method. Because sending a message to nil is a no-op in Objective-C, setting assetWriter to nil before calling finishWriting silently skips the call, so the temporary movie file is never finalized and nothing gets saved:

    //assetWriter = nil;   // <-- comment out this line
    [assetWriter finishWriting];
    assetWriterVideoIn = nil;
    assetWriter = nil;

    [self saveMovieToCameraRoll];
}

Regarding "iOS - Read a video file frame by frame, process the images, then save as a new video file", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/13992251/
