
iphone - AVAssetWriter Dilemma


I am trying to use AVAssetWriter to write CGImages to a file, in order to create a video from images.

I have gotten this working successfully in three different ways on the simulator, but on an iPhone 4 running iOS 4.3 every one of those methods fails.

It all comes down to pixel buffers.

My first method was simply to create pixel buffers as needed, without using a pool. That works, but its memory footprint is too large for it to run on the device.
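Roughly, that first approach is just a per-frame CVPixelBufferCreate call, something like this sketch (not my exact code; it assumes the same 480x320 ARGB format used below):

// One standalone pixel buffer per frame, no pool involved.
CVPixelBufferRef pxbuffer = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480, 320,
                                      kCVPixelFormatType_32ARGB,
                                      (CFDictionaryRef)options, &pxbuffer);
// ...draw the CGImage into it, append it, then CVPixelBufferRelease(pxbuffer).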

My second method was to use the recommended AVAssetWriterInputPixelBufferAdaptor and then pull pixel buffers from the adaptor's pixelBufferPool with CVPixelBufferPoolCreatePixelBuffer.

That also works on the simulator, but it fails on the device because the adaptor's pixel buffer pool is never allocated. I get no error messages.
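Pulling a buffer from the adaptor's pool is essentially this sketch (it assumes the adaptor ivar created in the setup code below):

// Second approach: take a buffer from the adaptor's own pool.
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pxbuffer);
// On the device, adaptor.pixelBufferPool is still NULL at this point, so no buffer comes back.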

Finally, I tried creating my own pixel buffer pool with CVPixelBufferPoolCreate. That also works in the simulator, but on the device everything works until I try to append the pixel buffer with appendPixelBuffer, which fails every time. A condensed sketch of that path follows; the full code is further down.
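// Sketch of the third approach, condensed from the full listing below.
// Once, in setup (attributes is the pixel buffer attributes dictionary shown later):
CVPixelBufferPoolRef pixelBufferPool = NULL;
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &pixelBufferPool);

// Per frame:
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pxbuffer);
BOOL ok = [adaptor appendPixelBuffer:pxbuffer withPresentationTime:presentTime]; // this is the call that fails on the device
CVPixelBufferRelease(pxbuffer);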

I have found very little information about this online. I based my code on the examples I could find, but I have had no luck for days now. If anyone has experience doing this successfully with AVAssetWriter, please take a look and tell me if you see anything that looks wrong.

Note: you will see commented-out blocks from earlier attempts.

First, the setup:

- (BOOL)openVideoFile:(NSString *)path withSize:(CGSize)imageSize {
    size = CGSizeMake(480.0, 320.0); //imageSize;

    NSError *error = nil;
    videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];
    if (error != nil)
        return NO;

    NSDictionary *videoCleanApertureSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                                [NSNumber numberWithDouble:size.width], AVVideoCleanApertureWidthKey,
                                                [NSNumber numberWithDouble:size.height], AVVideoCleanApertureHeightKey,
                                                [NSNumber numberWithInt:10], AVVideoCleanApertureHorizontalOffsetKey,
                                                [NSNumber numberWithInt:10], AVVideoCleanApertureVerticalOffsetKey,
                                                nil];

    NSDictionary *videoAspectRatioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              [NSNumber numberWithInt:1], AVVideoPixelAspectRatioHorizontalSpacingKey,
                                              [NSNumber numberWithInt:1], AVVideoPixelAspectRatioVerticalSpacingKey,
                                              nil];

    NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   //[NSNumber numberWithInt:960000], AVVideoAverageBitRateKey,
                                   //[NSNumber numberWithInt:1], AVVideoMaxKeyFrameIntervalKey,
                                   videoCleanApertureSettings, AVVideoCleanApertureKey,
                                   videoAspectRatioSettings, AVVideoPixelAspectRatioKey,
                                   //AVVideoProfileLevelH264Main31, AVVideoProfileLevelKey,
                                   nil];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   codecSettings, AVVideoCompressionPropertiesKey,
                                   [NSNumber numberWithDouble:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithDouble:size.height], AVVideoHeightKey,
                                   nil];

    writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                      outputSettings:videoSettings] retain];

    NSMutableDictionary *bufferAttributes = [[NSMutableDictionary alloc] init];
    [bufferAttributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
                         forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    [bufferAttributes setObject:[NSNumber numberWithInt:480]
                         forKey:(NSString *)kCVPixelBufferWidthKey];
    [bufferAttributes setObject:[NSNumber numberWithInt:320]
                         forKey:(NSString *)kCVPixelBufferHeightKey];

    //NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
    //[bufferAttributes setObject:[NSNumber numberWithInt:640]
    //                     forKey:(NSString *)kCVPixelBufferWidthKey];
    //[bufferAttributes setObject:[NSNumber numberWithInt:480]
    //                     forKey:(NSString *)kCVPixelBufferHeightKey];

    adaptor = [[AVAssetWriterInputPixelBufferAdaptor
                assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                sourcePixelBufferAttributes:nil] retain];

    //CVPixelBufferPoolCreate(kCFAllocatorSystemDefault, NULL, (CFDictionaryRef)bufferAttributes, &pixelBufferPool);

    // Create buffer pool
    NSMutableDictionary *attributes = [NSMutableDictionary dictionary];

    int width = 480;
    int height = 320;

    [attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    [attributes setObject:[NSNumber numberWithInt:width] forKey:(NSString *)kCVPixelBufferWidthKey];
    [attributes setObject:[NSNumber numberWithInt:height] forKey:(NSString *)kCVPixelBufferHeightKey];
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &pixelBufferPool);

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    writerInput.expectsMediaDataInRealTime = YES;

    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    buffer = NULL;
    lastTime = kCMTimeZero;
    presentTime = kCMTimeZero;

    return YES;
}

Next, the two methods: one that appends to the writer and one that creates the pixel buffer to be appended.

- (void)writeImageToMovie:(CGImageRef)image
{
    if ([writerInput isReadyForMoreMediaData])
    {
        // CMTime frameTime = CMTimeMake(1, 20);
        // CMTime lastTime = CMTimeMake(i, 20); // i is from 0 to 24 of the loop above
        // CMTime presentTime = CMTimeAdd(lastTime, frameTime);

        buffer = [self pixelBufferFromCGImage:image];
        BOOL success = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        if (!success) NSLog(@"Failed to appendPixelBuffer");
        CVPixelBufferRelease(buffer);

        presentTime = CMTimeAdd(lastTime, CMTimeMake(5, 1000));
        lastTime = presentTime;
    }
    else
    {
        NSLog(@"error - writerInput not ready");
    }
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    if (pixelBufferPool == NULL) NSLog(@"pixelBufferPool is null!");
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    /*if (pxbuffer == NULL) {
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                              size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options,
                                              &pxbuffer);
    }*/
    //NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    //NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    //NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(90, 10, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

Best Answer

I have found the solution to this problem.

If you want AVAudioPlayer and AVAssetWriter to work correctly together, you must use an audio session category that is "mixable".

You can use a category that is already mixable, such as AVAudioSessionCategoryAmbient.

However, I needed to use AVAudioSessionCategoryPlayAndRecord.

You can make any category mixable by setting this property:

OSStatus propertySetError = 0;

UInt32 allowMixing = true;

propertySetError = AudioSessionSetProperty (
    kAudioSessionProperty_OverrideCategoryMixWithOthers, // 1
    sizeof (allowMixing),                                // 2
    &allowMixing                                         // 3
);
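For what it's worth, on newer iOS versions the same override can be expressed through the AVAudioSession Objective-C API instead of the C Audio Session functions used above (which were deprecated in later iOS releases). A sketch, not part of the original answer, which targeted iOS 4.x:

// Equivalent mixable PlayAndRecord category via AVAudioSession (iOS 6 and later).
NSError *sessionError = nil;
BOOL ok = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                           withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                                 error:&sessionError];
if (!ok) NSLog(@"Could not set mixable audio session category: %@", sessionError);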

Regarding "iphone - AVAssetWriter Dilemma", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/6189535/
