
ios - CGContextDrawImage camera app crash


I am trying to get images using AVCaptureSession. I followed this tutorial: http://www.benjaminloulier.com/posts/2-ios4-and-direct-access-to-the-camera . I create a UIImage from the image ref and then read the pixels from that UIImage. But the app crashes after a while (less than 30 seconds). I tried profiling with Leaks, but that crashes as well. Using logs, I found that the app crashes just before the line CGContextDrawImage(context, rect, image1.CGImage);. Do you have any suggestions about what I might be doing wrong? I also see memory allocation errors a few seconds before the app crashes. Please help.
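For context, the capture pipeline in that tutorial is wired up roughly along the lines of the sketch below. This is only an illustration, not code from the question: it assumes ARC and a 32BGRA video data output, and the class name CameraReader is made up.

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Sketch of a capture pipeline in the spirit of the tutorial above.
@interface CameraReader : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation CameraReader

- (void)startCapture
{
    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) {
        [self.session addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // Ask for BGRA frames so each pixel is 4 bytes and can be read directly.
    output.videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey :
                                  @(kCVPixelFormatType_32BGRA) };
    // Drop frames that arrive while the delegate is still busy with the previous one.
    output.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue = dispatch_queue_create("camera.frames", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    [self.session addOutput:output];

    [self.session startRunning];
}

// Called once per captured frame; this is where a method such as
// imageFromSampleBuffer: (shown below) would be invoked.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Process sampleBuffer here.
}

@end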

The code is posted below.

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    lock = @"YES";

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    NSData *data = [NSData dataWithBytes:baseAddress length:bufferSize];

    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef quartzImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                           colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                           dataProvider, NULL, true, kCGRenderingIntentDefault);

    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    baseAddress = nil;
    [data release];
    lock = @"NO";
    return (image);
}

- (void)calculate
{
    @try {
        UIImage *image1 = [self stillImage]; // Capture an image from the camera.

        // Extract the pixels from the camera image.
        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();

        size_t bytesPerRow = image1.size.width * 4;
        unsigned char *bitmapData = (unsigned char *)malloc(bytesPerRow * image1.size.height);

        CGContextRef context = CGBitmapContextCreate(bitmapData, image1.size.width, image1.size.height, 8, bytesPerRow,
                                                     colourSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);

        CGColorSpaceRelease(colourSpace);

        CGContextDrawImage(context, rect, image1.CGImage);

        unsigned char *pixels = (unsigned char *)CGBitmapContextGetData(context);

        totalLuminance = 0.0;
        for (int p = 0; p < image1.size.width * image1.size.height * 4; p += 4)
        {
            totalLuminance += pixels[p] * 0.3 + pixels[p + 1] * 0.59 + pixels[p + 2] * 0.11;
        }

        totalLuminance /= (image1.size.height * image1.size.width);

        pixels = nil;
        bitmapData = nil;

        [image1 release];

        CGContextRelease(context);
        //image1 = nil;

        //totalLuminance = [n floatValue]; //Calculate the total luminance.
        float f = [del.camcont.slider value];
        float total = totalLuminance * f;
        NSString *ns = [NSString stringWithFormat:@"Lux : %0.2f", total];
        NSLog(@"slider = %f", f);
        NSLog(@"totalLuminance = %f", totalLuminance);
        NSLog(@"%@", ns);
        //NSString *ns = [NSString initWithFormat:@"Lux : %0.2f", total];
        [del.camcont.lux setText:ns]; // Display the total luminance.

        self.stillImage = nil;
        //[self.stillImage release];
        ns = nil;
        //n = nil;
        //del = nil;
    }
    @catch (NSException *exception) {
        NSLog(@"main: Caught %@: %@", [exception name], [exception reason]);
    }
}

Best Answer

It's not clear to me why you're taking a CMSampleBufferRef, creating a CGImageRef from it, then creating a UIImage, then taking that UIImage's CGImageRef and sucking the data out of it into an unsigned char pointer (which, in essence, points at the same data that was sitting in the CMSampleBufferRef in the first place).

You'll simplify your life (and should find the code much easier to debug) if you do something like this instead:

CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

// Copy the raw pixel data out of the buffer, then unlock it again.
uint8_t *pixels = malloc(bytesPerRow * height);
memcpy(pixels, baseAddress, bytesPerRow * height);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

float totalLuminance = 0.0;
for (int r = 0; r < height; r++)
{
    // Each pixel occupies 4 bytes, so a row holds width*4 bytes of pixel data.
    for (int p = 0; p < width * 4; p += 4)
    {
        totalLuminance += pixels[p + (r * bytesPerRow)] * 0.3
                        + pixels[p + 1 + (r * bytesPerRow)] * 0.59
                        + pixels[p + 2 + (r * bytesPerRow)] * 0.11;
    }
}
free(pixels);
totalLuminance /= (width * height);

(The nested for loop compensates for the fact that bytesPerRow cannot be assumed to equal width*4, because of row padding.)
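To make that concrete, the pixel at column x, row y starts at byte offset y*bytesPerRow + x*4. A padding-aware version of the loop could therefore be written as in the sketch below, which reuses the variable names above; note that in a kCVPixelFormatType_32BGRA buffer the first byte of each pixel is blue.

// Step each row by bytesPerRow so any padding at the end of a row is skipped.
for (size_t y = 0; y < height; y++)
{
    uint8_t *row = pixels + y * bytesPerRow;   // first byte of row y
    for (size_t x = 0; x < width; x++)
    {
        uint8_t blue  = row[x * 4];            // BGRA byte order
        uint8_t green = row[x * 4 + 1];
        uint8_t red   = row[x * 4 + 2];
        totalLuminance += red * 0.3f + green * 0.59f + blue * 0.11f;
    }
}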

For ios - CGContextDrawImage camera app crash, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/12218255/
