ios - AVCaptureSession abandoned memory - Allocations - Instruments


I use a default AVCaptureSession to capture the camera view.
Everything works and I have no leaks, but when I use Allocations to look for abandoned memory after starting and then closing the AVCaptureDevice, it shows about 230 objects that are still alive.

Here is my code:

Controller.h:

@interface Controller : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate> {
    AVCaptureSession *captureSession;
    AVCaptureDevice *device;

    IBOutlet UIView *previewLayer;
}
@property (nonatomic, retain) AVCaptureSession *captureSession;
@property (nonatomic, retain) UIView *previewLayer;

- (void)setupCaptureSession;
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end

Controller.m:

- (void)setupCaptureSession {
    NSError *error = nil;

    [self setCaptureSession:[[AVCaptureSession alloc] init]];

    self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] && [device lockForConfiguration:&error]) {
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [device unlockForConfiguration];
    }

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // TODO: Handle the error when the input cannot be created
    }
    [[self captureSession] addInput:input];

    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [[self captureSession] addOutput:output];

    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    output.videoSettings =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    output.minFrameDuration = CMTimeMake(1, 15);

    [[self captureSession] startRunning];

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    captureVideoPreviewLayer.frame = previewLayer.bounds;
    [previewLayer.layer insertSublayer:captureVideoPreviewLayer atIndex:0];
    [previewLayer setHidden:NO];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (mutex && ![device isAdjustingFocus] && ![device isAdjustingExposure] && ![device isAdjustingWhiteBalance]) {
        // Create a UIImage from the sample buffer data
        mutex = NO;
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

        image = [Tools rotateImage:image andRotateAngle:UIImageOrientationUp];

        CGRect rect;
        rect.size.width = 210;
        rect.size.height = 50;
        rect.origin.x = 75;
        rect.origin.y = 175;

        UIImage *croppedImage = [image resizedImage:image.size interpolationQuality:kCGInterpolationHigh];
        croppedImage = [croppedImage croppedImage:rect];

        croppedImage = [self processImage:croppedImage];
        [NSThread detachNewThreadSelector:@selector(threadedReadAndProcessImage:) toTarget:self withObject:croppedImage];
    }
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGImageRelease(quartzImage);

    return image;
}

I clean everything up with this code:

- (void)cancelTapped {
    [[self captureSession] stopRunning];
    self.captureSession = nil;

    for (UIView *view in self.previewLayer.subviews) {
        [view removeFromSuperview];
    }

    [self dismissModalViewControllerAnimated:YES];
}
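(One thing worth noting about the cleanup above: `setupCaptureSession` inserts the `AVCaptureVideoPreviewLayer` as a CALayer *sublayer* of `previewLayer.layer`, not as a subview, so the `subviews` loop never removes it. A minimal sketch of detaching it as well, using the same manual reference counting as the rest of the code; iterating over a copy of `sublayers` is an assumption made here to avoid mutating the array while enumerating it:)

```objc
// Inside cancelTapped, after the subviews loop: also detach the
// preview layer itself. It is a CALayer sublayer, not a subview,
// so removeFromSuperview never touches it.
for (CALayer *layer in [[self.previewLayer.layer.sublayers copy] autorelease]) {
    if ([layer isKindOfClass:[AVCaptureVideoPreviewLayer class]]) {
        [layer removeFromSuperlayer];
    }
}
```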

- (void)dealloc {
    [captureSession release];
    [device release];
    [previewLayer release];

    [super dealloc];
}

Instruments shows me something like this: http://i.stack.imgur.com/NBWgZ.png

http://i.stack.imgur.com/1GB6C.png

Any idea what I'm doing wrong?

Best Answer

- (void)setupCaptureSession {
    NSError *error = nil;

    [self setCaptureSession:[[AVCaptureSession alloc] init]];
    ...

This leaks the capture session, which will keep all of the inputs and outputs (and all of their internal little helpers) alive.

Two options:

AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
[session release], session = nil;
// or:
self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
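(As an aside, not part of the original answer: if the project were migrated to ARC, the asker's original assignment would become correct as written, because the compiler inserts the balancing release for the temporary:)

```objc
// Under ARC, assigning a +1 object to a strong property is not a leak;
// the compiler releases the temporary automatically, so neither
// autorelease nor an explicit release is needed (or allowed).
self.captureSession = [[AVCaptureSession alloc] init];
```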

Regarding "ios - AVCaptureSession abandoned memory - Allocations - Instruments", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5275063/
