
ios - Saving AVCaptureSession output sample buffers to Core Data


I am capturing frames from the camera with an AVCaptureSession, using the setSampleBufferDelegate method of the AVCaptureVideoDataOutput class. The delegate method is shown below; you can see that I convert the frame to a UIImage and place it in a UIImageView. I want to save each UIImage to disk and store its URL in a new managed object, but I don't know how to correctly obtain a managed object context, since each callback arrives on a new thread spawned by the serial dispatch queue. Can anyone suggest a solution that uses Core Data together with dispatch queues, so that I can build a collection of images stored on disk, each corresponding to a managed object?

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /* Create a CGImageRef from the CVImageBufferRef */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    /* We release some components */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    /* We display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly).
       Same thing as for the CALayer: we are not on the main thread, so... */
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];

    /* We release the CGImageRef */
    CGImageRelease(newImage);

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];

    /* We unlock the image buffer */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}

Best Answer

The recommended solution is to create a new NSManagedObjectContext for each thread, with every context pointing at a single NSPersistentStoreCoordinator. You may also want to listen for NSManagedObjectContextDidSaveNotification and merge the changes into the main thread's context (using the aptly named mergeChangesFromContextDidSaveNotification:).
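
As a minimal sketch of that merge step (the registration method, handler name, and the assumption that self.managedObjectContext is the main-thread context are mine, not part of the original answer):

    - (void)registerForSaveNotifications {
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(contextDidSave:)
                                                     name:NSManagedObjectContextDidSaveNotification
                                                   object:nil];
    }

    - (void)contextDidSave:(NSNotification *)notification {
        /* Ignore saves made by the main-thread context itself. */
        if ([notification object] == self.managedObjectContext) return;
        /* The merge must run on the thread that owns the receiving context. */
        [self.managedObjectContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                                    withObject:notification
                                                 waitUntilDone:NO];
    }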

Personally, I like to use an accessor like this in a central location to handle per-thread contexts:

- (NSManagedObjectContext *)managedObjectContext {
    NSManagedObjectContext *context = [[[NSThread currentThread] threadDictionary] objectForKey:@"NSManagedObjectContext"];
    if (context == nil) {
        context = [[[NSManagedObjectContext alloc] init] autorelease];
        [context setPersistentStoreCoordinator:self.persistentStoreCoordinator];
        [[[NSThread currentThread] threadDictionary] setObject:context forKey:@"NSManagedObjectContext"];
    }
    return context;
}
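
Tying this back to your capture delegate, here is a hedged sketch (not the answer's own code) of saving each frame to disk and recording its URL on the background queue. It assumes a hypothetical entity named "Frame" with a string attribute "imageURL", plus the per-thread managedObjectContext accessor above; adjust the names to your actual model:

    /* Sketch only: "Frame" and "imageURL" are hypothetical model names. */
    NSData *jpegData = UIImageJPEGRepresentation(image, 0.8);
    NSString *fileName = [NSString stringWithFormat:@"%@.jpg",
                          [[NSProcessInfo processInfo] globallyUniqueString]];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:fileName];
    if ([jpegData writeToFile:path atomically:YES]) {
        /* The accessor above returns this thread's own context. */
        NSManagedObjectContext *context = [self managedObjectContext];
        NSManagedObject *frame =
            [NSEntityDescription insertNewObjectForEntityForName:@"Frame"
                                          inManagedObjectContext:context];
        [frame setValue:[[NSURL fileURLWithPath:path] absoluteString]
                 forKey:@"imageURL"];
        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Core Data save failed: %@", error);
        }
    }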

Keep in mind that passing NSManagedObjects between threads is no more acceptable than passing contexts. Instead, you must pass the NSManagedObjectID (from the object's objectID property) and then, on the destination thread, use that thread's context's objectWithID: method to get an equivalent object.
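
For illustration, a small sketch of that hand-off, assuming the frame object was created and saved on the background queue as in the earlier sketch:

    /* Illustration only: the objectID is permanent once the object has been saved. */
    NSManagedObjectID *frameID = [frame objectID];
    dispatch_async(dispatch_get_main_queue(), ^{
        NSManagedObject *mainThreadFrame =
            [self.managedObjectContext objectWithID:frameID];
        /* Use mainThreadFrame only on the main thread. */
    });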

Regarding "ios - Saving AVCaptureSession output sample buffers to Core Data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5689874/
