
ios - Why isn't my image updating when I set it from the capture output delegate?


I'm trying to do something very simple. I want to display the video layer full screen, and once per second update a UIImage with the CMSampleBufferRef I receive at that moment. However, I'm running into two different problems. The first is that changing:

[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];

also affects the video preview layer. I thought it would only change the rate at which AV Foundation sends data to the delegate, but it seems to affect the whole session (which is more visible in the preview), so my preview video also updates only once per second. I suppose I could omit these lines and simply add a timer in the delegate so that the CMSampleBufferRef is handed to another method once per second for processing, but I don't know whether that is the right approach.
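(For illustration, a minimal sketch of the timer approach described above; the latestImage property and the refreshImageView method are hypothetical names introduced only for this example and are not part of the original code:)

@property (atomic, retain) UIImage *latestImage; // hypothetical property holding the most recent frame

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Convert every buffer, but only remember the result; a timer decides when to display it.
    self.latestImage = [self imageFromSampleBuffer:sampleBuffer];
}

// Scheduled once, e.g. in viewDidLoad:
// [NSTimer scheduledTimerWithTimeInterval:1.0 target:self
//          selector:@selector(refreshImageView) userInfo:nil repeats:YES];
- (void)refreshImageView {
    // NSTimer fires on the main run loop, so it is safe to touch UIKit here.
    [imageView setImage:self.latestImage];
}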

My second problem is that the UIImageView is not updating, or sometimes it updates only once and then never changes again. This is the method I use to update it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    //NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView setImage:image];
    // Add your code here that uses the image.
    NSLog(@"update");
}

which I took from Apple's sample code. The method is called correctly once per second; I verified this by watching the "update" log messages. But the image does not change at all. Also, is the sampleBuffer destroyed automatically, or do I have to release it?

Here are the other two relevant methods. viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create input: %@", error);
    }

    if ([session canAddInput:input]) {
        [session addInput:input];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];

    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
        //dataOutput.minFrameDuration = CMTimeMake(1, 15);
        //dataOutput.minFrameDuration = CMTimeMake(1, 1);
        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];

        [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];

    [captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videoLayer.layer.position];

    [videoLayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}

And here is the method that converts a CMSampleBufferRef into a UIImage:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

Thanks in advance for any help you can give me.

Best Answer

From the documentation of the captureOutput:didOutputSampleBuffer:fromConnection: method:

This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.

This means that if you need to update the UI from this method, you have to do it on the main queue, like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}

Edit: Regarding your first question: I'm not sure I fully understand it, but if you only want to update the image once per second, you could also keep a "lastImageUpdateTime" value, compare it against the current time inside the didOutputSampleBuffer method to check whether enough time has passed, and only update the image then; otherwise ignore the sample buffer.
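For example, a minimal sketch of that idea (the lastImageUpdateTime ivar of type NSTimeInterval is an assumed addition, not part of the original code):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Skip this frame if less than a second has passed since the last UI update.
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    if (now - lastImageUpdateTime < 1.0) {
        return;
    }
    lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}

This keeps the preview layer running at the full frame rate while throttling only the UIImageView updates, instead of limiting the whole session with setVideoMinFrameDuration/setVideoMaxFrameDuration.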

Regarding "ios - Why isn't my image updating when I set it from the capture output delegate?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/9514111/
