ios - Accessing images and audio with AVFoundation


I'm using AVFoundation to access images and audio in order to make a video. The problem comes when I add the device for audio:

// Microphone input and audio data output, added to the audio session.
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *microphone_input = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
AVCaptureAudioDataOutput *audio_output = [[AVCaptureAudioDataOutput alloc] init];
[self.captureSession2 addInput:microphone_input];
[self.captureSession2 addOutput:audio_output];

dispatch_queue_t queue2 = dispatch_queue_create("Audio", NULL);
[audio_output setSampleBufferDelegate:self queue:queue2];
dispatch_release(queue2);

And the images from the camera:
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

// Putting it on the input.
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];

// Selecting the output.
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

[self.captureSession addInput:captureInput];
[self.captureSession addOutput:captureOutput];

dispatch_queue_t queue = dispatch_queue_create("cameraQueue", 0);
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

And after all of that, getting the raw data through the delegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if ([captureOutput isKindOfClass:[AVCaptureAudioDataOutput class]])
        [self sendAudeoRaw:sampleBuffer];
    if ([captureOutput isKindOfClass:[AVCaptureVideoDataOutput class]])
        [self sendVideoRaw:sampleBuffer];
}

Getting the raw image data is very slow: only about 2 images per second. How can I improve that? I'm aiming for roughly 10-12 images per second.
Please help.

Best Answer

Do these four things to start:

Create a global queue, and don't release it until you release the encapsulating object; specify 'serial' as the queue type, and make its target the main queue:

_captureOutputQueue  = dispatch_queue_create_with_target("bush.alan.james.PhotosRemote.captureOutputQueue", DISPATCH_QUEUE_SERIAL, dispatch_get_main_queue());
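
A sketch of holding that queue for the life of the object (assuming ARC, where dispatch objects are retained automatically; the class name here is a placeholder, and under MRC you would dispatch_release the queue in dealloc instead):

@implementation MyCaptureController {
    dispatch_queue_t _captureOutputQueue; // created once, kept until the object is deallocated
}

- (instancetype)init {
    if ((self = [super init])) {
        _captureOutputQueue = dispatch_queue_create_with_target("bush.alan.james.PhotosRemote.captureOutputQueue",
                                                                DISPATCH_QUEUE_SERIAL,
                                                                dispatch_get_main_queue());
    }
    return self;
}
@end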

Get the media type description from each sample buffer to determine whether it contains audio or video data:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDescription);
if (mediaType == kCMMediaType_Audio)...
if (mediaType == kCMMediaType_Video)...
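
Put together, the delegate callback might look like this (handleAudioSampleBuffer: and handleVideoSampleBuffer: are placeholder names, not from the original):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Branch on the buffer's media type rather than on the output's class.
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDescription);
    if (mediaType == kCMMediaType_Audio) {
        [self handleAudioSampleBuffer:sampleBuffer]; // placeholder handler
    } else if (mediaType == kCMMediaType_Video) {
        [self handleVideoSampleBuffer:sampleBuffer]; // placeholder handler
    }
}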

Rather than sending the sample buffers to another class via a method call, make that other class the data output's delegate; otherwise, you are doubling the work.
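
For example, a minimal sketch of a dedicated handler object set as the delegate of both outputs (the SampleBufferHandler class is an assumption for illustration, not from the original answer):

#import <AVFoundation/AVFoundation.h>

@interface SampleBufferHandler : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate,
                                           AVCaptureAudioDataOutputSampleBufferDelegate>
@end

@implementation SampleBufferHandler
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Consume the buffer here directly instead of forwarding it to another object.
}
@end

The handler would be registered once per output, e.g. [captureOutput setSampleBufferDelegate:handler queue:queue], and held in a strong property so it outlives the session.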

Finally, make sure you run the AVCaptureSession on its own queue. From Apple's AVCaptureSession documentation:

The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive). See AVCam-iOS: Using AVFoundation to Capture Images and Movies for an implementation example.



This includes any calls to methods that configure the camera, and in particular any calls to AVCaptureSession's startRunning or stopRunning methods:
dispatch_async(self.sessionQueue, ^{
[self configureSession];
});

dispatch_async(self.sessionQueue, ^{
[self.session startRunning];
});

dispatch_async(self.sessionQueue, ^{
[self.session stopRunning];
});
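
Here sessionQueue would be a private serial queue created once up front; a minimal sketch, with the property name borrowed from Apple's AVCam sample:

// Created once (e.g. in init) and stored in a property for the session's lifetime.
self.sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);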

If you can't make the class that processes the sample buffers the delegate, you might consider putting the buffers on a queue that both classes can access, and then passing a key:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    static char kMyKey; // the key can hold any value; pass the key--not the sample buffer--to the receiver
    // Retain the buffer and attach it to the shared serial queue; CFRelease is registered as the destructor.
    dispatch_queue_set_specific(((AppDelegate *)[[UIApplication sharedApplication] delegate]).serialQueue,
                                &kMyKey,
                                (void *)CFRetain(sampleBuffer),
                                (dispatch_function_t)CFRelease);
}

In the receiving class:
dispatch_async(((AppDelegate *)[[UIApplication sharedApplication] delegate]).serialQueue, ^{
    CMSampleBufferRef sb = (CMSampleBufferRef)dispatch_get_specific(&kMyKey);
    NSLog(@"sb: %i", CMSampleBufferIsValid(sb));
});
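
One caveat with this sketch: dispatch_get_specific compares keys by address, so kMyKey must be the same variable in both classes rather than a static declared inside the sender's method. For example, it could live in a shared header (an assumption added here, not in the original answer):

// SharedKeys.h -- both sender and receiver import this.
extern char kMyKey;

// SharedKeys.m
char kMyKey; // only this variable's address matters; its value is never read

Because CFRelease was registered as the destructor, the retained buffer is released automatically when a new value replaces it or the queue is destroyed.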

Regarding ios - Accessing images and audio with AVFoundation, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11374164/
