
objective-c - Capturing the camera buffer in Mac/Cocoa


In my application I need to capture the image buffer from the camera and send it over the network to the other end.

I used the following code:

- (void)startVideoSessionInSubThread {
    // Create the capture session
    pPool = [[NSAutoreleasePool alloc] init];

    mCaptureSession = [[QTCaptureSession alloc] init];

    // Connect inputs and outputs to the session
    BOOL success = NO;
    NSError *error = nil;

    // Find a video device
    QTCaptureDevice *videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    success = [videoDevice open:&error];

    // If a video input device can't be found or opened, try to find and open a muxed input device
    if (!success) {
        videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeMuxed];
        success = [videoDevice open:&error];
    }

    if (!success) {
        videoDevice = nil;
        // Handle error
    }

    if (videoDevice) {
        // Add the video device to the session as a device input
        mCaptureVideoDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:videoDevice];
        success = [mCaptureSession addInput:mCaptureVideoDeviceInput error:&error];
        if (!success) {
            // Handle error
        }

        mCaptureDecompressedVideoOutput = [[QTCaptureDecompressedVideoOutput alloc] init];

        [mCaptureDecompressedVideoOutput setPixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithDouble:320.0], (id)kCVPixelBufferWidthKey,
            [NSNumber numberWithDouble:240.0], (id)kCVPixelBufferHeightKey,
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
            // kCVPixelFormatType_32BGRA, (id)kCVPixelBufferPixelFormatTypeKey,
            nil]];

        [mCaptureDecompressedVideoOutput setDelegate:self];

        [mCaptureDecompressedVideoOutput setMinimumVideoFrameInterval:0.0333333333333]; // minimum interval of 1/30 s, i.e. ~30 fps

        success = [mCaptureSession addOutput:mCaptureDecompressedVideoOutput error:&error];
        if (!success) {
            [[NSAlert alertWithError:error] runModal];
            return;
        }

        [mCaptureView setCaptureSession:mCaptureSession];
        bVideoStart = NO;
        [mCaptureSession startRunning];
        bVideoStart = NO;
    }
}

- (void)startVideoSession {
    // start the capture session on a separate thread
    [NSThread detachNewThreadSelector:@selector(startVideoSessionInSubThread) toTarget:self withObject:nil];
}

In the delegate callback:

// Do something with the buffer
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    [self processImageBufferNew:videoFrame];
    return;
}

In processImageBufferNew: I add the image to a queue. It is a synchronized queue, and a separate thread reads from the queue and processes the buffers.
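For reference, a minimal sketch of the kind of synchronized queue described here, assuming a producer/consumer design like the one in the question. The class name FrameQueue and its methods are hypothetical (the question's actual processImageBufferNew: implementation is not shown); the delegate would enqueue a retained frame and the sender thread would dequeue the oldest one:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical synchronized FIFO of captured frames (oldest first).
@interface FrameQueue : NSObject {
    NSMutableArray *mFrames;   // NSValue-wrapped CVImageBufferRefs
}
- (void)enqueueFrame:(CVImageBufferRef)frame;
- (CVImageBufferRef)dequeueFrame;   // caller releases the returned frame with CVBufferRelease
@end

@implementation FrameQueue

- (id)init {
    if ((self = [super init])) {
        mFrames = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)enqueueFrame:(CVImageBufferRef)frame {
    CVBufferRetain(frame);              // keep the pixel buffer alive past the delegate callback
    @synchronized (mFrames) {
        [mFrames addObject:[NSValue valueWithPointer:frame]];
    }
}

- (CVImageBufferRef)dequeueFrame {
    @synchronized (mFrames) {
        if ([mFrames count] == 0)
            return NULL;
        CVImageBufferRef frame = (CVImageBufferRef)[[mFrames objectAtIndex:0] pointerValue];
        [mFrames removeObjectAtIndex:0];
        return frame;                   // already retained by enqueueFrame:
    }
}

@end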

What happens is that, judging by the logs, control enters the capture callback very frequently, so sending frames becomes very slow and the queue size grows very quickly.

Any suggestions on the design?

I run the network thread separately; it queries the queue for the oldest node so frames can be sent in order. From the logs it looks like more than 500 nodes are added per minute, which leads to growing memory use and CPU starvation.
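A rough sketch of such a sender loop, building on the hypothetical FrameQueue above. The ivars mFrameQueue and mKeepSending, and the method sendFrameOverNetwork:, are placeholders for whatever the application actually uses:

// Hypothetical network thread: drains the queue oldest-first and sends each frame.
- (void)networkSendThread {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    while (mKeepSending) {                      // placeholder run flag
        CVImageBufferRef frame = [mFrameQueue dequeueFrame];
        if (frame == NULL) {
            [NSThread sleepForTimeInterval:0.01];   // queue empty: back off briefly
            continue;
        }
        [self sendFrameOverNetwork:frame];      // placeholder for the real transmit code
        CVBufferRelease(frame);                 // balance the retain taken at enqueue time
    }
    [pool release];
}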

Should I use some other logic to capture the camera frames?

Best Answer

If you cannot send frames over the network as fast as QTCaptureDecompressedVideoOutput's captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: delegate method delivers them, you will have to start dropping frames at some point (when you run low on memory, when you run out of space in a fixed-size array of nodes waiting to be sent, and so on).
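One way to do that, as a hedged sketch on top of the hypothetical FrameQueue above: bound the queue, and once it holds kMaxQueuedFrames frames, release and discard the oldest one instead of letting memory grow without bound. The constant and its value are illustrative only:

enum { kMaxQueuedFrames = 30 };   // illustrative: roughly one second of video at 30 fps

- (void)enqueueFrame:(CVImageBufferRef)frame {
    CVBufferRetain(frame);
    @synchronized (mFrames) {
        if ([mFrames count] >= kMaxQueuedFrames) {
            // The sender is falling behind: drop the oldest queued frame.
            CVImageBufferRef oldest = (CVImageBufferRef)[[mFrames objectAtIndex:0] pointerValue];
            CVBufferRelease(oldest);
            [mFrames removeObjectAtIndex:0];
        }
        [mFrames addObject:[NSValue valueWithPointer:frame]];
    }
}

Lowering the capture rate also helps: the question already calls setMinimumVideoFrameInterval:, so raising that interval (or shrinking the pixel buffer dimensions in the pixel buffer attributes) reduces how many frames reach the delegate in the first place.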

I would suggest choosing a network transmission scheme in which dropped frames are less noticeable or abrupt. Higher network throughput means fewer frames have to be dropped; slower network speeds mean more frames have to be dropped.

Regarding objective-c - Capturing the camera buffer in Mac/Cocoa, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/8405464/
