iphone - Converting a CVImageBufferRef to a CVPixelBufferRef

Reposted. Author: 可可西里. Updated: 2023-11-01 03:25:22

I am new to iOS programming and multimedia, and I have been going through a sample project named RosyWriter provided by Apple at this link. Here I saw that the code contains a function named captureOutput:didOutputSampleBuffer:fromConnection, shown below:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

    if ( connection == videoConnection ) {

        // Get framerate
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
        [self calculateFramerateAtTimestamp:timestamp];

        // Get frame dimensions (for onscreen display)
        if (self.videoDimensions.width == 0 && self.videoDimensions.height == 0)
            self.videoDimensions = CMVideoFormatDescriptionGetDimensions( formatDescription );

        // Get buffer type
        if ( self.videoType == 0 )
            self.videoType = CMFormatDescriptionGetMediaSubType( formatDescription );

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Synchronously process the pixel buffer to de-green it.
        [self processPixelBuffer:pixelBuffer];

        // Enqueue it for preview. This is a shallow queue, so if image processing is taking too long,
        // we'll drop this frame for preview (this keeps preview latency low).
        OSStatus err = CMBufferQueueEnqueue(previewBufferQueue, sampleBuffer);
        if ( !err ) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CMSampleBufferRef sbuf = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(previewBufferQueue);
                if (sbuf) {
                    CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(sbuf);
                    [self.delegate pixelBufferReadyForDisplay:pixBuf];
                    CFRelease(sbuf);
                }
            });
        }
    }

    CFRetain(sampleBuffer);
    CFRetain(formatDescription);
    dispatch_async(movieWritingQueue, ^{

        if ( assetWriter ) {

            BOOL wasReadyToRecord = (readyToRecordAudio && readyToRecordVideo);

            if (connection == videoConnection) {

                // Initialize the video input if this is not done yet
                if (!readyToRecordVideo)
                    readyToRecordVideo = [self setupAssetWriterVideoInput:formatDescription];

                // Write video data to file
                if (readyToRecordVideo && readyToRecordAudio)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
            }
            else if (connection == audioConnection) {

                // Initialize the audio input if this is not done yet
                if (!readyToRecordAudio)
                    readyToRecordAudio = [self setupAssetWriterAudioInput:formatDescription];

                // Write audio data to file
                if (readyToRecordAudio && readyToRecordVideo)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
            }

            BOOL isReadyToRecord = (readyToRecordAudio && readyToRecordVideo);
            if ( !wasReadyToRecord && isReadyToRecord ) {
                recordingWillBeStarted = NO;
                self.recording = YES;
                [self.delegate recordingDidStart];
            }
        }
        CFRelease(sampleBuffer);
        CFRelease(formatDescription);
    });
}

Here a function named pixelBufferReadyForDisplay is called, which expects a parameter of type CVPixelBufferRef.

Prototype of pixelBufferReadyForDisplay:

- (void)pixelBufferReadyForDisplay:(CVPixelBufferRef)pixelBuffer; 

But in the code above, when this function is called, it is passed the variable pixBuf, which is of type CVImageBufferRef.

So my question is: doesn't converting a CVImageBufferRef to a CVPixelBufferRef require some function or type cast, or is it done implicitly by the compiler?

Thanks.

Best Answer

If you search the Xcode documentation for CVPixelBufferRef, you will find the following:

typedef CVImageBufferRef CVPixelBufferRef;

So CVPixelBufferRef is just another name for CVImageBufferRef. The two are interchangeable.

You are looking at some pretty hairy code. RosyWriter, and another sample app called "Chromakey", do some very low-level processing of pixel buffers. If you are new to iOS development AND new to multimedia, you might not want to dig so deep, so fast. It's a bit like a first-year medical student attempting a heart-lung transplant.

Regarding iphone - Converting a CVImageBufferRef to a CVPixelBufferRef, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18660861/

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号