
ios - Convert CVImageBuffer to a YUV420 object


I want to keep the YUV420 format of the streaming video coming from the camera, to avoid the losses of a grayscale conversion while still keeping the color components. The end goal is processing with a computer-vision library such as OpenCV. Although I may eventually settle on BGRA, I still want a working YUV solution to test against. So: how do I convert a CVImageBuffer whose pixel format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange into a single block of memory?
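For context: the camera only delivers that format if the video data output asks for it. A minimal setup sketch, assuming an existing AVCaptureSession named captureSession and a delegate implemented elsewhere:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Request the bi-planar full-range YUV format for delivered sample buffers.
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("video.capture", DISPATCH_QUEUE_SERIAL)];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}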

Rejected solutions:

  • CIImage is very convenient, but it does not allow rendering to a YUV-formatted bitmap.
  • cv::Mat pollutes your Obj-C code with C++.

Best Answer

AVCaptureVideoDataOutputSampleBufferDelegate

This fills an NSObject with the raw bytes according to whichever pixel format was specified. I went ahead and added the ability to detect and malloc memory for either the BGRA or the YUV pixel formats, so this solution is well suited to testing both.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef videoImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(videoImageBuffer, 0);

    void *baseAddress = NULL;
    NSUInteger totalBytes = 0;
    size_t width = CVPixelBufferGetWidth(videoImageBuffer);
    size_t height = 0;
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(videoImageBuffer);
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(videoImageBuffer);

    if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ||
        pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
        // Bi-planar YUV: sum the plane sizes so a single block covers both
        // the Y plane and the interleaved CbCr plane. This treats the planes
        // as one contiguous region starting at plane 0.
        size_t planeCount = CVPixelBufferGetPlaneCount(videoImageBuffer);
        baseAddress = CVPixelBufferGetBaseAddressOfPlane(videoImageBuffer, 0);

        for (size_t plane = 0; plane < planeCount; plane++) {
            size_t planeHeight = CVPixelBufferGetHeightOfPlane(videoImageBuffer, plane);
            size_t planeBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(videoImageBuffer, plane);
            height += planeHeight;
            totalBytes += planeHeight * planeBytesPerRow;
        }
    } else if (pixelFormat == kCVPixelFormatType_32BGRA) {
        // Packed BGRA: one plane, so the buffer is already a single block.
        baseAddress = CVPixelBufferGetBaseAddress(videoImageBuffer);
        height = CVPixelBufferGetHeight(videoImageBuffer);
        totalBytes += height * CVPixelBufferGetBytesPerRow(videoImageBuffer);
    }

    // Doesn't have to be an NSData object
    NSData *rawPixelData = [NSData dataWithBytes:baseAddress length:totalBytes];

    // Just a plain-ol-NSObject with the following properties
    NTNUVideoFrame *videoFrame = [[NTNUVideoFrame alloc] init];
    videoFrame.width = width;
    videoFrame.height = height;
    videoFrame.bytesPerRow = bytesPerRow;
    videoFrame.pixelFormat = pixelFormat;
    // Alternatively, if you switch rawPixelData to void *:
    // videoFrame.rawPixelData = baseAddress;
    videoFrame.rawPixelData = rawPixelData;
    [self.delegate didUpdateVideoFrame:videoFrame];

    CVPixelBufferUnlockBaseAddress(videoImageBuffer, 0);
}
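For reference, the answer does not spell out NTNUVideoFrame or its delegate. A minimal sketch consistent with the property names used above (the protocol name here is hypothetical):

@class NTNUVideoFrame;

@protocol NTNUVideoFrameDelegate <NSObject>
// Called once per captured frame with a plain-data snapshot of the pixels.
- (void)didUpdateVideoFrame:(NTNUVideoFrame *)videoFrame;
@end

@interface NTNUVideoFrame : NSObject
@property (nonatomic, assign) size_t width;
@property (nonatomic, assign) size_t height;
@property (nonatomic, assign) size_t bytesPerRow;
@property (nonatomic, assign) OSType pixelFormat;
@property (nonatomic, strong) NSData *rawPixelData;
@end

@implementation NTNUVideoFrame
@end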

The only thing you need to keep in mind is that if you plan on switching threads or using dispatch_async without an NSData, you will need to malloc and memcpy the base address yourself. Once you unlock the base address, the pixel data is no longer valid.

void *rawPixelData = malloc(totalBytes);
memcpy(rawPixelData, baseAddress, totalBytes);

At that point you will need to think about calling free on that block of memory once you are done with it.
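For example, a sketch of that hand-off (this assumes rawPixelData was switched to the void * variant mentioned in the code comments, and that self.processingQueue is a serial queue you own):

videoFrame.rawPixelData = rawPixelData;     // hand over the malloc'd copy from above
dispatch_async(self.processingQueue, ^{
    [self.delegate didUpdateVideoFrame:videoFrame];
    free(videoFrame.rawPixelData);          // the copy outlives the unlock; free it once processed
});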

Regarding ios - Convert CVImageBuffer to a YUV420 object, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27889825/
