
ios - How to scale/resize a CVPixelBufferRef in Objective-C, iOS


I am trying to resize an image from a CVPixelBufferRef to 299x299; ideally it would also crop the image. The original pixel buffer is 640x320, and the goal is to scale/crop to 299x299 without losing the aspect ratio (cropping to center).

I found code for resizing a UIImage in Objective-C, but nothing for resizing a CVPixelBufferRef. I found various very complicated Objective-C examples for many different image types, but none specifically for resizing a CVPixelBufferRef.

What is the simplest/best way to do this? Please include exact code.

... I tried selton's answer, but it did not work because the resulting type in the scaled buffer was incorrect (it trips the assertion code below):

OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
int doReverseChannels;
if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
    doReverseChannels = 1;
} else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
    doReverseChannels = 0;
} else {
    assert(false); // Unknown source format
}
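
One way to avoid tripping this assertion in the first place is to request BGRA frames from the capture output up front; a minimal sketch (my addition, not part of the original question), using the documented videoSettings key:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask the output to deliver kCVPixelFormatType_32BGRA pixel buffers,
// so the format check above always takes the BGRA branch
videoOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};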

Best answer

Using CoreMLHelpers as inspiration, we can create a C function that does what you need. Based on your pixel format requirements, I think this solution will be the most efficient option. I used an AVCaptureVideoDataOutput for testing.

I hope this helps!

The AVCaptureVideoDataOutputSampleBufferDelegate implementation. Most of the work here is creating the centered cropping rectangle. Using AVMakeRectWithAspectRatioInsideRect is the key; it does exactly what you want.

#import <AVFoundation/AVFoundation.h>

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }

    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    CGRect videoRect = CGRectMake(0, 0, width, height);
    CGSize scaledSize = CGSizeMake(299, 299);

    // Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
    CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);

    CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);

    // Do other things here
    // For example
    CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
    // End example

    CVPixelBufferRelease(croppedAndScaled);
}
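
As a concrete illustration (the dimensions come from the question; the computed rect is my own worked example): for a 640x320 frame and a 299x299 target, the square aspect ratio is limited by the frame height, so the call yields a centered 320x320 rect:

// Worked example for the question's 640x320 source frame
CGRect crop = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(299, 299),
                                                  CGRectMake(0, 0, 640, 320));
// crop == CGRectMake(160, 0, 320, 320): a centered square that
// createCroppedPixelBuffer then scales down to 299x299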

Method 1: Data manipulation and Accelerate

The basic premise of this function is that it first crops to the specified rectangle, then scales to the final desired size. Cropping is achieved by simply ignoring the data outside the rectangle. Scaling is achieved with Accelerate's vImageScale_ARGB8888 function. Again, thanks to CoreMLHelpers for the insight.

#import <Accelerate/Accelerate.h>
#import <CoreVideo/CoreVideo.h>

void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
    CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
    CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);

    assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
    assert(scaleSize.width > 0 && scaleSize.height > 0);
}

// Frees the malloc'd backing store once the CVPixelBuffer no longer needs it
void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
    if (baseAddress != NULL) {
        free((void *)baseAddress);
    }
}

// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {

    OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    assert(inputPixelFormat == kCVPixelFormatType_32BGRA
           || inputPixelFormat == kCVPixelFormatType_32ABGR
           || inputPixelFormat == kCVPixelFormatType_32ARGB
           || inputPixelFormat == kCVPixelFormatType_32RGBA);

    assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);

    if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        NSLog(@"Could not lock base address");
        return nil;
    }

    void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
    if (sourceData == NULL) {
        NSLog(@"Error: could not get pixel buffer base address");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    // Crop by pointing the vImage buffer at the top-left pixel of the crop
    // rect; data outside the rect is skipped via the source row stride
    size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
    size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;

    vImage_Buffer croppedvImageBuffer = {
        .data = ((char *)sourceData) + offset,
        .height = (vImagePixelCount)CGRectGetHeight(croppingRect),
        .width = (vImagePixelCount)CGRectGetWidth(croppingRect),
        .rowBytes = sourceBytesPerRow
    };

    size_t scaledBytesPerRow = scaledSize.width * 4;
    void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
    if (scaledData == NULL) {
        NSLog(@"Error: out of memory");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    vImage_Buffer scaledvImageBuffer = {
        .data = scaledData,
        .height = (vImagePixelCount)scaledSize.height,
        .width = (vImagePixelCount)scaledSize.width,
        .rowBytes = scaledBytesPerRow
    };

    /* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
     * other channel orderings of 4-channel images, such as RGBA or BGRA. */
    vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
    CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (error != kvImageNoError) {
        NSLog(@"Error: %ld", error);
        free(scaledData);
        return nil;
    }

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);

    if (status != kCVReturnSuccess) {
        NSLog(@"Error: could not create new pixel buffer");
        free(scaledData);
        return nil;
    }

    return outputPixelBuffer;
}
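
If you want to reuse this outside the capture callback, a small wrapper (my own addition, not part of the original answer) can fold the aspect-ratio math into a single call:

// Hypothetical convenience wrapper: center-crop the source to
// targetSize's aspect ratio, then scale to targetSize.
// Returns a +1 retained buffer, or nil on failure.
CVPixelBufferRef createCenterCroppedScaledPixelBuffer(CVPixelBufferRef source, CGSize targetSize) {
    CGRect videoRect = CGRectMake(0, 0,
                                  CVPixelBufferGetWidth(source),
                                  CVPixelBufferGetHeight(source));
    CGRect cropRect = AVMakeRectWithAspectRatioInsideRect(targetSize, videoRect);
    return createCroppedPixelBuffer(source, cropRect, targetSize);
}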

Method 2: CoreImage

This method is easier to read and has the benefit of being completely agnostic to the pixel buffer format you pass in, which is a plus for certain use cases. Granted, you are limited to the formats CoreImage supports.

#import <CoreImage/CoreImage.h>

CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
                                                   CGRect cropRect,
                                                   CGSize scaleSize,
                                                   CIContext *context) {

    assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);

    CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
    image = [image imageByCroppingToRect:cropRect];

    CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
    CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);

    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // Due to the way [CIContext render:toCVPixelBuffer:] works, we need to translate the image so the cropped section is at the origin
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

    CVPixelBufferRef output = NULL;

    CVPixelBufferCreate(nil,
                        CGRectGetWidth(image.extent),
                        CGRectGetHeight(image.extent),
                        CVPixelBufferGetPixelFormatType(pixelBuffer),
                        nil,
                        &output);

    if (output != NULL) {
        [context render:image toCVPixelBuffer:output];
    }

    return output;
}
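
A call site in the delegate above might look like this (a sketch, assuming self.context holds the CIContext created below; note the returned buffer carries a +1 retain count):

CVPixelBufferRef scaled = createCroppedPixelBufferCoreImage(pixelBuffer, centerCroppingRect, scaledSize, self.context);
if (scaled != NULL) {
    // ... consume the 299x299 buffer ...
    CVPixelBufferRelease(scaled);
}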

Creating the CIContext can be done at the call site, or the context can be created once and stored in a property. For information about the available options, see the documentation.

// Create a CIContext using default settings; this will
// typically use the GPU and Metal by default if supported
if (self.context == nil) {
    self.context = [CIContext context];
}
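
If you need CPU rendering instead (for example, when rendering while the app is in the background), one of the documented options can be passed at creation time; a minimal sketch:

// Force the software renderer via the documented CIContext option
self.context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @YES}];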

Regarding "ios - How to scale/resize a CVPixelBufferRef in Objective-C, iOS", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51445776/
