
ios - How to use the Accelerate framework with Core Graphics?

Reposted · Author: 行者123 · Updated: 2023-12-01 19:04:45

I have a project. It basically takes a photo with the iPhone camera and applies some effects to it. Before applying an effect, I use Core Graphics to scale the image to the appropriate size. After scaling and rotating the image, I use the Accelerate framework (vImage) to apply the effect. My problem is that after the effect is applied, the image comes out with a bluish tint. However, if I don't scale the image with Core Graphics first, the bluish tint doesn't appear.

The scaling code I'm using comes from this post.

Here is my effect code:

- (UIImage *)applyFiltertoImage:(UIImage *)img
{
    CGImageRef image = img.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    void *pixelBuffer;

    CGDataProviderRef inProvider = CGImageGetDataProvider(image);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(image);
    inBuffer.height = CGImageGetHeight(image);
    inBuffer.rowBytes = CGImageGetBytesPerRow(image);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(image) * CGImageGetHeight(image));
    if (pixelBuffer == NULL) {
        NSLog(@"No buffer");
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(image);
    outBuffer.height = CGImageGetHeight(image);
    outBuffer.rowBytes = CGImageGetBytesPerRow(image);

    vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, self.kernel, self.size, self.size, self.divisor, NULL, kvImageEdgeExtend);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);

    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *blurredImage = [UIImage imageWithCGImage:imageRef];

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);

    return blurredImage;
}

Best Answer

Avoid manually redefining the CGContext

Try letting vImage initialize the values for you. vImageBuffer_InitWithCGImage can save you some pain.

The straightforward version

- (UIImage *)applyFiltertoImage:(UIImage *)image
{
    CGImageRef originalImageRef = image.CGImage;
    CGColorSpaceRef originalColorSpace = CGColorSpaceRetain(CGImageGetColorSpace(originalImageRef));

    if (_pixelBuffer == NULL) {
        _pixelBuffer = malloc(CGImageGetBytesPerRow(originalImageRef) * CGImageGetHeight(originalImageRef));
    }

    vImage_CGImageFormat inputImageFormat = {
        .bitsPerComponent = (uint32_t)CGImageGetBitsPerComponent(originalImageRef),
        .bitsPerPixel = (uint32_t)CGImageGetBitsPerComponent(originalImageRef) * (uint32_t)(CGColorSpaceGetNumberOfComponents(originalColorSpace) + (kCGImageAlphaNone != CGImageGetAlphaInfo(originalImageRef))),
        .colorSpace = originalColorSpace,
        .bitmapInfo = CGImageGetBitmapInfo(originalImageRef),
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault
    };
    vImage_Buffer inputImageBuffer;
    vImageBuffer_InitWithCGImage(&inputImageBuffer, &inputImageFormat, NULL, originalImageRef, kvImageNoFlags);

    vImage_Buffer outputImageBuffer = {
        .data = _pixelBuffer,
        .width = CGImageGetWidth(originalImageRef),
        .height = CGImageGetHeight(originalImageRef),
        .rowBytes = CGImageGetBytesPerRow(originalImageRef)
    };

    vImage_Error error;
    error = vImageConvolve_ARGB8888(&inputImageBuffer,
                                    &outputImageBuffer,
                                    NULL,
                                    0,
                                    0,
                                    self.kernel,
                                    self.size,
                                    self.size,
                                    self.divisor,
                                    NULL,
                                    kvImageEdgeExtend);
    if (error) {
        NSLog(@"vImage error %zd", error);
    }
    free(inputImageBuffer.data);

    vImage_CGImageFormat outFormat = {
        .bitsPerComponent = (uint32_t)CGImageGetBitsPerComponent(originalImageRef),
        .bitsPerPixel = (uint32_t)CGImageGetBitsPerComponent(originalImageRef) * (uint32_t)(CGColorSpaceGetNumberOfComponents(originalColorSpace) + (kCGImageAlphaNone != CGImageGetAlphaInfo(originalImageRef))),
        .colorSpace = originalColorSpace,
        .bitmapInfo = CGImageGetBitmapInfo(originalImageRef),
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault
    };
    CGImageRef modifiedImageRef = vImageCreateCGImageFromBuffer(&outputImageBuffer,
                                                                &outFormat,
                                                                NULL,
                                                                NULL,
                                                                kvImageNoFlags,
                                                                &error);
    CGColorSpaceRelease(originalColorSpace);

    UIImage *returnImage = [UIImage imageWithCGImage:modifiedImageRef];
    CGImageRelease(modifiedImageRef);

    return returnImage;
}

The high-performance version

Create _inputImageBuffer, _outputImageBuffer, and _outputImageFormat once per image, then re-apply the filter to the source image. Once vImage warms up, it starts shaving a few milliseconds off each call.
- (UIImage *)applyFilter
{
    vImage_Error error;
    error = vImageConvolve_ARGB8888(&_inputImageBuffer,
                                    &_outputImageBuffer,
                                    NULL,
                                    0,
                                    0,
                                    self.kernel,
                                    self.size,
                                    self.size,
                                    self.divisor,
                                    NULL,
                                    kvImageEdgeExtend);
    if (error) {
        NSLog(@"vImage error %zd", error);
    }

    CGImageRef modifiedImageRef = vImageCreateCGImageFromBuffer(&_outputImageBuffer,
                                                                &_outputImageFormat,
                                                                NULL,
                                                                NULL,
                                                                kvImageNoFlags,
                                                                &error);
    UIImage *returnImage = [UIImage imageWithCGImage:modifiedImageRef];
    CGImageRelease(modifiedImageRef);

    return returnImage;
}

Regarding "ios - How to use the Accelerate framework with Core Graphics?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20401763/
