ios - UIImage becomes blurry when scaled. Why? (iOS 5.0)

A UIImage always comes out blurry when I scale it. How can I keep it sharp?

- (UIImage *)rescaleImageToSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0, 0.0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    [self drawInRect:rect]; // scales image to rect
    UIImage *resImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resImage;
}
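
For reference, this reads like a UIImage category method, since it draws with [self drawInRect:]. A hypothetical call site (the image name and target size here are assumptions) might look like:

UIImage *original = [UIImage imageNamed:@"photo.png"];   // hypothetical source image
UIImage *thumbnail = [original rescaleImageToSize:CGSizeMake(100.0f, 100.0f)];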

Best Answer

Rounding

First, make sure you round your sizes before scaling; drawInRect: can blur an otherwise usable image when given fractional dimensions. To truncate the dimensions to whole values:

size.width = truncf(size.width);
size.height = truncf(size.height);

For some tasks you may want to round down (floorf) or round up (ceilf) instead.
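
As a minimal sketch, that snapping could live in a small helper; the function name is an assumption, not part of the answer:

#include <math.h>

// Hypothetical helper: snap a CGSize to whole pixel values before scaling.
// truncf drops the fraction; substitute floorf or ceilf to round down or up.
static CGSize MYIntegralSize(CGSize size)
{
    return CGSizeMake(truncf(size.width), truncf(size.height));
}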

CILanczosScaleTransform is not available

Next, disregard my earlier suggestion of CILanczosScaleTransform. Although parts of Core Image are available in iOS 5.0, Lanczos scaling is not among them. If it ever does become available, use it. It is available on Mac OS, so if you are working there, use it.
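
A rough sketch of what that would look like on Mac OS (the scale factor, variable names, and rendering step are assumptions, not from the answer):

CIImage *input = [CIImage imageWithCGImage:sourceRef];   // sourceRef: an existing CGImageRef
CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[lanczos setValue:input forKey:kCIInputImageKey];
[lanczos setValue:[NSNumber numberWithFloat:0.5f] forKey:@"inputScale"];       // e.g. shrink to half size
[lanczos setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputAspectRatio"];
CIImage *scaled = [lanczos valueForKey:kCIOutputImageKey];
// Render `scaled` through a CIContext (e.g. -createCGImage:fromRect:) to get a bitmap back.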

vImage scaling

However, a high-quality scaling algorithm is available in vImage. The images below compare a method that uses it (vImageScaledImage) against the different context interpolation options. Also note how differently those options behave at different scaling levels.

In this diagram, it preserves the most line detail: [Scaling comparison on diagram]

In this photograph, compare the leaves at the lower left: [Scaling comparison on tree photograph]

In this photograph, compare the textures at the lower right: [Scaling comparison on rock photograph]

Do not use it on pixel art; it produces odd scaling artifacts: [Scaling comparison on pixel art, showing scaling artifacts]

Although on some images it has an interesting rounding-off effect: [Scaling comparison on Space Invader]

Performance

Unsurprisingly, kCGInterpolationHigh is the slowest of the standard image interpolation options, and vImageScaledImage as implemented here is slower still. Shrinking a fractal image to half its original size took 110% of the time of kCGInterpolationHigh; shrinking it to a quarter took 340% of the time.

You might not think so if you run it in the Simulator; there it can be much faster than kCGInterpolationHigh. Presumably vImage's multi-core optimisations give it a relative advantage on the desktop.
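
The kCGInterpolationHigh baseline referred to above is just a Core Graphics redraw; a minimal sketch, with the method name assumed rather than taken from the answer:

// Hypothetical baseline: redraw with Core Graphics, asking for high-quality interpolation.
- (UIImage *)cgScaledImage:(UIImage *)sourceImage withSize:(CGSize)destSize
{
    UIGraphicsBeginImageContext(destSize);
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
    [sourceImage drawInRect:CGRectMake(0.0f, 0.0f, destSize.width, destSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}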

Code

// Method: vImageScaledImage:(UIImage*) sourceImage withSize:(CGSize) destSize
// Returns even better scaling than drawing to a context with kCGInterpolationHigh.
// This employs the vImage routines in Accelerate.framework.
// For more information about vImage, see https://developer.apple.com/library/mac/#documentation/performance/Conceptual/vImage/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001001-CH201-TPXREF101
// Large quantities of memory are manually allocated and (hopefully) freed here. Test your application for leaks before and after using this method.
- (UIImage*) vImageScaledImage:(UIImage*) sourceImage withSize:(CGSize) destSize
{
    UIImage *destImage = nil;

    if (sourceImage)
    {
        // First, convert the UIImage to an array of bytes, in the format expected by vImage.
        // Thanks: http://stackoverflow.com/a/1262893/1318452
        CGImageRef sourceRef = [sourceImage CGImage];
        NSUInteger sourceWidth = CGImageGetWidth(sourceRef);
        NSUInteger sourceHeight = CGImageGetHeight(sourceRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *sourceData = (unsigned char*) calloc(sourceHeight * sourceWidth * 4, sizeof(unsigned char));
        NSUInteger bytesPerPixel = 4;
        NSUInteger sourceBytesPerRow = bytesPerPixel * sourceWidth;
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(sourceData, sourceWidth, sourceHeight,
                                                     bitsPerComponent, sourceBytesPerRow, colorSpace,
                                                     kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
        CGContextDrawImage(context, CGRectMake(0, 0, sourceWidth, sourceHeight), sourceRef);
        CGContextRelease(context);

        // We now have the source data. Construct a pixel array for the destination.
        NSUInteger destWidth = (NSUInteger) destSize.width;
        NSUInteger destHeight = (NSUInteger) destSize.height;
        NSUInteger destBytesPerRow = bytesPerPixel * destWidth;
        unsigned char *destData = (unsigned char*) calloc(destHeight * destWidth * 4, sizeof(unsigned char));

        // Now create vImage structures for the two pixel arrays.
        // Thanks: https://github.com/dhoerl/PhotoScrollerNetwork
        vImage_Buffer src = {
            .data = sourceData,
            .height = sourceHeight,
            .width = sourceWidth,
            .rowBytes = sourceBytesPerRow
        };

        vImage_Buffer dest = {
            .data = destData,
            .height = destHeight,
            .width = destWidth,
            .rowBytes = destBytesPerRow
        };

        // Carry out the scaling.
        vImage_Error err = vImageScale_ARGB8888(
            &src,
            &dest,
            NULL,
            kvImageHighQualityResampling
        );

        // The source bytes are no longer needed.
        free(sourceData);

        // Convert the destination bytes to a UIImage.
        CGContextRef destContext = CGBitmapContextCreate(destData, destWidth, destHeight,
                                                         bitsPerComponent, destBytesPerRow, colorSpace,
                                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
        CGImageRef destRef = CGBitmapContextCreateImage(destContext);

        // Store the result.
        destImage = [UIImage imageWithCGImage:destRef];

        // Free up the remaining memory.
        CGImageRelease(destRef);

        CGColorSpaceRelease(colorSpace);
        CGContextRelease(destContext);

        // The destination bytes are no longer needed.
        free(destData);

        if (err != kvImageNoError)
        {
            NSString *errorReason = [NSString stringWithFormat:@"vImageScale returned error code %d", err];
            NSDictionary *errorInfo = [NSDictionary dictionaryWithObjectsAndKeys:
                                       sourceImage, @"sourceImage",
                                       [NSValue valueWithCGSize:destSize], @"destSize",
                                       nil];

            NSException *exception = [NSException exceptionWithName:@"HighQualityImageScalingFailureException" reason:errorReason userInfo:errorInfo];

            @throw exception;
        }
    }
    return destImage;
}
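
A hypothetical call site for the method above (the target size is assumed). The method relies on Accelerate.framework, so the file needs #import <Accelerate/Accelerate.h> and the framework linked; it throws an NSException on a vImage error, so wrap the call if you want to recover:

UIImage *scaled = nil;
@try {
    scaled = [self vImageScaledImage:sourceImage withSize:CGSizeMake(160.0f, 120.0f)];
}
@catch (NSException *exception) {
    NSLog(@"Scaling failed: %@", [exception reason]);
}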

Regarding "ios - UIImage becomes blurry when scaled. Why? (iOS 5.0)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/10068095/
