
iphone - Does a UIImageView displaying a large image in a small frame still consume a lot of memory?

Reposted. Author: 行者123. Updated: 2023-12-03 19:39:16

I have several large images, each around 1.5 MB. I display each of them in a UIImageView using UIViewContentModeScaleAspectFit.

The UIImageViews' frames are only 150 × 150.

My question is:

I know that if I display a large image full screen, memory usage increases greatly.

But if they are displayed in small UIImageViews, do they still consume that much memory?

Thanks

Best Answer

UIImage and UIImageView are different things. Each UIImageView knows the dimensions at which to display its associated UIImage. A UIImage has no concept of how it is displayed or how it will be used for display, so the mere act of changing the size of a UIImageView has no effect on the UIImage. It therefore has no effect on total memory usage.

What you probably want to do is use Core Graphics to take the UIImage and generate a 150 × 150 version of it as a new UIImage, then push that to the UIImageView.

To perform the scaling, code like the following (written as I typed it, so not thoroughly checked) should do the job:

#include <math.h>

- (UIImage *)scaledImageForImage:(UIImage *)srcImage toSize:(CGSize)maxSize
{
    // pick the target dimensions, as though applying
    // UIViewContentModeScaleAspectFit; seed some values first
    CGSize sizeOfImage = [srcImage size];
    CGSize targetSize; // to store the output size

    // logic here: we're going to scale so as to apply some multiplier
    // to both the width and height of the input image. That multiplier
    // is either going to make the source width fill the output width or
    // it's going to make the source height fill the output height. Of the
    // two possibilities, we want the smaller one, since the larger will
    // make the other axis too large
    if(maxSize.width / sizeOfImage.width < maxSize.height / sizeOfImage.height)
    {
        // we'll letterbox then; scaling width to fill width, since
        // that's the smaller scale of the two possibilities
        targetSize.width = maxSize.width;

        // height is the original height adjusted proportionally
        // to match the proportional adjustment in width
        targetSize.height =
            (maxSize.width / sizeOfImage.width) * sizeOfImage.height;
    }
    else
    {
        // basically the same as the above, except that we pillarbox
        targetSize.height = maxSize.height;
        targetSize.width =
            (maxSize.height / sizeOfImage.height) * sizeOfImage.width;
    }

    // images can be integral sizes only, so round up
    // the target width and height, then construct a target
    // rect that centres the output image within that size;
    // this all ensures sub-pixel accuracy
    CGRect targetRect;

    // store the original target size and round up the original
    targetRect.size = targetSize;
    targetSize.width = ceilf(targetSize.width);
    targetSize.height = ceilf(targetSize.height);

    // work out how to centre the source image within the integral-sized
    // output image
    targetRect.origin.x = (targetSize.width - targetRect.size.width) * 0.5f;
    targetRect.origin.y = (targetSize.height - targetRect.size.height) * 0.5f;

    // now create a CGContext to draw to, draw the image to it suitably
    // scaled and positioned, and turn the thing into a UIImage

    // get a suitable Core Graphics context to draw to, in RGBA;
    // I'm assuming iOS 4 or later here, to save some manual memory
    // management.
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        NULL,
        targetSize.width, targetSize.height,
        8, targetSize.width * 4,
        colourSpace,
        kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // clear the context, since it may currently contain anything
    CGContextClearRect(context,
        CGRectMake(0.0f, 0.0f, targetSize.width, targetSize.height));

    // draw the given image to the newly created context
    CGContextDrawImage(context, targetRect, [srcImage CGImage]);

    // get an image from the CG context, wrapping it as a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGContextRelease(context);
    CGImageRelease(cgImage);

    return returnImage;
}

Obviously I've made it look complicated with the volume of comments, but it's really only about 23 lines of code.

Regarding "iphone - Does a UIImageView displaying a large image in a small frame still consume a lot of memory?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/6767873/
