
ios - Original and output files do not match when reading a UIImage into a buffer and writing it back to a UIImage in iOS


Using the code below, I get a UIImage from an ALAsset:

ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation]; // Big Picture
imageRef = [assetRepresentation fullResolutionImage];

if (imageRef)
{
    UIImage *image = [UIImage imageWithCGImage:imageRef];
}

Now I create a buffer from this UIImage using the following code:

+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *)image {

    CGImageRef imageRef = image.CGImage;

    // Create a bitmap context to draw the uiimage into
    CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];

    if(!context) {
        return NULL;
    }

    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    CGRect rect = CGRectMake(0, 0, width, height);

    // Draw image into the context to get the raw image data
    CGContextDrawImage(context, rect, imageRef);

    // Get a pointer to the data
    unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);

    // Copy the data and release the memory (return memory allocated with new)
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
    size_t bufferLength = bytesPerRow * height;

    unsigned char *newBitmap = NULL;

    if(bitmapData) {
        newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);

        if(newBitmap) {    // Copy the data
            for(int i = 0; i < bufferLength; ++i) {
                newBitmap[i] = bitmapData[i];
            }
        }

        free(bitmapData);

    } else {
        NSLog(@"Error getting bitmap pixel data\n");
    }

    CGContextRelease(context);

    return newBitmap;
}

Now, I convert the buffer back into a UIImage using the following code:

+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)buffer withWidth:(int)width withHeight:(int)height
{
    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();

    if(colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,   // data provider
                                    NULL,       // decode
                                    YES,        // should interpolate
                                    renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);

    if(pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);

    if(context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
        pixels = NULL; // prevent the free() below from running a second time
    }

    UIImage *image = nil;

    if(context)
    {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);

        CGImageRef imageRef = CGBitmapContextCreateImage(context);

        // Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
        if([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }

        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);

    if(pixels) {
        free(pixels);
    }

    return image;
}

+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef)image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    uint32_t *bitmapData;

    size_t bitsPerPixel = 32;
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    size_t bytesPerRow = width * bytesPerPixel;
    size_t bufferLength = bytesPerRow * height;

    colorSpace = CGColorSpaceCreateDeviceRGB();

    if(!colorSpace) {
        NSLog(@"Error allocating color space RGB\n");
        return NULL;
    }

    // Allocate memory for image data
    bitmapData = (uint32_t *)malloc(bufferLength);

    if(!bitmapData) {
        NSLog(@"Error allocating memory for bitmap\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create bitmap context
    context = CGBitmapContextCreate(bitmapData,
                                    width,
                                    height,
                                    bitsPerComponent,
                                    bytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);    // RGBA

    if(!context) {
        free(bitmapData);
        NSLog(@"Bitmap context not created");
    }

    CGColorSpaceRelease(colorSpace);

    return context;
}

Then... I save it to the photo album:

+ (void) saveImageToPhotoAlbum:(UIImage *)image
{
    if(image != nil)
    {
        NSData *imageData = UIImagePNGRepresentation(image);    // get png representation
        UIImage *pngImage = [UIImage imageWithData:imageData];

        UIImageWriteToSavedPhotosAlbum(pngImage, self, nil, nil);
    }
    else
    {
        NSLog(@"Couldn't save to Photo Album due to invalid image..");
    }
}

My calling code is as follows:

void *imageData = [self convertUIImageToBitmapRGBA8:image];

UIImage *uiImage = [self convertBitmapRGBA8ToUIImage:imageData withWidth:width withHeight:height];

[self saveImageToPhotoAlbum:uiImage];

When I do this, the file size changes and the two files do not appear to be identical.

For example, if the original file is 33KB, it comes out of this process at 332KB. What is going wrong here?

Best Answer

Yes, loading an image into a UIImage and then calling UIImagePNGRepresentation (or even UIImageJPEGRepresentation) can change the file size. Judging by the file sizes, the original image looks like a JPEG. PNG files are generally larger than compressed JPEG files, even when the JPEG uses only modest compression. If you want files of comparable size, try UIImageJPEGRepresentation with various quality settings (e.g., a compressionQuality of 0.99, 0.9, or 0.8).
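For example, a minimal sketch of a JPEG-based save routine, modelled on the saveImageToPhotoAlbum: method above (the method name and the 0.8 quality value are only illustrative, not from the question):

+ (void) saveImageToPhotoAlbumAsJPEG:(UIImage *)image
{
    if(image != nil)
    {
        // Re-encode as JPEG; tune compressionQuality (0.0-1.0) to trade file size against quality
        NSData *imageData = UIImageJPEGRepresentation(image, 0.8);
        UIImage *jpegImage = [UIImage imageWithData:imageData];

        UIImageWriteToSavedPhotosAlbum(jpegImage, self, nil, nil);
    }
    else
    {
        NSLog(@"Couldn't save to Photo Album due to invalid image..");
    }
}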

As an aside, the mere act of loading the image into a UIImage can itself change things relative to the original asset (if nothing else, stripping metadata, possibly altering the color space, etc.).
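If the goal is a byte-for-byte copy of the original asset, one option (a sketch only, not code from the question) is to bypass UIImage entirely and read the raw bytes straight from the ALAssetRepresentation:

ALAssetRepresentation *rep = [asset defaultRepresentation];

// Copy the asset's stored bytes verbatim, without decoding or re-encoding
long long size = [rep size];
uint8_t *rawBuffer = (uint8_t *)malloc((size_t)size);

if(rawBuffer) {
    NSError *error = nil;
    NSUInteger bytesRead = [rep getBytes:rawBuffer fromOffset:0 length:(NSUInteger)size error:&error];

    if(bytesRead > 0) {
        // originalData now holds exactly what the photo library has stored
        NSData *originalData = [NSData dataWithBytesNoCopy:rawBuffer length:bytesRead freeWhenDone:YES];
        // ... write originalData out or process it as needed ...
    } else {
        NSLog(@"Failed to read asset bytes: %@", error);
        free(rawBuffer);
    }
}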

Regarding "ios - Original and output files do not match when reading a UIImage into a buffer and writing it back to a UIImage in iOS", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/21536281/
