
ios - How do I convert BGRA bytes to a UIImage for saving?

Reposted · Author: 行者123 · Updated: 2023-11-29 00:22:56

I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

// Raw pixel values (cast needed: -bytes returns const void *)
UInt32 *values = (UInt32 *)[dataForRawBytes bytes];

// Test out the Dropbox upload here
[self uploadDropbox:dataForRawBytes];
// End of Dropbox upload

// Do whatever with your bytes
// [self processImages:dataForRawBytes];

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];
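As an aside, individual pixels can be read from the locked buffer inside the handler, before `CVPixelBufferUnlockBaseAddress` is called. This is a minimal sketch, not part of the original post; the coordinates are placeholders, and it assumes the `kCVPixelFormatType_32BGRA` format used below:

```objectivec
// Read the channels of the pixel at (x, y) — sample coordinates for illustration.
size_t x = 10, y = 20;
size_t width  = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
if (x < width && y < height) {
    // Rows may be padded, so index with bytesPerRow, not width * 4.
    GLubyte *pixel = rawImageBytes + y * bytesPerRow + x * 4;
    GLubyte blue  = pixel[0]; // BGRA: blue comes first
    GLubyte green = pixel[1];
    GLubyte red   = pixel[2];
    GLubyte alpha = pixel[3];
    NSLog(@"Pixel (%zu, %zu): R=%u G=%u B=%u A=%u", x, y, red, green, blue, alpha);
}
```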

I am using the following settings for the camera:

NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:
                          AVVideoCodecJPEG, AVVideoCodecKey,
                          [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                          nil];

For testing purposes I want to save the captured image to Dropbox, and to do that I need to write it to the tmp directory first. How would I save dataForRawBytes? Any help would be greatly appreciated!
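(One possible approach, not from the original post: since `dataForRawBytes` is already an `NSData`, it can be written straight to the tmp directory. A sketch, with an arbitrary file name:)

```objectivec
// Write the raw bytes to the tmp directory; "rawFrame.data" is a placeholder name.
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"rawFrame.data"];
NSError *error = nil;
BOOL ok = [dataForRawBytes writeToFile:tmpPath options:NSDataWritingAtomic error:&error];
if (!ok) {
    NSLog(@"Failed to write raw bytes: %@", error);
}
```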

Best Answer

So I was able to figure out how to get a UIImage from the raw data; here is my modified code:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
Byte *rawImageBytes = (Byte *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * height];
// Do whatever with your bytes

// Create a suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Create a suitable context (matches the camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

// Release the color space
CGColorSpaceRelease(colorSpace);

// Create a CGImageRef from the bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];

// Release the Core Graphics objects we created (Create rule) to avoid leaks
CGImageRelease(newImage);
CGContextRelease(newContext);
// The image is captured; now we can test saving it.

I needed to create the color space and related attributes, generate a CGContextRef, and use it to finally get a UIImage; while debugging I could see that the image I captured was correct.
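To complete the original goal of saving to the tmp directory, the resulting UIImage can be encoded and written out. This is a sketch, not from the original answer; the file name is a placeholder:

```objectivec
// Encode the UIImage as PNG and write it to the tmp directory.
NSData *pngData = UIImagePNGRepresentation(FinalImage);
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capturedFrame.png"];
NSError *error = nil;
if (![pngData writeToFile:tmpPath options:NSDataWritingAtomic error:&error]) {
    NSLog(@"Saving failed: %@", error);
}
```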

Regarding "ios - How do I convert BGRA bytes to a UIImage for saving?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43852473/
