ios - Manipulating the height and width of a CVPixelBufferRef


I am using GPUImage library functions to manipulate the height and width of a CVPixelBuffer. I am recording portrait video, and when the user rotates the device my screen automatically switches to landscape mode. I want the landscape frames to fit the screen.

For example: I start the video in portrait mode at 320x568; when I turn the device to landscape, the frame is 568x320, and I want it to fit into 320x568. To adjust this I manipulate the CVPixelBuffer, but it consumes a lot of memory and eventually my app crashes.
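(For reference, AVFoundation's AVMakeRectWithAspectRatioInsideRect, which the code below already uses, computes exactly this letterboxed fit. A minimal sketch for the sizes above:)

    #import <AVFoundation/AVFoundation.h>

    // A landscape 568x320 frame aspect-fit inside a portrait 320x568 target:
    CGRect fitted = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(568, 320),
                                                        CGRectMake(0, 0, 320, 568));
    // fitted is roughly {0, 193.9, 320, 180.3}: the frame is scaled to the full
    // 320-point width and letterboxed vertically inside the portrait bounds.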

    - (CVPixelBufferRef)GPUImageCreateResizedSampleBufferWithBuffer:(CVPixelBufferRef)cameraFrame
                                                         withBuffer:(CGSize)finalSize
                                                   withSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVPixelBufferRef pixel_buffer = NULL;

        // CVPixelBufferCreateWithPlanarBytes for YUV input
        @autoreleasepool {
            CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame),
                                             CVPixelBufferGetHeight(cameraFrame));

            CVPixelBufferLockBaseAddress(cameraFrame, 0);
            GLubyte *sourceImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);

            // Wrap the BGRA camera bytes in a CGImage without copying them
            CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes,
                CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
            CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
            CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height,
                8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace,
                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                dataProvider, NULL, NO, kCGRenderingIntentDefault);

            // Destination bytes for the resized frame; freed by
            // stillImageDataReleaseCallback when the pixel buffer is released
            GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);

            CGContextRef imageContext = CGBitmapContextCreate(imageData,
                (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4,
                genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

            // Aspect-fit the original frame inside the target size (letterboxed)
            CGRect scaledRect = AVMakeRectWithAspectRatioInsideRect(originalSize,
                CGRectMake(0, 0, finalSize.width, finalSize.height));

            CGContextDrawImage(imageContext, scaledRect, cgImageFromBytes);
            CGImageRelease(cgImageFromBytes);
            CGContextRelease(imageContext);
            CGColorSpaceRelease(genericRGBColorspace);
            CGDataProviderRelease(dataProvider);

            CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                (size_t)finalSize.width, (size_t)finalSize.height,
                kCVPixelFormatType_32BGRA, imageData, (size_t)finalSize.width * 4,
                stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);

            CMVideoFormatDescriptionRef videoInfo = NULL;
            CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

            CMTime frameTime = CMTimeMake(1, 30);
            CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

            // Note: this only overwrites the local copy of the sampleBuffer
            // parameter, and the created sample buffer is never released
            CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES,
                NULL, NULL, videoInfo, &timing, &sampleBuffer);
            CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
            CFRelease(videoInfo);
            // CVPixelBufferRelease(pixel_buffer);
        }
        return pixel_buffer;
    }
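Note that the method follows the Core Foundation Create rule (its name contains "Create"), so each returned buffer must be released by whoever consumes it; if it never is, every frame's backing bytes leak, which would match the memory growth described above. A hypothetical caller-side sketch (the call-site names here are illustrative):

    CVPixelBufferRef resized =
        [self GPUImageCreateResizedSampleBufferWithBuffer:cameraFrame
                                               withBuffer:CGSizeMake(320, 568)
                                         withSampleBuffer:sampleBuffer];
    if (resized) {
        // ... hand the resized frame to the writer/encoder ...
        CVPixelBufferRelease(resized); // balances CVPixelBufferCreateWithBytes
    }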

Best Answer

The CG* (Core Graphics) calls run on the CPU, which is too slow for real-time video. Use CV* / Core Image instead, so the work runs on the GPU:

    // - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIImage *resultImg = [baseImg imageByCroppingToRect:outputFrameCropRect];
    resultImg = [resultImg imageByApplyingTransform:outputFrameTransform];

    // created once, reused across frames:
    // glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    // ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace:[NSNull null]}];
    // ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
    // CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);

    [ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
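The crop rect and transform above are left to the reader. A minimal per-frame sketch for the question's 568x320 to 320x568 aspect-fit case (the names and sizes here are illustrative, not part of the answer's code):

    CIImage *src = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGRect target = AVMakeRectWithAspectRatioInsideRect(src.extent.size,
                                                        CGRectMake(0, 0, 320, 568));
    // Aspect-fit is a uniform scale, so width and height ratios are equal
    CGFloat s = target.size.width / src.extent.size.width;
    CGAffineTransform t = CGAffineTransformMakeScale(s, s);
    t = CGAffineTransformConcat(t,
            CGAffineTransformMakeTranslation(target.origin.x, target.origin.y));
    CIImage *fitted = [src imageByApplyingTransform:t];
    // Render into the pooled 320x568 destination; pixels outside fitted.extent
    // are left untouched, so clear the buffer first if black bars are needed
    [ciContext render:fitted
      toCVPixelBuffer:finishPixelBuffer
               bounds:fitted.extent
           colorSpace:ciContextColorSpace];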

Regarding ios - Manipulating the height and width of a CVPixelBufferRef, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41424549/
