ios - CVOpenGLESTextureCacheCreateTextureFromImage fails to create an IOSurface


For my current project I read the output of the iPhone's main camera. I then convert each pixel buffer into a cached OpenGL texture via CVOpenGLESTextureCacheCreateTextureFromImage. This works great when processing camera frames for the preview. I have tested various combinations of the iPhone 3GS, 4, 4S, iPod Touch (4th generation) and iOS 5 and iOS 6.

However, for the actual final image, which has a very high resolution, this only works on these combinations:

  • iPhone 3GS + iOS 5.1.1
  • iPhone 4 + iOS 5.1.1
  • iPhone 4S + iOS 6.0
  • iPod Touch (4th generation) + iOS 5.0

It does not work on the iPhone 4 + iOS 6.

The exact error message in the console (error code -6683 is kCVReturnPixelBufferNotOpenGLCompatible):

Failed to create IOSurface image (texture)
2012-10-01 16:24:30.663 GLCameraRipple[676:907] Error at CVOpenGLESTextureCacheCreateTextureFromImage -6683

I have isolated the problem by modifying Apple's GLCameraRipple sample project. You can find my version here: http://lab.bitshiftcop.com/iosurface.zip

Here is how I add the still-image output to the current session:

- (void)setupAVCapture
{
    //-- Create CVOpenGLESTextureCacheRef for optimal CVImageBufferRef to GLES texture conversion.
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &_videoTextureCache);
    if (err)
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
        return;
    }

    //-- Set up the capture session.
    _session = [[AVCaptureSession alloc] init];
    [_session beginConfiguration];

    //-- Set the preset session size.
    [_session setSessionPreset:_sessionPreset];

    //-- Create a video device and an input from that device. Add the input to the capture session.
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (videoDevice == nil)
        assert(0);

    //-- Add the device to the session.
    NSError *error;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (error)
        assert(0);

    [_session addInput:input];

    //-- Create the video data output for the capture session.
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

    //-- Set the pixel format to 32BGRA.
    [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                             forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

    //-- Dispatch on the main thread so OpenGL can work with the data.
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    //-- Add the still-image output.
    stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                               forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    if ([_session canAddOutput:stillOutput])
        [_session addOutput:stillOutput];

    [_session addOutput:dataOutput];
    [_session commitConfiguration];

    [_session startRunning];
}

And here is how I capture the still output and process it:

- (void)capturePhoto
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            // Process the hi-res image.
            [self captureOutput:stillOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:videoConnection];
        }];
}

And here is how the texture is created:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVReturn err;
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    if (!_videoTextureCache)
    {
        NSLog(@"No video texture cache");
        return;
    }

    if (_ripple == nil ||
        width != _textureWidth ||
        height != _textureHeight)
    {
        _textureWidth = width;
        _textureHeight = height;

        _ripple = [[RippleModel alloc] initWithScreenWidth:_screenWidth
                                              screenHeight:_screenHeight
                                                meshFactor:_meshFactor
                                               touchRadius:5
                                              textureWidth:_textureWidth
                                             textureHeight:_textureHeight];

        [self setupBuffers];
    }

    [self cleanUpTextures];

    NSLog(@"%zu x %zu", _textureWidth, _textureHeight);

    // RGBA texture
    glActiveTexture(GL_TEXTURE0);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       _videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,
                                                       _textureWidth,
                                                       _textureHeight,
                                                       GL_BGRA,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_chromaTexture);
    if (err)
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

Any suggestions for solving this problem?

Best answer

The iPhone 4 (as well as the iPhone 3GS and the 4th-generation iPod Touch) uses a PowerVR SGX 535 GPU, whose maximum OpenGL ES texture size is 2048x2048. This value can be queried with:

GLint maxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

The 4th-generation iPod Touch has a camera resolution of 720x960, and the iPhone 3GS one of 640x1136, but the rear camera of the iPhone 4 captures stills at 1936x2592, which is too large for a single texture.

You can always rewrite the captured image at a smaller size while preserving its aspect ratio (1529x2048). Brad Larson does this in his GPUImage framework, but it is fairly straightforward: use Core Graphics to redraw the data of the original pixel buffer, then create another pixel buffer from the redrawn data. The rest of the framework is a great resource as well, by the way.

Regarding ios - CVOpenGLESTextureCacheCreateTextureFromImage failing to create an IOSurface, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12675655/
