
iphone - How do I rewrite the GLCameraRipple example without using iOS 5.0-specific features?

Reposted. Author: 行者123. Updated: 2023-11-28 18:25:48

How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?

I need it to run on iOS 4.x, so I can't use CVOpenGLESTextureCacheCreateTextureFromImage. What should I do instead?

As a follow-up, I'm using the code below to supply YUV data instead of RGB, but the picture is wrong and the screen is green. The UV plane doesn't seem to be working.

CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);

glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE,
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Upload the interleaved CbCr plane as a two-channel (luminance + alpha) texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA,
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

[self drawFrame];

glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

How can I fix this?

Best Answer

If you switch the pixel format from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA (at line 315 of RippleViewController), then captureOutput:didOutputSampleBuffer:fromConnection: will receive a sample buffer whose image buffer can be uploaded straight to OpenGL with glTexImage2D (or glTexSubImage2D, if you want to keep your texture dimensions at powers of two). That works because every iOS device to date supports the GL_APPLE_texture_format_BGRA8888 extension, which lets you specify the otherwise non-standard GL_BGRA format.
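For context, asking AVFoundation for BGRA frames is a one-line change in the capture-session setup; a minimal sketch (the variable name videoOutput is an assumption standing in for whatever the sample calls its AVCaptureVideoDataOutput):

// Request BGRA frames instead of bi-planar YUV. The hardware does the
// conversion, so each sample buffer arrives ready for a direct
// glTexImage2D/glTexSubImage2D upload using the GL_BGRA format.
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];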

So you'll want to create a texture somewhere in advance with glGenTextures, and then replace line 235 with something like:

glBindTexture(GL_TEXTURE_2D, myTexture);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0,
                CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
                GL_BGRA, GL_UNSIGNED_BYTE,
                CVPixelBufferGetBaseAddress(pixelBuffer));

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

You may want to check that CVPixelBufferGetBytesPerRow is exactly four times CVPixelBufferGetWidth; I'm not certain from the documentation that this is always guaranteed (in practice, the existence of the function suggests it isn't), but as long as it's a multiple of four you can supply CVPixelBufferGetBytesPerRow divided by four as your pretend width, given that you're uploading a sub-image anyway.
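To make that stride arithmetic concrete, here is a small standalone C sketch of the check; the function name and the sample numbers below are mine, for illustration only:

```c
#include <assert.h>
#include <stddef.h>

/* Given bytes-per-row of a 32-bit BGRA pixel buffer, return the width
   (in pixels) to pass to glTexSubImage2D so that any row padding is
   absorbed into the upload. Returns 0 if the stride is not a multiple
   of four, in which case the rows would need repacking by hand. */
static size_t pretend_width(size_t bytesPerRow)
{
    if (bytesPerRow % 4 != 0)
        return 0;               /* padding isn't a whole number of pixels */
    return bytesPerRow / 4;     /* 4 bytes per BGRA pixel */
}
```

For example, a 480-pixel-wide buffer with no padding has 1920 bytes per row and a pretend width of 480; if the same buffer were padded to 2048 bytes per row, you would upload it as 512 pixels wide and crop the extra columns with your texture coordinates.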

Edit: in response to the follow-up question posted below as a comment, if you want to stick with receiving frames and making them available to the GPU in YUV, the code gets visually ugly, because what you get back is a struct pointing at the various channel components, but you'll want something like this:

// lock the base address, pull out the struct that'll show us where the Y
// and CbCr information is actually held
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *info = CVPixelBufferGetBaseAddress(pixelBuffer);

// okay, upload Y. You'll want to communicate this texture to the
// SamplerY uniform within the fragment shader.
glBindTexture(GL_TEXTURE_2D, yTexture);

uint8_t *yBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoY.offset);
uint32_t yRowBytes = EndianU32_BtoN(info->componentInfoY.rowBytes);

/* TODO: check that yRowBytes is equal to CVPixelBufferGetWidth(pixelBuffer);
otherwise you'll need to shuffle memory a little */

glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0,
                CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
                GL_LUMINANCE, GL_UNSIGNED_BYTE,
                yBaseAddress);

// we'll also need to upload the CbCr part of the buffer, as a two-channel
// (ie, luminance + alpha) texture. This texture should be supplied to
// the shader for the SamplerUV uniform.
glBindTexture(GL_TEXTURE_2D, uvTexture);

uint8_t *uvBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoCbCr.offset);
uint32_t uvRowBytes = EndianU32_BtoN(info->componentInfoCbCr.rowBytes);

/* TODO: a check on uvRowBytes, as above */

glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0,
                CVPixelBufferGetWidth(pixelBuffer)/2, CVPixelBufferGetHeight(pixelBuffer)/2,
                GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
                uvBaseAddress);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
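Once both textures are uploaded, the fragment shader recombines them per pixel using the standard BT.601 full-range YCbCr-to-RGB conversion (matching the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format). As a plain-C sketch of that formula, with my own function name:

```c
/* BT.601 full-range YCbCr -> RGB for one pixel; inputs and outputs are
   0-255. This mirrors what the fragment shader computes from the luma
   (SamplerY) and chroma (SamplerUV) textures. Note that if the chroma
   texture is broken, Cb and Cr sample as 0 rather than the neutral 128,
   which clamps red and blue to 0 and boosts green - i.e. exactly the
   all-green picture described in the question. */
static void ycbcr_to_rgb(int y, int cb, int cr, int *r, int *g, int *b)
{
    double rf = y + 1.402   * (cr - 128);
    double gf = y - 0.34414 * (cb - 128) - 0.71414 * (cr - 128);
    double bf = y + 1.772   * (cb - 128);

    /* clamp to the displayable range and round */
    *r = rf < 0 ? 0 : rf > 255 ? 255 : (int)(rf + 0.5);
    *g = gf < 0 ? 0 : gf > 255 ? 255 : (int)(gf + 0.5);
    *b = bf < 0 ? 0 : bf > 255 ? 255 : (int)(bf + 0.5);
}
```

Feeding it a neutral-chroma pixel (Y = 128, Cb = Cr = 128) yields mid-gray, while Cb = Cr = 0 yields pure green regardless of most luma values.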

Regarding "iphone - How do I rewrite the GLCameraRipple example without using iOS 5.0-specific features?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9773632/
