ios - Rendering YpCbCr iPhone 4 camera frames to an OpenGL ES 2.0 texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on the iPhone 4. However, the texture ends up all black. My camera is configured as follows:

[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] 
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
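
For context, a minimal sketch of how such a videoOutput is usually wired into a capture session; session, captureQueue, and the queue label are hypothetical names not present in the question:

// Hypothetical surrounding setup for the videoOutput configured above
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                                          forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
dispatch_queue_t captureQueue = dispatch_queue_create("com.example.capture", NULL);
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}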

and I'm passing the pixel data to my texture like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
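
For reference, the cameraFrame buffer used in this call normally comes from the standard AVCaptureVideoDataOutputSampleBufferDelegate callback, and its base address has to be locked before reading it; a minimal sketch, where the glTexImage2D comment stands in for the upload above:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Grab the pixel buffer for this frame and lock it before touching its memory
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    // ... bind the texture and call glTexImage2D as above ...

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}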

My fragment shader is:

varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame;

void main() {
    lowp vec4 color;
    color = texture2D(videoFrame, textureCoordinate);
    lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
    convertedColor += 1.164 * color.g; // Y
    convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
    convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
    gl_FragColor = vec4(convertedColor, 1.0);
}

and my vertex shader is:

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}

This works just fine when I'm processing BGRA images and my fragment shader simply does:

gl_FragColor = texture2D(videoFrame, textureCoordinate);

What, if anything, am I missing here? Thanks!

Best Answer

OK. We have success here. The key was passing Y and UV to the fragment shader as two separate textures. Here is the final shader:

#ifdef GL_ES
precision mediump float;
#endif

varying vec2 textureCoordinate;

uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;

const mat3 yuv2rgb = mat3(
    1, 0, 1.2802,
    1, -0.214821, -0.380589,
    1, 2.127982, 0
);

void main() {
    vec3 yuv = vec3(
        1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
        texture2D(videoFrameUV, textureCoordinate).r - 0.5,
        texture2D(videoFrameUV, textureCoordinate).a - 0.5
    );
    vec3 rgb = yuv * yuv2rgb;

    gl_FragColor = vec4(rgb, 1.0);
}

You need to create the textures like this:

int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

and then pass them in like this:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);

glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
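
The two uniform handles used above would have been looked up once after linking the shader program; a minimal sketch, where program is a hypothetical handle to the linked program:

// Look up the sampler uniforms by the names declared in the fragment shader
GLint videoFrameUniform   = glGetUniformLocation(program, "videoFrame");
GLint videoFrameUniformUV = glGetUniformLocation(program, "videoFrameUV");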

Boy, am I relieved!

P.S. The values for the yuv2rgb matrix came from http://en.wikipedia.org/wiki/YUV and I copied the code from http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.

Regarding ios - Rendering YpCbCr iPhone 4 camera frames to an OpenGL ES 2.0 texture in iOS 4.3, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/6432159/
