android - Opengl ES 2.0: parts of a model are occluded where they shouldn't. Is it the z-buffer?

I'm rendering a 3D model with OpenGL ES 2.0 using Assimp. I'm currently running into a strange problem where some parts of the model are not visible, even though they should be. It's easy to see in these pictures:

Textured model

In the second picture I render the z-buffer (a linearized version) to the screen to check whether it could be a z-buffer problem. There are black pixels near the camera:

zbuffer render

I tried changing the z-near and z-far values but it had no effect. Right now I'm doing this at initialization:

glEnable(GL_CULL_FACE);// Cull back facing polygons
glEnable(GL_DEPTH_TEST);// Enable depth testing

I'm also doing this every frame:

glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

I thought it might be a face-winding problem, so I tried disabling GL_CULL_FACE, but that didn't help. I'm pretty sure the model is fine, since Blender renders it correctly.
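For reference, the culling setup I was experimenting with amounts to roughly the following. This is only an illustrative sketch using the Java GLES20 bindings (the renderer here actually issues the equivalent calls from native code), and the helper class name is made up:

import android.opengl.GLES20;

// Hypothetical helper showing the winding/culling state that was toggled while debugging.
public final class CullState {
    public static void backFaceCulling() {
        GLES20.glFrontFace(GLES20.GL_CCW);    // counter-clockwise triangles are front-facing (the GL default)
        GLES20.glEnable(GLES20.GL_CULL_FACE); // discard back-facing triangles
    }

    public static void noCulling() {
        // Variant tried while debugging: draw both sides of every triangle.
        // Missing geometry that persists with culling off points away from winding issues.
        GLES20.glDisable(GLES20.GL_CULL_FACE);
    }
}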

These are the shaders I'm using right now:

// vertex shader
uniform mat4 u_ModelMatrix; // A constant representing the model matrix.
uniform mat4 u_ViewMatrix; // A constant representing the view matrix.
uniform mat4 u_ProjectionMatrix; // A constant representing the projection matrix.

attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.


void main()
{
    // Transform the vertex into eye space.
    mat4 u_ModelViewMatrix = u_ViewMatrix * u_ModelMatrix;
    v_Position = vec3(u_ModelViewMatrix * a_Position);

    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;

    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_ModelViewMatrix * vec4(a_Normal, 0.0));

    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_ProjectionMatrix * u_ModelViewMatrix * a_Position;
}

And this is the fragment shader:

// fragment shader
uniform sampler2D u_Texture; // The input texture.
uniform int u_TexCount;

varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
    vec3 u_LightPos = vec3(1.0);
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);

    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);

    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);

    // Add attenuation.
    diffuse = diffuse * (1.0 / distance);

    // Add ambient lighting
    diffuse = diffuse + 0.2;
    diffuse = 1.0;// Debug override: ignore lighting while inspecting the depth output below

    //gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));// Textured version

    // Roughly linearize gl_FragCoord.z using near = 0.1 and far = 100.0 for the debug view.
    float d = (2.0 * 0.1) / (100.0 + 0.1 - gl_FragCoord.z * (100.0 - 0.1));
    gl_FragColor = vec4(d, d, d, 1.0);// z-buffer render
}

I'm using VBOs with indices to load the geometry and everything.
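For context, an indexed-VBO setup in OpenGL ES 2.0 looks roughly like this. This is only an illustrative sketch using the Java GLES20 bindings (my renderer does the equivalent from native code), and the class name, buffer layout, and a_Position handle are assumptions rather than my actual code:

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

// Hypothetical sketch of an indexed VBO: one vertex buffer, one index buffer, drawn with glDrawElements.
public final class IndexedMesh {
    private final int[] buffers = new int[2]; // buffers[0] = vertex VBO, buffers[1] = index buffer
    private final int indexCount;

    public IndexedMesh(float[] positions, short[] indices) {
        indexCount = indices.length;

        // Copy the data into direct, native-ordered buffers so GL can read them.
        FloatBuffer vb = ByteBuffer.allocateDirect(positions.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        vb.put(positions).position(0);
        ShortBuffer ib = ByteBuffer.allocateDirect(indices.length * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        ib.put(indices).position(0);

        // Upload both buffers to the GPU once; they are reused every frame.
        GLES20.glGenBuffers(2, buffers, 0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, positions.length * 4, vb, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
        GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indices.length * 2, ib, GLES20.GL_STATIC_DRAW);
    }

    // Draws the mesh, feeding 3 floats per vertex into the a_Position attribute.
    public void draw(int aPositionHandle) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
        GLES20.glEnableVertexAttribArray(aPositionHandle);
        GLES20.glVertexAttribPointer(aPositionHandle, 3, GLES20.GL_FLOAT, false, 3 * 4, 0);
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount, GLES20.GL_UNSIGNED_SHORT, 0);
    }
}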

Of course I can paste any other code you think might be relevant, but for now I'd be happy just to understand what could be causing this strange behavior, or what tests I could try.

Best Answer

OK, I solved the problem. I'm posting the solution since it may be useful to future googlers.

Basically, I wasn't requesting a depth buffer. I do all the rendering work in native code, but all of the OpenGL context initialization is done on the Java side. I used one of the Android samples (GL2JNIActivity) as a starting point, but it doesn't request any depth buffer, and I didn't notice that.

I fixed the problem by setting the depth buffer size to 24 when setting up the ConfigChooser:

setEGLConfigChooser( translucent ?
        new ConfigChooser(8, 8, 8, 8, 24 /*depth*/, 0) :
        new ConfigChooser(5, 6, 5, 0, 24 /*depth*/, 0) );
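If you're not using a custom ConfigChooser, the stock GLSurfaceView.setEGLConfigChooser overload can request the depth buffer directly. A minimal sketch, assuming a plain GLSurfaceView subclass (the ModelSurfaceView and ModelRenderer names are just placeholders for your own classes):

import android.content.Context;
import android.opengl.GLSurfaceView;

// Hypothetical minimal view that requests an RGBA8888 config with a 24-bit depth buffer.
public class ModelSurfaceView extends GLSurfaceView {
    public ModelSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);          // OpenGL ES 2.0 context
        setEGLConfigChooser(8, 8, 8, 8, 24, 0); // R, G, B, A, depth, stencil -- must be called before setRenderer()
        setRenderer(new ModelRenderer());       // placeholder GLSurfaceView.Renderer implementation
    }
}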

Regarding android - Opengl ES 2.0: parts of a model are occluded where they shouldn't. Is it the z-buffer?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14912753/
