Greetings, my fellow programmers,
I've searched all over the web and gone through the examples out there, but I still can't figure this out. I'm sorry if this has been asked before; after a week of debugging I'm exhausted, and I hope you can help me.
Basically, the problem is that I'm trying to draw a few quads (built out of triangles), but nothing gets drawn. Previously I drew them without VBOs, the way the "Triangle example" on the official Android site describes, and everything worked, but I figured that updating the vertex/index buffers in Renderer.onDrawFrame() wasn't very efficient :)
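For reference, the difference between the two approaches basically comes down to which glVertexAttribPointer overload is used. Roughly like this (a simplified sketch using the field names from the code below, not the actual project code):
// Without a VBO: hand the client-side FloatBuffer to GL on every frame.
vertexBuffer.position(0);
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
        GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
// With a VBO: upload once with glBufferData, then bind the buffer id and
// pass a byte offset (0 here) instead of the Buffer object.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesBufferId[0]);
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
        GLES20.GL_FLOAT, false, vertexStride, 0);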
Here is my code:
public class FloorPlanRenderer implements GLSurfaceView.Renderer {
public volatile float mAngle;
// mMVPMatrix is an abbreviation for "Model View Projection Matrix"
private final float[] mMVPMatrix = new float[16];
private final float[] mProjectionMatrix = new float[16];
private final float[] mViewMatrix = new float[16];
private final float[] mRotationMatrix = new float[16];
private GLSurfaceView mGlView;
private GlEngine mGlEngine;
private boolean dataSet = false;
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// Set the background frame color
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// Initialize the accumulated rotation matrix
Matrix.setIdentityM(mRotationMatrix, 0);
// Position the eye in front of the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -3.0f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = 0.0f; //-5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
mGlEngine = new GlEngine(10);
mGlEngine.registerQuad(new Wall(-0.5f, 0.4f, -0.2f, 0.4f));
mGlEngine.registerQuad(new Wall(0.5f, 0.4f, 0.2f, 0.4f));
mGlEngine.registerQuad(new Wall(0.0f, 0.0f, 0.0f, 0.3f, 0.02f));
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 3.0f;
final float far = 7.0f;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 gl) {
float[] scratch = new float[16];
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, 1.0f);
// Combine the rotation matrix with the projection and camera view
// Note that the mMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);
mGlEngine.render(scratch);
}
The GlEngine class:
public class GlEngine {
public static final int COORDS_PER_VERTEX = 3;
public static final int ORDER_INDICES_PER_QUAD = 6;
public static final int VERTICES_PER_QUAD = 4;
public static final int SIZE_OF_FLOAT = Float.SIZE/Byte.SIZE;
public static final int SIZE_OF_SHORT = Short.SIZE/Byte.SIZE;
private int mQuadsNum = 0;
private int mLastCoordsIndex = 0;
private int mLastOrderIndex = 0;
private final FloatBuffer vertexBuffer;
private final ShortBuffer indexBuffer;
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
"}";
// Use to access and set the view transformation
private int mMVPMatrixHandle;
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
private final int mProgram;
private int mPositionHandle;
private int mColorHandle;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per float coordinate
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 0.0f };
private boolean mDataInitNeeded = true;
public GlEngine(int quadsNum) {
ByteBuffer bb = ByteBuffer.allocateDirect(quadsNum * VERTICES_PER_QUAD *
COORDS_PER_VERTEX * SIZE_OF_FLOAT);
bb.order(ByteOrder.nativeOrder()); // device hardware's native byte order
vertexBuffer = bb.asFloatBuffer();
ByteBuffer bb2 = ByteBuffer.allocateDirect(quadsNum *
ORDER_INDICES_PER_QUAD * SIZE_OF_SHORT);
bb2.order(ByteOrder.nativeOrder());
indexBuffer = bb2.asShortBuffer();
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
mProgram = GLES20.glCreateProgram();
GLES20.glAttachShader(mProgram, vertexShader);
GLES20.glAttachShader(mProgram, fragmentShader);
GLES20.glLinkProgram(mProgram);
}
public static int loadShader(int type, String shaderCode){
// create a vertex shader type (GLES20.GL_VERTEX_SHADER)
// or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
int shader = GLES20.glCreateShader(type);
// add the source code to the shader and compile it
GLES20.glShaderSource(shader, shaderCode);
GLES20.glCompileShader(shader);
return shader;
}
public void registerQuad(Wall quad) {
quad.putCoords(vertexBuffer);
quad.putIndices(indexBuffer);
mQuadsNum++;
}
// This code deals with the VBO side of things
private final int[] mVerticesBufferId = new int[BUFFERS_COUNT];
private final int[] mIndicesBufferId = new int[BUFFERS_COUNT];
private static final int BUFFERS_COUNT = 1;
public void copyToGpu(FloatBuffer vertices) {
GLES20.glGenBuffers(BUFFERS_COUNT, mVerticesBufferId, 0);
// Copy vertices data into GPU memory
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesBufferId[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertices.capacity() * SIZE_OF_FLOAT, vertices, GLES20.GL_STATIC_DRAW);
// Cleanup buffer
vertices.limit(0);
vertices = null;
}
public void copyToGpu(ShortBuffer indices) {
GLES20.glGenBuffers(BUFFERS_COUNT, mIndicesBufferId, 0);
// Copy vertices data into GPU memory
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mIndicesBufferId[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, indices.capacity() * SIZE_OF_SHORT, indices, GLES20.GL_STATIC_DRAW);
// Cleanup buffer
indices.limit(0);
indices = null;
}
public void render(float[] mvpMatrix) {
setData();
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesBufferId[0]);
GLES20.glUseProgram(mProgram);
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, 0);
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mIndicesBufferId[0]);
// Draw quads
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, mQuadsNum * ORDER_INDICES_PER_QUAD,
GLES20.GL_UNSIGNED_SHORT, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
}
// This method is called on the GL thread via GLSurfaceView.queueEvent(...)
public void setData() {
if (mDataInitNeeded) {
// Reset positions of buffers for consuming in GL
vertexBuffer.position(0);
indexBuffer.position(0);
copyToGpu(vertexBuffer);
copyToGpu(indexBuffer);
mDataInitNeeded = false;
}
}
public void deallocateGlBuffers() {
if (mVerticesBufferId[0] > 0) {
GLES20.glDeleteBuffers(mVerticesBufferId.length, mVerticesBufferId, 0);
mVerticesBufferId[0] = 0;
}
if (mIndicesBufferId[0] > 0) {
GLES20.glDeleteBuffers(mIndicesBufferId.length, mIndicesBufferId, 0);
mIndicesBufferId[0] = 0;
}
}
}
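Side note: loadShader() and the constructor above never check whether compilation and linking actually succeeded, so a broken shader fails silently. A minimal check, just a sketch (it assumes android.util.Log is imported), could look like this:
// After glCompileShader(shader) in loadShader():
int[] status = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
if (status[0] == 0) {
    Log.e("GlEngine", "Shader compile failed: " + GLES20.glGetShaderInfoLog(shader));
    GLES20.glDeleteShader(shader);
}
// After glLinkProgram(mProgram) in the constructor:
GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, status, 0);
if (status[0] == 0) {
    Log.e("GlEngine", "Program link failed: " + GLES20.glGetProgramInfoLog(mProgram));
}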
The Wall class, which represents a rectangle:
public class Wall {
// number of coordinates per vertex in this array
private static final int COORDS_PER_VERTEX = 3;
private static final int VERTICES_NUM = 4; // it's a rect after all
private static final float DEFAULT_WIDTH = 0.05f;
private static final float DEFAULT_COORDS_SOURCE = 0.5f;
private final float mCoords[] = new float[COORDS_PER_VERTEX * VERTICES_NUM];
private final short mDrawOrder[] = { 0, 1, 2, // first triangle
1, 2, 3 }; // second triangle
private int mVertexBufferPosition;
private int mIndexBufferPosition;
private final PointF mA = new PointF(0, 0);
private final PointF mB = new PointF(0, 0);
private float mWidth;
public Wall() {
init(-DEFAULT_COORDS_SOURCE, DEFAULT_COORDS_SOURCE, DEFAULT_COORDS_SOURCE,
-DEFAULT_COORDS_SOURCE, DEFAULT_WIDTH);
}
public Wall(float x1, float y1, float x2, float y2)
{
init(x1, y1, x2, y2, DEFAULT_WIDTH);
}
public Wall(float x1, float y1, float x2, float y2, float width) {
init(x1, y1, x2, y2, width);
}
private void init(float x1, float y1, float x2, float y2, float width) {
mA.x = x1;
mA.y = y1;
mB.x = x2;
mB.y = y2;
mWidth = width;
calcCoords();
}
private void calcCoords() {
float[] vector = {mA.x - mB.x, mA.y - mB.y};
float magnitude = (float) Math.sqrt(vector[0]*vector[0] + vector[1]*vector[1]);
float[] identityVector = {vector[0]/magnitude, vector[1]/magnitude};
float[] orthogonalIdentityVector = {identityVector[1], -identityVector[0]};
mCoords[0] = mA.x + mWidth * orthogonalIdentityVector[0];
mCoords[1] = mA.y + mWidth * orthogonalIdentityVector[1];
mCoords[3] = mA.x - mWidth * orthogonalIdentityVector[0];
mCoords[4] = mA.y - mWidth * orthogonalIdentityVector[1];
mCoords[6] = mB.x + mWidth * orthogonalIdentityVector[0];
mCoords[7] = mB.y + mWidth * orthogonalIdentityVector[1];
mCoords[9] = mB.x - mWidth * orthogonalIdentityVector[0];
mCoords[10] = mB.y - mWidth * orthogonalIdentityVector[1];
}
public void putCoords(FloatBuffer vertexBuffer) {
mVertexBufferPosition = vertexBuffer.position();
for (int i = 0; i < mDrawOrder.length; i++) {
mDrawOrder[i] += mVertexBufferPosition/GlEngine.COORDS_PER_VERTEX;
}
vertexBuffer.put(mCoords);
}
public void putIndices(ShortBuffer indexBuffer) {
mIndexBufferPosition = indexBuffer.position();
indexBuffer.put(mDrawOrder);
}
public float getWidth() {
return mWidth;
}
public void setWidth(float mWidth) {
this.mWidth = mWidth;
}
public PointF getA() {
return mA;
}
public void setA(float x, float y) {
this.mA.x = x;
this.mA.y = y;
}
public PointF getB() {
return mB;
}
public void setB(float x, float y) {
this.mB.x = x;
this.mB.y = y;
}
}
In the Wall class I store the offsets at which its vertices and indices were written, because this class will change in the future: the idea is that a wall updates its own vertices in place inside the main buffer, so the buffers don't have to be rebuilt for every onDrawFrame(). A rough sketch of what I have in mind follows below.
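Something along these lines — untested, and updateInVbo() is just a hypothetical name for illustration:
// Hypothetical helper inside Wall: overwrite this wall's vertices in the shared VBO.
// Must run on the GL thread (e.g. via GLSurfaceView.queueEvent()).
public void updateInVbo(int vboId) {
    calcCoords(); // recompute mCoords from the current mA/mB/mWidth
    FloatBuffer data = ByteBuffer
            .allocateDirect(mCoords.length * GlEngine.SIZE_OF_FLOAT)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    data.put(mCoords);
    data.position(0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
    // The saved position is counted in floats; glBufferSubData expects bytes.
    GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER,
            mVertexBufferPosition * GlEngine.SIZE_OF_FLOAT,
            mCoords.length * GlEngine.SIZE_OF_FLOAT, data);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}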
Thank you. I hope that with your help I can somehow get past this (yet another) obstacle on my road to OpenGL ES.
Best answer
Shame on me! It turns out I was uploading the indices to the wrong buffer target. Instead of this:
// Copy vertices data into GPU memory
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mIndicesBufferId[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, indices.capacity() * SIZE_OF_SHORT, indices, GLES20.GL_STATIC_DRAW);
it should have been:
// Copy vertices data into GPU memory
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mIndicesBufferId[0]);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indices.capacity() * SIZE_OF_SHORT, indices, GLES20.GL_STATIC_DRAW);
Why shame on me? Because the log showed:
07-23 16:20:05.442 5170-5264/com.example.neutrino.maze W/Adreno-ES20: : GL_INVALID_OPERATION
right after the second call to glBufferData, where I had passed GL_ARRAY_BUFFER instead of GL_ELEMENT_ARRAY_BUFFER. As in so many cases, this was surely caused by copy-paste.
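For anyone who runs into a similarly silent failure: a small glGetError() helper right after the suspicious GL calls would have pointed at the bad line much sooner. A rough sketch (the helper name is made up, and it assumes android.util.Log):
private static void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GlEngine", op + ": glError 0x" + Integer.toHexString(error));
    }
}
// e.g. right after uploading the index data:
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER,
        indices.capacity() * SIZE_OF_SHORT, indices, GLES20.GL_STATIC_DRAW);
checkGlError("glBufferData(indices)");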
A similar question, "java - Unable to use VBOs in OpenGl/Android", can be found on Stack Overflow: https://stackoverflow.com/questions/38515875/