So I have been trying to create a trailing particle effect (seen here) using OpenGL ES 2.0. Unfortunately, the OpenGL feature that makes this possible (the accumulation buffer) doesn't appear to be available in OpenGL ES, which means it has to be done the long way round.
This topic describes one possible way of doing something like this. However, I'm quite confused about how things are stored in buffers and how buffers are combined, so my idea is to do the following.
My understanding so far is that a buffer stores pixel data the same way a texture does, except that a buffer can be drawn to more easily using shaders.
So the idea would presumably be to render into a buffer and then move that into a texture.
One suggested way of doing this that I found is:
In retrospect, you should create two FBOs (each with its own texture); using the default framebuffer isn't reliable (the contents aren't guaranteed to be preserved between frames).
After binding the first FBO, clear it then render the scene normally. Once the scene has been rendered, use the texture as a source and render it to the second FBO with blending (the second FBO is never cleared). This will result in the second FBO containing a mix of the new scene and what was there before. Finally, the second FBO should be rendered directly to the window (this can be done by rendering a textured quad, similarly to the previous operation, or by using glBlitFramebuffer).
Essentially, the first FBO takes the place of the default framebuffer while the second FBO takes the place of the accumulation buffer.
In summary:
Initialisation:
For each FBO:
- glGenTextures
- glBindTexture
- glTexImage2D
- glBindFramebuffer
- glFramebufferTexture2D
Each frame:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo1)
glClear
glDraw* // scene

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo2)
glBindTexture(tex1)
glEnable(GL_BLEND)
glBlendFunc
glDraw* // full-screen quad

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2)
glBlitFramebuffer
Unfortunately it doesn't contain enough code (in particular the initialisation) to get me started.
I have tried it anyway, but so far all I get is a disappointingly blank screen. I don't really know what I'm doing, so this code may well be wrong.
var fbo1:GLuint = 0
var fbo2:GLuint = 0
var tex1:GLuint = 0
Init()
{
//...Loading shaders OpenGL etc.
//FBO 1
glGenFramebuffers(1, &fbo1)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo1)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
//FBO 2
glGenFramebuffers(1, &fbo2)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo2)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
}
func drawFullScreenTex()
{
glUseProgram(texShader)
let rect:[GLint] = [0, 0, GLint(width), GLint(height)]
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
//Texture is already bound
glTexParameteriv(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_CROP_RECT_OES), rect)
glDrawTexiOES(0, 0, 0, width, height)
}
func draw()
{
//Prep
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo1)
glClearColor(0, 0.1, 0, 1.0)
glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
//1
glUseProgram(pointShader);
passTheStuff() //Just passes in uniforms
drawParticles(glGetUniformLocation(pointShader, "color"), size_loc: glGetUniformLocation(pointShader, "pointSize")) //Draws particles
//2
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo2)
drawFullScreenTex()
//3
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), 0)
glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), fbo2)
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
}
By the way, here are some resources I found helpful.
My main question is: could someone write out the code for this? I think I understand the theory involved, but I've spent a lot of time trying to apply it, to no avail.
If you want a starting point, I have an Xcode project that draws the points and has a blue dot moving around the screen periodically, and the non-working code is in there as well.
Note: if you are going to write code, you can use any language: C++, Java, Swift, Objective-C would all be perfect, as long as it's OpenGL ES.
Best answer
You call glGenTextures(1, &tex1) twice with the same variable tex1. This overwrites the variable, so when glBindTexture(GLenum(GL_TEXTURE_2D), tex1) is called later, it doesn't bind the texture belonging to fbo1 but the one belonging to fbo2. Each FBO needs its own texture.
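A minimal sketch of the corrected allocation, written with plain C-style GL ES calls for brevity (tex2 is a new name introduced here; width and height mirror the Swift listing; this only illustrates the fix, not the full program):
GLuint fbo1, fbo2, tex1, tex2; // one texture object per FBO
// FBO 1: colour output goes into tex1
glGenFramebuffers(1, &fbo1);
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex1, 0);
// FBO 2: colour output goes into tex2, not tex1 again
glGenFramebuffers(1, &fbo2);
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex2, 0);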
For reference, here is an example from one of my working programs that uses multiple FBOs and renders to textures.
GLuint fbo[n];
GLuint tex[n];
init() {
glGenFramebuffers(n, fbo);
glGenTextures(n, tex);
for (int i = 0; i < n; ++i) {
glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
glBindTexture(GL_TEXTURE_2D, tex[i]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
}
}
render() {
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Draw scene into buffer 0
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, tex[0]);
//Draw full screen tex
...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, tex[n - 1]);
// Draw to screen
return;
}
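The "//Draw full screen tex" step above is left elided. A rough sketch of what that pass could look like in ES 2.0 is below; the program quadShader, the attribute/uniform names, and the blend factors are assumptions for illustration only (the fragment shader is assumed to derive its texture coordinates from a_position). Also note that for the trail effect described in the question, the second FBO would not be cleared every frame.
// Accumulation pass: blend the freshly rendered frame (tex[0]) onto fbo[1].
// The constant alpha controls how quickly old frames fade out (hypothetical value).
glEnable(GL_BLEND);
glBlendColor(0.0f, 0.0f, 0.0f, 0.7f);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);

glUseProgram(quadShader); // hypothetical full-screen quad program
GLint posLoc = glGetAttribLocation(quadShader, "a_position"); // assumed attribute name
GLint samplerLoc = glGetUniformLocation(quadShader, "u_texture"); // assumed uniform name

// Four vertices covering the whole viewport in clip space.
static const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, quad);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glUniform1i(samplerLoc, 0);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_BLEND);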
A few notes. To get this to work I had to add the texture parameters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This is because on my system they default to GL_NEAREST_MIPMAP_LINEAR, which doesn't work for FBO textures since no mipmaps are generated. Set these to whatever you like.
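One related point worth checking (my own observation about ES 2.0, not part of the original answer): if the FBO texture has non-power-of-two dimensions, ES 2.0 also requires the wrap mode to be GL_CLAMP_TO_EDGE, otherwise the texture is incomplete and samples as black:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);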
Also, make sure you have textures enabled:
glEnable(GL_TEXTURE_2D)
Hope this helps.
Regarding ios - how to simulate an accumulation buffer (trailing particle effect) in OpenGL ES 2.0, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/33975621/