Unlike with Android, I am fairly new to GL/libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background inside libgdx in real time, has multiple facets. Here are the main concerns:
Best answer
The short answer is to load the camera image channels (Y, UV) into textures and draw these textures onto a mesh using a custom fragment shader that performs the color space conversion for us. Since this shader runs on the GPU, it is much faster than the CPU and certainly much faster than Java code. Since the mesh is part of GL, any other 3D shapes or sprites can safely be drawn over or under it.
I solved the problem starting from this answer: https://stackoverflow.com/a/17615696/1525238. I understood the general method from the following link: How to use camera view with OpenGL ES. It is written for Bada, but the principle is the same. The conversion formulae there were a bit strange, so I replaced them with the ones from the Wikipedia article YUV Conversion to/from RGB.
The following are the steps that led to the solution:
YUV-NV21 explained

The live image coming from the Android camera is the preview image. The default color space of the camera preview (and one of the two guaranteed color spaces) is YUV-NV21. The documentation of this format is very scattered, so here is a brief explanation:

The image data consists of (width x height) x 3/2 bytes. The first width x height bytes are the Y channel, one luminance byte per pixel. The following (width/2) x (height/2) x 2 = width x height / 2 bytes are the UV plane. Every two consecutive bytes are the V,U (in that order, per the NV21 specification) chroma bytes for a 2 x 2 = 4 block of original pixels. In other words, the UV plane is (width/2) x (height/2) pixels in size and is downsampled by a factor of 2 in each dimension. In addition, the U and V chroma bytes are interleaved.

There is a very nice diagram that explains YUV-NV12; NV21 simply has the U and V bytes flipped:
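The byte layout above can be sketched as a small indexing helper. This is a hypothetical illustration (the class and method names are mine, not from the answer), showing where the Y byte and the V,U pair of a given pixel live inside an NV21 buffer:

```java
// Hypothetical helper illustrating the NV21 byte layout described above.
public class Nv21Layout {

    // Index of the luminance (Y) byte of pixel (x, y): one byte per pixel,
    // stored row by row at the start of the buffer.
    static int yIndex(int x, int y, int width) {
        return y * width + x;
    }

    // Index of the V byte covering pixel (x, y); the matching U byte is at
    // vIndex + 1. Chroma is subsampled by 2 in both dimensions, so each
    // V,U pair serves a 2x2 block of pixels.
    static int vIndex(int x, int y, int width, int height) {
        int uvRow = y / 2;        // 2x vertical subsampling
        int uvCol = (x / 2) * 2;  // 2x horizontal subsampling, V,U interleaved
        return width * height + uvRow * width + uvCol;
    }

    public static void main(String[] args) {
        int w = 1280, h = 720;
        System.out.println(yIndex(0, 0, w));    // first Y byte
        System.out.println(vIndex(0, 0, w, h)); // UV plane starts after w*h Y bytes
        System.out.println(w * h * 3 / 2);      // total NV21 buffer size
    }
}
```

For a 1280x720 frame this gives a 921600-byte Y plane followed by a 460800-byte UV plane, matching the buffer sizes allocated in the code below.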
How is this format converted to RGB?

As stated in the question, if this conversion is done in Android code it takes too much time to stay live. Fortunately, it can be done inside a GL shader, which runs on the GPU. This allows it to run very fast.

The general idea is to pass the channels of our image to the shader as textures and render them in a way that performs the RGB conversion. For this, we must first copy the channels in our image into buffers that can be passed to textures:
byte[] image;
ByteBuffer yBuffer, uvBuffer;
...
yBuffer.put(image, 0, width*height);
yBuffer.position(0);
uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in
//both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA,
width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE,
uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
String fragmentShader =
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D y_texture;\n" +
"uniform sampler2D uv_texture;\n" +
"void main (void){\n" +
" float r, g, b, y, u, v;\n" +
//We had put the Y values of each pixel to the R,G,B components by
//GL_LUMINANCE, that's why we're pulling it from the R component,
//we could also use G or B
" y = texture2D(y_texture, v_texCoord).r;\n" +
//We had put the U and V values of each pixel to the A and R,G,B
//components of the texture respectively using GL_LUMINANCE_ALPHA.
//Since U,V bytes are interspread in the texture, this is probably
//the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v;\n" +
" g = y - 0.39465*u - 0.58060*v;\n" +
" b = y + 2.03211*u;\n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0);\n" +
"}\n";
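To sanity-check the shader output, the same arithmetic can be reproduced on the CPU with the identical constants. This is a hypothetical sketch for verification only; it is not part of the rendering path:

```java
// CPU-side reference for the YUV -> RGB math used in the fragment shader.
public class YuvToRgb {

    // y is in [0,1]; u and v are already centered, i.e. rawByte/255.0 - 0.5,
    // exactly as the shader computes them.
    static float[] toRgb(float y, float u, float v) {
        float r = y + 1.13983f * v;
        float g = y - 0.39465f * u - 0.58060f * v;
        float b = y + 2.03211f * u;
        return new float[]{r, g, b};
    }

    public static void main(String[] args) {
        // Zero chroma must yield a pure gray of the same luminance.
        float[] gray = toRgb(0.5f, 0.0f, 0.0f);
        System.out.printf("%.3f %.3f %.3f%n", gray[0], gray[1], gray[2]);
    }
}
```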
Note that we access both the Y and UV textures using the same coordinate variable v_texCoord. This is possible because v_texCoord ranges between 0.0 and 1.0, scaling from one end of the texture to the other, rather than being an actual texture pixel coordinate. This is one of the nicest features of shaders.
public interface PlatformDependentCameraController {
void init();
void renderBackground();
void destroy();
}
public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {
private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives
private Camera camera; //The camera object
//The Y and UV buffers that will pass our image channel data to the textures
private ByteBuffer yBuffer;
private ByteBuffer uvBuffer;
ShaderProgram shader; //Our shader
Texture yTexture; //Our Y texture
Texture uvTexture; //Our UV texture
Mesh mesh; //Our mesh that we will draw the texture on
public AndroidDependentCameraController(){
//Our YUV image is 12 bits per pixel
image = new byte[1280*720/8*12];
}
@Override
public void init(){
/*
* Initialize the OpenGL/libgdx stuff
*/
//Do not enforce power of two texture sizes
Texture.setEnforcePotImages(false);
//Allocate textures
yTexture = new Texture(1280,720,Format.Intensity); //A 8-bit per pixel format
uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit per pixel format
//Allocate buffers on the native memory space, not inside the JVM heap
yBuffer = ByteBuffer.allocateDirect(1280*720);
uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());
//Our vertex shader code; nothing special
String vertexShader =
"attribute vec4 a_position; \n" +
"attribute vec2 a_texCoord; \n" +
"varying vec2 v_texCoord; \n" +
"void main(){ \n" +
" gl_Position = a_position; \n" +
" v_texCoord = a_texCoord; \n" +
"} \n";
//Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
//Effectively making YUV to RGB conversion
String fragmentShader =
"#ifdef GL_ES \n" +
"precision highp float; \n" +
"#endif \n" +
"varying vec2 v_texCoord; \n" +
"uniform sampler2D y_texture; \n" +
"uniform sampler2D uv_texture; \n" +
"void main (void){ \n" +
" float r, g, b, y, u, v; \n" +
//We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
//that's why we're pulling it from the R component, we could also use G or B
" y = texture2D(y_texture, v_texCoord).r; \n" +
//We had put the U and V values of each pixel to the A and R,G,B components of the
//texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread
//in the texture, this is probably the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5; \n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5; \n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v; \n" +
" g = y - 0.39465*u - 0.58060*v; \n" +
" b = y + 2.03211*u; \n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0); \n" +
"} \n";
//Create and compile our shader
shader = new ShaderProgram(vertexShader, fragmentShader);
//Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
mesh = new Mesh(true, 4, 6,
new VertexAttribute(Usage.Position, 2, "a_position"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));
//The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
float[] vertices = {
-1.0f, 1.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1.0f, -1.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1.0f, -1.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1.0f, 1.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
//The indices come in trios of vertex indices that describe the triangles of our mesh
short[] indices = {0, 1, 2, 0, 2, 3};
//Set vertices and indices to our mesh
mesh.setVertices(vertices);
mesh.setIndices(indices);
/*
* Initialize the Android camera
*/
camera = Camera.open(0);
//We set the buffer ourselves that will be used to hold the preview image
camera.setPreviewCallbackWithBuffer(this);
//Set the camera parameters
Camera.Parameters params = camera.getParameters();
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
params.setPreviewSize(1280,720);
camera.setParameters(params);
//Start the preview
camera.startPreview();
//Set the first buffer, the preview doesn't start unless we set the buffers
camera.addCallbackBuffer(image);
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
camera.addCallbackBuffer(image);
}
@Override
public void renderBackground() {
/*
* Because of Java's limitations, we can't reference the middle of an array and
* we must copy the channels in our byte array into buffers before setting them to textures
*/
//Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(image, 0, 1280*720);
yBuffer.position(0);
//Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(image, 1280*720, 1280*720/2);
uvBuffer.position(0);
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Draw the textures onto a mesh using our shader
*/
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
//Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
}
@Override
public void destroy() {
camera.stopPreview();
camera.setPreviewCallbackWithBuffer(null);
camera.release();
}
}
Finally, these methods need to be wired up so that init() is called once at the beginning, renderBackground() is called on every render cycle and destroy() is called once at the end:
public class YourApplication implements ApplicationListener {
private final PlatformDependentCameraController deviceCameraControl;
public YourApplication(PlatformDependentCameraController cameraControl) {
this.deviceCameraControl = cameraControl;
}
@Override
public void create() {
deviceCameraControl.init();
}
@Override
public void render() {
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
//Render the background that is the live camera image
deviceCameraControl.renderBackground();
/*
* Render anything here (sprites/models etc.) that you want to go on top of the camera image
*/
}
@Override
public void dispose() {
deviceCameraControl.destroy();
}
@Override
public void resize(int width, int height) {
}
@Override
public void pause() {
}
@Override
public void resume() {
}
}
public class MainActivity extends AndroidApplication {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
cfg.a = 8;
cfg.b = 8;
cfg.g = 8;
cfg.r = 8;
PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
initialize(new YourApplication(cameraControl), cfg);
graphics.getView().setKeepScreenOn(true);
}
}
The background rendering itself (i.e. the mesh.render(shader, GL20.GL_TRIANGLES); call) consistently takes 0-1 milliseconds.

Regarding "android - How to render Android's YUV-NV21 camera image on the background in libgdx in real time using OpenGL ES 2.0?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/22456884/