Unlike with Android, I am fairly new to GL/libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background inside libgdx in real time, has several facets.
Best answer
The short answer is to load the camera image channels (Y and UV) into textures and draw those textures onto a mesh with a custom fragment shader that performs the color-space conversion for us. Since this shader runs on the GPU, it is much faster than doing the conversion on the CPU, and certainly much faster than Java code. And since the mesh is part of GL, any other 3D shapes or sprites can safely be drawn on top of or underneath it.
I started solving the problem from this answer: https://stackoverflow.com/a/17615696/1525238. I understood the general method from the following link: How to use camera view with OpenGL ES. It is written for Bada, but the principles are the same. The conversion formulas there were a bit odd, so I replaced them with the ones in the Wikipedia article YUV Conversion to/from RGB.
Here are the steps that led to the solution:
YUV-NV21 explanation
Live images from the Android camera are preview images. The default color space of the camera preview (and one of the two guaranteed color spaces) is YUV-NV21. Documentation of this format is very scattered, so I'll explain it briefly here:
The image data consists of (width x height) x 3/2 bytes. The first width x height bytes are the Y channel: one luminance byte per pixel. The following (width/2) x (height/2) x 2 = width x height / 2 bytes are the UV plane: each pair of consecutive bytes holds the V and U (in that order, per the NV21 specification) chroma bytes for a block of 2 x 2 = 4 original pixels. In other words, the UV plane is (width/2) x (height/2) pixels in size and is downsampled by a factor of 2 in each dimension; in addition, the U and V chroma bytes are interleaved.
The original answer includes a very nice diagram explaining YUV-NV12; NV21 simply has the U and V bytes swapped. The indexing sketch below makes the layout concrete.
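A minimal indexing sketch of the layout just described (this helper is mine, not part of the original answer; the class and method names are placeholders):
//Hypothetical helper illustrating the NV21 layout described above
public class Nv21Layout {
    //A width x height frame occupies width*height*3/2 bytes in total
    public static int frameSize(int width, int height) { return width * height * 3 / 2; }
    //Offset of the luminance byte of pixel (x, y): one byte per pixel, row by row
    public static int yIndex(int width, int x, int y) { return y * width + x; }
    //Offset of the V byte covering the 2x2 block containing (x, y); in NV21 the V byte comes first
    public static int vIndex(int width, int height, int x, int y) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }
    //Offset of the matching U byte: it immediately follows its V byte
    public static int uIndex(int width, int height, int x, int y) {
        return vIndex(width, height, x, y) + 1;
    }
}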
How can this format be converted to RGB?
As stated in the question, this conversion takes too much time to be live if it is done in Android code. Fortunately, it can be done inside a GL shader, which runs on the GPU, and that makes it very fast.
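For reference, this is what the conversion amounts to for a single sample on the CPU, using the same constants as the fragment shader further below. It is my own sketch for spot-checking results, not part of the original answer, and doing this per pixel in Java is exactly what would be too slow for full frames:
//Hypothetical reference conversion of one normalized YUV sample (all values in [0,1]) to packed ARGB
public static int yuvToArgb(float y, float u, float v) {
    u -= 0.5f; //chroma components are centered around 0.5
    v -= 0.5f;
    float r = y + 1.13983f * v;
    float g = y - 0.39465f * u - 0.58060f * v;
    float b = y + 2.03211f * u;
    int ri = Math.max(0, Math.min(255, Math.round(r * 255f)));
    int gi = Math.max(0, Math.min(255, Math.round(g * 255f)));
    int bi = Math.max(0, Math.min(255, Math.round(b * 255f)));
    return 0xFF000000 | (ri << 16) | (gi << 8) | bi;
}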
The general idea is to pass our image's channels to the shader as textures and render them in a way that performs the RGB conversion. For this, we must first copy the channels of our image into buffers that can be passed to the textures:
byte[] image;
ByteBuffer yBuffer, uvBuffer;
...
//Copy the Y channel of the image into its buffer; the first (width*height) bytes are the Y channel
yBuffer.put(image, 0, width*height);
yBuffer.position(0);
//Copy the UV channels of the image into their buffer; the following (width*height/2) bytes are the UV channel, with the U and V bytes interleaved
uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in
//both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA,
width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE,
uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
The fragment shader that performs the actual YUV-to-RGB conversion looks like this:
String fragmentShader =
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D y_texture;\n" +
"uniform sampler2D uv_texture;\n" +
"void main (void){\n" +
" float r, g, b, y, u, v;\n" +
//We had put the Y values of each pixel to the R,G,B components by
//GL_LUMINANCE, that's why we're pulling it from the R component,
//we could also use G or B
" y = texture2D(y_texture, v_texCoord).r;\n" +
//We had put the U and V values of each pixel to the A and R,G,B
//components of the texture respectively using GL_LUMINANCE_ALPHA.
//Since U,V bytes are interspread in the texture, this is probably
//the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v;\n" +
" g = y - 0.39465*u - 0.58060*v;\n" +
" b = y + 2.03211*u;\n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0);\n" +
"}\n";
Note that both the Y and UV textures are accessed with the same v_texCoord. This works because v_texCoord ranges between 0.0 and 1.0, scaling from one end of the texture to the other rather than addressing actual texture pixel coordinates; this is one of the nicest features of shaders.

Below is the full code, split into a platform-independent interface and the Android-specific implementation that follows it:
public interface PlatformDependentCameraController {
void init();
void renderBackground();
void destroy();
}
The Android-specific implementation, which drives the camera and performs the rendering:
public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {
private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives
private Camera camera; //The camera object
//The Y and UV buffers that will pass our image channel data to the textures
private ByteBuffer yBuffer;
private ByteBuffer uvBuffer;
ShaderProgram shader; //Our shader
Texture yTexture; //Our Y texture
Texture uvTexture; //Our UV texture
Mesh mesh; //Our mesh that we will draw the texture on
public AndroidDependentCameraController(){
//Our YUV image is 12 bits per pixel
image = new byte[1280*720/8*12];
}
@Override
public void init(){
/*
* Initialize the OpenGL/libgdx stuff
*/
//Do not enforce power of two texture sizes
Texture.setEnforcePotImages(false);
//Allocate textures
yTexture = new Texture(1280,720,Format.Intensity); //A 8-bit per pixel format
uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit per pixel format
//Allocate buffers on the native memory space, not inside the JVM heap
yBuffer = ByteBuffer.allocateDirect(1280*720);
uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());
//Our vertex shader code; nothing special
String vertexShader =
"attribute vec4 a_position; \n" +
"attribute vec2 a_texCoord; \n" +
"varying vec2 v_texCoord; \n" +
"void main(){ \n" +
" gl_Position = a_position; \n" +
" v_texCoord = a_texCoord; \n" +
"} \n";
//Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
//Effectively making YUV to RGB conversion
String fragmentShader =
"#ifdef GL_ES \n" +
"precision highp float; \n" +
"#endif \n" +
"varying vec2 v_texCoord; \n" +
"uniform sampler2D y_texture; \n" +
"uniform sampler2D uv_texture; \n" +
"void main (void){ \n" +
" float r, g, b, y, u, v; \n" +
//We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
//that's why we're pulling it from the R component, we could also use G or B
" y = texture2D(y_texture, v_texCoord).r; \n" +
//We had put the U and V values of each pixel to the A and R,G,B components of the
//texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread
//in the texture, this is probably the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5; \n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5; \n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v; \n" +
" g = y - 0.39465*u - 0.58060*v; \n" +
" b = y + 2.03211*u; \n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0); \n" +
"} \n";
//Create and compile our shader
shader = new ShaderProgram(vertexShader, fragmentShader);
//Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
mesh = new Mesh(true, 4, 6,
new VertexAttribute(Usage.Position, 2, "a_position"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));
//The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
float[] vertices = {
-1.0f, 1.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1.0f, -1.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1.0f, -1.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1.0f, 1.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
//The indices come in trios of vertex indices that describe the triangles of our mesh
short[] indices = {0, 1, 2, 0, 2, 3};
//Set vertices and indices to our mesh
mesh.setVertices(vertices);
mesh.setIndices(indices);
/*
* Initialize the Android camera
*/
camera = Camera.open(0);
//We set the buffer ourselves that will be used to hold the preview image
camera.setPreviewCallbackWithBuffer(this);
//Set the camera parameters
Camera.Parameters params = camera.getParameters();
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
params.setPreviewSize(1280,720);
camera.setParameters(params);
//Start the preview
camera.startPreview();
//Set the first buffer, the preview doesn't start unless we set the buffers
camera.addCallbackBuffer(image);
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
camera.addCallbackBuffer(image);
}
@Override
public void renderBackground() {
/*
* Because of Java's limitations, we can't reference the middle of an array and
* we must copy the channels in our byte array into buffers before setting them to textures
*/
//Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(image, 0, 1280*720);
yBuffer.position(0);
//Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(image, 1280*720, 1280*720/2);
uvBuffer.position(0);
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Draw the textures onto a mesh using our shader
*/
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
//Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
}
@Override
public void destroy() {
camera.stopPreview();
camera.setPreviewCallbackWithBuffer(null);
camera.release();
}
}
init() is called once at the start, renderBackground() is called on every render cycle, and destroy() is called once at the end:
public class YourApplication implements ApplicationListener {
private final PlatformDependentCameraController deviceCameraControl;
public YourApplication(PlatformDependentCameraController cameraControl) {
this.deviceCameraControl = cameraControl;
}
@Override
public void create() {
deviceCameraControl.init();
}
@Override
public void render() {
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
//Render the background that is the live camera image
deviceCameraControl.renderBackground();
/*
* Render anything here (sprites/models etc.) that you want to go on top of the camera image
*/
}
@Override
public void dispose() {
deviceCameraControl.destroy();
}
@Override
public void resize(int width, int height) {
}
@Override
public void pause() {
}
@Override
public void resume() {
}
}
Finally, the Android activity that launches the libgdx application:
public class MainActivity extends AndroidApplication {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
cfg.a = 8;
cfg.b = 8;
cfg.g = 8;
cfg.r = 8;
PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
initialize(new YourApplication(cameraControl), cfg);
graphics.getView().setKeepScreenOn(true);
}
}
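If you also build the libgdx desktop project, it will need its own PlatformDependentCameraController. The original answer does not show one; the no-op stub and launcher below are my own sketch (class names are placeholders), just enough to keep the desktop build compiling while leaving the background empty:
//Hypothetical desktop stub: does nothing, so the background simply stays the clear color
public class DesktopDummyCameraController implements PlatformDependentCameraController {
    @Override public void init() { }
    @Override public void renderBackground() { }
    @Override public void destroy() { }
}
//Hypothetical desktop launcher; LwjglApplication comes from the libgdx desktop (lwjgl) backend
public class DesktopLauncher {
    public static void main(String[] args) {
        new LwjglApplication(new YourApplication(new DesktopDummyCameraController()),
                new LwjglApplicationConfiguration());
    }
}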
A note on performance: the mesh rendering call (mesh.render(shader, GL20.GL_TRIANGLES)) consistently takes 0-1 ms.

The original question and answer can be found on Stack Overflow: https://stackoverflow.com/questions/22456884/