
android - Drawing text or an image over a camera stream (GLSL)

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 11:00:45

I have a live-streaming app based on grafika's examples, which sends my video feed over RTMP for broadcast.

I now want to watermark my video by overlaying text or a logo on the stream. I know this can be done with GLSL filtering, but I don't see how to implement it based on the example I linked.

I tried alpha blending, but the two texture formats seem to be incompatible in some way (one is TEXTURE_EXTERNAL_OES, the other TEXTURE_2D), and all I get back is a black frame.

Edit:

My code is based on the Kickflip API:

class CameraSurfaceRenderer implements GLSurfaceView.Renderer {
    private static final String TAG = "CameraSurfaceRenderer";
    private static final boolean VERBOSE = false;

    private CameraEncoder mCameraEncoder;

    private FullFrameRect mFullScreenCamera;
    private FullFrameRect mFullScreenOverlay;   // for texture overlay

    private final float[] mSTMatrix = new float[16];
    private int mOverlayTextureId;
    private int mCameraTextureId;

    private boolean mRecordingEnabled;

    private int mFrameCount;

    // Keep track of selected filters + relevant state
    private boolean mIncomingSizeUpdated;
    private int mIncomingWidth;
    private int mIncomingHeight;
    private int mCurrentFilter;
    private int mNewFilter;

    boolean showBox = false;

    /**
     * Constructs CameraSurfaceRenderer.
     * <p>
     * @param recorder video encoder object
     */
    public CameraSurfaceRenderer(CameraEncoder recorder) {
        mCameraEncoder = recorder;

        mCameraTextureId = -1;
        mFrameCount = -1;

        SessionConfig config = recorder.getConfig();
        mIncomingWidth = config.getVideoWidth();
        mIncomingHeight = config.getVideoHeight();
        mIncomingSizeUpdated = true;    // force texture size update on next onDrawFrame

        mCurrentFilter = -1;
        mNewFilter = Filters.FILTER_NONE;

        mRecordingEnabled = false;
    }

    /**
     * Notifies the renderer that we want to stop or start recording.
     */
    public void changeRecordingState(boolean isRecording) {
        Log.d(TAG, "changeRecordingState: was " + mRecordingEnabled + " now " + isRecording);
        mRecordingEnabled = isRecording;
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        Log.d(TAG, "onSurfaceCreated");
        // Set up the texture blitter that will be used for on-screen display. This
        // is *not* applied to the recording, because that uses a separate shader.
        mFullScreenCamera = new FullFrameRect(
                new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
        // For texture overlay:
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        mFullScreenOverlay = new FullFrameRect(
                new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_2D));
        // Note: the second assignment immediately overwrites the text texture, so
        // only one of these two lines should be kept. mCameraView is also not
        // declared in this class; a reference to the view (or a Context) is needed.
        mOverlayTextureId = GlUtil.createTextureWithTextContent("hello!");
        mOverlayTextureId = GlUtil.createTextureFromImage(mCameraView.getContext(), R.drawable.red_dot);
        mCameraTextureId = mFullScreenCamera.createTextureObject();

        mCameraEncoder.onSurfaceCreated(mCameraTextureId);
        mFrameCount = 0;
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        Log.d(TAG, "onSurfaceChanged " + width + "x" + height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        if (VERBOSE) {
            if (mFrameCount % 30 == 0) {
                Log.d(TAG, "onDrawFrame tex=" + mCameraTextureId);
                mCameraEncoder.logSavedEglState();
            }
        }

        if (mCurrentFilter != mNewFilter) {
            Filters.updateFilter(mFullScreenCamera, mNewFilter);
            mCurrentFilter = mNewFilter;
            mIncomingSizeUpdated = true;
        }

        if (mIncomingSizeUpdated) {
            mFullScreenCamera.getProgram().setTexSize(mIncomingWidth, mIncomingHeight);
            mFullScreenOverlay.getProgram().setTexSize(mIncomingWidth, mIncomingHeight);
            mIncomingSizeUpdated = false;
            Log.i(TAG, "setTexSize on display Texture");
        }

        // Draw the video frame.
        if (mCameraEncoder.isSurfaceTextureReadyForDisplay()) {
            mCameraEncoder.getSurfaceTextureForDisplay().updateTexImage();
            mCameraEncoder.getSurfaceTextureForDisplay().getTransformMatrix(mSTMatrix);
            // Drawing texture overlay. Note: it is drawn *before* the camera frame,
            // so the full-screen camera quad covers it; it also reuses the camera's
            // transform matrix, which does not apply to a TEXTURE_2D overlay.
            mFullScreenOverlay.drawFrame(mOverlayTextureId, mSTMatrix);
            mFullScreenCamera.drawFrame(mCameraTextureId, mSTMatrix);
        }
        mFrameCount++;
    }

    public void signalVertialVideo(FullFrameRect.SCREEN_ROTATION isVertical) {
        if (mFullScreenCamera != null) mFullScreenCamera.adjustForVerticalVideo(isVertical, false);
    }

    /**
     * Changes the filter that we're applying to the camera preview.
     */
    public void changeFilterMode(int filter) {
        mNewFilter = filter;
    }

    public void handleTouchEvent(MotionEvent ev) {
        mFullScreenCamera.handleTouchEvent(ev);
    }
}

This is the code that renders the image on screen (the GLSurfaceView), but it doesn't actually composite it over the video. If I remember correctly, that happens in CameraEncoder.

The problem is that copying the code from CameraSurfaceRenderer into CameraEncoder (the two share similar filter code) does not produce the overlaid text/image.

Best Answer

The texture object uses the GL_TEXTURE_EXTERNAL_OES texture target, which is defined by the GL_OES_EGL_image_external OpenGL ES extension. This limits how the texture may be used. Each time the texture is bound it must be bound to the GL_TEXTURE_EXTERNAL_OES target rather than the GL_TEXTURE_2D target. Additionally, any OpenGL ES 2.0 shader that samples from the texture must declare its use of this extension using, for example, an "#extension GL_OES_EGL_image_external : require" directive. Such shaders must also access the texture using the samplerExternalOES GLSL sampler type.

https://developer.android.com/reference/android/graphics/SurfaceTexture.html
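The practical consequence of the quote above is that the camera frame and an ordinary bitmap overlay need different fragment shaders: the external-OES texture cannot be sampled through a plain sampler2D. A minimal sketch of the two shader sources (variable names here are illustrative, though they match the convention grafika uses):

```java
public class OverlayShaders {
    // Fragment shader for the camera frame: must declare the OES extension
    // and sample through a samplerExternalOES uniform.
    public static final String FRAGMENT_EXT =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
            "}\n";

    // Fragment shader for an ordinary TEXTURE_2D overlay (text or logo bitmap);
    // no extension directive, plain sampler2D.
    public static final String FRAGMENT_2D =
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform sampler2D sTexture;\n" +
            "void main() {\n" +
            "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
            "}\n";
}
```

Feeding a TEXTURE_2D watermark to a program compiled from the first shader (or vice versa) samples the wrong target, which typically yields black — consistent with the symptom described in the question.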

Post the code you used for the alpha blending and I can probably fix it.


我可能会覆盖 Texture2dProgram并将其传递给 FullFrame 渲染器。它具有使用 GL_TEXTURE_EXTERNAL_OES 扩展进行渲染的示例代码。基本上,@Override draw 函数,调用基本实现,绑定(bind)水印并绘制。
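A sketch of that subclass idea, assuming grafika's Texture2dProgram / FullFrameRect API (the draw(...) signature below mirrors grafika's, but verify it against your copy of the library; this is not runnable off-device):

```java
import java.nio.FloatBuffer;

// Sketch only: draws the camera frame via the base (external-OES) program,
// then blends a TEXTURE_2D watermark on top with a second, plain 2D program.
public class WatermarkProgram extends Texture2dProgram {
    private final Texture2dProgram m2dProgram;  // second program for the 2D watermark
    private final int mWatermarkTexId;          // GL_TEXTURE_2D with an alpha channel

    public WatermarkProgram(int watermarkTexId) {
        super(ProgramType.TEXTURE_EXT);         // camera frames arrive as external OES
        m2dProgram = new Texture2dProgram(ProgramType.TEXTURE_2D);
        mWatermarkTexId = watermarkTexId;
    }

    @Override
    public void draw(float[] mvpMatrix, FloatBuffer vertexBuffer, int firstVertex,
                     int vertexCount, int coordsPerVertex, int vertexStride,
                     float[] texMatrix, FloatBuffer texBuffer, int textureId,
                     int texStride) {
        // 1. Camera frame via the base implementation.
        super.draw(mvpMatrix, vertexBuffer, firstVertex, vertexCount,
                coordsPerVertex, vertexStride, texMatrix, texBuffer,
                textureId, texStride);

        // 2. Watermark on top: identity texture matrix (the camera's
        //    SurfaceTexture transform must not be applied to a 2D texture),
        //    with alpha blending enabled just for this draw.
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        m2dProgram.draw(mvpMatrix, vertexBuffer, firstVertex, vertexCount,
                coordsPerVertex, vertexStride, GlUtil.IDENTITY_MATRIX, texBuffer,
                mWatermarkTexId, texStride);
        GLES20.glDisable(GLES20.GL_BLEND);
    }
}
```

Because FullFrameRect forwards its drawFrame call to the program it was constructed with, passing a WatermarkProgram in place of the plain Texture2dProgram stamps the watermark on every frame without touching the calling code.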

This should sit between the camera and the video encoder.

Regarding android - Drawing text or an image over a camera stream (GLSL), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44048389/
