
android - Providing video recording with ARCore


I am working from this sample ( https://github.com/google-ar/arcore-android-sdk/tree/master/samples/hello_ar_java ), and I would like to add the ability to record a video of the scene with AR objects placed in it.

I have tried several approaches without success. Is there a recommended way to do this?

Best Answer

Creating a video from an OpenGL surface is somewhat involved, but doable. The easiest approach to understand, I think, is to use two EGL surfaces: one for the UI and one for the media encoder. There is a good example of the EGL-level calls needed in the Grafika project on GitHub. I used it as a starting point to figure out the modifications needed for ARCore's HelloAR sample. Since there are quite a few changes, I will break them down into steps.

Make changes to support writing to external storage

To save the video, you need to write the video file to an accessible location, so you need to obtain this permission.

Declare the permission in the AndroidManifest.xml file:

   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Then modify CameraPermissionHelper.java to request the external-storage permission alongside the camera permission. To do this, create an array of permissions, use it when requesting permissions, and iterate over it when checking permission state:

private static final String[] REQUIRED_PERMISSIONS = {
    Manifest.permission.CAMERA,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
};

public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
        CAMERA_PERMISSION_CODE);
}

public static boolean hasCameraPermission(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
        if (ContextCompat.checkSelfPermission(activity, p) !=
                PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

public static boolean shouldShowRequestPermissionRationale(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
        if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
            return true;
        }
    }
    return false;
}

Add recording to HelloARActivity

Add a simple Button and TextView to the UI at the bottom of activity_main.xml:

<Button
    android:id="@+id/fboRecord_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignStart="@+id/surfaceview"
    android:layout_alignTop="@+id/surfaceview"
    android:onClick="clickToggleRecording"
    android:text="@string/toggleRecordingOn"
    tools:ignore="OnClick"/>

<TextView
    android:id="@+id/nowRecording_text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignBaseline="@+id/fboRecord_button"
    android:layout_alignBottom="@+id/fboRecord_button"
    android:layout_toEndOf="@+id/fboRecord_button"
    android:text="" />

Add member variables for recording in HelloARActivity:

private VideoRecorder mRecorder;
private android.opengl.EGLConfig mAndroidEGLConfig;

Initialize mAndroidEGLConfig in onSurfaceCreated(). We will use this config object to create the encoder surface.

EGL10 egl10 = (EGL10) EGLContext.getEGL();
javax.microedition.khronos.egl.EGLDisplay display = egl10.eglGetCurrentDisplay();
int[] v = new int[2];
egl10.eglGetConfigAttrib(display, config, EGL10.EGL_CONFIG_ID, v);

EGLDisplay androidDisplay = EGL14.eglGetCurrentDisplay();
int[] attribs = {EGL14.EGL_CONFIG_ID, v[0], EGL14.EGL_NONE};
android.opengl.EGLConfig[] myConfig = new android.opengl.EGLConfig[1];
EGL14.eglChooseConfig(androidDisplay, attribs, 0, myConfig, 0, 1, v, 1);
this.mAndroidEGLConfig = myConfig[0];

Refactor the onDrawFrame() method so that all the non-drawing code executes first, and the actual drawing is done in a method called draw(). That way, during recording we can update the ARCore frame and handle input once, draw to the UI, and then draw again to the encoder.

@Override
public void onDrawFrame(GL10 gl) {
    if (mSession == null) {
        return;
    }
    // Notify ARCore session that the view size changed so that
    // the perspective matrix and the video background can be
    // properly adjusted.
    mDisplayRotationHelper.updateSessionIfNeeded(mSession);

    try {
        // Obtain the current frame from ARSession. When the
        // configuration is set to UpdateMode.BLOCKING (it is by default),
        // this will throttle the rendering to the camera framerate.
        Frame frame = mSession.update();
        Camera camera = frame.getCamera();

        // Handle taps. Handling only one tap per frame, as taps are
        // usually low frequency compared to frame rate.
        MotionEvent tap = mQueuedSingleTaps.poll();
        if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
            for (HitResult hit : frame.hitTest(tap)) {
                // Check if any plane was hit, and if it was hit inside the plane polygon.
                Trackable trackable = hit.getTrackable();
                if (trackable instanceof Plane
                        && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                    // Cap the number of objects created. This avoids overloading
                    // both the rendering system and ARCore.
                    if (mAnchors.size() >= 20) {
                        mAnchors.get(0).detach();
                        mAnchors.remove(0);
                    }
                    // Adding an Anchor tells ARCore that it should track this
                    // position in space. This anchor is created on the Plane to
                    // place the 3d model in the correct position relative both
                    // to the world and to the plane.
                    mAnchors.add(hit.createAnchor());

                    // Hits are sorted by depth. Consider only closest hit on a plane.
                    break;
                }
            }
        }

        // Get projection matrix.
        float[] projmtx = new float[16];
        camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

        // Get camera matrix and draw.
        float[] viewmtx = new float[16];
        camera.getViewMatrix(viewmtx, 0);

        // Compute lighting from average intensity of the image.
        final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

        // Visualize tracked points.
        PointCloud pointCloud = frame.acquirePointCloud();
        mPointCloud.update(pointCloud);

        draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
                viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

        if (mRecorder != null && mRecorder.isRecording()) {
            VideoRecorder.CaptureContext ctx = mRecorder.startCapture();
            if (ctx != null) {
                // Draw again, this time to the encoder surface.
                draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
                        viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

                // Restore the context.
                mRecorder.stopCapture(ctx, frame.getTimestamp());
            }
        }

        // Application is responsible for releasing the point cloud
        // resources after using it.
        pointCloud.release();

        // Check if we detected at least one plane. If so, hide the loading message.
        if (mMessageSnackbar != null) {
            for (Plane plane : mSession.getAllTrackables(Plane.class)) {
                if (plane.getType() ==
                        com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
                        && plane.getTrackingState() == TrackingState.TRACKING) {
                    hideLoadingMessage();
                    break;
                }
            }
        }
    } catch (Throwable t) {
        // Avoid crashing the application due to unhandled exceptions.
        Log.e(TAG, "Exception on the OpenGL thread", t);
    }
}


private void draw(Frame frame, boolean paused,
        float[] viewMatrix, float[] projectionMatrix,
        Pose displayOrientedPose, float lightIntensity) {

    // Clear screen to notify driver it should not load
    // any pixels from previous frame.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Draw background.
    mBackgroundRenderer.draw(frame);

    // If not tracking, don't draw 3d objects.
    if (paused) {
        return;
    }

    mPointCloud.draw(viewMatrix, projectionMatrix);

    // Visualize planes.
    mPlaneRenderer.drawPlanes(
            mSession.getAllTrackables(Plane.class),
            displayOrientedPose, projectionMatrix);

    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (Anchor anchor : mAnchors) {
        if (anchor.getTrackingState() != TrackingState.TRACKING) {
            continue;
        }
        // Get the current pose of an Anchor in world space. The Anchor
        // pose is updated during calls to session.update() as ARCore
        // refines its estimate of the world.
        anchor.getPose().toMatrix(mAnchorMatrix, 0);

        // Update and draw the model and its shadow.
        mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
        mVirtualObjectShadow.draw(viewMatrix, projectionMatrix, lightIntensity);
    }
}

Handle toggling the recording:

public void clickToggleRecording(View view) {
    Log.d(TAG, "clickToggleRecording");
    if (mRecorder == null) {
        File outputFile = new File(Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES) + "/HelloAR",
                "fbo-gl-" + Long.toHexString(System.currentTimeMillis()) + ".mp4");
        File dir = outputFile.getParentFile();
        if (!dir.exists()) {
            dir.mkdirs();
        }

        try {
            mRecorder = new VideoRecorder(mSurfaceView.getWidth(),
                    mSurfaceView.getHeight(),
                    VideoRecorder.DEFAULT_BITRATE, outputFile, this);
            mRecorder.setEglConfig(mAndroidEGLConfig);
        } catch (IOException e) {
            Log.e(TAG, "Exception starting recording", e);
        }
    }
    // Guard against the case where VideoRecorder construction failed.
    if (mRecorder != null) {
        mRecorder.toggleRecording();
    }
    updateControls();
}

private void updateControls() {
    Button toggleRelease = findViewById(R.id.fboRecord_button);
    int id = (mRecorder != null && mRecorder.isRecording()) ?
            R.string.toggleRecordingOff : R.string.toggleRecordingOn;
    toggleRelease.setText(id);

    TextView tv = findViewById(R.id.nowRecording_text);
    if (id == R.string.toggleRecordingOff) {
        tv.setText(getString(R.string.nowRecording));
    } else {
        tv.setText("");
    }
}

Add the listener interface to receive video-recording state changes:

@Override
public void onVideoRecorderEvent(VideoRecorder.VideoEvent videoEvent) {
    Log.d(TAG, "VideoEvent: " + videoEvent);
    updateControls();

    if (videoEvent == VideoRecorder.VideoEvent.RecordingStopped) {
        mRecorder = null;
    }
}

Implement the VideoRecorder class to feed images to the encoder

The VideoRecorder class feeds images to the media encoder. It creates an off-screen EGLSurface from the media encoder's input surface. The general approach is that during recording we draw once for the UI display, then make exactly the same drawing calls again for the media-encoder surface.

The constructor takes the recording parameters and a listener to push events to during recording.

public VideoRecorder(int width, int height, int bitrate, File outputFile,
        VideoRecorderListener listener) throws IOException {
    this.listener = listener;
    mEncoderCore = new VideoEncoderCore(width, height, bitrate, outputFile);
    mVideoRect = new Rect(0, 0, width, height);
}

When recording starts, we need to create a new EGL surface for the encoder. Then we notify the encoder that a new frame is coming, make the encoder surface the current EGL surface, and return so the caller can make the drawing calls.

public CaptureContext startCapture() {
    if (mVideoEncoder == null) {
        return null;
    }

    if (mEncoderContext == null) {
        mEncoderContext = new CaptureContext();
        mEncoderContext.windowDisplay = EGL14.eglGetCurrentDisplay();

        // Create a window surface, and attach it to the Surface we received.
        int[] surfaceAttribs = {
                EGL14.EGL_NONE
        };

        mEncoderContext.windowDrawSurface = EGL14.eglCreateWindowSurface(
                mEncoderContext.windowDisplay,
                mEGLConfig, mEncoderCore.getInputSurface(),
                surfaceAttribs, 0);
        mEncoderContext.windowReadSurface = mEncoderContext.windowDrawSurface;
    }

    CaptureContext displayContext = new CaptureContext();
    displayContext.initialize();

    // Draw for recording, swap.
    mVideoEncoder.frameAvailableSoon();

    // Make the encoder's input surface current.
    EGL14.eglMakeCurrent(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface, mEncoderContext.windowReadSurface,
            EGL14.eglGetCurrentContext());

    // If we don't set the scissor rect, the glClear() we use to draw the
    // light-grey background will draw outside the viewport and muck up our
    // letterboxing. Might be better if we disabled the test immediately after
    // the glClear(). Of course, if we were clearing the frame background to
    // black it wouldn't matter.
    //
    // We do still need to clear the pixels outside the scissor rect, of course,
    // or we'll get garbage at the edges of the recording. We can either clear
    // the whole thing and accept that there will be a lot of overdraw, or we
    // can issue multiple scissor/clear calls. Some GPUs may have a special
    // optimization for zeroing out the color buffer.
    //
    // For now, be lazy and zero the whole thing. At some point we need to
    // examine the performance here.
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    GLES20.glViewport(mVideoRect.left, mVideoRect.top,
            mVideoRect.width(), mVideoRect.height());
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    GLES20.glScissor(mVideoRect.left, mVideoRect.top,
            mVideoRect.width(), mVideoRect.height());

    return displayContext;
}
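
The long comment in startCapture() mentions letterboxing. In this sample mVideoRect simply covers the whole encoder surface, but if the encoder's resolution had a different aspect ratio from the UI surface, you would want a centered letterbox rectangle for the glViewport()/glScissor() calls. A minimal plain-Java sketch of that computation (computeLetterboxRect is a hypothetical helper, not part of the sample or Grafika):

```java
public class LetterboxUtil {
    /**
     * Computes a centered rectangle that fits the source aspect ratio
     * (srcW x srcH) inside the destination surface (dstW x dstH),
     * preserving aspect ratio. Returns {left, top, width, height}.
     */
    public static int[] computeLetterboxRect(int srcW, int srcH, int dstW, int dstH) {
        double srcAspect = (double) srcW / srcH;
        double dstAspect = (double) dstW / dstH;
        int w, h;
        if (srcAspect > dstAspect) {
            // Source is wider: pillarbox top/bottom.
            w = dstW;
            h = (int) Math.round(dstW / srcAspect);
        } else {
            // Source is taller: letterbox left/right.
            h = dstH;
            w = (int) Math.round(dstH * srcAspect);
        }
        return new int[] { (dstW - w) / 2, (dstH - h) / 2, w, h };
    }

    public static void main(String[] args) {
        // A 16:9 frame centered on a 9:16 portrait encoder surface.
        int[] r = computeLetterboxRect(1920, 1080, 1080, 1920);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
        // prints 0,656,1080,608
    }
}
```

The resulting rectangle would replace mVideoRect in the viewport and scissor calls.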

When drawing is complete, the EGL context needs to be restored to the UI surface:

public void stopCapture(CaptureContext oldContext, long timeStampNanos) {
    if (oldContext == null) {
        return;
    }
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
    EGLExt.eglPresentationTimeANDROID(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface, timeStampNanos);

    EGL14.eglSwapBuffers(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface);

    // Restore.
    GLES20.glViewport(0, 0, oldContext.getWidth(), oldContext.getHeight());
    EGL14.eglMakeCurrent(oldContext.windowDisplay,
            oldContext.windowDrawSurface, oldContext.windowReadSurface,
            EGL14.eglGetCurrentContext());
}

Add some bookkeeping methods

public boolean isRecording() {
    return mRecording;
}

public void toggleRecording() {
    if (isRecording()) {
        stopRecording();
    } else {
        startRecording();
    }
}

protected void startRecording() {
    mRecording = true;
    if (mVideoEncoder == null) {
        mVideoEncoder = new TextureMovieEncoder2(mEncoderCore);
    }
    if (listener != null) {
        listener.onVideoRecorderEvent(VideoEvent.RecordingStarted);
    }
}

protected void stopRecording() {
    mRecording = false;
    if (mVideoEncoder != null) {
        mVideoEncoder.stopRecording();
    }
    if (listener != null) {
        listener.onVideoRecorderEvent(VideoEvent.RecordingStopped);
    }
}

public void setEglConfig(EGLConfig eglConfig) {
    this.mEGLConfig = eglConfig;
}

public enum VideoEvent {
    RecordingStarted,
    RecordingStopped
}

public interface VideoRecorderListener {
    void onVideoRecorderEvent(VideoEvent videoEvent);
}

The CaptureContext inner class keeps track of the display and surfaces, making it easy to handle the multiple surfaces used with the EGL context:

public static class CaptureContext {
    EGLDisplay windowDisplay;
    EGLSurface windowReadSurface;
    EGLSurface windowDrawSurface;
    private int mWidth;
    private int mHeight;

    public void initialize() {
        windowDisplay = EGL14.eglGetCurrentDisplay();
        windowReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
        windowDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
        int[] v = new int[1];
        EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_WIDTH,
                v, 0);
        mWidth = v[0];
        v[0] = -1;
        EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_HEIGHT,
                v, 0);
        mHeight = v[0];
    }

    /**
     * Returns the surface's width, in pixels.
     * <p>
     * If this is called on a window surface, and the underlying
     * surface is in the process of changing size, we may not see
     * the new size right away (e.g. in the "surfaceChanged"
     * callback). The size should match after the next buffer swap.
     */
    public int getWidth() {
        if (mWidth < 0) {
            int[] v = new int[1];
            EGL14.eglQuerySurface(windowDisplay,
                    windowDrawSurface, EGL14.EGL_WIDTH, v, 0);
            mWidth = v[0];
        }
        return mWidth;
    }

    /**
     * Returns the surface's height, in pixels.
     */
    public int getHeight() {
        if (mHeight < 0) {
            int[] v = new int[1];
            EGL14.eglQuerySurface(windowDisplay, windowDrawSurface,
                    EGL14.EGL_HEIGHT, v, 0);
            mHeight = v[0];
        }
        return mHeight;
    }
}

Add the VideoEncoder classes

The VideoEncoderCore class was copied from Grafika, along with the TextureMovieEncoder2 class.
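
Grafika's TextureMovieEncoder2 runs the encoder on its own thread; frameAvailableSoon() just posts a message telling that thread to drain the MediaCodec output. As a rough, plain-Java illustration of that handoff pattern only (a counter stands in for the actual MediaCodec drain loop; none of this is the real Grafika code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class EncoderHandoffSketch {
    private static final int MSG_FRAME = 1;
    private static final int MSG_STOP = 0;

    private final BlockingQueue<Integer> messages = new LinkedBlockingQueue<>();
    private final AtomicInteger framesDrained = new AtomicInteger();
    private final Thread encoderThread;

    public EncoderHandoffSketch() {
        // The encoder thread waits for messages from the GL thread and
        // "drains" one frame per MSG_FRAME (a stand-in for the loop
        // around MediaCodec's dequeueOutputBuffer).
        encoderThread = new Thread(() -> {
            try {
                while (messages.take() == MSG_FRAME) {
                    framesDrained.incrementAndGet();
                }
            } catch (InterruptedException ignored) {
            }
        });
        encoderThread.start();
    }

    // Called from the GL thread just before drawing to the encoder surface.
    public void frameAvailableSoon() {
        messages.add(MSG_FRAME);
    }

    // Signals the encoder thread to finish and returns the frame count.
    public int stopRecording() throws InterruptedException {
        messages.add(MSG_STOP);
        encoderThread.join();
        return framesDrained.get();
    }

    public static void main(String[] args) throws Exception {
        EncoderHandoffSketch enc = new EncoderHandoffSketch();
        for (int i = 0; i < 5; i++) {
            enc.frameAvailableSoon();
        }
        System.out.println(enc.stopRecording()); // prints 5
    }
}
```

The key point is that eglSwapBuffers() on the encoder surface hands the frame to MediaCodec's input, while a separate thread pulls encoded output and writes it to the muxer, so the GL thread never blocks on encoding.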

Regarding android - providing video recording with ARCore, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47869061/
