I'm using this sample (https://github.com/google-ar/arcore-android-sdk/tree/master/samples/hello_ar_java) and I'd like to add the ability to record a video of the scene with the AR objects placed in it.
I've tried several approaches without success. Is there a recommended way to do this?
Best answer
Creating a video from an OpenGL surface is a bit involved, but doable. The easiest way to understand it, I think, is to use two EGL surfaces: one for the UI and one for the media encoder. The Grafika project on GitHub has a good example of the EGL-level calls needed. I used it as a starting point to work out the modifications required for ARCore's HelloAR sample. Since there are quite a few changes, I'll break them down into steps.
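The overall pattern — render once to the UI surface, then repeat the identical draw against the encoder surface and restore the UI context — can be sketched in plain Java. The names here (GlSurface, makeCurrent) are illustrative stand-ins for the real EGL calls, not actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class RenderLoopSketch {
    // Stand-in for an EGL surface that can be made the current render target.
    interface GlSurface {
        void makeCurrent();
        String name();
    }

    private final List<String> drawLog = new ArrayList<>();
    private boolean recording = true;

    private void draw(GlSurface target) {
        target.makeCurrent();       // bind the target surface
        drawLog.add(target.name()); // stand-in for the real GL draw calls
    }

    public List<String> onDrawFrame(GlSurface ui, GlSurface encoder) {
        draw(ui);                   // always render to the UI surface
        if (recording) {
            draw(encoder);          // identical draw against the encoder surface
            ui.makeCurrent();       // restore the UI context afterwards
        }
        return drawLog;
    }
}
```

The key point the rest of the answer elaborates on is that the second draw is byte-for-byte the same set of GL calls as the first; only the bound EGL surface differs.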
Make changes to support writing to external storage
To save the video, you need to write the file to an accessible location, so you need to acquire this permission.
Declare the permission in the AndroidManifest.xml file:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
Then modify CameraPermissionHelper.java to request the external storage permission in addition to the camera permission. To do this, create an array of permissions, use it when requesting permissions, and iterate over it when checking the permission state:
private static final String[] REQUIRED_PERMISSIONS = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE
};
public static void requestCameraPermission(Activity activity) {
ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
CAMERA_PERMISSION_CODE);
}
public static boolean hasCameraPermission(Activity activity) {
for(String p : REQUIRED_PERMISSIONS) {
if (ContextCompat.checkSelfPermission(activity, p) !=
PackageManager.PERMISSION_GRANTED) {
return false;
}
}
return true;
}
public static boolean shouldShowRequestPermissionRationale(Activity activity) {
for(String p : REQUIRED_PERMISSIONS) {
if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
return true;
}
}
return false;
}
Add recording to HelloARActivity
Add a simple Button and TextView to the UI at the bottom of activity_main.xml:
<Button
android:id="@+id/fboRecord_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignStart="@+id/surfaceview"
android:layout_alignTop="@+id/surfaceview"
android:onClick="clickToggleRecording"
android:text="@string/toggleRecordingOn"
tools:ignore="OnClick"/>
<TextView
android:id="@+id/nowRecording_text"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@+id/fboRecord_button"
android:layout_alignBottom="@+id/fboRecord_button"
android:layout_toEndOf="@+id/fboRecord_button"
android:text="" />
Add member variables for recording to HelloARActivity:
private VideoRecorder mRecorder;
private android.opengl.EGLConfig mAndroidEGLConfig;
Initialize mAndroidEGLConfig in onSurfaceCreated(). We'll use this config object to create the encoder surface.
EGL10 egl10 = (EGL10)EGLContext.getEGL();
javax.microedition.khronos.egl.EGLDisplay display = egl10.eglGetCurrentDisplay();
int v[] = new int[2];
egl10.eglGetConfigAttrib(display, config, EGL10.EGL_CONFIG_ID, v);
EGLDisplay androidDisplay = EGL14.eglGetCurrentDisplay();
int attribs[] = {EGL14.EGL_CONFIG_ID, v[0], EGL14.EGL_NONE};
android.opengl.EGLConfig myConfig[] = new android.opengl.EGLConfig[1];
EGL14.eglChooseConfig(androidDisplay, attribs, 0, myConfig, 0, 1, v, 1);
this.mAndroidEGLConfig = myConfig[0];
Refactor the onDrawFrame() method so that all the non-drawing code runs first and the actual drawing is done in a method named draw(). That way, while recording, we can update the ARCore frame and handle input once, draw to the UI, and then draw again to the encoder.
@Override
public void onDrawFrame(GL10 gl) {
if (mSession == null) {
return;
}
// Notify ARCore session that the view size changed so that
// the perspective matrix and
// the video background can be properly adjusted.
mDisplayRotationHelper.updateSessionIfNeeded(mSession);
try {
// Obtain the current frame from ARSession. When the
// configuration is set to UpdateMode.BLOCKING (it is
// by default), this will throttle the rendering to
// the camera framerate.
Frame frame = mSession.update();
Camera camera = frame.getCamera();
// Handle taps. Handling only one tap per frame, as taps are
// usually low frequency compared to frame rate.
MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon
Trackable trackable = hit.getTrackable();
if (trackable instanceof Plane
&& ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
// Cap the number of objects created. This avoids overloading both the
// rendering system and ARCore.
if (mAnchors.size() >= 20) {
mAnchors.get(0).detach();
mAnchors.remove(0);
}
// Adding an Anchor tells ARCore that it should track this position in
// space. This anchor is created on the Plane to place the 3d model
// in the correct position relative both to the world and to the plane.
mAnchors.add(hit.createAnchor());
// Hits are sorted by depth. Consider only closest hit on a plane.
break;
}
}
}
// Get projection matrix.
float[] projmtx = new float[16];
camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);
// Get camera matrix and draw.
float[] viewmtx = new float[16];
camera.getViewMatrix(viewmtx, 0);
// Compute lighting from average intensity of the image.
final float lightIntensity = frame.getLightEstimate().getPixelIntensity();
// Visualize tracked points.
PointCloud pointCloud = frame.acquirePointCloud();
mPointCloud.update(pointCloud);
draw(frame,camera.getTrackingState() == TrackingState.PAUSED,
viewmtx, projmtx, camera.getDisplayOrientedPose(),lightIntensity);
if (mRecorder != null && mRecorder.isRecording()) {
VideoRecorder.CaptureContext ctx = mRecorder.startCapture();
if (ctx != null) {
// draw again
draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);
// restore the context
mRecorder.stopCapture(ctx, frame.getTimestamp());
}
}
// Application is responsible for releasing the point cloud resources after
// using it.
pointCloud.release();
// Check if we detected at least one plane. If so, hide the loading message.
if (mMessageSnackbar != null) {
for (Plane plane : mSession.getAllTrackables(Plane.class)) {
if (plane.getType() ==
com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
&& plane.getTrackingState() == TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
}
} catch (Throwable t) {
// Avoid crashing the application due to unhandled exceptions.
Log.e(TAG, "Exception on the OpenGL thread", t);
}
}
private void draw(Frame frame, boolean paused,
float[] viewMatrix, float[] projectionMatrix,
Pose displayOrientedPose, float lightIntensity) {
// Clear screen to notify driver it should not load
// any pixels from previous frame.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// Draw background.
mBackgroundRenderer.draw(frame);
// If not tracking, don't draw 3d objects.
if (paused) {
return;
}
mPointCloud.draw(viewMatrix, projectionMatrix);
// Visualize planes.
mPlaneRenderer.drawPlanes(
mSession.getAllTrackables(Plane.class),
displayOrientedPose, projectionMatrix);
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (Anchor anchor : mAnchors) {
if (anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
// Get the current pose of an Anchor in world space.
// The Anchor pose is
// updated during calls to session.update() as ARCore refines
// its estimate of the world.
anchor.getPose().toMatrix(mAnchorMatrix, 0);
// Update and draw the model and its shadow.
mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
mVirtualObjectShadow.draw(viewMatrix, projectionMatrix, lightIntensity);
}
}
Handle toggling the recording on and off:
public void clickToggleRecording(View view) {
Log.d(TAG, "clickToggleRecording");
if (mRecorder == null) {
File outputFile = new File(Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES) + "/HelloAR",
"fbo-gl-" + Long.toHexString(System.currentTimeMillis()) + ".mp4");
File dir = outputFile.getParentFile();
if (!dir.exists()) {
dir.mkdirs();
}
try {
mRecorder = new VideoRecorder(mSurfaceView.getWidth(),
mSurfaceView.getHeight(),
VideoRecorder.DEFAULT_BITRATE, outputFile, this);
mRecorder.setEglConfig(mAndroidEGLConfig);
} catch (IOException e) {
Log.e(TAG,"Exception starting recording", e);
}
}
mRecorder.toggleRecording();
updateControls();
}
private void updateControls() {
Button toggleRelease = findViewById(R.id.fboRecord_button);
int id = (mRecorder != null && mRecorder.isRecording()) ?
R.string.toggleRecordingOff : R.string.toggleRecordingOn;
toggleRelease.setText(id);
TextView tv = findViewById(R.id.nowRecording_text);
if (id == R.string.toggleRecordingOff) {
tv.setText(getString(R.string.nowRecording));
} else {
tv.setText("");
}
}
Add the listener interface to receive video-recording state changes:
@Override
public void onVideoRecorderEvent(VideoRecorder.VideoEvent videoEvent) {
Log.d(TAG, "VideoEvent: " + videoEvent);
updateControls();
if (videoEvent == VideoRecorder.VideoEvent.RecordingStopped) {
mRecorder = null;
}
}
Implement the VideoRecorder class to feed images to the encoder
The VideoRecorder class feeds images to the media encoder. It creates an off-screen EGLSurface using the media encoder's input surface. The general approach is, while recording, to draw once for the UI display and then make exactly the same draw calls against the media encoder surface.
The constructor takes the recording parameters and a listener to push events to during recording.
public VideoRecorder(int width, int height, int bitrate, File outputFile,
VideoRecorderListener listener) throws IOException {
this.listener = listener;
mEncoderCore = new VideoEncoderCore(width, height, bitrate, outputFile);
mVideoRect = new Rect(0,0,width,height);
}
When recording starts, we need to create a new EGL surface for the encoder. Then we notify the encoder that a new frame is available, make the encoder surface the current EGL surface, and return so the caller can make its draw calls.
public CaptureContext startCapture() {
if (mVideoEncoder == null) {
return null;
}
if (mEncoderContext == null) {
mEncoderContext = new CaptureContext();
mEncoderContext.windowDisplay = EGL14.eglGetCurrentDisplay();
// Create a window surface, and attach it to the Surface we received.
int[] surfaceAttribs = {
EGL14.EGL_NONE
};
mEncoderContext.windowDrawSurface = EGL14.eglCreateWindowSurface(
mEncoderContext.windowDisplay,
mEGLConfig, mEncoderCore.getInputSurface(),
surfaceAttribs, 0);
mEncoderContext.windowReadSurface = mEncoderContext.windowDrawSurface;
}
CaptureContext displayContext = new CaptureContext();
displayContext.initialize();
// Draw for recording, swap.
mVideoEncoder.frameAvailableSoon();
// Make the input surface current
// mInputWindowSurface.makeCurrent();
EGL14.eglMakeCurrent(mEncoderContext.windowDisplay,
mEncoderContext.windowDrawSurface, mEncoderContext.windowReadSurface,
EGL14.eglGetCurrentContext());
// If we don't set the scissor rect, the glClear() we use to draw the
// light-grey background will draw outside the viewport and muck up our
// letterboxing. Might be better if we disabled the test immediately after
// the glClear(). Of course, if we were clearing the frame background to
// black it wouldn't matter.
//
// We do still need to clear the pixels outside the scissor rect, of course,
// or we'll get garbage at the edges of the recording. We can either clear
// the whole thing and accept that there will be a lot of overdraw, or we
// can issue multiple scissor/clear calls. Some GPUs may have a special
// optimization for zeroing out the color buffer.
//
// For now, be lazy and zero the whole thing. At some point we need to
// examine the performance here.
GLES20.glClearColor(0f, 0f, 0f, 1f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glViewport(mVideoRect.left, mVideoRect.top,
mVideoRect.width(), mVideoRect.height());
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(mVideoRect.left, mVideoRect.top,
mVideoRect.width(), mVideoRect.height());
return displayContext;
}
When the drawing is complete, restore the EGL context back to the UI surface:
public void stopCapture(CaptureContext oldContext, long timeStampNanos) {
if (oldContext == null) {
return;
}
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
EGLExt.eglPresentationTimeANDROID(mEncoderContext.windowDisplay,
mEncoderContext.windowDrawSurface, timeStampNanos);
EGL14.eglSwapBuffers(mEncoderContext.windowDisplay,
mEncoderContext.windowDrawSurface);
// Restore.
GLES20.glViewport(0, 0, oldContext.getWidth(), oldContext.getHeight());
EGL14.eglMakeCurrent(oldContext.windowDisplay,
oldContext.windowDrawSurface, oldContext.windowReadSurface,
EGL14.eglGetCurrentContext());
}
Add some bookkeeping methods:
public boolean isRecording() {
return mRecording;
}
public void toggleRecording() {
if (isRecording()) {
stopRecording();
} else {
startRecording();
}
}
protected void startRecording() {
mRecording = true;
if (mVideoEncoder == null) {
mVideoEncoder = new TextureMovieEncoder2(mEncoderCore);
}
if (listener != null) {
listener.onVideoRecorderEvent(VideoEvent.RecordingStarted);
}
}
protected void stopRecording() {
mRecording = false;
if (mVideoEncoder != null) {
mVideoEncoder.stopRecording();
}
if (listener != null) {
listener.onVideoRecorderEvent(VideoEvent.RecordingStopped);
}
}
public void setEglConfig(EGLConfig eglConfig) {
this.mEGLConfig = eglConfig;
}
public enum VideoEvent {
RecordingStarted,
RecordingStopped
}
public interface VideoRecorderListener {
void onVideoRecorderEvent(VideoEvent videoEvent);
}
The inner class CaptureContext keeps track of the display and surfaces, making it easy to handle the multiple surfaces used with the EGL context:
public static class CaptureContext {
EGLDisplay windowDisplay;
EGLSurface windowReadSurface;
EGLSurface windowDrawSurface;
private int mWidth;
private int mHeight;
public void initialize() {
windowDisplay = EGL14.eglGetCurrentDisplay();
windowReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
windowDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
int v[] = new int[1];
EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_WIDTH,
v, 0);
mWidth = v[0];
v[0] = -1;
EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_HEIGHT,
v, 0);
mHeight = v[0];
}
/**
* Returns the surface's width, in pixels.
* <p>
* If this is called on a window surface, and the underlying
* surface is in the process
* of changing size, we may not see the new size right away
* (e.g. in the "surfaceChanged"
* callback). The size should match after the next buffer swap.
*/
public int getWidth() {
if (mWidth < 0) {
int v[] = new int[1];
EGL14.eglQuerySurface(windowDisplay,
windowDrawSurface, EGL14.EGL_WIDTH, v, 0);
mWidth = v[0];
}
return mWidth;
}
/**
* Returns the surface's height, in pixels.
*/
public int getHeight() {
if (mHeight < 0) {
int v[] = new int[1];
EGL14.eglQuerySurface(windowDisplay, windowDrawSurface,
EGL14.EGL_HEIGHT, v, 0);
mHeight = v[0];
}
return mHeight;
}
}
Add the VideoEncoder classes
The VideoEncoderCore class is copied from Grafika, along with the TextureMovieEncoder2 class.
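Since those two files aren't reproduced here, the stub below sketches the contract that VideoRecorder relies on. These are hypothetical, simplified stand-ins — the real Grafika classes wrap MediaCodec/MediaMuxer and getInputSurface() returns an android.view.Surface; consult Grafika's actual sources for the real signatures:

```java
import java.io.File;

// Hypothetical stand-in: the real VideoEncoderCore configures a MediaCodec
// video encoder and a MediaMuxer writing to outputFile.
class VideoEncoderCoreStub {
    private boolean released = false;

    VideoEncoderCoreStub(int width, int height, int bitrate, File outputFile) {
        // Real class: set up MediaFormat, MediaCodec, MediaMuxer here.
    }

    // Real class: returns the codec's input android.view.Surface,
    // which VideoRecorder wraps in an EGL window surface.
    Object getInputSurface() { return this; }

    void release() { released = true; } // stops the codec, finalizes the muxer
    boolean isReleased() { return released; }
}

// Hypothetical stand-in: the real TextureMovieEncoder2 drains the encoder
// output on its own thread whenever frameAvailableSoon() is signalled.
class TextureMovieEncoder2Stub {
    private final VideoEncoderCoreStub core;
    private int framesSignalled = 0;

    TextureMovieEncoder2Stub(VideoEncoderCoreStub core) { this.core = core; }

    void frameAvailableSoon() { framesSignalled++; }

    void stopRecording() { core.release(); }

    int framesSignalled() { return framesSignalled; }
}
```

This matches how the VideoRecorder code above uses them: frameAvailableSoon() before each captured frame, and stopRecording() when toggled off.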
A similar question about providing video recording with ARCore can be found on Stack Overflow: https://stackoverflow.com/questions/47869061/