
java - Set ARCore SceneView on another screen element


I am trying to display an arFragment twice on the screen by simply taking the feed of one fragment and setting another screen element to the same content, but I can't figure out which element to grab.

I know that I get the current camera image by calling

ArFragment arFragment = (ArFragment) getSupportFragmentManager()
.findFragmentById(R.id.arFragment);
Image image = arFragment.getArSceneView().getArFrame().acquireCameraImage();

But I don't know how to get hold of another screen object and set its view to the feed the arFragment gives me. For example:

TextureView secondView = (TextureView) findViewById(R.id.texture);
secondView.setSurfaceTexture((SurfaceTexture) image);

produces an inconvertible types error.
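
The cast fails because an android.media.Image only wraps raw YUV buffers, while a TextureView hands out its own SurfaceTexture through a listener and expects you to render into that surface yourself (e.g. via an OpenGL/EGL pipeline). A minimal sketch of that listener, reusing the R.id.texture view from the snippet above; this is not code from the original question:

TextureView secondView = (TextureView) findViewById(R.id.texture);
secondView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        // 'surface' is the view's own buffer consumer; an android.media.Image
        // cannot be cast or assigned to it
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) { }
});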

I can't use another arFragment, because that would allocate another camera (which apparently results in a black screen and a "camera already in use" error). I also haven't found an

arFrame.assignCamera();

method, which doesn't really matter, since the camera the fragment uses is just an object and not the real hardware. But I can't figure out where the hardware is hooked up to the fragment, and if I remember correctly, I can't read or write there anyway.

I could convert the feed to a bitmap, or put it onto an ImageView, but I'm a bit scared of doing that 60 times per second. There has to be a simple solution, right?...

Displaying a view twice can't be that hard -.-

Best Answer

OK, got it. The conversion to a bmp is a bit of hocus pocus, but I guess there really is no direct way.

So I set up ByteBuffers, took apart the YUV planes of the android.media.Image, converted them to a JPEG, turned that into a Bitmap and rotated it 90° to match the original picture. (NV21 stores the full-resolution Y plane first, followed by interleaved V and U samples, which is why the V plane is copied into the array before the U plane in the code below.)

// imports needed: android.graphics.Bitmap, BitmapFactory, ImageFormat, Matrix, Rect, YuvImage;
// android.media.Image; android.widget.ImageView; java.io.ByteArrayOutputStream; java.nio.ByteBuffer;
// com.google.ar.core.Frame; com.google.ar.core.exceptions.NotYetAvailableException;
// com.google.ar.sceneform.ArSceneView; com.google.ar.sceneform.ux.ArFragment

// get the arFragment
ArFragment arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.arFragment);
ArSceneView arSceneView = arFragment.getArSceneView();

// set up a listener that is triggered on every frame
arSceneView.getScene().addOnUpdateListener(frameTime ->
{
    try
    {
        Frame frame = arSceneView.getArFrame();
        Image androidMediaImage = frame.acquireCameraImage();
        int imageWidth = androidMediaImage.getWidth();
        int imageHeight = androidMediaImage.getHeight();

        // select the target container to display the image in
        ImageView secondView = (ImageView) findViewById(R.id.imageView3);

        // an android.media.Image is a YUV image made up of 3 planes
        ByteBuffer yBuffer = androidMediaImage.getPlanes()[0].getBuffer();
        ByteBuffer uBuffer = androidMediaImage.getPlanes()[1].getBuffer();
        ByteBuffer vBuffer = androidMediaImage.getPlanes()[2].getBuffer();

        // set up a byte array with the combined size of all three planes
        int ySize = yBuffer.remaining();
        int uSize = uBuffer.remaining();
        int vSize = vBuffer.remaining();

        byte[] nv21 = new byte[ySize + uSize + vSize];

        // fill in the array. This code is adapted from https://www.programcreek.com
        // where it was pointed out that U and V have to be swapped
        yBuffer.get(nv21, 0, ySize);
        vBuffer.get(nv21, ySize, vSize);
        uBuffer.get(nv21, ySize + vSize, uSize);

        // combine the three planes into one NV21 image
        YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, imageWidth, imageHeight, null);
        // open a byte stream to feed the compressor
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // compress the YUV image to JPEG. This is important, because BitmapFactory can't read a
        // YUV-coded image directly (believe me, I tried -.-)
        yuvImage.compressToJpeg(new Rect(0, 0, imageWidth, imageHeight), 50, out);
        // now write the bytes of the image into an array
        byte[] imageBytes = out.toByteArray();
        // and build the bitmap using the factory
        Bitmap bitmapImage = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

        // use a Matrix for the rotation; it is basically a set of numbers used to
        // compute the new location of each pixel
        Matrix rotationMatrix = new Matrix();
        rotationMatrix.postRotate(90);
        // the rotated image is our target image
        Bitmap rotatedImage = Bitmap.createBitmap(bitmapImage, 0, 0, bitmapImage.getWidth(), bitmapImage.getHeight(), rotationMatrix, true);

        // it's so easy!!!!
        secondView.setImageBitmap(rotatedImage);

        // release the camera image so ARCore can reuse its buffer on the next frame
        androidMediaImage.close();
    } catch (NotYetAvailableException e)
    {
        e.printStackTrace();
    }
});
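
Since the update listener above fires on every rendered frame, one way to soften the per-frame JPEG/Bitmap cost the question worried about is to simply skip frames. A minimal sketch, where the frameCounter field is hypothetical and not part of the original answer:

// hypothetical field on the Activity, used only to throttle the conversion
private int frameCounter = 0;

// at the top of the update listener body, before acquireCameraImage():
if (frameCounter++ % 3 != 0) {
    return; // convert only every third frame instead of all of them
}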

You can of course correct me if I'm completely wrong and there is an easier solution. But at least it works, so I'm happy <3

For "java - Set ARCore SceneView on another screen element", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/53612633/
