
javascript - Three.js: raycasting with Points, InstancedBufferGeometry, and RawShaderMaterial (GPU picking)

Reprinted · Author: 行者123 · Updated: 2023-11-30 09:20:54

I'm working on a scene that uses Points, an InstancedBufferGeometry, and a RawShaderMaterial. I'd like to add raycasting to the scene so that when a point is clicked, I can determine which point was clicked.

In a previous scene [example], I was able to determine which point was clicked by reading the .index property on the intersections returned by a raycaster.intersectObject() call. With the geometry and material below, however, the index is always 0.

Does anyone know how I can determine which point was clicked in the scene below? Any help others can offer with this question would be greatly appreciated.

html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/88/three.min.js'></script>
<script src='https://rawgit.com/YaleDHLab/pix-plot/master/assets/js/trackball-controls.js'></script>

<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in screen coordinates.
*
* To do so, we can use the following variables defined by Three.js:
* attribute vec3 position - stores each vertex's position in world space
* attribute vec2 uv - stores each vertex's texture coordinates
* uniform mat4 projectionMatrix - maps camera space into screen space
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
* is an attribute for the corresponding vertex. Each attribute array must
* contain n_vertices * n_components values, where n_components is the
* number of components in the given datatype (e.g. for a vec2,
* n_components = 2; for a float, n_components = 1)
* `uniforms` are constant across all vertices
* `varyings` are values passed from the vertex to the fragment shader
*
* For the full list of uniforms defined by three, see:
* https://threejs.org/docs/#api/renderers/webgl/WebGLProgram
**/

// set float precision
precision mediump float;

// specify geometry uniforms
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

// to get the camera attributes:
uniform vec3 cameraPosition;

// blueprint attributes
attribute vec3 position; // sets the blueprint's vertex positions

// instance attributes
attribute vec3 translation; // x, y, z translation offsets for an instance

void main() {
// set point position
vec3 pos = position + translation;
vec4 projected = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
gl_Position = projected;

// use the distance between the point's position and the camera to size
// the point (cameraPosition is in world space, so compare it against the
// world-space `pos`, not the clip-space `projected`)
float delta = distance(pos, cameraPosition);
gl_PointSize = 10000.0 / delta;
}
</script>

<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which describes the pixel color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader.
*
* Attempting to read a varying not written by the vertex shader will
* log a warning but won't prevent the shader from compiling.
**/

precision highp float;

void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
</script>

<script>

/**
* Generate a scene object with a background color
**/

function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xaaaaaa);
return scene;
}

/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/

function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 100000);
camera.position.set(0, 1, -6000);
return camera;
}

/**
* Generate the renderer to be used in the scene
**/

function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}

/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/

function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}

/**
* Set the current mouse coordinates {-1:1}
* @param {Event} event - triggered on canvas mouse move
**/

function onMousemove(event) {
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
}

/**
* Store the previous mouse position so that when the next
* click event registers we can tell whether the user
* is clicking or dragging.
* @param {Event} event - triggered on canvas mousedown
**/

function onMousedown(event) {
lastMouse.copy(mouse);
}

/**
* Callback for mouseup events on the window. If the user
* clicked an image, zoom to that image.
* @param {Event} event - triggered on canvas mouseup
**/

function onMouseup(event) {
// update the ray from the latest mouse and camera state before intersecting
raycaster.setFromCamera(mouse, camera);
var selected = raycaster.intersectObjects(scene.children);
console.log(selected);
}

// add event listeners for the canvas
function addCanvasListeners() {
var canvas = document.querySelector('canvas');
canvas.addEventListener('mousemove', onMousemove, false);
canvas.addEventListener('mousedown', onMousedown, false);
canvas.addEventListener('mouseup', onMouseup, false);
}

/**
* Generate the points for the scene
* @param {obj} scene: the current scene object
**/

function addPoints(scene) {
// this geometry builds a blueprint and many copies of the blueprint
var geometry = new THREE.InstancedBufferGeometry();

geometry.addAttribute( 'position',
new THREE.BufferAttribute( new Float32Array( [0, 0, 0] ), 3));

// add data for each observation
var n = 10000; // number of observations
var rootN = n**(1/2);
var cellSize = 20;
var translation = new Float32Array( n * 3 );
var translationIterator = 0;
var unit = 0;

for (var i=0; i<n*3; i++) {
switch (i%3) {
case 0: // x dimension
translation[translationIterator++] = (unit % rootN) * cellSize;
break;
case 1: // y dimension
translation[translationIterator++] = Math.floor(unit / rootN) * cellSize;
break;
case 2: // z dimension
translation[translationIterator++] = 0;
break;
}
if (i % 3 == 2) unit++; // advance to the next point after writing its z value
}

geometry.addAttribute( 'translation',
new THREE.InstancedBufferAttribute( translation, 3, 1 ) );

var material = new THREE.RawShaderMaterial({
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent,
});
var mesh = new THREE.Points(geometry, material);
mesh.frustumCulled = false; // prevent the mesh from being clipped on drag
scene.add(mesh);
}

/**
* Render!
**/

function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};

/**
* Main
**/

var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
// raycasting
var raycaster = new THREE.Raycaster();
raycaster.params.Points.threshold = 10000;
var mouse = new THREE.Vector2();
var lastMouse = new THREE.Vector2();
addCanvasListeners();
// main
addPoints(scene);
render();

</script>

Best Answer

One solution is to use a technique sometimes called GPU picking. Raycaster runs on the CPU and only sees the blueprint geometry (a single vertex at the origin); the per-instance translation attribute is applied on the GPU in the vertex shader, which is why every intersection reports index 0.

Start with https://threejs.org/examples/webgl_interactive_cubes_gpu.html .

Once you understand the concept, study https://threejs.org/examples/webgl_interactive_instances_gpu.html .
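The core idea in those examples: render the scene a second time to an off-screen render target, giving each instance a unique flat color derived from its index; read back the pixel under the cursor with renderer.readRenderTargetPixels(); then decode that color back into an index. A minimal sketch of the encode/decode step (these helper names are my own for illustration, not three.js API):

```javascript
// Pack a zero-based instance index into an RGB triple (one byte per
// channel) for use as a per-instance picking color, and unpack a pixel
// read back from the picking render target. Handles up to 2^24 - 1 ids.
function encodeIndexToRGB(index) {
  return [
    (index >> 16) & 255, // red: high byte
    (index >> 8) & 255,  // green: middle byte
    index & 255          // blue: low byte
  ];
}

function decodeRGBToIndex(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
```

In the picking pass, the fragment shader must output this color with no lighting, fog, or blending, so the value read back from the pixel is exact.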

Another solution is to replicate on the CPU the instancing logic that runs on the GPU. You can do this in a custom raycast() method. Whether that is worthwhile depends on the complexity of your use case.
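For the CPU route, the heart of such a raycast() replacement can be sketched like this: rebuild each instance's position from the translation attribute (the blueprint vertex sits at the origin, so each instance's position is just its translation) and keep the instance whose perpendicular distance to the ray is smallest. The function name and flat-array conventions are assumptions for illustration, not three.js API:

```javascript
// rayOrigin and rayDir are [x, y, z] arrays (rayDir must be normalized);
// translations is the same flat Float32Array fed to the instanced
// attribute. Returns the nearest instance within `threshold`, or -1.
function pickInstance(rayOrigin, rayDir, translations, threshold) {
  var best = { index: -1, distance: Infinity };
  for (var i = 0; i < translations.length / 3; i++) {
    // vector from the ray origin to this instance's position
    var px = translations[i * 3]     - rayOrigin[0];
    var py = translations[i * 3 + 1] - rayOrigin[1];
    var pz = translations[i * 3 + 2] - rayOrigin[2];
    // project that vector onto the ray direction
    var t = px * rayDir[0] + py * rayDir[1] + pz * rayDir[2];
    if (t < 0) continue; // instance is behind the ray origin
    // perpendicular distance from the instance to the ray
    var dx = px - t * rayDir[0];
    var dy = py - t * rayDir[1];
    var dz = pz - t * rayDir[2];
    var d = Math.sqrt(dx * dx + dy * dy + dz * dz);
    if (d < threshold && d < best.distance) best = { index: i, distance: d };
  }
  return best;
}
```

In the scene above you would build rayOrigin and rayDir from raycaster.ray after calling raycaster.setFromCamera(mouse, camera), and pass in the same translation array used to build the InstancedBufferAttribute.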

three.js r.95

Regarding "javascript - Three.js: raycasting with Points, InstancedBufferGeometry, and RawShaderMaterial (GPU picking)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51768396/
