I'm working on a scene that uses Points, an InstancedBufferGeometry, and a RawShaderMaterial. I'd like to add raycasting to the scene so that when a point is clicked, I can determine which point was clicked.
In a previous scene [example], I was able to determine which point was clicked by accessing the .index
attribute of the matches returned by the raycaster.intersectObject()
call. For the geometry and material below, however, the index is always 0.
Does anyone know how I can determine which points were clicked in the scene below? Any help others can offer with this question would be greatly appreciated.
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/88/three.min.js'></script>
<script src='https://rawgit.com/YaleDHLab/pix-plot/master/assets/js/trackball-controls.js'></script>
<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in screen coordinates.
*
* To do so, we can use the following variables defined by Three.js:
* attribute vec3 position - stores each vertex's position in world space
* attribute vec2 uv - stores each vertex's texture coordinates
* uniform mat4 projectionMatrix - maps camera space into screen space
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
is an attribute for the corresponding vertex. Each attribute array must
contain n_vertices * n_components values, where n_components is the number
of components in the given datatype (e.g. for a vec2, n_components = 2;
for a float, n_components = 1)
* `uniforms` are constant across all vertices
* `varyings` are values passed from the vertex to the fragment shader
*
* For the full list of uniforms defined by three, see:
* https://threejs.org/docs/#api/renderers/webgl/WebGLProgram
**/
// set float precision
precision mediump float;
// specify geometry uniforms
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
// to get the camera attributes:
uniform vec3 cameraPosition;
// blueprint attributes
attribute vec3 position; // sets the blueprint's vertex positions
// instance attributes
attribute vec3 translation; // x, y, z translation offsets for an instance
void main() {
// set point position
vec3 pos = position + translation;
vec4 projected = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
gl_Position = projected;
// use the delta between the point position and camera position to size point
float xDelta = pow(projected[0] - cameraPosition[0], 2.0);
float yDelta = pow(projected[1] - cameraPosition[1], 2.0);
float zDelta = pow(projected[2] - cameraPosition[2], 2.0);
float delta = pow(xDelta + yDelta + zDelta, 0.5);
gl_PointSize = 10000.0 / delta;
}
</script>
<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which describes the pixel color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader.
*
* Attempting to read a varying not generated by the vertex shader will
* throw a warning but won't prevent shader compiling.
**/
precision highp float;
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
</script>
<script>
/**
* Generate a scene object with a background color
**/
function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xaaaaaa);
return scene;
}
/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/
function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 100000);
camera.position.set(0, 1, -6000);
return camera;
}
/**
* Generate the renderer to be used in the scene
**/
function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}
/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/
function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}
/**
* Set the current mouse coordinates {-1:1}
* @param {Event} event - triggered on canvas mouse move
**/
function onMousemove(event) {
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
}
/**
* Store the previous mouse position so that when the next
* click event registers we can tell whether the user
* is clicking or dragging.
* @param {Event} event - triggered on canvas mousedown
**/
function onMousedown(event) {
lastMouse.copy(mouse);
}
/**
* Callback for mouseup events on the window. If the user
* clicked an image, zoom to that image.
* @param {Event} event - triggered on canvas mouseup
**/
function onMouseup(event) {
var selected = raycaster.intersectObjects(scene.children);
console.log(selected)
}
// add event listeners for the canvas
function addCanvasListeners() {
var canvas = document.querySelector('canvas');
canvas.addEventListener('mousemove', onMousemove, false)
canvas.addEventListener('mousedown', onMousedown, false)
canvas.addEventListener('mouseup', onMouseup, false)
}
/**
* Generate the points for the scene
* @param {obj} scene: the current scene object
**/
function addPoints(scene) {
// this geometry builds a blueprint and many copies of the blueprint
var geometry = new THREE.InstancedBufferGeometry();
geometry.addAttribute( 'position',
new THREE.BufferAttribute( new Float32Array( [0, 0, 0] ), 3));
// add data for each observation
var n = 10000; // number of observations
var rootN = n**(1/2);
var cellSize = 20;
var translation = new Float32Array( n * 3 );
var translationIterator = 0;
var unit = 0;
for (var i=0; i<n*3; i++) {
switch (i%3) {
case 0: // x dimension
translation[translationIterator++] = (unit % rootN) * cellSize;
break;
case 1: // y dimension
translation[translationIterator++] = Math.floor(unit / rootN) * cellSize;
break;
case 2: // z dimension
translation[translationIterator++] = 0;
break;
}
if (i % 3 == 0) unit++;
}
geometry.addAttribute( 'translation',
new THREE.InstancedBufferAttribute( translation, 3, 1 ) );
var material = new THREE.RawShaderMaterial({
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent,
});
var mesh = new THREE.Points(geometry, material);
mesh.frustumCulled = false; // prevent the mesh from being clipped on drag
scene.add(mesh);
}
/**
* Render!
**/
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};
/**
* Main
**/
var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
// raycasting
var raycaster = new THREE.Raycaster();
raycaster.params.Points.threshold = 10000;
var mouse = new THREE.Vector2();
var lastMouse = new THREE.Vector2();
addCanvasListeners();
// main
addPoints(scene);
render();
</script>
Best Answer
One solution is to use a technique sometimes called GPU picking.
Start with https://threejs.org/examples/webgl_interactive_cubes_gpu.html .
Once you understand the concept, study https://threejs.org/examples/webgl_interactive_instances_gpu.html .
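The core of GPU picking is giving every instance a unique ID, encoding that ID as an RGB color that a dedicated picking shader writes for each fragment, rendering the scene to an off-screen target, and then reading back the pixel under the cursor and decoding it. The encode/decode half of that pipeline is plain bit arithmetic and can be sketched on its own (the function names here are illustrative, not three.js API):

```javascript
// Encode a 0-based instance index into three 0-255 RGB channels.
// With 8 bits per channel this supports up to 2^24 distinct instances.
function encodeInstanceId(id) {
  return [
    (id >> 16) & 255, // red: high byte
    (id >> 8) & 255,  // green: middle byte
    id & 255,         // blue: low byte
  ];
}

// Decode the RGB pixel read back from the picking render target.
function decodeInstanceId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
```

In a three.js setup, the encoded color would typically be stored per instance in an InstancedBufferAttribute (divided by 255.0 for the shader), a picking fragment shader would output that color unlit, and on mouseup you would render the picking scene to a WebGLRenderTarget, read the pixel under the cursor with renderer.readRenderTargetPixels, and decode it back to an instance index.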
Another solution is to replicate on the CPU the instancing logic you implemented on the GPU. You can do that in a custom raycast()
method. Whether it is worth doing depends on the complexity of your use case.
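The CPU-side approach can be sketched with plain vector math: reproduce the vertex shader's `pos = position + translation` per instance (the blueprint vertex here sits at the origin), then test each instance's world position against the picking ray, the way THREE.Points raycasting does internally. `pickInstance` and its parameters are illustrative names under that assumption, not three.js API; `threshold` plays the same role as raycaster.params.Points.threshold:

```javascript
// Find the instance whose world position lies closest to the ray,
// within `threshold` world units of it. The ray is an origin plus a
// normalized direction; `translations` is the flat x,y,z instance array.
function pickInstance(rayOrigin, rayDir, translations, threshold) {
  let best = null;
  const n = translations.length / 3;
  for (let i = 0; i < n; i++) {
    // world position of instance i (blueprint vertex is at the origin)
    const px = translations[i * 3];
    const py = translations[i * 3 + 1];
    const pz = translations[i * 3 + 2];
    // vector from the ray origin to the point
    const dx = px - rayOrigin[0];
    const dy = py - rayOrigin[1];
    const dz = pz - rayOrigin[2];
    // distance along the ray to the point of closest approach
    const t = dx * rayDir[0] + dy * rayDir[1] + dz * rayDir[2];
    if (t < 0) continue; // point is behind the ray origin
    // squared distance from the point to the ray
    const cx = rayOrigin[0] + t * rayDir[0] - px;
    const cy = rayOrigin[1] + t * rayDir[1] - py;
    const cz = rayOrigin[2] + t * rayDir[2] - pz;
    const distSq = cx * cx + cy * cy + cz * cz;
    if (distSq < threshold * threshold && (best === null || t < best.distance)) {
      best = { index: i, distance: t };
    }
  }
  return best;
}
```

In the scene above you would call something like this from the mouseup handler, building the ray with raycaster.setFromCamera(mouse, camera) and passing raycaster.ray.origin, raycaster.ray.direction, and the same Float32Array used for the translation attribute.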
three.js r.95
For "javascript - Three.js: raycasting with Points, InstancedBufferGeometry, and RawShaderMaterial (GPU picking)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/51768396/