
javascript - three.js fragment shader with a recycled frame buffer

Reposted · Author: 塔克拉玛干 · Updated: 2023-11-02 21:02:16

I'm trying to build an app that simulates long-exposure photography. The idea is that I grab the current frame from the webcam and composite it onto a canvas. Over time, the photo "exposes", getting brighter and brighter. (See http://www.chromeexperiments.com/detail/light-paint-live-mercury/?f=)

I have a shader that works perfectly. It's just like the "add" blend mode in Photoshop. The problem is that I can't get it to recycle the previous frame.

I thought it would be as simple as renderer.autoClear = false;, but that option seems to do nothing in this context.

Here is the code that applies the shader using THREE.EffectComposer.

        onWebcamInit: function () {
            var $stream = $("#user-stream"),
                width = $stream.width(),
                height = $stream.height(),
                near = .1,
                far = 10000;

            this.renderer = new THREE.WebGLRenderer();
            this.renderer.setSize(width, height);
            this.renderer.autoClear = false;
            this.scene = new THREE.Scene();

            this.camera = new THREE.OrthographicCamera(width / -2, width / 2, height / 2, height / -2, near, far);
            this.scene.add(this.camera);

            this.$el.append(this.renderer.domElement);

            this.frameTexture = new THREE.Texture(document.querySelector("#webcam"));
            this.compositeTexture = new THREE.Texture(this.renderer.domElement);

            this.composer = new THREE.EffectComposer(this.renderer);

            // same effect with or without this line
            // this.composer.addPass(new THREE.RenderPass(this.scene, this.camera));

            var addEffect = new THREE.ShaderPass(addShader);
            addEffect.uniforms['exposure'].value = .5;
            addEffect.uniforms['frameTexture'].value = this.frameTexture;
            addEffect.renderToScreen = true;
            this.composer.addPass(addEffect);

            this.plane = new THREE.Mesh(new THREE.PlaneGeometry(width, height, 1, 1), new THREE.MeshBasicMaterial({ map: this.compositeTexture }));
            this.scene.add(this.plane);

            this.frameTexture.needsUpdate = true;
            this.compositeTexture.needsUpdate = true;

            new FrameImpulse(this.renderFrame);
        },

        renderFrame: function () {
            this.frameTexture.needsUpdate = true;
            this.compositeTexture.needsUpdate = true;
            this.composer.render();
        }

Here is the shader. Nothing fancy.

        uniforms: {
            "tDiffuse": { type: "t", value: null },
            "frameTexture": { type: "t", value: null },
            "exposure": { type: "f", value: 1.0 }
        },

        vertexShader: [
            "varying vec2 vUv;",
            "void main() {",
            "    vUv = uv;",
            "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
            "}"
        ].join("\n"),

        fragmentShader: [
            "uniform sampler2D frameTexture;",
            "uniform sampler2D tDiffuse;",
            "uniform float exposure;",
            "varying vec2 vUv;",
            "void main() {",
            "    vec4 n = texture2D(frameTexture, vUv);",
            "    vec4 o = texture2D(tDiffuse, vUv);",
            "    vec3 sum = n.rgb + o.rgb;",
            "    gl_FragColor = vec4(mix(o.rgb, sum.rgb, exposure), 1.0);",
            "}"
        ].join("\n")
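Per channel, the shader's mix(o, o + n, exposure) reduces to o + exposure * n, since GLSL's mix(a, b, t) is a * (1 - t) + b * t. A quick plain-JavaScript model of that math (illustrative only, not GLSL):

```javascript
// GLSL-style mix(), per the spec: a * (1 - t) + b * t.
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

// One color channel: o = previously accumulated value, n = new webcam sample.
// Mirrors the fragment shader's gl_FragColor line.
function blend(o, n, exposure) {
  return mix(o, o + n, exposure); // algebraically: o + exposure * n
}

console.log(blend(0.25, 0.5, 0.5)); // 0.5  (= 0.25 + 0.5 * 0.5)
```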

Best Answer

This is essentially equivalent to posit labs' answer, but I've had success with a more streamlined solution - I create an EffectComposer containing only the ShaderPass I want to recycle, then swap that composer's renderTargets on every render.

Initialization:

THREE.EffectComposer.prototype.swapTargets = function() {
var tmp = this.renderTarget2;
this.renderTarget2 = this.renderTarget1;
this.renderTarget1 = tmp;
};

...

composer = new THREE.EffectComposer(renderer,
new THREE.WebGLRenderTarget(512, 512, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat })
);

var addEffect = new THREE.ShaderPass(addShader, 'frameTexture');
addEffect.renderToScreen = true;
composer.addPass(addEffect);
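The patched swapTargets is just a pointer swap on the composer's two render targets. A stand-in object (hypothetical; not a real THREE.EffectComposer) makes the effect easy to see:

```javascript
// Minimal stand-in for an EffectComposer's two render targets (illustration only).
var composer = {
  renderTarget1: { name: "A" },
  renderTarget2: { name: "B" },
  swapTargets: function () {
    var tmp = this.renderTarget2;
    this.renderTarget2 = this.renderTarget1;
    this.renderTarget1 = tmp;
  }
};

composer.swapTargets();
console.log(composer.renderTarget1.name, composer.renderTarget2.name); // B A
```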

Render:

composer.render();
composer.swapTargets();

A secondary EffectComposer can then take either of the two renderTargets and push it to the screen, or transform it further.

Also note that I declare "frameTexture" as the textureID when initializing the ShaderPass. This lets the ShaderPass know to update the frameTexture uniform with the result of the previous pass.
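Put together, the ping-pong scheme reads the previous result from one target, writes the additive blend into the other, and swaps. A CPU-side sketch of one pixel channel (hypothetical values; the Math.min stands in for the clamping a real render target would do):

```javascript
// GLSL-style mix(): a * (1 - t) + b * t.
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

var read = 0.0;       // stands in for the composer's read buffer
var exposure = 0.1;
var webcam = 0.5;     // pretend the webcam pixel is constant

for (var frame = 0; frame < 10; frame++) {
  // The ShaderPass samples the previous result and adds the new frame.
  var write = Math.min(1.0, mix(read, read + webcam, exposure));
  read = write;       // composer.swapTargets(): last output becomes next input
}

// Each frame adds exposure * webcam = 0.05, so after 10 frames read is ~0.5
// and keeps climbing toward the 1.0 clamp - the "long exposure" effect.
```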

Regarding javascript - three.js fragment shader with a recycled frame buffer, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/19872524/
