
c++ - Setting up a ray (origin, direction) and triangle intersection (without glm)


Edit 3: My problem turned out to be something entirely different from what I expected of these functions. I'm leaving the code here, maybe it helps someone :) (and don't forget to debug!).

I am trying to find the point (vector) where a ray intersects a triangle.

Current state: random intersections, even when the mouse is nowhere near the floor, and the result depends on the camera view (lookAt matrix).

Steps

  1. Unproject the mouse coordinates
  2. Check for a line/triangle intersection

Unprojecting the mouse coordinates

I checked the sources of glm::unproject and gluUnproject and wrote this function based on them.

pixel::CVector3 pixel::CVector::unproject(
    CVector2 inPosition,
    pixel::CShape window,
    pixel::matrix4 projectionMatrix,
    pixel::matrix4 modelViewMatrix,
    float depth
)
{
    // transform window coordinates to normalized device coordinates
    CVector4 inVector;
    inVector.x = (2.0f * inPosition.x) / window.width - 1.0f;
    inVector.y = (2.0f * inPosition.y) / window.height - 1.0f;
    inVector.z = 2.0f * depth - 1.0f;
    inVector.w = 1.0f;

    // multiply the inverted (projection * modelview) matrix with the vector
    CVector4 rayWorld = pixel::CVector::multMat4Vec4(
        pixel::CMatrix::invertMatrix(projectionMatrix * modelViewMatrix), inVector);

    // perspective divide
    CVector3 result;
    result.x = rayWorld.x / rayWorld.w;
    result.y = rayWorld.y / rayWorld.w;
    result.z = rayWorld.z / rayWorld.w;

    return result;
}
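If glm happens to be available just for a debugging comparison, the function above can be cross-checked against glm::unProject, which takes window-space coordinates directly. The sketch below is only for verification; the function name unprojectReference and the viewport origin at (0, 0) are assumptions, not part of the original code.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// a minimal sketch, assuming glm is used for verification only
glm::vec3 unprojectReference(float mx, float my, float depth,
                             const glm::mat4& projection,
                             const glm::mat4& modelView,
                             float width, float height)
{
    // gluUnProject-style call: window coordinates in, unprojected position out
    glm::vec3 win(mx, my, depth);                  // depth: 0 = near plane, 1 = far plane
    glm::vec4 viewport(0.0f, 0.0f, width, height); // assumed viewport origin at (0, 0)
    return glm::unProject(win, modelView, projection, viewport);
}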

Checking the intersection

pixel::CVector3 pixel::Ray::intersection(
    Ray ray,
    pixel::CVector3 v0,
    pixel::CVector3 v1,
    pixel::CVector3 v2
)
{
    // edge vectors of the triangle
    CVector3 a, b, n;
    a = v1 - v0;
    b = v2 - v0;

    n = ray.direction.cross(b);

    // find the determinant
    float det = a.dot(n);

    if (det < 0.000001f)
    {
        std::cout << "Ray intersecting with backface triangles \n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }
    det = 1.0f / det;

    // vector from vertex0 to the ray origin
    CVector3 s = ray.origin - v0;
    float u = det * s.dot(n);

    if (u < -0.000001f || u > 1.f + 0.000001f)
    {
        std::cout << "U: Intersection outside of the triangle!\n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }

    CVector3 r = s.cross(a);
    float v = det * ray.direction.dot(r);
    if (v < -0.000001f || u + v > 1.f + 0.000001f)
    {
        std::cout << "V/U: Intersection outside of triangle!\n";
        return pixel::CVector::vector3(0.0f, 0.0f, 0.0f);
    }

    // distance along the ray to the triangle
    det = det * b.dot(r);

    std::cout << "T: " << det << "\n";

    CVector3 endPosition;
    endPosition.x = ray.origin.x + (ray.direction.x * det);
    endPosition.y = ray.origin.y + (ray.direction.y * det);
    endPosition.z = ray.origin.z + (ray.direction.z * det);

    return endPosition;
}
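As a quick sanity check that is independent of the unprojection, this Möller-Trumbore-style test can be fed a ray that is known to hit the floor quad, for example straight down onto the first triangle used below. The snippet is only a sketch reusing the pixel:: names from the question and assumes they behave exactly as shown above.

// a minimal sketch, assuming the pixel:: types and factory functions from the question
pixel::Ray testRay;
testRay.origin    = pixel::CVector::vector3(50.0f, 100.0f, -50.0f); // above the floor quad
testRay.direction = pixel::CVector::vector3(0.0f, -1.0f, 0.0f);     // straight down

pixel::CVector3 hit = pixel::Ray::intersection(
    testRay,
    pixel::CVector::vector3(0.0f,   0.0f, -300.0f),
    pixel::CVector::vector3(0.0f,   0.0f,    0.0f),
    pixel::CVector::vector3(300.0f, 0.0f,    0.0f));

// expected: hit == (50, 0, -50) with T == 100; anything else points at the
// intersection code, a correct result points the problem back at the unprojection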

Usage

if (event.button.button == SDL_BUTTON_RIGHT)
{
    camera->setCameraActive();
    float mx = event.motion.x;
    float my = window->info.height - event.motion.y;

    // ray casting
    pixel::Ray ray;

    std::cout << "\n\n";

    // near
    pixel::CVector3 rayNear = pixel::CVector::unproject(
        pixel::CVector::vector2(mx, my),
        pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
        camera->camInfo.currentProjection,
        camera->camInfo.currentView,
        1.0f
    );
    // far
    pixel::CVector3 rayFar = pixel::CVector::unproject(
        pixel::CVector::vector2(mx, my),
        pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
        camera->camInfo.currentProjection,
        camera->camInfo.currentView,
        0.0f
    );

    // normalized direction results in the same behavior
    ray.origin = cameraPosition;
    ray.direction = pixel::CVector::normalize(rayFar - rayNear);

    std::cout << "Raycast \n";
    std::cout << "Mouse Position: " << mx << " - " << my << "\n";
    std::cout << "Camera Position: " << ray.origin.x << " - " << ray.origin.y << " - " << ray.origin.z << "\n";
    std::cout << "Ray direction: " << ray.direction.x << " - " << ray.direction.y << " - " << ray.direction.z << "\n";

    pixel::CVector3 vertOne   = pixel::CVector::vector3(0.0f,   0.0f, -300.0f);
    pixel::CVector3 vertTwo   = pixel::CVector::vector3(0.0f,   0.0f,    0.0f);
    pixel::CVector3 vertThree = pixel::CVector::vector3(300.0f, 0.0f,    0.0f);
    pixel::CVector3 vertFour  = pixel::CVector::vector3(300.0f, 0.0f, -300.0f);

    pixel::CVector3 rayHit  = pixel::Ray::intersection(ray, vertOne, vertTwo, vertThree);
    pixel::CVector3 rayHit2 = pixel::Ray::intersection(ray, vertThree, vertFour, vertOne);
    std::cout << "Ray hit: " << rayHit.x << " - " << rayHit.y << " - " << rayHit.z << "\n";
    std::cout << "Ray hit: " << rayHit2.x << " - " << rayHit2.y << " - " << rayHit2.z << "\n";
    std::cout << "--------------------\n";
    towerHouse->modelMatrix = pixel::CMatrix::translateMatrix(rayHit);
}

Output

Since I have never worked with glm::unproject or gluUnproject, I don't know what the output is supposed to look like, but I get results such as:

Ray direction: 0.109035 -0.0380502 0.0114562

That doesn't feel right to me, but checking my code against the other sources (mentioned above), I can't find the error.

The ray intersection works in some special cases (certain camera rotations), and I get intersections even when I don't click on the floor. The same goes for the outputs reporting backface hits and intersections outside the triangle.

All of these errors make it look like the main source of the problem is the unprojection.

Any hints in the right direction?

Best Answer

This is nowhere near a complete answer to the question, but it is too complex to explain in a comment or in chat.

First:

// near
pixel::CVector3 rayNear = pixel::CVector::raycast(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    1.0f // WRONG
);
// far
pixel::CVector3 rayFar = pixel::CVector::raycast(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    0.0f // WRONG
);

Near is 0.0 in window space and far is 1.0 (this depends on the depth range, but if you had changed the depth range you would already know that).
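Swapping the two depth values in the calls above gives the intended near/far pair. A minimal sketch, reusing the names from the question (and the unproject name used there) and assuming the default depth range [0, 1]:

// near plane: depth 0.0
pixel::CVector3 rayNear = pixel::CVector::unproject(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    0.0f
);
// far plane: depth 1.0
pixel::CVector3 rayFar = pixel::CVector::unproject(
    pixel::CVector::vector2(mx, my),
    pixel::CVector::shape2(window->info.internalWidth, window->info.internalHeight),
    camera->camInfo.currentProjection,
    camera->camInfo.currentView,
    1.0f
);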

In your raycasting function you have:

CVector3 result;
result.x = rayWorld.x / rayWorld.w;
result.y = rayWorld.y / rayWorld.w;
result.z = rayWorld.z / rayWorld.w;

There is a chance that w == 0.0, and when that happens the result is not a ray yet... it is a position in object space (not world space). Usually you will only ever work with well-behaved matrices, but if you look at a formal implementation of UnProject (...), you will notice that it handles the w == 0.0 case with a special return value or by setting a status flag.
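A guarded version of that perspective divide could look like the sketch below; the sentinel return value is only an assumption to illustrate the pattern, a status flag would work just as well.

// a minimal sketch: bail out (or set an error flag) instead of dividing by w == 0
// needs <cmath> for std::fabs
if (std::fabs(rayWorld.w) < 1e-7f)
{
    std::cout << "unproject: w is (near) zero, result is undefined\n";
    return pixel::CVector::vector3(0.0f, 0.0f, 0.0f); // hypothetical sentinel value
}

CVector3 result;
result.x = rayWorld.x / rayWorld.w;
result.y = rayWorld.y / rayWorld.w;
result.z = rayWorld.z / rayWorld.w;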

pixel::CVector3 vertOne   = pixel::CVector::vector3(0.0f,   0.0f, -300.0f);
pixel::CVector3 vertTwo   = pixel::CVector::vector3(0.0f,   0.0f,    0.0f);
pixel::CVector3 vertThree = pixel::CVector::vector3(300.0f, 0.0f,    0.0f);
pixel::CVector3 vertFour  = pixel::CVector::vector3(300.0f, 0.0f, -300.0f);

What coordinate space are these vertices in? Presumably object space, which means that if you cast a ray from the camera's eye point (defined in world space) through a point on the far plane and then test it against triangles in object space, more often than not you are going to miss. That is because the origin, scale and rotation of each space can differ. You need to transform these points into world space before attempting the test (your original code had a floor->modelMatrix that would work nicely for that purpose).
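In other words, something along these lines before the intersection test. This is only a sketch: multMat4Vec4 and floor->modelMatrix are taken from the question, while the vector4 factory and the manual w = 1.0f handling are assumptions.

// a minimal sketch: bring the object-space vertices into world space first
pixel::CVector4 v0World = pixel::CVector::multMat4Vec4(floor->modelMatrix,
    pixel::CVector::vector4(vertOne.x, vertOne.y, vertOne.z, 1.0f));
pixel::CVector4 v1World = pixel::CVector::multMat4Vec4(floor->modelMatrix,
    pixel::CVector::vector4(vertTwo.x, vertTwo.y, vertTwo.z, 1.0f));
pixel::CVector4 v2World = pixel::CVector::multMat4Vec4(floor->modelMatrix,
    pixel::CVector::vector4(vertThree.x, vertThree.y, vertThree.z, 1.0f));

// test the world-space ray against the world-space triangle
pixel::CVector3 rayHit = pixel::Ray::intersection(
    ray,
    pixel::CVector::vector3(v0World.x, v0World.y, v0World.z),
    pixel::CVector::vector3(v1World.x, v1World.y, v1World.z),
    pixel::CVector::vector3(v2World.x, v2World.y, v2World.z));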

Regarding c++ - Setting up a ray (origin, direction) and triangle intersection (without glm), see the corresponding question on Stack Overflow: https://stackoverflow.com/questions/20842623/
