
c++ - Raytracing plane intersection only draws part of the plane C++


This is a simple C++ ray tracer that finds the point where each ray from the camera hits a plane through the origin (no offset) with normal (1, 0, 0), and draws a checkerboard pattern based on the intersection point to give a sense of depth. However, the plane is not drawn as expected: it is sometimes cut off and/or distorted. What causes this behaviour, and why is this ray tracing code wrong? Or is more information needed to answer the question?
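
For reference, the intersection test in the code is based on the standard ray-plane formula (restated here; l0 is the ray origin, l the ray direction, p0 a point on the plane and n the plane normal):

t = dot(p0 - l0, n) / dot(l, n)

The hit point is l0 + t * l, and the ray (as opposed to the full line) hits the plane only when dot(l, n) is non-zero and t >= 0.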

With viewPort.x = -150, viewPort.y = 0, viewPort.z = 0, viewPort.px = 0, viewPort.py = 1, viewPort.width = 480 and viewPort.height = 360, it renders like this: Image of render

//viewPort is the camera, bMap is a bitmap, green and blue are colors
//Get sin and cos of camera angles
float sx, sy, cx, cy;
sx = sin(viewPort.px);
sy = sin(viewPort.py);
cx = cos(viewPort.px);
cy = cos(viewPort.py);

//Get angle of camera view in radians for height and width with 3:4 ratio
float fh, fw;
fh = 60 / 180 * 3.14159;
fw = fh * (1 + 1 / 3);

//Get all rays and their angles
for (int h = 0; h < viewPort.width; ++h)
{
    //Get position of starting ray
    Point3D tmp;
    tmp.x = viewPort.x + h * cy;
    tmp.y = viewPort.y + h * sx * sy;
    tmp.z = viewPort.z + h * (0 - sy * cx);

    //Get y angle of ray
    float sh, ch, dh;
    dh = viewPort.py - ((float)(h - viewPort.width / 2) / viewPort.width / 2) * fw;
    sh = sin(dh);
    ch = cos(dh);

    for (int v = 0; v < viewPort.height; ++v)
    {
        //Get x angle of ray
        float sv, cv, dv;
        dv = viewPort.px - ((float)(v - viewPort.height / 2) / viewPort.height / 2) * fh;
        sv = sin(dv);
        cv = cos(dv);

        //Get slope of ray
        float dx, dy, dz;
        dx = sv;
        dy = 0 - sh * cv;
        dz = ch * cv;

        if (dx != 0)
        {
            //Get point of intersection from normal, ray start, and ray angle
            Point3D nor = {1, 0, 0};
            Point3D lin = {dx, dy, dz};

            float denom = DotProduct(nor, lin);
            if (std::abs(denom) > 1e-6)
            {
                float mag = sqrt(tmp.x * tmp.x + tmp.y * tmp.y + tmp.z * tmp.z);

                //Point (0, 0, 0) is on plane. p0 - tmp
                Point3D p0l0 = {-tmp.x / mag, -tmp.y / mag, -tmp.z / mag};

                float t = DotProduct(p0l0, nor) / denom;

                //Check if ray is colliding (not line)
                if (t >= 0)
                {
                    //Determine square of checkerboard pattern
                    if ((int)((t * dy + tmp.y) / 16) % 2 == 0)
                    {
                        if ((int)std::abs((t * dz + tmp.z) / 16) % 2 == 0)
                        {
                            bMap.SetPixel(h, v, green);
                        }
                    }
                    else
                    {
                        if ((int)std::abs((t * dz + tmp.z) / 16) % 2 == 1)
                        {
                            bMap.SetPixel(h, v, blue);
                        }
                    }
                }
            }
        }
        //Offset ray position by one
        tmp.y += cx;
        tmp.z += sx;
    }
}

Best answer

The problem was caused by how the program determined the position and direction of each ray. Because of a misunderstanding of how ray tracing works, I had given every ray its own starting position. Instead, all rays should share the same origin (the camera position) and differ only in direction: each pixel is mapped to screen space, centred on the pixel and scaled by the field of view and aspect ratio, so that the directions span all four quadrants of screen space. Therefore (a sketch of the corrected setup follows the formulas below):

angle_x = (2 * ((screen_x + 0.5)/screen_width) - 1) * tan(pi/2 * fov/180) * aspect_ratio

angle_y = (1 - 2 * ((screen_y + 0.5)/screen_height)) * tan(pi/2 * fov/180)
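
A minimal sketch of the corrected ray setup, assuming the same viewPort, Point3D and DotProduct names as in the question and a camera looking down +x toward the plane x = 0 (the camera rotation from viewPort.px/viewPort.py is left out for brevity): every ray starts at the camera position, and only its direction changes per pixel, using the two formulas above.

//All rays share the camera position; only the direction varies per pixel
const float pi = 3.14159265f;
float fov = 60.0f;                                    //vertical field of view in degrees
float aspect_ratio = (float)viewPort.width / (float)viewPort.height;
float scale = tan(pi / 2 * fov / 180);                //tan of half the field of view

Point3D origin = { viewPort.x, viewPort.y, viewPort.z };

for (int h = 0; h < viewPort.width; ++h)
{
    for (int v = 0; v < viewPort.height; ++v)
    {
        //Map the centre of the pixel to [-1, 1] on both axes, then scale by fov and aspect ratio
        float angle_x = (2 * ((h + 0.5f) / viewPort.width) - 1) * scale * aspect_ratio;
        float angle_y = (1 - 2 * ((v + 0.5f) / viewPort.height)) * scale;

        //Direction in camera space: +x is the view direction, z the horizontal
        //screen axis and y the vertical one (an assumption for this sketch)
        Point3D dir = { 1, angle_y, angle_x };
        float mag = sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        dir.x /= mag; dir.y /= mag; dir.z /= mag;

        //Intersect (origin, dir) with the plane x = 0 as before:
        //t = DotProduct(p0 - origin, nor) / DotProduct(dir, nor), shade the pixel if t >= 0
    }
}

With a shared origin, the checkerboard test from the question can then be applied to the hit point origin + t * dir.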

Regarding c++ - Raytracing plane intersection only draws part of the plane C++, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48876867/
