
FuzzyPhoton

R - Raytracing Reference



Radiosity

Ray

Raycasting

Raytracing

Reflection

Refraction

Rendering

Rotation


Radiosity

Radiosity is a global illumination method used for displaying highly photorealistic views of 3-D scenes. This method considers every surface in the scene to be a reflector, emitter or part reflector-part emitter. We track light transport across the scene and find a global stable state, thus obtaining the perceived intensity of each surface.

In one solution, the lighting calculations are done in a series of "passes". Each surface is divided into small "patches". In the first pass, for each small surface patch, we calculate the amount of light reaching it directly from all emissive patches in the scene. This value, scaled by the reflectivity of the patch, plus the amount of light the patch itself emits, gives the total light produced by the patch, called the "excident light". This is stored as a property of the patch. In the subsequent passes, we perform exactly the same operations, but instead of using the intrinsically emitted light of the other patches, we use their excident light values. After a large number of passes, the excident light values closely approximate the actual intensities produced by multiple reflections of scattered light in the scene. The scene is now rendered using the excident light values as the perceived illuminances of the patches. The illuminances of points not coinciding with patch centres are calculated using linear interpolation.
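As a rough illustration, here is a minimal C++ sketch of one such pass, assuming a hypothetical Patch structure and a formFactor() routine that returns the fraction of light leaving one patch that reaches another (a real radiosity program computes these form factors from the scene geometry):

// One radiosity "gathering" pass: every patch collects the excident light of
// every other patch, as computed in the previous pass.

#include <vector>

struct Patch
{
    double emission;      // light emitted by the patch itself
    double reflectivity;  // fraction of incoming light reflected (0 to 1)
    double excident;      // light leaving the patch, updated each pass
};

// Assumed routine: fraction of light leaving patch j that reaches patch i.
double formFactor(std::size_t i, std::size_t j);

void gatherPass(std::vector<Patch>& patches)
{
    std::vector<double> incident(patches.size(), 0.0);

    // Gather the light arriving at each patch from all the other patches.
    for (std::size_t i = 0; i < patches.size(); ++i)
        for (std::size_t j = 0; j < patches.size(); ++j)
            if (i != j)
                incident[i] += formFactor(i, j) * patches[j].excident;

    // Excident light = own emission + reflected fraction of incident light.
    for (std::size_t i = 0; i < patches.size(); ++i)
        patches[i].excident = patches[i].emission
                            + patches[i].reflectivity * incident[i];
}

If the excident values are initialised to the patches' own emissions, the first call to gatherPass() accounts for direct lighting only, and each further call adds one more bounce of indirect light.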

This simple algorithm can produce highly realistic displays of scenes, but it has some flaws. Point lights may produce artifacts, and shiny or transparent surfaces are not well handled. Refraction, in particular, cannot be simulated satisfactorily. For this reason, a radiosity program is usually complementary to a raytracing program, with the radiosity calculations forming a pre-rendering stage.

Ray

A ray is the basic structure used in a raytracer. A ray in geometry is a segment of a straight line having a fixed point at one end and extending to infinity at the other. The fixed point is called the "origin" of the ray. Any point on a ray having origin P and direction vector U can be written as P + s * U, where s is a scalar value which can be thought of as the distance along the ray from the origin to the point. This structure may be used to model a ray of light, which has similar properties (it originates at a point and extends to infinity in a straight line). The primary function of a raytracer is to calculate the points of intersection of a ray with an object. This amounts to calculating the values of s corresponding to the intersection points. The usual method is to substitute the components of the vector P + s * U for the coordinate variables x, y, z in the surface equation, and solve the resulting equation in s. The nearest intersection point (i.e. the point first hit by the light ray) is given by the smallest positive value of s.
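For example, here is a C++ sketch of this substitution for a sphere with centre C and radius R, whose equation is |X - C|^2 = R^2 (Vec3 is a hypothetical vector type used only for illustration):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 origin; Vec3 dir; };   // dir is assumed to be a unit vector

// Substituting P + s*U into |X - C|^2 = R^2 gives a quadratic in s.
// Returns true and the smallest positive root s (the nearest hit), if any.
bool intersectSphere(const Ray& ray, Vec3 centre, double radius, double& s)
{
    Vec3 oc     = ray.origin - centre;
    double b    = 2.0 * dot(ray.dir, oc);
    double c    = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;        // a = 1 because dir has unit length
    if (disc < 0.0) return false;         // the ray misses the sphere

    double root = std::sqrt(disc);
    double s0 = (-b - root) / 2.0;        // nearer root
    double s1 = (-b + root) / 2.0;        // farther root
    if (s0 > 0.0) { s = s0; return true; }
    if (s1 > 0.0) { s = s1; return true; }   // origin is inside the sphere
    return false;                            // sphere lies behind the ray
}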

Raycasting

Raycasting is a hidden-surface removal method in which a ray is sent out from the eye position and its intersections with various objects in the scene are calculated. The intersection point at the least distance belongs to the visible surface. Raycasting techniques are also used for estimating volumes of objects.
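A minimal C++ sketch of this selection of the nearest hit is given below; Object, Ray and the intersect() routine are assumed interfaces along the lines of the Ray entry above:

#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin; Vec3 dir; };

struct Object
{
    // Returns true and the smallest positive ray parameter s if the ray hits.
    virtual bool intersect(const Ray& ray, double& s) const = 0;
    virtual ~Object() = default;
};

// The object hit at the least distance along the ray is the visible one.
const Object* nearestHit(const Ray& ray, const std::vector<const Object*>& scene)
{
    const Object* visible = nullptr;
    double nearest = std::numeric_limits<double>::infinity();

    for (const Object* obj : scene)
    {
        double s;
        if (obj->intersect(ray, s) && s < nearest)
        {
            nearest = s;       // a closer intersection was found
            visible = obj;     // this object now hides all the previous ones
        }
    }
    return visible;            // nullptr means the ray hits nothing
}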

See also: raycasting in "What is raytracing?"

Raytracing

Raytracing is a method for producing computer-generated displays of 3-D scenes. It is based on a semi-physical model of light reflection and refraction. Raytracing algorithms follow a ray of light as it bounces around the scene, collecting intensity contributions on the way. The basic algorithm handles hidden-surface removal, shadows, reflections and refractions. Raytracing methods are especially suited for rendering shiny or transparent objects.

There are two ways to raytrace a scene. Firstly, rays of light emanating from light sources may be followed until they reach the eye. This is known as "forward raytracing". Secondly, rays may be traced backwards from the eye until they reach a light source or escape into space. This is known as "backward raytracing". Backward raytracing avoids tracking the large number of rays that never reach the eye, hence it is more efficient and is the method predominantly used.
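As an illustration, a backward raytracer's central routine typically looks something like the C++ sketch below. The names findNearestHit(), localShade() and reflectedRay() are hypothetical stand-ins for scene intersection, direct lighting with shadow tests, and construction of the mirrored ray:

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin; Vec3 dir; };

struct Hit
{
    Vec3   point;          // intersection point
    Vec3   normal;         // surface normal at the hit
    double reflectivity;   // fraction of light mirrored by the surface
};

bool   findNearestHit(const Ray& ray, Hit& hit);   // assumed scene query
double localShade(const Hit& hit);                 // assumed direct lighting
Ray    reflectedRay(const Ray& ray, const Hit& hit);

const int    MAX_DEPTH  = 5;     // limit on the number of bounces followed
const double BACKGROUND = 0.0;   // intensity returned when a ray escapes

// Trace a ray backwards from the eye, collecting intensity contributions.
double trace(const Ray& ray, int depth)
{
    Hit hit;
    if (depth > MAX_DEPTH || !findNearestHit(ray, hit))
        return BACKGROUND;                      // ray escaped into space

    double intensity = localShade(hit);         // local (direct) contribution

    if (hit.reflectivity > 0.0)                 // follow the mirrored ray
        intensity += hit.reflectivity * trace(reflectedRay(ray, hit), depth + 1);

    // A transparent surface would add a similar recursive call for refraction.
    return intensity;
}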

See also: What is raytracing?

Reflection

Light is reflected at a surface according to the following rules:

1. The angle of incidence and the angle of reflection are equal.
2. The incident ray, reflected ray and surface normal all lie in the same plane.

The angle of incidence is defined as the angle made by the incident ray with the normal, and the angle of reflection is that made by the reflected ray with the normal. If the incident ray has unit direction vector u, and the surface normal is the unit vector n, then the direction of the reflected ray is the vector R, given by

R = u - 2(u.n)n.
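In C++ this is a one-line computation; Vec3 is again a hypothetical vector type:

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

// u: unit direction of the incident ray, n: unit surface normal.
Vec3 reflect(Vec3 u, Vec3 n)
{
    return u - 2.0 * dot(u, n) * n;    // R = u - 2(u.n)n
}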

Refraction

When a ray of light passes from one transparent medium to another, it is usually bent, or deviated, from the straight line path at the interface of the two media. This phenomenon is known as "refraction". The amount of bending depends on the indices of refraction of the media. Refraction obeys the following rules:

1. The incident ray, refracted ray and surface normal all lie in the same plane.
2. The angle of incidence i and angle of refraction r are related by Snell's Law: n = sin i / sin r, where n is the index of refraction of the refracting medium relative to that of the incident medium.

The angle of incidence is defined as the angle made by the incident ray with the normal, and the angle of refraction is that made by the refracted ray with the normal. If the incident ray has unit direction vector u, the surface normal is the unit vector n, the index of refraction of the incident medium (i.e. the medium through which the incident ray passes) is ni and the index of refraction of the refracting medium is nr then the direction of the refracted ray is the vector T, given by

T = (ni/nr)u + ((ni/nr)cos i - cos r)n,

where cos i is calculated as |u.n|, and cos r is calculated by plugging sin i = sqrt(1 - cos^2 i) into Snell's Law to obtain sin r, then taking cos r = sqrt(1 - sin^2 r). Note that a real value of cos r may not exist (sin^2 r may exceed 1). This indicates that "total internal reflection" is taking place.
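A C++ sketch of this calculation is given below (Vec3 is a hypothetical vector type; the normal n is assumed to point towards the incident side, so u.n is negative and cos i = -u.n):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns false when cos r has no real value (total internal reflection).
bool refract(Vec3 u, Vec3 n, double ni, double nr, Vec3& T)
{
    double eta   = ni / nr;
    double cosI  = -dot(u, n);                       // = |u.n|, n facing u
    double sin2R = eta * eta * (1.0 - cosI * cosI);  // sin^2 r via Snell's Law
    if (sin2R > 1.0) return false;                   // total internal reflection

    double cosR = std::sqrt(1.0 - sin2R);
    T = eta * u + (eta * cosI - cosR) * n;   // T = (ni/nr)u + ((ni/nr)cos i - cos r)n
    return true;
}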

Rendering

Rendering is the process of producing a computer-generated picture of a virtual model. 3-D rendering refers to the production of an image of a 3-dimensional scene as seen from a particular position and orientation. Various methods used in 3-D rendering include wireframe rendering, z-buffer rendering, scanline methods and raytracing.

Rotation

Entry under maintenance :), back up soon


Siddhartha Chaudhuri, 2002