
2007-07-18 13:00:00

6: WPF 3D vs. Ray Tracing

This entry is part 6 of a 12-part series on WPF 3D.

Not a Ray Tracer

To understand what something is, sometimes it is helpful to talk about what it is not.

WPF 3D is not a ray tracer.

The two common ways of rendering 3D objects are:

  1. Describe objects as polygons (broken into triangles).  Do a projection to 2D and use some clever shortcuts to simulate lighting.  This is the method used by accelerated graphics APIs, including WPF, Direct3D, OpenGL and their ilk.  All real-time games use this method or something similar to it.
  2. Describe objects using whatever form makes sense for each object.  For every pixel in the resulting image, fire one or more light rays, tracing the path of light to determine the proper coloring for that pixel.  An example of a ray tracer is POV-Ray.  Some movies, such as Ice Age, use this method.

It is generally true that ray tracing produces higher quality images.  So why do we bother with the other method when our grandfathers told us that anything worth doing is worth doing right?  Our grandfathers were not software developers.  Ray tracing is hundreds or thousands of times slower.  Given a choice between a good rendering in 10 milliseconds or a great rendering in 10 seconds, there are times we want to choose the former.

But what is this difference between good and great?  We always say that ray tracing produces higher quality images, but I've seen some pretty amazing stuff rendered by OpenGL or Direct3D.  Exactly how are ray-traced images of higher quality?


Remember in part 5 when I said that every polygon can be decomposed into triangles?  This is true, but not everything is a polygon.  What if you want to render a sphere?  You can only approximate a sphere with a triangle mesh.

Ray tracing doesn't have this problem.  A ray tracer will usually represent a sphere in a more exact fashion.  It knows how to intersect rays with the equation of a sphere.  The quality of the resulting image is limited only by the resolution of that image.
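That exactness falls straight out of algebra: substitute the ray equation into the sphere equation and solve a quadratic.  Here is a minimal sketch in C# (the `Hit` helper is my own illustration, not part of WPF; `Point3D` and `Vector3D` come from `System.Windows.Media.Media3D`):

```csharp
using System;
using System.Windows.Media.Media3D;

static class SphereIntersect
{
    // Returns the distance t along the ray (origin + t * direction) to the
    // nearest intersection with a sphere, or null if the ray misses.
    // Derived by substituting the ray equation into |p - center|^2 = r^2
    // and solving the resulting quadratic for t.
    public static double? Hit(Point3D origin, Vector3D direction,
                              Point3D center, double radius)
    {
        Vector3D oc = origin - center;
        double a = Vector3D.DotProduct(direction, direction);
        double b = 2.0 * Vector3D.DotProduct(oc, direction);
        double c = Vector3D.DotProduct(oc, oc) - radius * radius;

        double discriminant = b * b - 4.0 * a * c;
        if (discriminant < 0.0)
            return null;  // the ray misses the sphere entirely

        // Take the nearer of the two roots that lies in front of the origin.
        double sqrtD = Math.Sqrt(discriminant);
        double t = (-b - sqrtD) / (2.0 * a);
        if (t < 0.0)
            t = (-b + sqrtD) / (2.0 * a);
        return (t < 0.0) ? (double?)null : t;
    }
}
```

No triangles anywhere in that code, which is the point: the silhouette is perfect at any image resolution.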

Shadows and Reflections

Accelerated graphics APIs like OpenGL do not automatically draw shadows or reflections.  There just isn't any good way to calculate shadows when you're trying to render 60 frames per second.  Reflections are even worse.

But both of these things are quite simple for a ray tracer.  Want to find shadows?  Fire a ray and find the 3D point where it intersects.  Then fire a ray from that point to the light source.  If the second ray doesn't hit the light source directly, you are in a shadow.
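That two-ray recipe is only a few lines of code.  A sketch (`FindNearestHit` is a hypothetical stand-in for whatever intersection routine your tracer uses):

```csharp
using System.Windows.Media.Media3D;

// Sketch of the shadow test: fire a second ray from the hit point
// toward the light and see whether anything blocks it.
bool InShadow(Point3D hitPoint, Point3D lightPosition)
{
    Vector3D toLight = lightPosition - hitPoint;
    double distanceToLight = toLight.Length;
    toLight.Normalize();

    // Nudge the origin slightly along the ray so we don't re-hit
    // the surface we just came from ("shadow acne").
    Point3D origin = hitPoint + 0.001 * toLight;

    // FindNearestHit is hypothetical: it returns the distance to the
    // nearest object hit by the ray, or null if nothing is hit.
    double? t = FindNearestHit(origin, toLight);
    return t.HasValue && t.Value < distanceToLight;
}
```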

The quintessential demo for a ray tracer is a chrome sphere casting a shadow on the floor.


Remember in part 2 when I was fussing about transparency?  Accelerated graphics APIs need you to sort your transparent objects back to front because they use a shortcut called a Z-buffer.  Ray tracing can do transparency more accurately because it's not in such a hurry.  A ray tracer can take the time to intersect every ray with every object in the scene, just to make sure it doesn't miss anything.
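That "intersect every ray with every object" loop is the heart of a ray tracer, and it is why draw order never matters.  A sketch, assuming a hypothetical `SceneObject` type with an `Intersect` method:

```csharp
using System.Collections.Generic;
using System.Windows.Media.Media3D;

// Sketch: test the ray against every object and keep the closest hit.
// 'SceneObject' and its Intersect method are hypothetical.
SceneObject FindNearest(Point3D origin, Vector3D direction,
                        IEnumerable<SceneObject> scene,
                        out double nearestT)
{
    SceneObject nearest = null;
    nearestT = double.PositiveInfinity;

    foreach (SceneObject obj in scene)
    {
        double? t = obj.Intersect(origin, direction);
        if (t.HasValue && t.Value < nearestT)
        {
            nearestT = t.Value;
            nearest = obj;
        }
    }
    return nearest;  // null means the ray escaped the scene
}
```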

A Project Idea

As I write this, I look out the window to my left and see the University of Illinois campus.  The fall semester will start in six weeks or so.  I assume that one of this fall's classes will be "Introduction to Computer Graphics".  I further assume that there will be approximately 100 students in that class, and that each of them will be asked to write a ray tracer.

A little web research suggests that in the United States there are 479 colleges and universities that offer a computer science program.  Extending my assumptions to the national level, I figure all of them have a computer graphics course with an average of 50 students every semester.  So as a society, each year we are producing over 47,000 new ray tracer implementations.

And that's just here in the United States.  When I think about all the other CS students worldwide, I shudder to think how many homeless ray tracers are being born every year.  (Please, folks, have your ray tracer spayed or neutered.)

As long as we're writing so many ray tracers, why not write one for WPF?

Because WPF 3D is built on a "retained scene" model, a Viewport3D object has all the information necessary for rendering.  We can write a ray tracer which takes a Viewport3D and returns an image which was constructed by ray tracing its contents.  Basically, it's the same task as writing any other ray tracer, except instead of creating a new scene description language, you just use the one provided by WPF.

Remember of course that in WPF, everything is a triangle.  So don't expect photorealistic rendering of a chrome sphere unless your triangles are really small.
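If you want to see just how small "really small" needs to be, tessellate a sphere yourself and crank up the resolution.  A sketch using the usual latitude/longitude scheme (the helper and its parameter names are mine; `MeshGeometry3D` is WPF's):

```csharp
using System;
using System.Windows.Media.Media3D;

// Sketch: approximate a sphere with a latitude/longitude triangle mesh.
// More slices and stacks mean smaller triangles and a smoother silhouette.
MeshGeometry3D MakeSphere(double radius, int slices, int stacks)
{
    var mesh = new MeshGeometry3D();

    // Generate a grid of vertices over the sphere's surface.
    for (int stack = 0; stack <= stacks; stack++)
    {
        double phi = Math.PI * stack / stacks;           // 0..pi
        for (int slice = 0; slice <= slices; slice++)
        {
            double theta = 2 * Math.PI * slice / slices; // 0..2*pi
            mesh.Positions.Add(new Point3D(
                radius * Math.Sin(phi) * Math.Cos(theta),
                radius * Math.Cos(phi),
                radius * Math.Sin(phi) * Math.Sin(theta)));
        }
    }

    // Stitch each grid cell into two triangles.
    for (int stack = 0; stack < stacks; stack++)
    {
        for (int slice = 0; slice < slices; slice++)
        {
            int a = stack * (slices + 1) + slice;
            int b = a + slices + 1;
            mesh.TriangleIndices.Add(a);
            mesh.TriangleIndices.Add(b);
            mesh.TriangleIndices.Add(a + 1);
            mesh.TriangleIndices.Add(b);
            mesh.TriangleIndices.Add(b + 1);
            mesh.TriangleIndices.Add(a + 1);
        }
    }
    return mesh;
}
```

No matter how big you make `slices` and `stacks`, the silhouette is still a polygon.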

But I still think a WPF ray tracer would actually be useful.  If you already have a Viewport3D anyway, it might be nice to be able to paste it into a WPF ray tracer and get a rendering which has reflections and shadows.

I'll get you started.  The shell of your ray tracer will probably look something like this:

public BitmapSource RayTrace(Viewport3D vp)
{
    int width = (int)vp.Width;
    int height = (int)vp.Height;

    // Three bytes per pixel: R, G, B.
    byte[] pixelData = new byte[width * height * 3];

    for (int c = 0; c < width; c++)
    {
        for (int r = 0; r < height; r++)
        {
            Color clr = GetColorForPixel(vp, c, r);

            pixelData[(r * width + c) * 3 + 0] = clr.R;
            pixelData[(r * width + c) * 3 + 1] = clr.G;
            pixelData[(r * width + c) * 3 + 2] = clr.B;
        }
    }

    BitmapSource bmpSource = BitmapSource.Create(
        width, height,
        96, 96,
        PixelFormats.Rgb24,
        null, pixelData, width * 3);

    return bmpSource;
}
Now you basically just need to implement GetColorForPixel().  One of the first lines of that method is probably going to look like this:

RayMeshGeometry3DHitTestResult hit =
    VisualTreeHelper.HitTest(vp, new Point(c, r))
        as RayMeshGeometry3DHitTestResult;

Also, in RayTrace(), before you start the loop, you probably want to walk through all the children of the Viewport3D and find all the lights.  This will save you the cost of doing that search once for every pixel.
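That light-gathering pass might look something like this sketch.  It assumes the common case where each `Viewport3D` child is a `ModelVisual3D` whose `Content` is a `Light` or a `Model3DGroup`; for brevity it ignores lights hiding in nested `ModelVisual3D` children:

```csharp
using System.Collections.Generic;
using System.Windows.Controls;
using System.Windows.Media.Media3D;

// Sketch: gather all the lights in a Viewport3D up front, so the
// per-pixel loop doesn't have to search the tree repeatedly.
List<Light> FindLights(Viewport3D vp)
{
    var lights = new List<Light>();
    foreach (Visual3D child in vp.Children)
    {
        var mv = child as ModelVisual3D;
        if (mv != null)
            CollectLights(mv.Content, lights);
    }
    return lights;
}

void CollectLights(Model3D model, List<Light> lights)
{
    var light = model as Light;
    if (light != null)
        lights.Add(light);

    // Model3DGroups can nest, so recurse through their children.
    var group = model as Model3DGroup;
    if (group != null)
        foreach (Model3D child in group.Children)
            CollectLights(child, lights);
}
```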