As with previous shapes, to render a triangle we need to be able to calculate whether a ray intersects it. There are lots of ray-triangle intersection algorithms out there; the one I have chosen is the Möller–Trumbore intersection algorithm, as it's fairly quick and also gives us the intersection point in barycentric coordinates, which will come in useful later on. I won't go into the details of the algorithm here; Scratchapixel has a good discussion of it.
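For reference, the overall shape of the algorithm looks something like this - a sketch with illustrative types rather than the project's actual classes:

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 Cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
};

struct Hit { double d, u, v; };  // distance along the ray plus barycentric u, v

// Möller–Trumbore: returns the distance d and barycentric (u, v) of the
// intersection between the ray O + d*D and triangle ABC, or nothing on a miss.
std::optional<Hit> IntersectTriangle(const Vec3& O, const Vec3& D,
                                     const Vec3& A, const Vec3& B, const Vec3& C) {
    const double eps = 1e-9;
    Vec3 AB = B - A, AC = C - A;                    // two edges of the triangle
    Vec3 p = D.Cross(AC);
    double det = AB.Dot(p);
    if (std::fabs(det) < eps) return std::nullopt;  // ray parallel to triangle plane
    double invDet = 1.0 / det;
    Vec3 t = O - A;
    double u = t.Dot(p) * invDet;
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = t.Cross(AB);
    double v = D.Dot(q) * invDet;
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double d = AC.Dot(q) * invDet;
    if (d < eps) return std::nullopt;               // intersection behind the ray origin
    return Hit{d, u, v};
}
```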
The one other thing we need to calculate is the surface normal. This is easy to do - we have the three points defining our triangle's vertices, and from these we can calculate two vectors describing two of the sides. (These are needed for the intersection algorithm anyway.) The cross product of two vectors gives a vector perpendicular to both - this gives us the normal we need. With this we can then add some triangles to our scene:
A mesh is basically just a collection of triangles, and that's how I've implemented it. I haven't yet attempted to optimise the storage, so there is a lot of repetition: each triangle shares some points with other triangles, and these are duplicated rather than stored just once. Intersections with a mesh are straightforward - I can just loop over all the triangles inside the mesh and find the nearest intersection among them, if there is one. This can be slow, however, as some meshes contain lots of triangles.
One easy optimisation is to pre-calculate a bounding box for all of the triangles. Luckily we already have an axis-aligned box that we can use for this purpose. Finding the bounding box for a set of points is straightforward because our box is defined in terms of a minimum and maximum point. We therefore loop through all the points in our mesh, find the minimum and maximum values for each axis and then use them to create the box. Our intersection test can then test the box first, and only if it intersects do we then loop through the triangles.
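The bounding box calculation might look something like this (Point and Box here are stand-ins for the project's types):

```cpp
#include <algorithm>
#include <vector>

struct Point { double x, y, z; };

// An axis-aligned box defined by its minimum and maximum corners.
struct Box { Point min, max; };

// Build the bounding box for a set of points by tracking the per-axis extremes.
// Assumes the mesh has at least one point.
Box BoundingBox(const std::vector<Point>& points) {
    Box b{points.front(), points.front()};
    for (const Point& p : points) {
        b.min.x = std::min(b.min.x, p.x); b.max.x = std::max(b.max.x, p.x);
        b.min.y = std::min(b.min.y, p.y); b.max.y = std::max(b.max.y, p.y);
        b.min.z = std::min(b.min.z, p.z); b.max.z = std::max(b.max.z, p.z);
    }
    return b;
}
```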
So we have a mesh class - where do we get the mesh data from?
Wavefront .obj files are a common and simple way to describe 3D models. Whilst they can contain all manner of geometry I'm only going to worry about simple vertices and faces for now. The files are text and contain lists of vertices followed by faces, e.g.:
v 0.000000 0.000000 0.000000
v 0.000000 0.000000 1.000000
v 0.000000 1.000000 1.000000
f 1 2 3
The v lines give the vertex coordinates, in this case \( \big(0, 0, 0 \big) \), \( \big(0, 0, 1 \big) \) and \( \big(0, 1, 1 \big) \). The f lines represent faces made up of three or more vertices - the numbers reference the previously defined vertices, so in this case the f line represents a triangle made up from the three vertices.
The f lines can represent polygons with any number of sides; however, it's easy to break them up into triangles by creating a fan of triangles around the first vertex in the face.
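A sketch of the fan triangulation, assuming faces are stored simply as lists of vertex indices:

```cpp
#include <array>
#include <vector>

// Break an n-sided face into n-2 triangles fanning out from the first
// vertex: (v0, v1, v2), (v0, v2, v3), (v0, v3, v4), ...
std::vector<std::array<int, 3>> Triangulate(const std::vector<int>& face) {
    std::vector<std::array<int, 3>> triangles;
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
        triangles.push_back({face[0], face[i], face[i + 1]});
    return triangles;
}
```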
The only problem with the above is that the vertices could describe an object of any size in any position in 3D space. We need some way to move the object to a position of our choosing. An easy way to do that is with transformation matrices.
We can represent various transformations such as rotations, scaling and translations with a 4x4 matrix: multiplying a 4D vector by the matrix gives us a new 4D vector representing the transformed vector/point. Turning our vectors and points into 4D homogeneous coordinates is simple - we add an extra component that is 1 for a point or 0 for a vector:
The 0 for vectors effectively ignores any translation in our transformation matrix - as a vector represents a direction only it makes no sense to translate it. Points represent a position so translating them does have a meaning, and the 1 keeps the translation components in place. To convert back after the transform we just drop the last component.
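A sketch of the transform, assuming the row-vector convention used in this post (the point is multiplied on the left of the matrix, with the translation components in the bottom row); the types here are illustrative:

```cpp
#include <array>

using Matrix4 = std::array<std::array<double, 4>, 4>;
struct Triple { double x, y, z; };

// Multiply the row vector (x, y, z, w) by a 4x4 matrix and drop the last
// component: w = 1 for points (translation applies), w = 0 for vectors
// (translation is ignored).
Triple Transform(const Triple& t, const Matrix4& m, double w) {
    return {t.x * m[0][0] + t.y * m[1][0] + t.z * m[2][0] + w * m[3][0],
            t.x * m[0][1] + t.y * m[1][1] + t.z * m[2][1] + w * m[3][1],
            t.x * m[0][2] + t.y * m[1][2] + t.z * m[2][2] + w * m[3][2]};
}
```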
Combining transformations into a single matrix is also straightforward - we simply multiply them together. The order of the multiplication is important: with the row-vector convention used here (note the translation components sit along the bottom row of the matrix below), the transformations apply in the order they are multiplied. For example to apply a rotation \( \mathsf{R} \) followed by a translation \( \mathsf{T} \) we would use the matrix \( \mathsf{RT} \).
I've added a Matrix class to represent a 4x4 matrix. As well as operators for standard matrix operations such as add and multiply, it has methods to transform points and vectors, plus static methods to generate matrices representing common transformations:
Translation is the identity matrix with the translation in \( x \), \( y \) and \( z \) axes along the bottom row of the matrix:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ x & y & z & 1 \end{bmatrix} \]
For scaling we place the factor to scale by for each axis along the diagonal (1 leaves the size the same, larger values increase it, smaller values reduce it), with a 1 in the bottom right:
\[ \begin{bmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
We can define a matrix for rotating about each axis, with respect to the origin. If our angles are in radians then we have the following three matrices for rotating \( \theta \) radians about \( x \), \( y \) and \( z \) respectively:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
\[ \begin{bmatrix} \cos\theta & 0 & -\sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
\[ \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
I've added a method to our Mesh class that can 'normalise' it. By normalise I mean:
Once it is normalised it's much easier to apply further transformations to position the mesh in the scene. We are now in a position to do an obligatory render of the classic Utah Teapot; here I've rendered a mesh of the teapot from http://goanna.cs.rmit.edu.au/~pknowles/models.html:
Given the teapot is a mesh of triangles it's not surprising that it looks like a mesh of triangles. It would be nicer if we had a smooth teapot. One way to do that is to define normals at each of the vertices and then interpolate them over the triangle. .obj files support vertex normals via lines beginning vn. Our previous example with vertex normals might look like this:
v 0.000000 0.000000 0.000000
v 0.000000 0.000000 1.000000
v 0.000000 1.000000 1.000000
vn 1.000000 1.000000 0.000000
vn 0.000000 1.000000 1.000000
vn 1.000000 0.000000 1.000000
f 1//1 2//2 3//3
The face now points to both the vertex and the normal at the vertex. (The missing number between the slashes would be the texture coordinate for the vertex; this will come in useful when I implement texture mapping.) One thing to note is that the normals aren't necessarily normalised in .obj files, so we need to do that ourselves.
How do we interpolate these normals though? As well as giving the point on the ray of the intersection the Möller–Trumbore algorithm also gives us the point on the surface in terms of the barycentric coordinates \( u \) and \( v \). A point \( P \) on the triangle with corners \( A \), \( B \) and \( C \) can be described as:
\[ P = wA + uB + vC \]
The barycentric coordinates sum to one, \( u + v + w = 1 \), so we have:
\[ P = (1 - u - v)A + uB + vC \]
I.e. the three barycentric coordinates for a point describe how close the point is to each of the three corners. We can use this when interpolating our vertex normals: if the point is closer to one of the vertices then its normal will have a greater contribution to the interpolation. We can just swap our vertex normals into the above equation:
\[ \mathbf{N} = (1 - u - v)\mathbf{N_A} + u\mathbf{N_B} + v\mathbf{N_C} \]
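In code the interpolation might look something like this (Vec3 is an illustrative stand-in type; note the result needs re-normalising after blending):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Interpolate the three vertex normals using the barycentric (u, v) returned
// by the intersection test; the weight for the first vertex is 1 - u - v.
Vec3 InterpolateNormal(const Vec3& nA, const Vec3& nB, const Vec3& nC,
                       double u, double v) {
    double w = 1.0 - u - v;
    Vec3 n{w * nA.x + u * nB.x + v * nC.x,
           w * nA.y + u * nB.y + v * nC.y,
           w * nA.z + u * nB.z + v * nC.z};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};  // re-normalise the blended result
}
```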
Plugging all this into our renderer gives us a nice smooth teapot:
We can render more complex objects too, such as this excellent Cthulhu Statuette by Samize:
The code so far can be found on GitHub. Next I'll look at adding some texture to the meshes.
I accidentally added transmittance to the ray tracer when developing reflections: instead of using the reflected direction for the reflected ray I mistakenly used the incoming ray's direction:
To add transmittance I've added a transmittance value to our materials; it's similar to reflectivity, but instead describes the amount of light that passes through the object. We calculate the contribution to the overall colour from transmittance by tracing a ray into the object. When this ray intersects with the object again we have our exit point. We then trace a ray from this exit point through the scene, as if it were a reflected ray, to get the colour from transmittance. We can now add a partially transparent sphere to our scene:
However in real life when a light ray enters a transparent object it usually refracts to a new angle. The size of this refraction is governed by Snell's Law. This can be expressed in vector form as:
\[ \hat{\mathbf{R}} = r \hat{\mathbf{L}} + \hat{\mathbf{N}} (r c - \sqrt{1 - r^2(1 - c^2)}) \]
Where \( \hat{\mathbf{L}} \) is the normalised direction of the incoming ray, \( \hat{\mathbf{N}} \) the surface normal, \( r \) the ratio \( n_1 / n_2 \) of the refractive indices either side of the surface, and \( c = -\hat{\mathbf{N}} . \hat{\mathbf{L}} \).
Taking this direction and the intersection point gives us our ray into the object. We then reuse the equation on the exit point with the refractive indices reversed, to find the direction of the exit ray. This gives us more realistic transparent spheres:
The sphere on the right has a low refractive index (1.05) so the light doesn't bend that much inside. The sphere on the left has a higher refractive index (1.33) that causes the light to bend enough to flip the image seen inside the sphere.
What if the value under the square root above is negative? Then we have total internal reflection - the ray does not transmit at all, but instead reflects. We can handle this in a simple way: rather than tracing the transmitted ray, we add the amount of transmittance to the reflectivity and then proceed with the reflection as normal.
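A sketch of the refraction calculation, including the total internal reflection check (types and names are illustrative):

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Snell's law in vector form: L is the (normalised) incoming direction, N the
// surface normal, n1/n2 the refractive indices either side of the surface.
// Returns nothing when the term under the square root is negative, i.e.
// total internal reflection.
std::optional<Vec3> Refract(const Vec3& L, const Vec3& N, double n1, double n2) {
    double r = n1 / n2;
    double c = -N.Dot(L);
    double k = 1.0 - r * r * (1.0 - c * c);
    if (k < 0.0) return std::nullopt;  // totally internally reflected
    return L * r + N * (r * c - std::sqrt(k));
}
```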
To test total internal reflection I needed a new shape, a box.
The type of box I decided to add was an axis-aligned box, i.e. one where the sides of the box lie along the three axes \( x \), \( y \) and \( z \). Whilst this is fairly restrictive to use in a scene, it will be useful later on to optimise things when we have more complex shapes to work out intersections for.
One way to calculate the intersection between a ray and a box is to use the slabs method. We think of the box as three overlapping slabs, where a slab is the space between the two parallel planes that make up opposite sides of the box.
For each slab we calculate the value of \( d \) from our ray equation \( \mathbf{R} = \mathbf{O} + d \hat{\mathbf{D}} \) that corresponds to the intersection with the near and far side. Across the three axes we take the largest near intersection and the smallest far intersection. (If the ray is parallel to an axis then we just check that its origin in that axis fits inside the bounds of the box; if not it cannot intersect.) Then if the final near intersection has a greater value of \( d \) than the final far intersection, the ray has missed the box. Also if \( d < 0 \) for the far intersection then both points are behind the origin of the ray and it misses the box. (Related to this - if \( d < 0 \) for near but not far then the near point is behind the origin but the far point isn't, i.e. the ray starts inside the box.)
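The slab test sketched in code (illustrative types; the real implementation would also work out the surface normal as described next):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <optional>
#include <utility>

struct Vec3 { double x, y, z; };

// Slab test for an axis-aligned box [min, max]: intersect the ray with the
// near/far plane pair on each axis, keeping the largest near and smallest
// far value of d. Returns the entry/exit distances, or nothing on a miss.
std::optional<std::pair<double, double>> IntersectBox(
    const Vec3& origin, const Vec3& dir, const Vec3& min, const Vec3& max) {
    double tNear = -std::numeric_limits<double>::infinity();
    double tFar = std::numeric_limits<double>::infinity();
    const double o[3]{origin.x, origin.y, origin.z};
    const double d[3]{dir.x, dir.y, dir.z};
    const double lo[3]{min.x, min.y, min.z}, hi[3]{max.x, max.y, max.z};
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(d[i]) < 1e-12) {
            // Ray parallel to this slab: its origin must lie inside the bounds.
            if (o[i] < lo[i] || o[i] > hi[i]) return std::nullopt;
            continue;
        }
        double t1 = (lo[i] - o[i]) / d[i], t2 = (hi[i] - o[i]) / d[i];
        tNear = std::max(tNear, std::min(t1, t2));
        tFar = std::min(tFar, std::max(t1, t2));
    }
    if (tNear > tFar || tFar < 0.0) return std::nullopt;  // miss, or box behind ray
    return std::make_pair(tNear, tFar);  // tNear < 0 means the ray starts inside
}
```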
To ray trace the box we also need to find out exactly where on the box the ray hits. To do this we can plug in our value of \( d \) (the near one, unless the ray starts inside the box, in which case the far one) to find the intersection point. One of the components of the intersection point must match one of the components of a side of the box - we can use this to work out which side was hit, and therefore the surface normal. The ray could of course intersect on an edge or corner - in this case I have just returned the first normal found, rather than having the normal point out from the edge or corner.
With all this we can now place a transparent box in the sky and get some total internal reflection:
A couple of things to note:
The code so far can be found on GitHub. There is more to do to make these materials look more realistic, such as changing the amount of reflection based upon viewing angle. However I'm bored with transparency for the moment so I'm going to move on to teapots instead...
The Phong reflection model extends Lambertian reflectance with a term for specular highlights, the sharp highlights you see on shiny surfaces. The equation for Phong reflectance is:
\[ C = k_A C_A + \sum_{i} (k_D \hat{\mathbf{L}} . \hat{\mathbf{N}} + k_S (\hat{\mathbf{R}} . \hat{\mathbf{V}})^\alpha ) C_{L,i} I_i \]
Where \( C \) is the final colour, \( k_A \), \( k_D \) and \( k_S \) the ambient, diffuse and specular colours of the material, \( C_A \) the colour of the ambient light, \( \hat{\mathbf{L}} \) the direction from the point to light \( i \), \( \hat{\mathbf{N}} \) the surface normal, \( \hat{\mathbf{R}} \) the reflection of \( \hat{\mathbf{L}} \) about the normal, \( \hat{\mathbf{V}} \) the direction from the point to the viewer, \( \alpha \) the shininess of the material, and \( C_{L,i} \) and \( I_i \) the colour and intensity of light \( i \).
All vectors are normalised and the sum is over all lights in the scene. I've kept the specular and diffuse intensities for the lights the same to keep things simple. The reflection vector \( \hat{\mathbf{R}} \) is straightforward to calculate:
\[ \hat{\mathbf{R}} = 2 ( \hat{\mathbf{L}} . \hat{\mathbf{N}}) \hat{\mathbf{N}} - \hat{\mathbf{L}} \]
Note that our materials now have three colours - one for the ambient light, one for the diffuse light and one for the specular light. Having separate colours for the different light contributions enables us to create materials that, for example, are blue with a white specular highlight.
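A single-channel sketch of one light's Phong contribution, using scalar constants rather than the per-channel colours described above (names and types are illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// One light's contribution under the Phong model: kD and kS weight the
// diffuse and specular terms, alpha controls highlight sharpness. L points
// from the surface to the light, V to the viewer; all inputs normalised.
// Both terms are clamped at zero so lights behind the surface add nothing.
double PhongTerm(const Vec3& L, const Vec3& N, const Vec3& V,
                 double kD, double kS, double alpha) {
    double diffuse = std::max(0.0, L.Dot(N));
    Vec3 R = N * (2.0 * L.Dot(N)) - L;  // reflection of L about N
    double specular = std::pow(std::max(0.0, R.Dot(V)), alpha);
    return kD * diffuse + kS * specular;
}
```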
Creating a new ShadingModel subclass using the formula above gives us nice specular highlights on our spheres:
Where would a ray tracer be without reflections? To add them to the ray tracer is fairly straightforward as we've already done the hard work. We can give our materials a reflectivity value between 0 and 1 which describes the amount of light that comes from a reflection. Therefore the amount that comes from direct lighting will be 1 - reflectivity. To work out the light coming from the reflected direction we can reuse our ray tracing algorithm, but this time we place our camera at the point on the surface and make the direction it is pointing the reflected ray \( \hat{\mathbf{R}} \). Of course the point showing in the reflection might also be reflective so we would then have to perform the same procedure recursively.
We can now render the obligatory 'reflective spheres over a chequerboard' picture that all ray tracers should produce:
There is one problem with the algorithm above - it could get into an infinite loop. If we had a scene with a camera inside a box of perfectly reflective surfaces then a ray would bounce around forever. We can prevent this by having a maximum recursion depth. The effect of the recursion depth can be demonstrated by comparing the same scenes with different values:
The first image has no reflections at all. The second has just one reflection, meaning that whilst the balls themselves are no longer grey, the reflections of them are. As we increase the depth, the multiple reflections slowly fill in the picture.
Another optimisation would be to stop reflecting when the contribution is below some small value. I haven't done this currently.
The code so far can be found on GitHub. Next I'll look at adding support for transparent objects.
When modelling Lambertian reflectance I made the lights have constant intensity, i.e. the intensity at the point on the surface \( I \) was always 1. In reality the intensity of a light diminishes with distance from the source. To model this all the lights now take an Attenuation instance, which is used to determine the intensity at a distance from the light. Rather than have separate subclasses for each type of attenuation I've made the Attenuation type take a function that gives the intensity; there are then static methods to create the various different attenuations. This seemed a simpler approach given the attenuation is just a single, simple function.
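A sketch of that function-based design - the factory names here are my guesses at the idea rather than the actual code, and I've left out the linear variant as its exact formula isn't given here:

```cpp
#include <functional>
#include <utility>

// Attenuation wraps a single function from distance to intensity; the static
// factories build the variants without needing a subclass for each one.
class Attenuation {
public:
    static Attenuation Constant() {
        return Attenuation([](double) { return 1.0; });
    }
    static Attenuation Inverse() {
        return Attenuation([](double d) { return 1.0 / d; });
    }
    static Attenuation InverseSquare() {
        return Attenuation([](double d) { return 1.0 / (d * d); });
    }

    double GetIntensity(double distance) const { return fn_(distance); }

private:
    explicit Attenuation(std::function<double(double)> fn) : fn_(std::move(fn)) {}
    std::function<double(double)> fn_;
};
```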
We'd expect the intensity to fall off as an inverse square law. As well as creating an inverse square attenuation I've also added linear and inverse. We can demonstrate their difference by rendering three spot lights just above a plane; from left to right, linear, inverse and inverse square:
Currently we determine the colour of our surface at a given point by just taking a single colour. I've now made this a bit more general with a Material class. This is basically a function that gives the colour at a given point. Currently the materials are simple and don't know anything about the geometry of the object they're for - I can see this changing in the future, e.g. mapping a texture to a sphere will need knowledge of the size of the sphere. There are currently three materials:

- Solid. Just a single colour, essentially as before.
- Chequerboard. A repeating chequerboard of two colours. Determining the colour at a given point is simply a matter of dividing each of the \( x \), \( y \) and \( z \) co-ordinates (our chequerboard is really 3D, with cubes rather than squares...) by the size of the squares (cubes!) to get the number of the square in each direction. We then add these three numbers - if the total is even we return one colour, if it's odd we return the other.
- SkyGradient. I've also made the background for the scene a material: if there are no intersections we project the ray a long way into the distance and use that co-ordinate with a material to work out the background colour. SkyGradient is designed to be used for the background. It has two colours, one at the horizon and one at the zenith. To work out the colour at a given point we work out the angle from the horizon and interpolate between the two colours based on the size of the angle.

Putting these materials in our scene gets us a bit closer to the obligatory 'reflective spheres on a chequerboard' that all ray tracers must render:
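The chequerboard parity test sketched in code (the function name is illustrative; it just picks between the two colours):

```cpp
#include <cmath>

// 3D chequerboard: divide each co-ordinate by the square (cube) size, sum the
// three cell numbers and pick a colour by the parity of the total. Returns
// true for one colour and false for the other. std::floor keeps negative
// co-ordinates consistent with positive ones.
bool ChequerboardIsEven(double x, double y, double z, double size) {
    long total = static_cast<long>(std::floor(x / size)) +
                 static_cast<long>(std::floor(y / size)) +
                 static_cast<long>(std::floor(z / size));
    return total % 2 == 0;
}
```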
Now I'm creating more images I wanted to make it easier to save them - before, I was taking a screen grab of the UI and manually cropping it to just the image. Nana has support for PNG files; however, you need to include some third party code, namely libpng and zlib. (libpng uses zlib for compression.)
There are pre-built static libraries available on the Nana website however I thought I would build them from scratch myself. Partly to give myself a bit more experience with building C++ stuff, partly to (hopefully!) make it easier to build on other platforms and partly to avoid the compiler warnings I got about missing .pdb files when I did use the static libraries...
I therefore downloaded the source for them and put it into a ThirdParty folder under the source tree. I also decided to move the Nana code into the same folder, and upgrade to Visual Studio 2017 whilst I was at it. I've tried not to change the downloaded source at all if possible, to make it easier to upgrade to newer versions; this means I've created new project files for each library. Here are some notes on what I had to do to get everything building:
Downloaded from http://zlib.net/. I've created a custom project file at zlib.vcxproj based on contrib\vstudio\vc14\zlibstat.vcxproj with the following changes:

- Added the ZLIB_WINAPI preprocessor definition to stop linker errors. (See http://stackoverflow.com/questions/5424549/unresolved-externals-despite-linking-in-zlib-lib)
- Renamed the output library from zlibstat to just zlib.
- Removed the /MACHINE setting from the librarian additional options.
setting from the librarian additional options.Downloaded from http://www.libpng.org/pub/png/libpng.html. I've created a custom project file at libpng.vcxproj
based on projects\vstudio\libpng\libpng.vcxproj
with the following changes:
zlib.props
- I define the path to the zlib source in the project file instead.pnglibconf.h
from \scripts\pnglibconf.h.prebuilt
. There is a project to do that at projects\vstudio\pnglibconf\pnglibconf.vcxproj
- it just copies and renames the file with a custom build step. I've put that build step into my project file and make it run before compilation.Downloaded from http://nanapro.org/en-us/. I've created a custom project file at nana.vcxproj
based on build\vc2015\nana.vcxproj
with the following changes:
STD_CODECVT_NOT_SUPPORTED
to the preprocessor definitions. C++17 deprecates <codecvt>
; adding this flag stops charset.cpp
from using it.NANA_ENABLE_PNG
to enabled PNG support.nana_extrlib
folder.nana_extrlib
folder to the include folders.The code so far can be found on GitHub. Next I'll look at adding a better lighting model so I can start rendering reflections.
Let's add some more types:

- Light. Base class for all types of light. Stores the colour of the light (I'm not doing anything more complicated for now) and defines an abstract method GetLightRayToPoint - this will return the ray of light that heads from the light towards the point.
- PointLight. Implementation of Light that emits from a point in space.

Our scenes will probably need a general ambient light as well; however, I haven't made a class for this that extends Light. The reason is that ambient light is quite different: it is the same in all directions and does not cast shadows. I've found it simpler to treat it as just a colour value for the scene.
Our Scene class has also been extended to keep a list of all the lights in the scene.
Now onto the shading:
- ShadingModel. Base class for different shading models. Designed to be plugged into the RayTracing algorithm class so that the same scene can be rendered with different shading models. Defines a single abstract method so far, ShadePoint. This determines the shading for a given point on a given scene object.
- Uniform. Implementation of ShadingModel that just returns the colour of the scene object, irrespective of the point on it or any lights in the scene. This is effectively the model we had already - plugging it in will produce the boring blue sphere we've already created.
- Lambertian. Implementation of ShadingModel that uses Lambertian reflectance.

This is a simple shading model for an ideal diffusely reflecting surface. It can be expressed as follows:
\[ C = \mathbf{L} . \mathbf{N} C_S C_L I \]
Where \( C \) is the final colour, \( \mathbf{L} \) the direction from the point to the light, \( \mathbf{N} \) the surface normal, \( C_S \) the colour of the surface, \( C_L \) the colour of the light and \( I \) the intensity of the light at the point.
Our Light class gives us a ray (\( \mathbf{R} \)) from the light to the point, therefore \( \mathbf{R} = -\mathbf{L} \). And for now our lights have constant intensity (i.e. the intensity doesn't decrease with distance) so the equation becomes:
\[ C = -\mathbf{R} . \mathbf{N} C_S C_L \]
This is per light of course so we need to sum over all the lights and we need to add the ambient light (\( C_A \)) giving us:
\[ C = C_S C_A - \sum_{i} \mathbf{R} . \mathbf{N} C_S C_{Li} \]
We have all the values for this except the surface normals. We can tweak the NearestIntersection method on our Object class to return this as well as the point of intersection. We only have one shape so far, spheres, and calculating the surface normals for those is simple - it will just be the (normalised) vector from the centre to the point of intersection.
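Putting the pieces together, a sketch of the Lambertian sum with constant-intensity lights (illustrative types; each light stores the direction of its ray towards the point):

```cpp
#include <algorithm>
#include <vector>

struct Colour { double r, g, b; };
struct Vec3 {
    double x, y, z;
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};
struct Light { Vec3 rayDir; Colour colour; };  // direction from light to point

// Ambient term plus, for each light, -R.N scaled by the surface and light
// colours; clamped at zero so lights behind the surface contribute nothing.
Colour ShadeLambertian(const Colour& surface, const Colour& ambient,
                       const Vec3& normal, const std::vector<Light>& lights) {
    Colour c{surface.r * ambient.r, surface.g * ambient.g, surface.b * ambient.b};
    for (const Light& l : lights) {
        double f = std::max(0.0, -l.rayDir.Dot(normal));
        c.r += f * surface.r * l.colour.r;
        c.g += f * surface.g * l.colour.g;
        c.b += f * surface.b * l.colour.b;
    }
    return c;
}
```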
Adding a point light to our scene and using this new shading model gives us a nice 3D looking blue sphere:
So far all our lights contribute to the final colour of a point. However there might be an object between the light and the target point, i.e. the point is in shadow. This is fairly easy for us to work out, as we have a ray from the light source to the point we're trying to colour. We therefore know the distance from the light source to the point, so we can loop over the other objects in the scene, find out if they intersect with the light ray, and if they do, see if they are closer to the light source than the object. If they are they must be blocking the light source, so we don't add that light's contribution to the overall colour. With this we can now cast a shadow from one sphere to another:
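The shadow test sketched in code, with each scene object reduced to an intersection function for illustration:

```cpp
#include <cmath>
#include <functional>
#include <optional>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double Length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Each object is represented here by a function returning the distance along
// a ray to its nearest intersection, if any.
using IntersectFn =
    std::function<std::optional<double>(const Vec3& origin, const Vec3& dir)>;

// A point is in shadow if any other object intersects the ray from the light
// closer to the light than the point itself.
bool InShadow(const Vec3& light, const Vec3& lightDir, const Vec3& point,
              const std::vector<IntersectFn>& otherObjects) {
    double distanceToPoint = (point - light).Length();
    for (const IntersectFn& intersect : otherObjects) {
        auto hit = intersect(light, lightDir);
        if (hit && *hit < distanceToPoint) return true;  // something blocks the light
    }
    return false;
}
```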
A flat surface would show shadows much better though, so let's add a plane as another object type.
The equation of a plane is:
\[ ( \mathbf{P} - \mathbf{p} ) . \hat{\mathbf{n}} = 0 \]
Where \( \mathbf{P} \) is any point on the plane, \( \mathbf{p} \) a known point on the plane and \( \hat{\mathbf{n}} \) the (normalised) normal of the plane.
As with the sphere we will need to work out where a ray intersects with the plane. Taking our ray equation \( \mathbf{R} = \mathbf{O} + d \hat{\mathbf{D}} \) the intersection will be when \( \mathbf{P} = \mathbf{R} \). Plugging the equations together and rearranging for \( d \) gives us:
\[ d = \frac{ (\mathbf{p} - \mathbf{O}) . \hat{\mathbf{n}} }{ \hat{\mathbf{D}} . \hat{\mathbf{n}} } \]
If \( \hat{\mathbf{D}} . \hat{\mathbf{n}} \) is zero the ray must be parallel to the plane - no intersection. The one exception would be if \( (\mathbf{p} - \mathbf{O}) . \hat{\mathbf{n}} \) were also zero - that would mean the ray lies in the plane. We'll treat this case as no intersection too. Lastly, if \( d \) is negative then the line of the ray only meets the plane behind the ray's origin - no intersection.
To calculate the surface normal we need to know if the ray intersects above or below the plane, where 'above' is defined as being in the direction of the surface normal. If we considered a vector \( \mathbf{L} \) from the intersection point to the light, then \( \mathbf{L} . \hat{\mathbf{n}} \) would be positive if the angle \( \theta \) between the two was between 0° and 90° (i.e. above) and negative if it was between 90° and 180° (i.e. below), as \( \mathbf{L} . \hat{\mathbf{n}} = \| \mathbf{L} \| \| \hat{\mathbf{n}} \| \cos\theta \). Our ray goes in the opposite direction (from the light to the point), so if its direction dotted with \( \hat{\mathbf{n}} \) is negative the ray hits the top of the plane and we use \( \hat{\mathbf{n}} \) as the normal; if it's positive the ray hits the underside and we use \( -\hat{\mathbf{n}} \).
If we tried to use a plane for the ground at this point we wouldn't actually see anything! This is because currently our rays go straight into the scene, parallel to the ground. Instead we need to simulate the camera as being in the centre but a bit behind our view - then the rays will angle down towards the ground and we will see the plane. This is simple to do - just add a distance from the view to the camera to our ray tracing algorithm and adjust the code that creates the rays to use the camera point to give the correct direction. We will still keep the origin of the rays at the view, however, so we can have the camera as far back as we want without worrying about objects between the camera and the view. Putting all this together lets us cast shadows onto a plane:
The code so far can be found on GitHub. Next I'll look at adding some textures to the objects.
First step is to add some types to handle geometry:

- Point. Represents a point in three dimensions.
- Vector. Represents a vector in three dimensions. Although very similar to a point, as both have \( x \), \( y \) and \( z \) components, I've made them two different types because the things you can do with them are very different. For example the dot product doesn't make sense for two points but it does for two vectors.
- Ray. Definitely need one of these for a ray tracer... Consists of a point representing the origin of the ray and a vector indicating the direction. I've normalised the vector to make things a bit easier to work with.
- Object. Base class for all solid objects. Has an abstract method to determine the nearest intersection point with a ray. We will use this method in the ray tracer to work out what objects are in the path of our rays.
- Sphere. Implementation of Object representing a sphere, which consists of a centre point and a radius.

I also thought it was about time I started writing some tests, so I created a new project to hold tests written with the Catch unit test framework. I chose this one as it isn't platform specific, is really easy to include (just add a single header file) and is supported by ReSharper's unit test runner.
We need a bit of maths to work out the intersection point of a sphere and a ray. We can represent a ray with the following equation:
\[ \mathbf{R} = \mathbf{O} + d \hat{\mathbf{D}} \]
Where \( \mathbf{R} \) is a point on the ray, \( \mathbf{O} \) the origin of the ray, \( \hat{\mathbf{D}} \) its normalised direction and \( d \) the distance along the ray.
We can represent a sphere with centre \( \mathbf{C} \) and radius \( r \) as the set of points \( \mathbf{S} \) satisfying:
\[ (\mathbf{S} - \mathbf{C}) . (\mathbf{S} - \mathbf{C}) = r^2 \]
The intersection(s) happen when \( \mathbf{S} = \mathbf{R} \) or:
\[ (\mathbf{O} + d \hat{\mathbf{D}} - \mathbf{C}) . (\mathbf{O} + d \hat{\mathbf{D}} - \mathbf{C}) = r^2 \]
If we let \( \mathbf{M} = \mathbf{O} - \mathbf{C} \) and then expand and rearrange we get:
\[ d^2 \hat{\mathbf{D}} . \hat{\mathbf{D}} + 2 d \hat{\mathbf{D}} . \mathbf{M} + \mathbf{M} . \mathbf{M} = r^2 \]
Which is a quadratic equation of the form \( a d^2 + bd + c = 0 \), where \( a = \hat{\mathbf{D}} . \hat{\mathbf{D}} = 1 \) (as \( \hat{\mathbf{D}} \) is normalised), \( b = 2 \hat{\mathbf{D}} . \mathbf{M} \) and \( c = \mathbf{M} . \mathbf{M} - r^2 \).
Substituting in and simplifying gives us the solutions:
\[ d = -\hat{\mathbf{D}} . \mathbf{M} \pm \sqrt{ (\hat{\mathbf{D}} . \mathbf{M})^2 - \mathbf{M} . \mathbf{M} + r^2} \]
If the value in the root is less than zero then there are no (real) solutions. If it is exactly zero then the ray hits the sphere only once at the tangent, otherwise there are two solutions where it hits one side of the sphere, passes through then hits the other side.
None of the above takes into account the origin of the ray, however. Firstly it would be handy to know if the origin of the ray was inside the sphere; for the moment I will treat this as an error in the code, as I don't want my spheres crossing the view. \( \mathbf{M} \) is the vector from the sphere's centre to the ray's origin. If the length of this vector, i.e. the distance from the sphere centre to the ray origin, is less than the sphere's radius then the ray must start inside the sphere. I.e. the ray starts inside the sphere if:
\[ \left| \mathbf{M} \right| \ - r \leq 0 \]
Given the length of a vector is \( \sqrt{ \mathbf{M} . \mathbf{M} } \), we can rewrite this as \( \mathbf{M} . \mathbf{M} - r^2 \leq 0 \), or \( c \leq 0 \).
Secondly we need to know which is the intersection closest to the ray's origin if there are two solutions. From the equation of the ray this is simply the smallest value of \( d \), i.e. the smallest of the two solutions. As the result of the square root will always be positive the smallest solution will be the one that subtracts the square root.
Thirdly, what if the sphere is entirely behind the ray? For this to happen \( d \) would have to be negative - we can therefore discard a solution if it's less than zero. And we don't need to worry about one solution being negative and the other positive, as that would imply we're inside the sphere, which we've already ruled out.
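Putting the whole derivation together, a sketch of the nearest-intersection test (illustrative types; here the inside-the-sphere case simply falls into the negative-d check rather than being treated as an error):

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Nearest intersection of the ray O + d*D (D normalised) with a sphere of
// centre C and radius r, following the derivation above: with M = O - C,
// d = -D.M +/- sqrt((D.M)^2 - M.M + r^2), taking the smaller positive root.
std::optional<double> IntersectSphere(const Vec3& O, const Vec3& D,
                                      const Vec3& C, double r) {
    Vec3 M = O - C;
    double b = D.Dot(M);
    double c = M.Dot(M) - r * r;
    double discriminant = b * b - c;
    if (discriminant < 0.0) return std::nullopt;  // ray misses the sphere entirely
    double d = -b - std::sqrt(discriminant);      // nearest of the two solutions
    if (d < 0.0) return std::nullopt;             // sphere behind ray, or origin inside
    return d;
}
```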
Now that we can work out where a sphere and a ray intersect, it's possible to do some ray tracing. To do this I needed a few more types:
- SceneObject. This is an object in a scene to be rendered. Our Object class is just pure geometry; this takes an Object and adds some real world properties - only one so far, colour.
- Scene. Details of a scene. Currently comprises a background colour and a vector of SceneObjects.
- RayTracer. This is our implementation of the Algorithm class for ray tracing. It takes a Scene in the constructor. When RenderPoint is called it creates a ray from that point directed along the positive z-axis, i.e. into the screen. It then loops through all the objects in the scene, finds the one with the closest intersection point and uses its colour for the point. If the ray doesn't intersect with any of the objects then it uses the background colour of the scene instead.

With all this in place I was able to create my first image, of a scene comprising a single sphere in the centre of the view:
Impressive huh? Well no, not really. But it's a start...
- unique_ptrs... I ended up having to use a shared_ptr for my vector of scene objects when I really want unique_ptrs, as they are children of the scene. I'll read up on it all a bit further down the line and come back and refactor.
- C++17 has a std::optional type whereas Visual Studio 2015 doesn't support it; I've had to roll my own Optional type for the moment. (Used when returning the nearest intersection point - there might not be one.)

The code so far can be found on GitHub. Next step is to start looking at some shading and lighting.
I added a few types to the project to handle storing and creating images:

- Colour. Decided it would be best to have my own colour type to avoid tying myself to Nana's drawing types. The colour is in RGB format with each component a double in the range 0 -> 1; this should keep the maths a bit simpler than 0 -> 255 and allows greater precision.
- Image. Represents an image, i.e. a square grid of pixels, each of which is simply a Colour. Image is immutable.
- MutableImage. An extension to Image that can be updated. Includes a method for setting a given pixel. Images can also be copied into MutableImages - this allows us to reserve memory for a buffer, for example, and then copy another image into it rather than repeatedly releasing and reallocating memory.

The rendering itself consists of two main types:
- `Algorithm`. This is an abstract class with a single method, `RenderPoint`. This takes (x, y) co-ordinates in the range (0, 0) -> (1, 1) and returns a `Colour` object representing the colour at that point. The range is normalised so that the algorithm doesn't need to know about the image size - whatever uses the algorithm can map the image pixels to the 0 -> 1 range. This might all need to change of course when I start thinking about cameras and the like, plus it currently assumes a square image, but it's good enough for now. I've added two algorithms for testing:
  - `Random`. Returns a random colour on each call to `RenderPoint`.
  - `Gradient`. Renders a gradient from one colour in the top left to a second colour in the bottom right.
- `Renderer`. This generates an image by running a given algorithm for each pixel on a background thread. It includes members for querying the state of the render (e.g. the status or the progress completed), cancelling, and taking a snapshot of the current image.

On the UI side there's just a single type:

- `MainForm`. This displays the progress, has a cancel button and displays the current image in a preview window. A timer runs to periodically update the progress and image.

The code so far can be found on GitHub. Next step is to start on an algorithm to render a scene.
I've decided I'll need a GUI for the ray tracer so I can see what is being drawn as it's being drawn - I don't want to wait 10 hours for a render only to find I've got a blank image... Whilst I'll be doing the majority of development in Windows and Visual Studio (I'm a C# developer by day, so I'll stick with the IDE I know), it would be nice if I could make it run on other platforms too - it would be handy to know how to write a Linux application. For now I'll just get it running on Windows, but I'll use a cross-platform UI framework so I don't tie myself to Windows.
After looking around I plumped for nana. The main reasons being:
The downside is that the documentation seems to be a bit hit and miss, but as I'm not planning on doing anything too complicated GUI-wise, hopefully that won't be an issue.
I managed to set up a Windows application using nana for the GUI fairly easily, which surprised me given the number of options C++ projects seem to have and my current knowledge of the language in general... Here are the steps I took:

- Added the `build\vc2015\nana.vcxproj` project to the solution.
- Added nana's include folder to the project's include directories (`$(SolutionDir)nana-1.4.1\include`). Without this the compiler will not recognise include directives such as `#include <nana/gui.hpp>`.
- The nana demo code uses a `main` function - as this is a Win32 application I moved that code into the `wWinMain` function that already existed in the file, replacing everything there.
- Added `return 0;` at the end of `wWinMain` as it expects an `int` to be returned.

And that was it! Building and running the Win32 application gave me a nice little nana-generated window. Find the source for the project on GitHub.
Next I'll try and actually draw something!
A 64-bit long can be broken down into 8 bytes using bit shifting. If we simply cast the long to a byte then we'll take the rightmost 8 bits and discard the rest. By repeatedly bit shifting the long 8 bits to the right and casting, we can chop the long up into 8-bit segments and get ourselves a byte array containing the long:
private static byte[] SafeLongToBytes(long value)
{
var bytes = new byte[8];
bytes[0] = (byte) value;
bytes[1] = (byte) (value >> 8);
bytes[2] = (byte) (value >> 16);
bytes[3] = (byte) (value >> 24);
bytes[4] = (byte) (value >> 32);
bytes[5] = (byte) (value >> 40);
bytes[6] = (byte) (value >> 48);
bytes[7] = (byte) (value >> 56);
return bytes;
}
We can do this in a simpler way using unsafe code. If we take a pointer to the byte array then we can write our long directly to the memory taken up by the byte array:
private static unsafe byte[] UnsafeLongToBytes(long value)
{
var bytes = new byte[8];
fixed (byte* pointer = bytes)
{
*(long*)pointer = value;
}
return bytes;
}
The main difference between the code here and the enum conversion in the last post is the use of the `fixed` block. This is necessary because the byte array is allocated on the heap and we don't want the garbage collector moving it around whilst we're playing with memory addresses. This wasn't necessary for the enum conversion because the enum was allocated on the stack.
Is this any quicker? Yes - the safe version was about 17% slower in rough tests on my laptop. The full code of the timing application can be found here.
Is that enough of an increase to warrant adding unsafe code to your assembly? Well, it turns out that it doesn't matter... Whilst typing up this post I found out about the BitConverter class, which has a long-to-byte-array method. Flipping it open with ILSpy shows that it's identical to the unsafe code above!
To test I used the ConsoleColor enum and came up with the following equivalent functions:
private static byte SafeConvert(ConsoleColor value)
{
return (byte) value;
}
private unsafe static byte UnsafeConvert(ConsoleColor value)
{
return *(byte*) &value;
}
I ran some timing tests and there was no detectable difference... To be expected really - it's a very simple conversion after all.
Full code of the timing application can be found here.
Therefore, to get a better feel for the size of the numbers involved for \( O(n^2) \) and various other common big-O growth rates, I've built a table comparing the values. Enjoy!
\( n \) | \( \log \log n \) | \( \log n \) | \( \sqrt n \) | \( n \log n \) | \( n^2 \) | \( n^3 \) | \( 2^n \) | \( n! \) |
---|---|---|---|---|---|---|---|---|
2 | 0 | 1 | 2 | 2 | 4 | 8 | 4 | 2 |
5 | 2 | 3 | 3 | 12 | 25 | 125 | 32 | 120 |
10 | 2 | 4 | 4 | 34 | 100 | 1,000 | 1,024 | 3,628,800 |
20 | 3 | 5 | 5 | 87 | 400 | 8,000 | 1,048,576 | \( 2.43 \times 10^{18} \) |
50 | 3 | 6 | 8 | 283 | 2,500 | 125,000 | \( 1.12 \times 10^{15} \) | \( 3.04 \times 10^{64} \) |
100 | 3 | 7 | 10 | 665 | 10,000 | 1,000,000 | \( 1.26 \times 10^{30} \) | \( 9.33 \times 10^{157} \) |
200 | 3 | 8 | 15 | 1,529 | 40,000 | 8,000,000 | \( 1.60 \times 10^{60} \) | \( 7.88 \times 10^{374} \) |
500 | 4 | 9 | 23 | 4,483 | 250,000 | \( 1.25 \times 10^{8} \) | \( 3.27 \times 10^{150} \) | \( 1.22 \times 10^{1134} \) |
1,000 | 4 | 10 | 32 | 9,966 | 1,000,000 | \( 1.00 \times 10^{9} \) | \( 1.07 \times 10^{301} \) | \( 4.02 \times 10^{2567} \) |
2,000 | 4 | 11 | 45 | 21,932 | 4,000,000 | \( 8.00 \times 10^{9} \) | \( 1.14 \times 10^{602} \) | \( 3.31 \times 10^{5735} \) |
5,000 | 4 | 13 | 71 | 61,439 | 25,000,000 | \( 1.25 \times 10^{11} \) | \( 1.41 \times 10^{1505} \) | \( 4.22 \times 10^{16325} \) |
10,000 | 4 | 14 | 100 | 132,878 | \( 1.00 \times 10^{8} \) | \( 1.00 \times 10^{12} \) | \( 1.99 \times 10^{3010} \) | \( 2.84 \times 10^{35659} \) |
20,000 | 4 | 15 | 142 | 285,755 | \( 4.00 \times 10^{8} \) | \( 8.00 \times 10^{12} \) | \( 3.98 \times 10^{6020} \) | \( 1.81 \times 10^{77337} \) |
50,000 | 4 | 16 | 224 | 780,483 | \( 2.50 \times 10^{9} \) | \( 1.25 \times 10^{14} \) | \( 3.16 \times 10^{15051} \) | \( 3.34 \times 10^{213236} \) |
100,000 | 5 | 17 | 317 | 1,660,965 | \( 1.00 \times 10^{10} \) | \( 1.00 \times 10^{15} \) | \( 9.99 \times 10^{30102} \) | \( 2.82 \times 10^{456573} \) |
200,000 | 5 | 18 | 448 | 3,521,929 | \( 4.00 \times 10^{10} \) | \( 8.00 \times 10^{15} \) | \( 9.98 \times 10^{60205} \) | \( 1.42 \times 10^{973350} \) |
500,000 | 5 | 19 | 708 | 9,465,785 | \( 2.50 \times 10^{11} \) | \( 1.25 \times 10^{17} \) | \( 9.95 \times 10^{150514} \) | \( 1.02 \times 10^{2632341} \) |
1,000,000 | 5 | 20 | 1,000 | 19,931,569 | \( 1.00 \times 10^{12} \) | \( 1.00 \times 10^{18} \) | \( 9.90 \times 10^{301029} \) | \( 8.26 \times 10^{5565708} \) |