What Goes Into a Lighting Model

So, it seems the perfectionist in me didn't let me sleep until the lighting system was rid of fakery and mere illusion-based shenanigans!

After finishing a couple of iterations of improvements, including a revamp from a sort-of-2.5D to a fully 3D representation of the global illumination, I'm quite happy with the results and no longer haunted by ray-marching ideas at night.

It's time to recap some of the techniques put into use in the current lighting model of Payback Time.

Below is a short sequence of screenshots representing each major step of the lighting computations:

Lighting Steps

Pipeline Basics

The rendering pipeline is completely deferred, for now. This means that the scene geometry is rendered independently of the lights, and the final lighting is computed per pixel in a deferred shading pass that evaluates the lighting equation over a fullscreen quad.

The current pipeline can be summarized with the following general steps: geometry, SSAO, lighting, bloom, color grading, anti-aliasing. Here's what each of the steps involves, in short:

  • Geometry: Taking all of the visible geometry (vertices, indices) and producing a g-buffer consisting of buffers with depth, normal, albedo and lightmap data.

  • SSAO: Screen-space ambient occlusion step, working on the previously produced g-buffer data and producing object-local / high-frequency shadows.

  • Lighting: The main lighting pass, producing all global and directional light shading.

  • Bloom: Bloom pass, extracting high-intensity areas from the HDR color buffer and producing glowing thingies.

  • Color grading: Producing LDR out of HDR color values using a tone mapping function - filmic in this case. Also computes the luma values required by the anti-aliasing step that follows. (A small sketch of this step comes right after this list.)

  • Anti-aliasing: Smooths the frame by looking for aliased edges. FXAA is used, for now.
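
To make the color grading step a bit more concrete, here's a minimal sketch of the idea. The exact curve the engine uses isn't spelled out here, so the well-known Hable ("Uncharted 2") filmic operator stands in for it, and the luma weights are the usual Rec. 709 ones:

    # A minimal sketch of the color grading step: HDR -> filmic curve -> LDR,
    # plus the luma value handed over to the anti-aliasing pass. The Hable
    # operator and the constants below are stand-ins, not necessarily the
    # exact curve used in the engine.
    def hable(x):
        A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
        return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

    def tonemap(hdr_rgb, exposure=1.0, white_point=11.2):
        scale = 1.0 / hable(white_point)        # normalize so white maps to 1.0
        return [min(1.0, hable(exposure * c) * scale) for c in hdr_rgb]

    def luma(ldr_rgb):
        r, g, b = ldr_rgb                       # Rec. 709 luma weights
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    ldr = tonemap([2.5, 1.2, 0.3])
    print(ldr, luma(ldr))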

Global Illumination

The global illumination (GI) part is what I have been spending the most time on lately. I am planning to make lighting a significant part of the gameplay itself, so having something that both looks alright and is controllable is quite important.

While SSAO handles local shadows nicely, I wanted smooth, low-frequency shadows produced by the GI. Although there are various ways this can be achieved, I'll describe the approach I am using here.

First of all, the scene (i.e. the entire playable area) is subdivided into cells. The cell size can be adjusted - I am using 8x8x32-sized cells. This means the XY-plane has a higher resolution and the Z-axis (from the ground towards the skies) is less accurate. To give some kind of indication of this resolution, the floor object seen in the screenshots is 16x16 in XY, so it contains 2x2 cells for GI. Having a low resolution for GI makes smooth shadows easier to achieve, but on the other hand makes very sharp shadows unobtainable. In addition to using a low-resolution lightmap for GI, I am sampling it with a cubic sampler for extra smoothness...
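
As a tiny code illustration of that mapping (assuming the grid is axis-aligned and anchored at the scene origin - an assumption of this sketch rather than the engine's actual convention):

    # Mapping a world-space position to a GI cell index, assuming the 8x8x32
    # cell size mentioned above and a grid anchored at the scene origin.
    CELL_SIZE = (8, 8, 32)

    def cell_index(world_pos):
        return tuple(int(c // s) for c, s in zip(world_pos, CELL_SIZE))

    # The 16x16 floor tile from the screenshots spans 2x2 cells in XY:
    print(cell_index((15.0, 15.0, 4.0)))  # -> (1, 1, 0)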

To produce even remotely realistic GI, a few factors should be taken into account:

  • Line-of-sight (LOS). Objects should occlude the path of light from an emitter to a receiver object.
  • Distance / falloff. Lights should illuminate the scene objects according to their distance from the light.
  • Light direction. For producing correct diffuse and specular light components, the light direction in relation to the receiver object is required.

To tackle the three requirements, I settled on an approach that uses two 3D textures as the output of the GI computation phase: an HDR RGB lightmap and an incidence map. The latter is a term I coined for my purposes (there might be something similar out there, who knows).

The HDR lightmap is merely an RGB value representing each scene cell's light intensity and color at that specific location. The incidence map is quite a bit more interesting: it represents the 3D light incidence vector of the cell, i.e. it answers the question "from which direction was most of the light incoming to the cell?" The incidence vector is accumulated to contain an attenuation-weighted vector from each of the nearby emitting cells affecting the receiver. While this approach works very nicely, it has an edge case: multiple light sources can create a boundary area where the incidence vector is cancelled out - this can be worked around by examining the resultant vector's length and scaling the incidence effect accordingly.
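
To illustrate that workaround, here's a small sketch of how the shading side could use the incidence vector's length as a confidence weight. The function name and the exact weighting are mine, for illustration only:

    import math

    # Sketch: when several lights cancel each other out, the accumulated
    # incidence vector becomes short, so its length scales down the
    # directional (diffuse) contribution.
    def directional_weight(incidence, normal):
        length = math.sqrt(sum(c * c for c in incidence))
        if length < 1e-6:
            return 0.0  # fully cancelled out: no usable light direction
        direction = [c / length for c in incidence]
        n_dot_l = max(0.0, sum(d * n for d, n in zip(direction, normal)))
        return n_dot_l * min(1.0, length)

    print(directional_weight([0.5, 0.0, 0.5], [0.0, 0.0, 1.0]))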

To add some actual algorithmic meat to this dev post, here's the outline of the GI computation:

  • Pre-compute density for objects, i.e. a value describing per-cell blocking of light: 1 = no light gets through, 0 = all light gets through. Only done once per object modification (e.g. on destruction).
  • Pre-compute emission for objects, i.e. an RGB-triplet describing per-cell emission of light. Again, only done per object modification (e.g. on destruction). The emission data is originally from the lightmap images which the engine uses while creating its 3D meshes.
  • Pre-compute the above two properties for the scene the objects are placed in. Only done per scene modification (object addition / removal).

Compute the lightmap and incidence map for the scene. The algorithm at a high level (a rough code sketch follows the list):

For each cell:

  • Find cells with emissive light within the falloff distance.
  • Compute visibility from the light source cell to the current cell by using the pre-calculated density. This is done by computing line-of-sight at cell resolution, starting with a visibility of 1 and subtracting each cell's density along the path.
  • Determine attenuation based on the light source distance. Computing a constant+linear+quadratic falloff equation will do.
  • Determine the incidence vector by summing the direction vectors to all emissive cells, weighted by attenuation.
  • Store the computed lightmap and incidence vector values to their respective low-resolution 3D textures.
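
Here's the promised sketch of the above loop in code - a simplified stand-in rather than the engine's actual implementation:

    import math

    # The grids are plain dicts keyed by (x, y, z) cell index: `density` holds
    # values in [0, 1] and `emission` holds RGB triplets. The line-of-sight
    # walk just samples along the straight line between cell centers, and the
    # attenuation constants are made-up placeholders.

    FALLOFF = 8                               # max emitter distance, in cells
    ATT_C, ATT_L, ATT_Q = 1.0, 0.35, 0.44     # constant/linear/quadratic terms

    def line_of_sight(density, src, dst, steps=16):
        visibility = 1.0
        for i in range(1, steps):
            t = i / steps
            p = tuple(int(round(s + (d - s) * t)) for s, d in zip(src, dst))
            if p in (src, dst):
                continue
            visibility -= density.get(p, 0.0)
            if visibility <= 0.0:
                return 0.0
        return visibility

    def bake_cell(cell, density, emission):
        light = [0.0, 0.0, 0.0]       # goes into the HDR lightmap texture
        incidence = [0.0, 0.0, 0.0]   # goes into the incidence map texture
        for src, rgb in emission.items():
            delta = [s - c for s, c in zip(src, cell)]
            dist = math.sqrt(sum(d * d for d in delta))
            if dist == 0.0 or dist > FALLOFF:
                continue
            vis = line_of_sight(density, src, cell)
            att = vis / (ATT_C + ATT_L * dist + ATT_Q * dist * dist)
            for i in range(3):
                light[i] += rgb[i] * att
                incidence[i] += (delta[i] / dist) * att
        return light, incidence

    # One emitter two cells away, nothing blocking the path:
    print(bake_cell((0, 0, 0), {}, {(2, 0, 0): (1.0, 0.8, 0.6)}))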

After the above steps the two 3D textures are ready to be consumed by the deferred lighting shader.

Wrap up

It's worth noting that this system has the advantage of making light/emitter sources very cheap - in fact, after the pre-computation, each light source adds essentially zero runtime cost to the engine.

If you are interested in the minute details of the actual shading process, don't hesitate to ask about them!

Let There be Light

Hah, got ya! You probably thought already: "he's gone, with all those valuables generated by his extravagant indie game scheme, to lead the life of Riley".

No Sir/Madam. On the contrary, I've been a busy bee. And there's no Riley.

92 commits later: the scene and lighting system starts to work properly.

Test Scene

Highlights of the lighting model:

  • There are no fake lights. The scene is only lit by the objects it consists of.
  • Each object may include 4-channel (ARGB) lightmaps in Voxely-fashion (projections for each side). The blue channel controls the amount of emission of the object's material.
  • The emissive materials contribute to the illumination of the scene. The object's albedo map controls the final color of the emitted light.
  • 2.5D line-of-sight is applied for light distribution radially outwards from the emissive parts of the scene. The volumes of the objects are sampled to approximate their density (or rather, how much light is allowed to pass through).
  • As objects will be destructible later, the light sources may be damaged/wiped out in the process.
  • Everything in the above video is still 100% based on 2D-images.

User Interface

I've also worked on adding a primitive UI for map editing purposes (see panel on the right).

So far, the UI allows one to select the active object from those loaded from disk.

Editing of the map itself is straightforward: point & click for adding and removing objects to and from the scene. The lazy bum in me didn't add a save/load feature yet, so one will need to reconstruct the map from scratch for each run.

Into the Future

H'OKAY! What's next? Cleaning up the slight mess left by the progress on many fronts: optimizing scene/object/model code. Adding warriors/characters!

That'll be the part where I may have to admit to myself that image-based voxel modeling falls a little short. If so, the characters will end up somewhat cube-headed/armed/toed... Stay tuned!

The Birth and Death of Voxely Thingies

Background

As briefly mentioned earlier, Payback Time's graphics are based on voxels. What exactly is a voxel, then? Even Wikipedia is quite vague about the origins of these nifty little chunks.

This late on a Friday, it suffices to say that voxels are cells of a 3D volume. So, pictures come with pixels and volumes with voxels (pi-xels and vo-xels).

Pros and Cons

Besides their cubic looks, voxels offer some pretty great advantages:

  • Voxel volumes are easy to manipulate, similarly to their 2D cousins (images).
  • Voxels are quite straightforward to visualize.

On the minus side, there are some shortcomings to voxels:

  • Naive volume storage approaches can consume a lot of memory (width x height x depth).
  • Content creation can be painful, often requiring custom tools.
  • The cubistic visuals aren't appreciated by everyone.

Polygons Won't Cut it

Early on, when I was thinking of different ways to approach the game graphics, I made a mental note about not wanting to deal with actual 3D polygon models while creating graphical assets.

Polygons simply felt too static - also, I'm just not that great at 3D modeling.

Instead, I wanted something that was simple to create graphics with; something that would offer easy extensibility and would produce voxely objects, ready to be destroyed by various kinds of in-game incidents.

Here's Where We Derail

Photo hulls. Space carving. I'm reading all of it.

I decide to try something simple, albeit absurd. What if we imagined a group of orthogonal views from all six sides of an object and tried to form an understanding of the 3D volume bounded by the views?

This has been done before, in fact. Sure, it's a pretty simplistic approach and will not cover all of my goals, but it's something to be inspired by.

Bring Out The Voxely

I'll explain my weird approach to turning a set of plain 2D images into voxely thingies below.

It all starts with the old idea of using an image to represent a heightmap. Greyscale images work the best; 8-bit-per-pixel PNG files are what I currently use.

Here's an example heightmap (16x16 px) visualized as a 3D mesh. All of the pixels of the image have been set to the maximum value of 255, so the heightmap is just a square floating about in space.

Heightmap

If only you could read the small text in the picture, you'd see the axis of depth pointing down, i.e. the smaller the pixel value in the image, the deeper down the heightmap surface is mapped.

In the following second picture, the heightmap is altered by making the centermost area of the image slightly darker, lowering the mapped mesh in that neighbourhood.

Heightmap

It's a good time for a quick'n'easy math lesson.

To tell whether a 3D point within the heightmap box (x and y: 0-15, z: 0-255) is either above or below the heightmap surface, one might use the following function:

f(x, y, z) = z < heightmap(x, y)

So, the function evaluates to true when the 3D point is under the heightmap surface, i.e. inside.

Another Day, Another Dimension

In order to be able to create objects other than Perlin-noisy grass, we need to be able to carve the bounded volume in more sophisticated ways. This is achieved by combining further orthogonal views (heightmaps) into the volume mapping phase.

So, we'll simply create another image representing another view or side of the volumetric box we're observing.

Heightmap

Now, we must somehow take the new heightmap into account while determining whether a 3D point is inside the volume. Back to the math class with us:

f(x, y, z) = z < heightmap_top(x, y) && y < heightmap_front(x, z)

Holy axis swap. You can probably see where this is going:

By simply defining the direction of depth for each individual heightmap (up to 6) of a box-bounded volume, one can easily test whether a 3D point is inside the intersection formed by all of the heightmaps.

The dimensions of the box-bounded volume can be easily determined from the heightmap image dimensions.
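
In code, the combined test could look like the sketch below. For simplicity the volume here is 16x16x16 with depth values 0..16 instead of 0..255, and only the top and front views are shown; the remaining sides follow the same pattern with the axes swapped:

    # Inside-test combining two of the (up to six) heightmaps, as in the
    # formula above. Simplified: a 16x16x16 volume with depth values 0..16.
    def inside(x, y, z, heightmap_top, heightmap_front):
        # Top view: depth runs along z, the image is indexed by (x, y).
        if not z < heightmap_top[y][x]:
            return False
        # Front view: depth runs along y, the image is indexed by (x, z).
        if not y < heightmap_front[z][x]:
            return False
        return True

    # Two fully "white" images leave the whole box solid:
    top = [[16] * 16 for _ in range(16)]
    front = [[16] * 16 for _ in range(16)]
    print(inside(3, 3, 3, top, front))  # True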

After setting up one more heightmap, a simple, fully enclosed and rendered object with its source images can be seen here:

Heightmap

There's Symmetry in Everything

I've noticed that it is easy to set up heightmap symmetry rules for the voxely generation. In case there are fewer than the full set of 6 heightmap images available for an object, one can simply re-use the available ones following priority rules, e.g.:

  • If back is missing and front is available, mirror it.
  • If right is missing and left is available, mirror it.
  • Etc.

This way one can create a solid 3D cube by simply drawing a single greyscale image in MS Paint. Wicked, no? Typically though, I've seen that simple objects require 2-3 images (e.g. front, top, left).
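
A rough sketch of the opposite-side fallback is below. The side names, the mirroring axis and the dictionary layout are my own convention here, and the full rule set has a few more priorities than this:

    # If a side image is missing, reuse (and mirror) its opposite.
    OPPOSITES = {"back": "front", "right": "left", "bottom": "top",
                 "front": "back", "left": "right", "top": "bottom"}

    def resolve_views(views):
        resolved = dict(views)
        for side, opposite in OPPOSITES.items():
            if side not in resolved and opposite in views:
                # Mirror horizontally so the projection lines up from the other side.
                resolved[side] = [row[::-1] for row in views[opposite]]
        return resolved

    views = {"front": [[255, 0], [255, 255]],
             "top":   [[255, 255], [255, 255]],
             "left":  [[255, 255], [0, 255]]}
    print(sorted(resolve_views(views)))  # all six sides become available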

Must Meshify

Okay, so everything's fine in our fantasy land, where voxelies magically appear on the screen. Back where reality still rules, there are things to be done yet. We cannot simply render functions yielding boolean results - the information about the 3D volume needs to be turned into something drawable, 3D primitives, that is.

The simplest way is to take an existing meshing algorithm and plug the previously described volume sampling function into it.

For smooth forms without too many sharp features, one might use Marching Cubes or a more modern isosurface extraction approach. A collection of information can be found here.

For sharp features and Minecrafty looks, you should take a look at Greedy Meshing, for instance.

Personally I use an algorithm inspired by Greedy Meshing, enhanced with support for 45-degree slopes. It may be that I'll use another algorithm for smooth features (e.g. game characters) later.

With any of the above algorithms, the basic principles of meshing are the same: walk over the box-bounded area and generate vertices and indices for the geometry. In addition, one typically collects normals and UV-coordinates for the purposes of lighting and texture mapping.
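
As a bare-bones illustration of that walk (this is plain culled meshing, not the greedy + 45-degree-slope variant described above, and the quad list is left abstract rather than expanded into actual vertices, normals and UVs):

    # Walk the box-bounded volume and emit a quad wherever a solid cell borders
    # an empty one. `solid` can be any sampling function, e.g. the heightmap
    # inside-test from the previous post.
    FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def mesh(solid, size):
        quads = []  # (cell, face normal) pairs; vertices/UVs are derived from these
        for x in range(size):
            for y in range(size):
                for z in range(size):
                    if not solid(x, y, z):
                        continue
                    for nx, ny, nz in FACES:
                        if not solid(x + nx, y + ny, z + nz):
                            quads.append(((x, y, z), (nx, ny, nz)))
        return quads

    # A solid 2x2x2 cube exposes 6 faces x 4 cells each = 24 quads:
    print(len(mesh(lambda x, y, z: 0 <= x < 2 and 0 <= y < 2 and 0 <= z < 2, 2)))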

Voxelies, What Are They Good For?

All in all, the above approach for generating voxel objects from images has the following advantages:

  • Low memory footprint of volume data storage: width x height x 6, assuming all sides to be equivalent in their dimensions (a cube) - see the quick comparison after this list.
  • Easy manipulation of volume data: just manipulate the 2D images in-memory (e.g. draw decals, etc.) and re-mesh the volume.
  • Very easy creation of game assets: open up a couple of images in an image editor and draw away.
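
For a back-of-the-envelope comparison (assuming one byte per pixel/voxel and a cube-shaped 16x16x16 object):

    # Naive voxel grid vs six greyscale side projections, one byte each.
    side = 16
    naive_volume = side * side * side   # explicit width x height x depth grid
    six_images = 6 * side * side        # six side projections
    print(naive_volume, six_images)     # 4096 vs 1536 bytes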

There are some caveats too, surely:

  • Round objects are missing from the universe, or are quite tedious to produce, at least.
  • Some objects are impossible to represent, e.g. consider a diagonal hole going through a cube, corner-to-corner.
  • Wrapping one's head around 3D volumes compressed into greyscale images can be fun, at times.

Wrapping up

That's all for tonight - if/when anything about this post is unclear, don't hesitate to contact me! Next up will be Adventures in UI Land and map editor progress.

I Made Fog (and an Infinite Grid, too)

While my next post was destined to be about the voxely thingies Payback Time's world consists of, I got derailed into fixing a small issue that's been on my TODO for a while now.

So, my map editor has a grid that should work as visual aid to the user. The grid has traditionally been a static mesh with known proportions: vertices, indices, lines and all that jazz - very inflexible and tedious to manage, to say the least. Also, so far the background of the scene has been completely void of color.

Enter new backdrop, with an infinite grid and a subtle, low-frequency textured skydome:

Test Scene

A couple of words about the implementation. While rendering a casual grid on the screen seems very trivial, there's a little more (or less) going on here, actually.

The backdrop as a whole is done by per-pixel raycasting with a single quad. In simple terms (a rough sketch follows the list):

  • Render a quad covering the screen.
  • For each pixel, find the intersection of the view ray with the ground plane and with an infinitely far-away sphere surrounding the scene.
  • Use a periodic function, such as sin, for generating grid colors per-pixel.
  • Use a function or a texture for the skysphere color.

In case you're interested in some more details on the above approach, have a peeky here:

The shader comments/docs are scarce, so drop me a line in case you'd want to try it out yourself!

Next time it'll be about voxels and meshes. And about how UV-mapping can warp one's mind in ways unexpected.

Roll Camera, Action!

Test Scene

Besides working on the simulation & rendering system for the map editor this week, I finally added a properly controllable camera model to the codebase.

Technobabble begins

The camera has some physical properties, such as linear and angular velocity/acceleration. The simulation handles the integration of the properties over time and the renderer draws the current state of the camera.

The camera view model itself is a simple one, but nicely suitable for isometric game purposes, I think.

In fact, all that is needed for the orbital camera model shown above can be condensed into these components:

  • Target, vector3 - the point in space the camera points at.
  • Distance, scalar - the distance from the target point to camera origin.
  • Yaw/pitch, scalars - the Euler angles representing the camera's rotation around the target point.
  • FOV, scalar - the field-of-view of the camera.
  • Aspect-ratio, scalar - camera view width/height ratio.
  • Z-plane near/far, scalars - the camera view near and far plane distances.

By controlling the described parameters, it's rather easy to make the camera follow physically plausible trajectories. The user can manipulate the camera with both keyboard and mouse - the UI controls are still to be polished into their final shape, but the scene is already quite easy to navigate around.

Finally, the camera model is able to convert itself into view- and projection-matrices directly compatible with further processing done by the graphics pipeline.
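
To make the technobabble concrete, here's a small sketch of how those parameters can turn into an eye position plus view and projection matrices. The conventions (Z-up, right-handed, OpenGL-style projection) are assumptions of this example, not necessarily what the engine uses:

    import math

    def camera_position(target, distance, yaw, pitch):
        # Orbit around the target: yaw in the XY-plane, pitch towards the sky.
        tx, ty, tz = target
        return (tx + distance * math.cos(pitch) * math.cos(yaw),
                ty + distance * math.cos(pitch) * math.sin(yaw),
                tz + distance * math.sin(pitch))

    def look_at(eye, target, up=(0.0, 0.0, 1.0)):
        def norm(v):
            l = math.sqrt(sum(x * x for x in v))
            return tuple(x / l for x in v)
        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        f = norm(tuple(t - e for t, e in zip(target, eye)))  # forward
        s = norm(cross(f, up))                               # right
        u = cross(s, f)                                      # true up
        return [[ s[0],  s[1],  s[2], -dot(s, eye)],
                [ u[0],  u[1],  u[2], -dot(u, eye)],
                [-f[0], -f[1], -f[2],  dot(f, eye)],
                [0.0, 0.0, 0.0, 1.0]]

    def perspective(fov_y, aspect, z_near, z_far):
        f = 1.0 / math.tan(fov_y / 2.0)
        return [[f / aspect, 0.0, 0.0, 0.0],
                [0.0, f, 0.0, 0.0],
                [0.0, 0.0, (z_far + z_near) / (z_near - z_far),
                           2.0 * z_far * z_near / (z_near - z_far)],
                [0.0, 0.0, -1.0, 0.0]]

    eye = camera_position((0, 0, 0), 10.0, math.radians(45), math.radians(30))
    view = look_at(eye, (0, 0, 0))
    proj = perspective(math.radians(60), 16 / 9, 0.1, 100.0)
    print(eye)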

Technobabble ends

Next up in the dev blog: The Birth and Death of Voxely Objects - stay tuned!