Monday, January 30, 2017

Prettier, Faster Terrains

We will be updating the terrain system in Voxel Farm soon. Hopefully, it will get a lot prettier and faster. It is not often you get improvements in these two, mostly opposing, directions at once. In this case, it seems we got lucky.

This is thanks to a synergy between two existing aspects of the engine that play together really well. One is UV-mapped voxels, the other is meta-materials.

Here is how it works: A single meta-material describes a type of terrain, for instance a mountain cliff. Within this single meta-material you may find different materials; in the case of a cliff, that could be exposed rock, mossy rock, grass, dislodged stones, dirt, etc. An artist gets to define how the meta-material surface is broken down into these sub-materials. The meta-material also has a volumetric definition, a displacement map that can be carefully tied to the sub-material map.
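
To make the idea concrete, here is a minimal sketch of what a meta-material definition could look like. The structure and field names are illustrative only, not the actual Voxel Farm data structures:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a meta-material definition. Names and layout
// are illustrative, not the engine's real data structures.
struct SubMaterial {
    std::string name;   // e.g. "exposed rock", "mossy rock", "grass"
    int textureId;      // index into the material texture set
};

struct MetaMaterial {
    std::string name;                     // e.g. "mountain cliff"
    std::vector<SubMaterial> subMaterials;

    // Maps authored by the artist, defined over the same UV space so the
    // volumetric profile lines up with the sub-material breakdown.
    std::vector<float> displacementMap;   // height per texel, in meters
    std::vector<int>   subMaterialMap;    // sub-material index per texel
    int mapWidth = 0;
    int mapHeight = 0;
};

int main() {
    MetaMaterial cliff;
    cliff.name = "mountain cliff";
    cliff.subMaterials = {{"exposed rock", 0}, {"mossy rock", 1},
                          {"grass", 2}, {"dislodged stones", 3}, {"dirt", 4}};
    return 0;
}
```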

When you are close to the meta-material's surface, it must be rendered as full geometry. This is because features in the meta-material, let's say a rock that sticks out, can measure up to dozens of meters. This content must be made of actual voxels so it can be harvested, destroyed, etc. It is not just a GPU displacement trick.

As you get farther away from these features, using geometry to capture detail becomes expensive. You face the hard choice of keeping a high geometry density or dialing down geometry and losing detail.

The new terrain system can dial down geometry but keep the appearance of detail by using automatically generated textures for the meta-material. For the close range, it still uses geometry to capture detail, but at a certain distance the meta-material displacement can be represented with just a normal map. High-resolution sub-material textures for grass, rock, etc. are not needed anymore; a single color map is able to capture the look of the meta-material from this distance. These are only a few extra maps that can be reused anywhere in the scene where the meta-material appears.
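
As a rough illustration of the distance test, here is a small sketch; the representation names and the 80 m threshold are placeholders, not engine values:

```cpp
#include <cstdio>

// Illustrative only: pick how a meta-material chunk is rendered based on
// its distance from the camera. The 80 m threshold is a placeholder.
enum class TerrainRepresentation {
    FullGeometry,  // displacement present as actual voxels/triangles
    BakedMaps      // coarser mesh + baked normal map and color map
};

TerrainRepresentation chooseRepresentation(float distanceToCamera,
                                           float geometryRange = 80.0f) {
    // Close up, features like protruding rocks must be real geometry so
    // they can be harvested or destroyed. Farther away, a normal map and
    // a single color map capture the same look much more cheaply.
    return distanceToCamera < geometryRange
               ? TerrainRepresentation::FullGeometry
               : TerrainRepresentation::BakedMaps;
}

int main() {
    bool usesBakedMaps =
        chooseRepresentation(200.0f) == TerrainRepresentation::BakedMaps;
    std::printf("200 m away -> baked maps? %s\n", usesBakedMaps ? "yes" : "no");
    return 0;
}
```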

The following image shows a single meta-material that uses geometry for the close range and texture maps for the medium-far ranges:


The colors in the wireframe view show where each method is applied. Just by comparing the triangle densities you can see this saves a massive amount of geometry:


This method is not new in terrain rendering; however, it is quite new in a voxel terrain. It is all possible thanks to the fact that our voxels can have UV coordinates. Voxels output by the terrain component in the green area carry UV coordinates. These coordinates make sure the normal, diffuse, and other maps created to render the meta-material at this distance match the volumetric profile and sub-material patterns of the meta-material up close.
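
Here is a small sketch of that idea: far-range vertices carry UV coordinates into the baked meta-material maps, so the lookup lands on the same texels the close-range sub-material pattern uses. The types and the nearest-texel lookup are illustrative only:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Sketch: far-range terrain vertices carry UVs into the baked
// meta-material maps. Names are illustrative, not engine API.
struct TerrainVertex {
    std::array<float, 3> position;
    std::array<float, 2> uv;  // coordinates into the meta-material maps
};

// Nearest-texel lookup into a baked map (normal, diffuse, ...).
template <typename Texel>
const Texel& sampleBakedMap(const std::vector<Texel>& map,
                            int width, int height,
                            const std::array<float, 2>& uv) {
    int x = std::clamp(static_cast<int>(uv[0] * (width - 1)), 0, width - 1);
    int y = std::clamp(static_cast<int>(uv[1] * (height - 1)), 0, height - 1);
    return map[static_cast<std::size_t>(y) * width + x];
}

int main() {
    // Tiny 2x2 map of packed values, just to exercise the lookup.
    std::vector<int> bakedMap = {1, 2, 3, 4};
    TerrainVertex v{{0.0f, 0.0f, 0.0f}, {1.0f, 0.0f}};
    int texel = sampleBakedMap(bakedMap, 2, 2, v.uv);  // top-right texel: 2
    (void)texel;
    return 0;
}
```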

The beauty of it is that this works with any type of terrain topology, not just heightmaps. If you are doing caves, the cave walls, ceilings, and ground are very distinct meta-materials, and they would all benefit from this method. And it should all be automatic: we can turn this system on, and it won't require artists to create any new assets.

We are still working out some kinks in the system, but so far I am very pleased with the results. I will be posting more pictures and videos eventually.

8 comments:

  1. Impressive! Does this new system seamlessly transition between images and geometry? Also, does it handle geometry being modified in the image-mapped areas well?

    Replies
    1. Thanks!

      The geometry is seamless, that is, vertices from one LOD are connected to vertices in the next LOD by special triangle strips we call seams. The texturing on top, however, is a different story.

      Like any discrete LOD transition, the texturing transition is not necessarily seamless to the eye. The methods employed on each side (green versus orange areas in the picture above) differ when it comes to deciding which sub-material becomes predominant. In the green zone it is all image-based, so it operates over pixels; in the orange zone it is triangle-based. It is essential to this approach that both methods produce matching results.

      As usual, you can minimize this effect by pushing LODs farther from the camera, or by keeping both sets of data and blending based on distance (a small sketch of such a blend appears at the end of this reply).

      Modifications behave quite nicely in this approach. It is no different than making edits in UV-mapped voxelized assets. That is, voxels with UVs can coexist with voxels without UVs, or with UVs that belong to a different parametrization.
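
      A minimal sketch of the distance-based blend mentioned above; the blend band values are placeholders, not engine constants:

      ```cpp
      #include <algorithm>
      #include <cstdio>

      // Illustrative only: blend weight between the triangle-based texturing
      // (orange zone, near) and the image-based texturing (green zone, far).
      float imageBasedWeight(float distanceToCamera,
                             float blendStart = 60.0f,
                             float blendEnd = 90.0f) {
          // 0 = fully triangle-based, 1 = fully image-based.
          float t = (distanceToCamera - blendStart) / (blendEnd - blendStart);
          return std::clamp(t, 0.0f, 1.0f);
      }

      int main() {
          std::printf("weight at 75 m: %.2f\n", imageBasedWeight(75.0f));  // 0.50
          return 0;
      }
      ```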

  2. Have you ever played around with the quantity of LODs? What happens if you crank the quantity up to a really high number? That way the net difference between each LOD is minimized, allowing for smoother transitions without having to push the LODs away from the camera or keep double data for blending. Is there a practical maximum to the number of LODs you can use?

    Replies
    1. Yes, we have played heavily with that. There are some controls in the Voxel Studio rendering interface that allow you to reconfigure LODs for your scene and see the results right away.

      There are some axioms here. The main one is that each subsequent LOD reduces voxel sampling by half. These are discrete jumps: if in LOD0 a voxel is 4 cm, in LOD1 it will be 8 cm (see the sketch at the end of this reply).

      Then there are practical concerns. If the draw call count is a concern, which it still is for most engines and platforms, the LOD configuration you choose will determine how many world chunks (or cells, as we call them) are required to draw the scene. If bandwidth to the GPU is still a concern, you will also want sufficient spatial coherence between scenes at different camera positions so you can reuse many of the chunks you have already uploaded to the GPU. This also affects how LODs are distributed across the scene.

      These two concerns are usually at odds. That is, you get greater coherence by having many smaller chunks, but then you end up needing a larger number of draw calls to render your scene.

      I believe there is no single solution for scene/LOD configuration and that it largely depends on the game or application.
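
      A small sketch of the halving rule, using the 4 cm LOD0 voxel from the example above; everything else is illustrative:

      ```cpp
      #include <cmath>
      #include <cstdio>

      // Each LOD doubles the voxel size: 4 cm at LOD0, 8 cm at LOD1, and so on.
      float voxelSizeMeters(int lod, float lod0SizeMeters = 0.04f) {
          return lod0SizeMeters * std::pow(2.0f, static_cast<float>(lod));
      }

      int main() {
          for (int lod = 0; lod <= 4; ++lod)
              std::printf("LOD%d voxel size: %.2f m\n", lod, voxelSizeMeters(lod));
          return 0;
      }
      ```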

  3. Correct me if I am oversimplifying things, but wouldn't the amount of data to be processed in each call be reduced every time you increase the number of calls? Up until the point that each call only generates a single-voxel-layer shell around the previous/closer LOD. The reduced processing per call should offset having to make a greater number of calls.

    This all said, as you have described it, the doubling effect in your current software would be far too coarse for this methodology to work.

    Thinking out loud now...

    These worlds are a composite of equation-driven procedural terrain/geometry and user-created geometry, correct? Also, the user doesn't actually see voxels so much as they see the automatically generated polygons that are created from the voxel data. From seeing your videos, it looks like you have your voxel elements as mostly static and defined by their position in the world. That works. But what if you flipped everything?

    What if your voxel data were defined relative to the camera instead? When I first envisioned this, I imagined how slow it would be to have a sweeping set of voxels being generated anew every frame that your camera rotates.

    But then I thought, wait a sec, let's say you simplify everything down to a single voxel located 1 m directly in front of the camera at all times. The calculation of that imaginary point could be simplified immensely if you included a couple of checks. The first checks whether the voxel falls within the equation of procedural content or within the data set of handmade content; if it does not, the call would end without calculating anything else. If it does, the next calculation would compare the relative distance between that point in space and the already generated polygons you can see. If that distance were to exceed a certain value, that would cue the algorithm to generate revised geometry to accommodate the variance. The closer that point is to the camera, the tighter a tolerance you would use in the variance calculation.

    The geometry creation would use arbitrary data points stored each time it was cued for a rebuild. And to keep these saved points from piling up, you could routinely perform cleanup operations that delete data points if the camera got closer and the allowed tolerance became too tight. Or, if the camera leaves the set of data points, you could wipe points that sit too close to each other for the LOD that is farther away.

    Why go through all of this for a system that would technically generate different geometry any time you looked at it? I believe you could arbitrarily use any set of voxel points around the camera, patterned in squares, circles, etc., and get an extreme amount of control over LOD distances without having to rely on calls.

    It is late, and I am NOT a programmer, so this could all be junk. Or it could help you think outside the box. If you have any questions, please ask.

    Replies
    1. The amount of data per cell is fairly constant; each cell holds the same number of voxels. Each subsequent LOD doubles the dimensions of the cell, hence each voxel becomes twice as large.

      The approach you describe already exists. You can do everything in camera space. This is how raytraced voxel engines work (see Euclideon's or Atomontage). In fact, you do not need voxels at all if you can describe surfaces as distance fields. You will see hundreds of running examples of this on ShaderToy.com (a minimal sketch of the idea appears at the end of this reply).

      The problem with working in camera space is that it addresses only one aspect of your application, which is rendering. There are other systems in any application that must also access the world, like AI, physics, etc. You could convert these systems as well, but then you really cannot integrate with traditional engines like UE4 and Unity.

      Even just rendering gets complicated when you start bringing the feature set on par with what traditional game engines offer. For instance, when you have multiple lights and reflections, the number of operations you need to perform for each camera pixel begins to mount.
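
      For reference, here is a minimal CPU-side sketch of the distance-field raymarching idea mentioned above (the kind of thing the ShaderToy examples do per pixel, here reduced to a single-sphere scene); all names and constants are illustrative:

      ```cpp
      #include <array>
      #include <cmath>
      #include <cstdio>

      using Vec3 = std::array<float, 3>;

      static float length3(const Vec3& v) {
          return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
      }

      // Signed distance to a unit sphere at the origin; stands in for a scene.
      static float sceneDistance(const Vec3& p) {
          return length3(p) - 1.0f;
      }

      // March a ray from 'origin' along unit direction 'dir'; each step is the
      // distance to the nearest surface, so the ray never overshoots it.
      static bool sphereTrace(const Vec3& origin, const Vec3& dir,
                              float maxDistance, float& hitT) {
          float t = 0.0f;
          while (t < maxDistance) {
              Vec3 p = {origin[0] + dir[0] * t,
                        origin[1] + dir[1] * t,
                        origin[2] + dir[2] * t};
              float d = sceneDistance(p);
              if (d < 1e-3f) { hitT = t; return true; }
              t += d;
          }
          return false;
      }

      int main() {
          float t = 0.0f;
          Vec3 camera = {0.0f, 0.0f, -3.0f};
          Vec3 forward = {0.0f, 0.0f, 1.0f};
          if (sphereTrace(camera, forward, 100.0f, t))
              std::printf("hit at distance %.3f\n", t);  // expected: ~2.0
          return 0;
      }
      ```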

    2. Hmm, at first I thought rendering, pathing, collisions, etc. would be based off the polygons that your meshing algorithm creates. All the camera-based voxel generation would do is create a cloud of statistical points generated around the camera rather than a full world-based data set. But again, if it's relying on that geometry for things like pathing, you would have issues trying to path through areas the camera has not yet explored.

    3. Yes, once you have done the work to generate polygons for the scene, you may as well use them to render.
