Sunday, October 6, 2013

Volume Height Maps and Triplanar Bump Mapping

I know it has been a while, but I would like to emphasize an easily missed contribution in my paper Bump Mapping Unparametrized Surfaces on the GPU. The new surface gradient based formulation (eq. 4) of Jim Blinn's perturbed normal unifies conventional bump mapping and bump mapping from height maps defined on a volume. In other words, you do not have to bake such heights to a 2D texture first to generate
Jim Blinn's perturbed normal. The formula also suggests that a well known method proposed by Ken Perlin is wrong. For a more direct visual confirmation, the difference is shown in figure 2. The black spots on the left are caused by subtracting the unprojected volume gradient, which causes the normal to implode. The result on the right uses the new surface gradient based formulation, and as you can see the errors are gone.

To give an example of how the theoretical math applies to a practical real-world case,
let us take a look at triplanar projection. In this case we have a parallel projection
along each axis, and we use a derivative/normal map for each projection.

Let's imagine we have three 2D height maps.

H_x: (s,t) --> R
H_y: (s,t) --> R
H_z: (s,t) --> R

The corresponding two-channel derivative maps are represented as (dH_x/ds, dH_x/dt), and similarly for the other two height maps. Using the blend weights described on page 64 of the presentation by Nvidia on triplanar projections, let us call them w0, w1 and w2, the mixed height function defined on a volume can now be given as

H(x,y,z) = H_z(x,y)*w0 + H_x(z,y)*w1 + H_y(z,x)*w2
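As a minimal sketch, the blended height function can be written down directly. This is illustrative Python rather than shader code, and the exponent k and the exact weight scheme are assumptions on my part; the Nvidia slides use abs(normal) raised to a power, renormalized so the weights sum to one.

```python
def triplanar_weights(n, k=3.0):
    """Blend weights from the initial surface normal.

    Assumed scheme: abs(n) raised to a power k, renormalized so the
    weights sum to one. w0 blends the projection along z, w1 along x
    and w2 along y.
    """
    ax, ay, az = abs(n[0]) ** k, abs(n[1]) ** k, abs(n[2]) ** k
    s = ax + ay + az
    return az / s, ax / s, ay / s

def blended_height(p, n, H_x, H_y, H_z):
    """H(x,y,z) = H_z(x,y)*w0 + H_x(z,y)*w1 + H_y(z,x)*w2."""
    x, y, z = p
    w0, w1, w2 = triplanar_weights(n)
    return H_z(x, y) * w0 + H_x(z, y) * w1 + H_y(z, x) * w2
```

Note how each height map is sampled with the two coordinates that span its projection plane, matching the argument order in the formula above.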

The goal is to use equations 2 and 4 on it to obtain the perturbed normal.
However, notice the blend weights w0, w1 and w2: these are not really constant.
They depend on the initial surface normal. In my paper I mention that the height map
defined on a volume does not have to be global; it can be a local extension on an open
subset of R^3. What that means (roughly) is that at every given point we just need to know some local height map function defined on an arbitrarily small volume which contains the point.

At every given point we can assume the initial normal is nearly constant within some small neighborhood. This is an approximation already applied by Jim Blinn himself:
constant "enough", at least, that its contribution, as a varying function, to the gradient
of H will be negligible. In other words, in the local height map we treat w0, w1 and w2 as constants.

Now, to evaluate equation 2 using H as the input function, we need to find the regular gradient of H:

Grad(H) = w0 * Grad(H_z) + w1 * Grad(H_x) + w2 * Grad(H_y)

        = w0 * [ dH_z/ds   dH_z/dt   0       ]^T
        + w1 * [ 0         dH_x/dt   dH_x/ds ]^T
        + w2 * [ dH_y/dt   0         dH_y/ds ]^T

          [ w0*dH_z/ds + w2*dH_y/dt ]
        = [ w0*dH_z/dt + w1*dH_x/dt ]
          [ w1*dH_x/ds + w2*dH_y/ds ]

Now that we have the gradient, it can be used in equation 2 and subsequently in equation 4. Another way to describe it is to use this gradient with Listing 3 in the paper.
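For clarity, here is a small Python sketch (illustrative, not shader code) of the same computation: the blended volume gradient above, followed by the surface gradient based perturbation, i.e. project the volume gradient onto the tangent plane, subtract it from the initial normal and renormalize. The weights are passed in as constants, per the approximation discussed earlier.

```python
import math

def triplanar_gradient(w, dHx, dHy, dHz):
    """Volume gradient of the blended height map.

    w   : (w0, w1, w2) blend weights, treated as locally constant
    dH* : two-channel derivative samples (dH/ds, dH/dt) for each
          of the three projections
    """
    w0, w1, w2 = w
    return (w0 * dHz[0] + w2 * dHy[1],
            w0 * dHz[1] + w1 * dHx[1],
            w1 * dHx[0] + w2 * dHy[0])

def perturbed_normal(n, grad):
    """Surface gradient based perturbation: remove the component of
    grad along n, subtract the remainder from n, then renormalize."""
    d = n[0] * grad[0] + n[1] * grad[1] + n[2] * grad[2]
    sg = (grad[0] - d * n[0], grad[1] - d * n[1], grad[2] - d * n[2])
    pn = (n[0] - sg[0], n[1] - sg[1], n[2] - sg[2])
    l = math.sqrt(pn[0] ** 2 + pn[1] ** 2 + pn[2] ** 2)
    return (pn[0] / l, pn[1] / l, pn[2] / l)
```

Because only the tangent-plane component of the gradient is subtracted, the normal cannot implode the way it does when the raw volume gradient is used directly.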

The derivation in this post follows OpenGL conventions; in other words, a right-handed coordinate system with Y up, and the lower left corner as the texture-space origin. Let me know if you need help reordering things for D3D.

Sunday, March 10, 2013

Links to my papers and some Summaries

The server still appears to be down, so I have decided to write a post listing some of my papers, with links to their alternative URLs.

One paper some have been asking for recently is my skin rendering paper from 2010:

Skin Rendering by Pseudo-Separable Cross Bilateral Filtering

In this paper I show how to convert the work of Eugene d'Eon accurately into screen space using the cross bilateral filter. The 2D filter region is chosen on a per-pixel basis such that it represents the rectangular bounding box of a projected disc. The disc has a constant radius in mm and is tangent to the surface at the pixel. Also, as mentioned in the paper, I never bothered to use more than one separable Gaussian pass, though Eugene suggests using six Gaussians. There is also a video available here.
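To illustrate the per-pixel filter region, here is a rough Python sketch. The pinhole scale (focal length in pixels over depth) and the tilt handling of the tangent disc are my simplifications for illustration, not the paper's actual code.

```python
import math

def filter_bbox_pixels(radius_mm, depth_mm, focal_px, n):
    """Half-extents, in pixels, of the axis-aligned bounding box of a
    disc of constant radius (mm) tangent to the surface at the pixel.

    Assumptions: a simple pinhole scale at the pixel's depth, and the
    projected disc treated as an ellipse whose half-extents shrink with
    the tilt of the surface normal n (unit length, view space, z toward
    the camera)."""
    px_per_mm = focal_px / depth_mm
    half_w = radius_mm * math.sqrt(max(0.0, 1.0 - n[0] ** 2)) * px_per_mm
    half_h = radius_mm * math.sqrt(max(0.0, 1.0 - n[1] ** 2)) * px_per_mm
    return half_w, half_h
```

A surface facing the camera yields a full circle; a surface seen edge-on collapses the box to a line, which is the behavior you want for a tangent disc.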

Next there is the bump mapping paper and the accompanying source and binary.

Bump Mapping Unparametrized Surfaces on the GPU

In this paper I describe bump mapping on the GPU in a broader context which allows combining perturbations from multiple unwraps and procedural fields of all kinds. Since the subject has already been covered extensively in my previous posts, I will not go over further details here.

A different, somewhat more academic paper I wrote is:

Microfacet Based Bidirectional Reflectance Distribution Function

The point of this paper, for me, was to develop a good solid understanding of what physically based specular reflection really is. I did this by deriving the Torrance-Sparrow model from scratch. There is a frequent trend in the graphics community to consider physically based specular as essentially Fresnel plus the Beckmann surface distribution function, which leads me to believe a lot of people never read the Torrance-Sparrow paper. That paper does not attribute any major significance to the underlying choice of isotropic surface distribution function. An interesting and less known fact is that you can remap from a Beckmann to a normalized Phong distribution and you won't be able to tell them apart. The observation is made in my section 2.4, but it is also made on page 7 of the paper by Walter et al.
In practice I have found that for a normalized Phong specular power of 8.0 and above there is no visible difference between the two.
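To make the remap concrete, here is a small Python sketch of the two distributions together with the n = 2/alpha^2 - 2 remapping noted by Walter et al. The peak values match exactly, and at moderate roughness the lobes stay within a few percent of each other near the peak.

```python
import math

def beckmann(cos_t, alpha):
    """Beckmann distribution D(theta_h) for roughness alpha."""
    c2 = cos_t * cos_t
    t2 = (1.0 - c2) / c2                       # tan^2(theta_h)
    return math.exp(-t2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)

def phong(cos_t, n):
    """Normalized Phong distribution with specular power n."""
    return (n + 2.0) / (2.0 * math.pi) * cos_t ** n

def phong_power_from_beckmann(alpha):
    """Remap a Beckmann roughness to a normalized Phong power."""
    return 2.0 / (alpha * alpha) - 2.0
```

For alpha = 0.2 the remapped power is 48, comfortably above the 8.0 threshold mentioned above, and plotting the two lobes side by side shows no visible difference.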

Regarding Fresnel: though relevant, the point of the Torrance-Sparrow paper was to dispute a previous model which attributed off-specular peaks exclusively to Fresnel. As the paper points out, that model would not work for metals, since for these the Fresnel reflectance is closer to constant.

Ironically, a term which is often marginalized in the graphics community is the shadow/masking term, also sometimes referred to as the geometry factor. This is ironic because the term is essentially the fruit of the Torrance-Sparrow paper. It is this term, combined with the division by the dot product between the view vector and the normal, that allows the model to predict off-specular peaks.
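As a sketch, here is the V-cavity shadow/masking term together with the division by N.V, arranged in the familiar Cook-Torrance style form of the Torrance-Sparrow model (the 4 in the denominator follows the Cook-Torrance convention; D and F are passed in as already-evaluated values):

```python
def geometry_factor(n_dot_h, n_dot_v, n_dot_l, v_dot_h):
    """V-cavity shadow/masking term from the Torrance-Sparrow model."""
    return min(1.0,
               2.0 * n_dot_h * n_dot_v / v_dot_h,
               2.0 * n_dot_h * n_dot_l / v_dot_h)

def specular_term(D, F, n_dot_h, n_dot_v, n_dot_l, v_dot_h):
    """Cook-Torrance style arrangement: D * G * F / (4 (N.L)(N.V))."""
    G = geometry_factor(n_dot_h, n_dot_v, n_dot_l, v_dot_h)
    return D * G * F / (4.0 * n_dot_l * n_dot_v)
```

At grazing view angles N.V shrinks faster than G, so the quotient G/(N.V) grows; this is exactly the mechanism that produces the off-specular peaks described above.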

I also wrote a long paper in which I set out to understand the work of Henrik Wann Jensen and the work of Craig Donner:

Skin Rendering: Reflectance and Integration

It was a very long and hard road, and I am having trouble coming up with something brief to say about the contents of the paper. It was very interesting to me academically, yet although I achieved my goal I arrived at the conclusion that it is not necessary to understand the model to its core to do good skin rendering. The important things to know are that the reflectance profile should exhibit exponential decay, so a spike followed by a broad base is important, and that we need most of the bleed in red, less in green and even less in blue (and of course no bleed in the specular).
That being said, if you are interested in all the details that lead to the multi-pole based BSSRDF, this paper is a very good walk-through of the underlying details.
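A toy Python sketch of such a profile, purely illustrative: the relative widths and the per-channel scatter distances are made-up numbers, chosen only to reproduce the qualitative shape described above (a spike plus an exponentially decaying broad base, with red bleeding farthest).

```python
import math

def reflectance_profile(r_mm, scatter_mm):
    """Illustrative radial profile: a narrow spike plus a broad base,
    both decaying exponentially. The 0.25/1.0 widths and equal mix are
    hypothetical values, not taken from any of the papers."""
    narrow, broad = 0.25, 1.0
    return (math.exp(-r_mm / (narrow * scatter_mm)) +
            math.exp(-r_mm / (broad * scatter_mm))) / 2.0

# hypothetical per-channel scatter distances in mm: red bleeds farthest,
# green less, blue least
SCATTER = {"r": 1.0, "g": 0.6, "b": 0.4}
```

Evaluating the profile at a fixed radius for each channel gives the expected ordering: the red channel retains the most energy away from the entry point, blue the least.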

I wrote my master's thesis at DIKU (University of Copenhagen).

Simulation of Wrinkled Surfaces Revisited

It's a rather extensive analysis of bump mapping in its original form. Many things are studied, but a core component is normal mapping on low resolution geometry as done today, and why we get unwanted lighting seams/discontinuities in our results. These are caused by tangent space not being calculated the same way in game engines and bake tools, which leads to bad errors in practice. Ideally, baking tools should allow developers to customize them such that the game developer can ensure the tool performs the exact inverse of what the game engine and shader do. Such functionality has since been added to the very popular baking tool xNormal.

Finally some oldies:

DOT3 Normal Mapping on the PS2

Separating Plane Perspective Shadow Mapping