Source: http://blog.selfshadow.com/publications/blending-in-detail/
Blending in Detail
By Colin Barré-Brisebois and Stephen Hill
The x, why, z
It’s a seemingly simple problem: given two normal maps, how do you combine them? In particular, how do you add detail to a base normal map in a consistent way? We’ll be examining several popular methods as well as covering a new approach, Reoriented Normal Mapping, that does things a little differently.
This isn’t an exhaustive survey with all the answers, but hopefully we’ll encourage you to re-examine what you’re currently doing, whether it’s at run time or in the creation process itself.
Does it Blend?
Texture blending crops up time and again in video game rendering. Common uses include: transitioning between materials, breaking up tiling patterns, simulating local deformation through wrinkle maps, and adding micro details to surfaces. We’ll be focusing on the last scenario here.
The exact method of blending depends on the context; for albedo maps, linear interpolation typically makes sense, but normal maps are a different story. Since the data represents directions, we can’t simply treat the channels independently as we do for colours. Sometimes this is disregarded for speed or convenience, but doing so can lead to poor results.
Linear Blending
To see this in action, let’s take a look at a simple case of adding high-frequency detail to a base normal map (a cone) in a naive way:
Figure 1: (From left to right) base map, detail map and the result of linear blending
Here we’re just unpacking the normal maps, adding them together, renormalising and finally repacking for visualisation purposes:
float3 n1 = tex2D(texBase,   uv).xyz*2 - 1;
float3 n2 = tex2D(texDetail, uv).xyz*2 - 1;
float3 r  = normalize(n1 + n2);
return r*0.5 + 0.5;
The output is similar to averaging, and because the textures are quite different, we end up ‘flattening’ both the base orientations and the details. This leads to unintuitive behaviour even in simple situations, such as when one of the inputs is flat: we expect it to have no effect, but instead we get a shift towards $[0, 0, 1]^{\mathsf{T}}$.
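To make the flattening concrete, here’s a small worked example (ours, not from the article): per-channel averaging of a flat normal with a 45° normal, followed by renormalisation:

$$\operatorname{normalize}\!\left(\frac{1}{2}\begin{bmatrix}0\\0\\1\end{bmatrix} + \frac{1}{2}\begin{bmatrix}0.707\\0\\0.707\end{bmatrix}\right) \approx \begin{bmatrix}0.383\\0\\0.924\end{bmatrix}$$

The flat map should have had no effect, yet the 45° input has been pulled halfway back towards $[0, 0, 1]^{\mathsf{T}}$, leaving a normal tilted at only 22.5°.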
Overlay Blending
A common alternative on the art side is the Overlay blend mode:
Figure 2: Overlay blending
Here’s the reference code:
float3 n1 = tex2D(texBase,   uv).xyz;
float3 n2 = tex2D(texDetail, uv).xyz;
float3 r  = n1 < 0.5 ? 2*n1*n2 : 1 - 2*(1 - n1)*(1 - n2);
r = normalize(r*2 - 1);
return r*0.5 + 0.5;
Unity Example code
#define BlendOverlayf(base, blend) ((base) < 0.5 ? 2.0*(base)*(blend) : 1.0 - 2.0*(1.0 - (base))*(1.0 - (blend)))

float4 norm  = tex2D(_BumpMap,  IN.uv_BumpMap);
float4 norm2 = tex2D(_BumpMap2, IN.uv_BumpMap2);
float4 dest  = BlendOverlayf(norm2, norm); // Overlay, per channel, on the packed values
dest = lerp(norm2, dest, _Opacity);        // fade between the second map and the blended result
o.Normal = UnpackNormal(dest);
While there does appear to be an overall improvement, the combined normals still look incorrect. That’s hardly surprising though, because we’re still processing the channels independently! In fact there’s no rationale for using Overlay except that it tends to behave a little better than the other Photoshop blend modes, which is why it’s favoured by some artists.
Partial Derivative Blending
Things would be a lot more sane if we could work with height instead of normal maps, since standard operations would function predictably. Sadly, height maps are not always available during the creation process, and can be impractical to use directly for shading.
Fortunately, equivalent results can be achieved by using the partial derivatives (PDs) instead, which are trivially computed from the normal maps themselves. We won’t go into the theory here, since Jörn Loviscach has already covered the topic in some depth [1]. Instead, let’s go right ahead and apply this approach to the problem at hand:
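For reference, here’s the relationship being exploited (a standard identity, not spelled out in the article): a tangent-space normal derived from a height field h is proportional to $[-h_x, -h_y, 1]^{\mathsf{T}}$, so the partial derivatives can be recovered directly from the normal (up to the sign convention folded into the code below):

$$h_x = -\frac{n_x}{n_z}, \qquad h_y = -\frac{n_y}{n_z}$$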
Figure 3: Partial derivative blending
Again, here’s some reference code:
float3 n1 = tex2D(texBase,   uv).xyz*2 - 1;
float3 n2 = tex2D(texDetail, uv).xyz*2 - 1;
float2 pd = n1.xy/n1.z + n2.xy/n2.z; // Add the PDs
float3 r  = normalize(float3(pd, 1));
return r*0.5 + 0.5;
In practice, the 3rd and 4th lines should be replaced with the following for robustness, since the divisions by z can blow up for steep normals:

float3 r = normalize(float3(n1.xy*n2.z + n2.xy*n1.z, n1.z*n2.z));
Looking at Figure 3, the output is clearly much better than before. The combined map now resembles a perturbed version of the base, as one would expect. By simply adding the partial derivatives together, the flat normal case is handled correctly as well.
Alas, the process isn’t perfect, because detail remains subdued over the surface of the cone. That said, it does work well when used to fade between materials instead (see [1] or [2] for examples):
float2 pd = lerp(n1.xy/n1.z, n2.xy/n2.z, blend);
float3 r  = normalize(float3(pd, 1));
Whiteout Blending
At SIGGRAPH’07, Christopher Oat described the approach used by the AMD Ruby: Whiteout demo [3] for the purpose of adding wrinkles:
Figure 4: Whiteout blending
The code looks a lot like the PD one in its second form, except that there’s no scaling by z for the xy components:

float3 r = normalize(float3(n1.xy + n2.xy, n1.z*n2.z));
With this modification, detail is more apparent over the cone, while flat normals still act intuitively.
UDN Blending
Finally, an even simpler form appears on the Unreal Developer Network [4].
Figure 5: UDN blending
The only change from the last technique is that it drops the multiplication by n2.z:

float3 r = normalize(float3(n1.xy + n2.xy, n1.z));
Another way to view this is that it’s linear blending, except that we only add the x and y offsets from the detail map.
As we’ll see later, this can save some shader instructions over Whiteout, which is always useful for lower-end platforms. However, it also leads to some detail reduction over flatter base normals – see the corners of the output for the worst case – although this may go unnoticed. In fact, on the whole, the visual difference over Whiteout is hard to detect here. See Figure 6 in the next section for a better visual comparison.
Detail Oriented
Now for our own method. We were looking for the following properties in order to provide intuitive behaviour to artists:
- Logical: the operation has a clear mathematical basis (e.g. geometric interpretation)
- Handles identity: if one of the normal maps is flat, the output matches the other normal map
- No flattening: the strength of both normal maps is preserved
Although the Whiteout solution appears to work well, it’s a bit fuzzy on the first and last points.
To meet these goals, our strategy involves rotating (or reorienting) the detail map so that it follows the ‘surface’ of the base normal map, just as tangent-space normals are transformed by the underlying geometry when lighting in object or world space. We’ll refer to this as Reoriented Normal Mapping (RNM). Here’s the result compared to the last two techniques:
Figure 6: Whiteout (left), UDN (centre) and RNM (right)

The difference in detail is noticeable, and this shows through in the final shading (see demos at the end).
To be clear, we’re not the only ones to think of this. Essentially the same idea – developed for adding pores to skin as part of a Unity tech demo – was recently presented at GDC by Renaldas Zioma [5]. There are probably earlier examples too, although we’ve struggled to find any so far. That said, there are some advantages to our approach over the Unity one, as we’ll explain once we’ve dived into the implementation.
The Nitty Gritty
Okay, brace yourself for some maths. Let’s say that we have a geometric normal $\mathbf{s}$, a base normal $\mathbf{t}$ and a detail normal $\mathbf{u}$, all in tangent space. We want the rotation that takes $\mathbf{s}$ onto $\mathbf{t}$, applied to $\mathbf{u}$:

Figure 7: Reorienting a detail normal u (left) so it follows the base normal map (right)

We can achieve this transform via the shortest arc quaternion [6]:

$$\mathbf{q} = \frac{1}{\sqrt{2(1 + \mathbf{s} \cdot \mathbf{t})}}\begin{bmatrix}\mathbf{s} \times \mathbf{t} \\ 1 + \mathbf{s} \cdot \mathbf{t}\end{bmatrix}$$

The rotation of $\mathbf{u}$ can then be performed in the standard way [7]:

$$\mathbf{u}' = \mathbf{q}\,\mathbf{u}\,\mathbf{q}^{-1}$$

As shown by [8], this reduces to:

$$\mathbf{u}' = \mathbf{u} + 2\,\mathbf{q}_{xyz} \times (\mathbf{q}_{xyz} \times \mathbf{u} + q_w\,\mathbf{u})$$

Since we are operating in tangent space, by convention $\mathbf{s} = [0, 0, 1]^{\mathsf{T}}$. If we substitute this into $\mathbf{q}$ and simplify, we obtain:

$$\mathbf{q} = \frac{1}{\sqrt{2(1 + t_z)}}\begin{bmatrix}-t_y \\ t_x \\ 0 \\ 1 + t_z\end{bmatrix}$$

Which further reduces to:

$$\mathbf{u}' = \frac{\hat{\mathbf{t}}\,(\hat{\mathbf{t}} \cdot \hat{\mathbf{u}})}{\hat{t}_z} - \hat{\mathbf{u}}, \qquad \hat{\mathbf{t}} = \begin{bmatrix}t_x \\ t_y \\ t_z + 1\end{bmatrix}\!, \quad \hat{\mathbf{u}} = \begin{bmatrix}-u_x \\ -u_y \\ u_z\end{bmatrix}$$

Here is the HLSL implementation, where the unpacked u and t correspond to $\hat{\mathbf{u}}$ and $\hat{\mathbf{t}}$ above:
float3 t = tex2D(texBase,   uv).xyz*float3( 2,  2, 2) + float3(-1, -1,  0);
float3 u = tex2D(texDetail, uv).xyz*float3(-2, -2, 2) + float3( 1,  1, -1);
float3 r = t*dot(t, u)/t.z - u;
return r*0.5 + 0.5;
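As a quick sanity check of the final equation: a flat detail map gives $\hat{\mathbf{u}} = [0, 0, 1]^{\mathsf{T}}$, hence $\hat{\mathbf{t}} \cdot \hat{\mathbf{u}} = \hat{t}_z$, and the result collapses to the base normal, satisfying the identity property listed earlier:

$$\mathbf{u}' = \frac{\hat{\mathbf{t}}\,\hat{t}_z}{\hat{t}_z} - \begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}t_x\\t_y\\t_z+1\end{bmatrix} - \begin{bmatrix}0\\0\\1\end{bmatrix} = \mathbf{t}$$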
A potentially neat property of this method is that the length of u is preserved, so if t is also unit length then no normalisation is required! However, this is unlikely to hold true in practice due to quantisation, compression, mipmapping and filtering. You may not see a significant impact on diffuse shading, but it can really affect energy-conserving specular. Given that, we recommend normalising the result:
float3 r = normalize(t*dot(t, u) - u*t.z);
Devil in the Details
Whilst we were preparing this article, we learned of an upcoming paper by Jeppe Revall Frisvad [9] that uses the same strategy for rotating a local vector. Here’s the relevant code adapted to HLSL:
float3 n1 = tex2D(texBase,   uv).xyz*2 - 1;
float3 n2 = tex2D(texDetail, uv).xyz*2 - 1;

// Build an orthonormal basis from n1 (Frisvad [9]) and use it to transform n2
float3 b1;
float3 b2;

if (n1.z < -0.9999999) // Handle the singularity
{
    b1 = float3( 0, -1, 0);
    b2 = float3(-1,  0, 0);
}
else
{
    float a = 1/(1 + n1.z);
    float b = -n1.x*n1.y*a;
    b1 = float3(1 - n1.x*n1.x*a, b, -n1.x);
    b2 = float3(b, 1 - n1.y*n1.y*a, -n1.y);
}

float3 r = n2.x*b1 + n2.y*b2 + n2.z*n1;
return r*0.5 + 0.5;
Our version is written with GPUs in mind and requires fewer ALU operations in this context, whereas Jeppe’s implementation is appropriate for situations where the basis can be reused, such as Monte Carlo sampling. Another thing to note is that there’s a singularity when the base normal points in the opposite direction to the geometric normal, i.e. $\mathbf{t} = [0, 0, -1]^{\mathsf{T}}$. Jeppe checks for this, but in our case we can guard against it within the art pipeline instead.
More importantly, we could argue that such a normal makes no sense for a tangent-space normal map in the first place, since it points into the surface; any offending values can be clamped and renormalised prior to compression.
As for shading, we haven’t seen any adverse effects from negative z values – i.e., where the reoriented normal points into the surface – but this is certainly something to bear in mind. We’re interested in hearing your experiences.
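For illustration, a pipeline-side guard might look like the following sketch (ours; the threshold is an arbitrary choice, not a value from the article):

// Hypothetical tool-side guard: clamp tangent-space normals away from
// [0, 0, -1] and renormalise before quantisation/compression.
float3 GuardNormal(float3 n)
{
    const float minZ = -0.999; // arbitrary safety margin from the singularity
    n.z = max(n.z, minZ);
    return normalize(n);
}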
Unity Blending
Let’s return now to the approach taken for the Unity tech demo. Like Jeppe, Renaldas also uses a basis to transform the secondary normal. This is created by rotating the base normal around the x and y axes to generate the other two rows of the matrix:
float3 n1 = tex2D(texBase,   uv).xyz*2 - 1;
float3 n2 = tex2D(texDetail, uv).xyz*2 - 1;

float3x3 nBasis = float3x3(
    float3(n1.z, n1.y, -n1.x), // +90 degree rotation around y axis
    float3(n1.x, n1.z, -n1.y), // -90 degree rotation around x axis
    float3(n1.x, n1.y,  n1.z));

float3 r = normalize(n2.x*nBasis[0] + n2.y*nBasis[1] + n2.z*nBasis[2]);
return r*0.5 + 0.5;
Note: This code differs slightly from the version in the “Mastering DirectX 11 with Unity” slides. The first row of the basis has been corrected.
However, the basis is only orthonormal when n1 is exactly $[0, 0, 1]^{\mathsf{T}}$. To see what effect this has, we can visualise the transformation of a hemisphere of points (tilting n1 towards the x axis) in place of n2:
Figure 8: Unity basis (top row) vs quaternion transform (bottom row)
With Unity, the points collapse to a circle as n1 reaches the x axis, because the basis goes to:

$$\begin{bmatrix} 0 & 0 & -1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
In contrast, there’s no such issue for the quaternion transform. This is also reflected in the blended output:
Figure 9: Reoriented Normal Mapping (left) vs Unity method (right)
Start Your Engines
Giving representative performance figures is an impossible task, as it very much depends on a number of factors: the platform, surrounding code, choice of normal map encoding (which might be unique to your game), and possibly even the shader compiler.
As a guide, we’ve taken the core of the various techniques – minus texture reads and repacking – and optimised them for the Shader Model 3.0 virtual instruction set (see Appendix). Here’s how they fare in terms of instruction count:
| Method   | SM3.0 ALU Inst. |
|----------|-----------------|
| Linear   | 5 |
| Overlay  | 9 |
| PD       | 7 |
| Whiteout | 7 |
| UDN      | 5 |
| RNM *    | 8 |
| Unity    | 8 |

Table 1: Instruction costs for the different methods
* This includes normalisation. If it turns out that you don’t need it, then RNM is 6 ALU instructions.
In reality the GPU may be able to pair some instructions, and certain operations could be more expensive than others. In particular, normalize expands to dot, rcp and mul here, but a certain console provides a single-instruction nrm at half precision.
For space (and time!), we haven’t included code and stats for two-component normal map encodings, but note that with UDN blending the z component of the detail normal isn’t used, making the technique particularly attractive in this case.
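As an illustration, here’s a minimal sketch (ours, not from the article) of UDN blending with two-component normal maps; only the base map’s z needs to be reconstructed:

// Assumes two-component (e.g. BC5-style) maps storing xy in [0, 1].
float2 n1xy = tex2D(texBase,   uv).xy*2 - 1;
float2 n2xy = tex2D(texDetail, uv).xy*2 - 1;
float  n1z  = sqrt(saturate(1 - dot(n1xy, n1xy))); // base z only
float3 r    = normalize(float3(n1xy + n2xy, n1z));
return r*0.5 + 0.5;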
A Light Demo
By now, I’m sure you’re wondering how these methods compare under lighting, so here is a simple WebGL demo with a moving light source. We’ve also put together a RenderMonkey project, so you can easily test things out with your own textures.
Conclusions
Based on the analysis and results, it’s clear to us that Linear and Overlay blending have no redeeming value when it comes to detail normal mapping. Even when GPU cycles are at a premium, UDN represents a better option, and it should be easy to replicate in Photoshop as well.
Whether you see any benefit from Whiteout over UDN could depend on your textures and shading model – in our example, there’s very little separating them. Beyond these, RNM can make a difference in terms of retaining more detail, and at a similar instruction cost, so we hope you find it a compelling alternative.
In addition to two component formats, we also haven’t covered fading strategies, integration with parallax mapping, or specular anti-aliasing. These are topics we’d like to address in the future.
Acknowledgements
Firstly, credit should be given to Gabriel Lassonde for the initial idea of rotating normals using quaternions for the purpose of blending. Secondly, we would like to thank Pierric Gimmig, Steve McAuley and Morgan McGuire for helpful comments, plus David Massicotte for creating the example normal maps.
References
[1] Loviscach, J., “Care and Feeding of Normal Vectors”, ShaderX6, Charles River Media, 2008.
[2] Mikkelsen, M., “How to do more generic mixing of derivative maps?”, 2012.
[3] Oat, C., “Real-Time Wrinkles”, Advanced Real-Time Rendering in 3D Graphics and Games, SIGGRAPH Course, 2007.
[4] “Material Basics: Detail Normal Map”, Unreal Developer Network.
[5] Zioma, R., Green, S., “Mastering DirectX 11 with Unity”, GDC 2012.
[6] Melax, S., “The Shortest Arc Quaternion”, Game Programming Gems, Charles River Media, 2000.
[7] Akenine-Möller, T., Haines, E., Hoffman, N., Real-Time Rendering 3rd Edition, A. K. Peters, Ltd., 2008.
[8] Watt, A., Watt, M., Advanced Animation and Rendering Techniques, Addison-Wesley, 1992.
[9] Frisvad, J. R., “Building an Orthonormal Basis from a 3D Unit Vector Without Normalization”, Journal of Graphics Tools 16(3), 2012.
Appendix
Optimised blending methods
[The 60-line optimised HLSL listing did not survive in this copy; see the source article for the Shader Model 3.0 versions of each method.]
Normal Blend simple code
float3 n1 = UnpackNormal(tex2D(_BumpMap,  IN.uv_BumpMap));
float3 n2 = UnpackNormal(tex2D(_BumpMap2, IN.uv_BumpMap2));
o.Normal = normalize(float3(n1.xy + n2.xy, n1.z)); // UDN-style blend
Note: Don’t write your own versions of operations like normalize, dot and inversesqrt, since Unity converts these to optimal code. Maths functions such as pow, exp, log, cos, sin and tan are very costly, so where possible consider a texture lookup instead (e.g., a colour curve baked into a texture).
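As a minimal sketch of the lookup idea (the names here are illustrative, not from the original post):

// Hypothetical example: _CurveTex holds pow(u, k) baked per texel, so a
// costly pow() becomes a single texture fetch.
sampler2D _CurveTex;

float ShapedFalloff(float x) // x in [0, 1]
{
    return tex2D(_CurveTex, float2(x, 0.5)).r;
}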