Self Shadow

SIGGRAPH 2014 Links

As before, I’m collecting links to SIGGRAPH content: courses, talks, posters, etc. I’ll continue to update the post as new stuff appears; if you’ve seen anything that’s not here, please let me know in the comments.

Update: In a welcome change this year, conference content is freely available from the ACM Digital Library (albeit via Author-Izer, so there’s a tedious countdown timer for each link). Here are the most relevant pages:

Physically Based Shading at SIGGRAPH 2014

2pm, Wednesday 13th August. Located in the west building, rooms 211-214.

We’re back once again with the Physically Based Shading (in Theory and Practice) course at SIGGRAPH! You can find the details on the new course page, but I’ll copy the schedule here, for your convenience:

14:00 Physics and Math of Shading (Naty Hoffman)
14:20 Understanding the Masking-Shadowing Function (Eric Heitz)
14:40 Antialiasing Physically Based Shading with LEADR Mapping (Jonathan Dupuy)
15:00 Designing Reflectance Models for New Consoles (Yoshiharu Gotanda)
15:30 Break
15:45 Moving Frostbite to PBR (Sébastien Lagarde and Charles de Rousiers)
16:15 Physically Based Shader Design in Arnold (Anders Langlands)
16:35 Art Direction within Pixar’s Physically Based Lighting System (Ian Megibben and Farhez Rayani)

As you can see, the composition of this year’s lineup is a little different from previous years. To start with, we’ve incorporated a bit more theory into the first half of the course, beyond Naty Hoffman’s established and superlative introduction; Eric Heitz will be summarising his excellent JCGT paper on microfacet masking-shadowing functions, and Jonathan Dupuy will be distilling their recent work on LEADR Mapping. Jonathan also discusses a number of practical issues in his accompanying course notes.

Either side of the break, we have two game industry speakers, Yoshiharu Gotanda and Sébastien Lagarde. Yoshiharu will be covering his latest R&D at tri-Ace, targeting next-gen hardware; Sébastien will also be presenting some advances, along with sharing the Frostbite team’s experiences in bringing physically based rendering principles to their engine and a number of titles. Séb and Charles de Rousiers have also compiled a highly detailed and extensive set of course notes, which should be available in the coming days.

Arnold has fast become a (physically based) force to be reckoned with inside the VFX industry, so it’s high time that the renderer receive attention in the course. With that in mind, we have Anders Langlands (Solid Angle) talking about what makes his open-source shader library alShaders tick, the design decisions behind it, and how it plays to the strengths of Arnold.

Rounding off the session, we have Ian Megibben and Farhez Rayani from Pixar recounting the evolution of lighting over previous Toy Story films from an art perspective, as well as the challenges and benefits brought about by the switch to physically based rendering for Toy Story OF TERROR!

I hope to see you there!

One last thing…

We’ve been fortunate to have some really excellent presentations in the course over the past few years. One of the most enduring and influential has been Brent Burley’s Physically Based Shading at Disney, in 2012. Two years on, Brent has taken the time to update his course notes with a few additional details, complementing the shading model implementation that was added to Disney’s BRDF Explorer last November. Brent also revisits his “Smith G” roughness remapping, following the findings of Eric Heitz’s aforementioned paper. You can find Brent’s updated notes on the 2012 course page here.

SIGGRAPH 2013 Links

Once again, I’m collecting links to SIGGRAPH content: courses, talks, posters, etc. I’ll continue to update the post as new stuff appears.

Counting Quads

This is a DX11 follow-up to an earlier article on quad ‘overshading’. If you’ve already read that, then feel free to skip to the meat of this post.

Recount

As you likely know, modern GPUs shade triangles in blocks of 2x2 pixels, or quads. Consequently, redundant processing can happen along the edges where there’s partial coverage, since only some of the pixels will end up contributing to the final image. Normally this isn’t a problem, but – depending on the complexity of the pixel shader – it can significantly increase, or even dominate, the cost of rendering meshes with lots of very small or thin triangles.

Figure 1: Quad overshading, the silent performance killer

For more information, see Fabian Giesen’s post, plus his excellent series in general.
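To make the cost concrete, here’s a toy software rasteriser (a sketch in Python rather than anything GPU-side, ignoring the top-left fill rule for brevity) that counts the pixels a triangle actually covers versus the pixels shaded once coverage is rounded up to whole 2x2 quads:

```python
# Illustrative only: estimate quad overshading for one triangle by
# comparing covered pixels against the pixels shaded when the GPU
# launches whole 2x2 quads along partially covered edges.

def covered_pixels(tri, width, height):
    """Set of pixels whose centres lie inside the triangle."""
    (x0, y0), (x1, y1), (x2, y2) = tri

    def edge(ax, ay, bx, by, px, py):
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    pixels = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # Accept either winding order; no top-left rule here.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels.add((x, y))
    return pixels

def shaded_pixel_count(pixels):
    """Expand coverage to whole 2x2 quads, as the hardware shades them."""
    quads = {(x // 2, y // 2) for (x, y) in pixels}
    return 4 * len(quads)

tri = [(1.0, 1.0), (14.0, 2.0), (2.0, 13.0)]
covered = covered_pixels(tri, 16, 16)
shaded = shaded_pixel_count(covered)
print(f"covered: {len(covered)}, shaded: {shaded}, "
      f"overshading: {shaded / len(covered):.2f}x")
```

For small or thin triangles, the ratio climbs well above 1x, which is exactly the scenario the IHV advice is warning about.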

It’s hardly surprising, then, that IHVs have been advising for years to avoid triangles smaller than a certain size, but that’s somewhat at odds with game developers – artists in particular – wanting to increase visual fidelity and believability, through greater surface detail, smoother silhouettes, more complex shading, etc. (As a 3D programmer, part of my job involves the thrill of being stuck in the middle of these kinds of arguments!)

Traditionally, mesh LODs have helped to keep triangle density in check. More recently, deferred rendering methods have sidestepped a large chunk of the redundant shading work, by writing out surface attributes and then processing lighting more coherently via volumes or tiles. However, these are by no means definitive solutions, and nascent techniques such as DX11 tessellation and tile-based forward shading not only challenge the status quo, but also bring new relevancy to the problem of quad shading overhead.

Knowing about this issue is one thing, but, as they say: seeing is believing. In a previous article, I showed how to display hi-z and quad overshading on Xbox 360, via some platform-specific tricks. That’s all well and good, but it would be great to have the same sort of visualisation on PC, built into the game editor. It would also be helpful to have some overall stats on shading efficiency, without having to link against a library (GPUPerfAPI, PerfKit) or run a separate tool.

There are several ways of reaching these modest goals, which I’ll cover next. What I’ve settled on so far is admittedly a hack: a compromise between efficiency, memory usage, correctness and simplicity. Still, it fulfils my needs so far and I hope you find it useful as well.

Blending in Detail

Figure: base normal map + detail normal map = blended result


I’ve added a new article, Blending in Detail, written together with Colin Barré-Brisebois, on the topic of blending normal maps. We go through various techniques that are out there, as well as a neat alternative (“Reoriented Normal Mapping”) from Colin that I helped to optimise.
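To give a flavour of the approach, here’s the core of Reoriented Normal Mapping as a plain Python sketch: it builds the rotation that takes the tangent-space ‘up’ axis onto the base normal, and applies it to the detail normal. (This is an illustrative unpacked form; see the article for the optimised shader version and the derivation.)

```python
import math

def rnm_blend(base, detail):
    """Reoriented Normal Mapping sketch: rotate the detail normal from
    the tangent-space 'up' axis (0, 0, 1) onto the base normal.
    Inputs are unit tangent-space normals as (x, y, z) tuples."""
    # t = base + up; u = detail with x/y flipped (absorbed into the
    # shortened rotation, as in the article's packed shader form).
    tx, ty, tz = base[0], base[1], base[2] + 1.0
    ux, uy, uz = -detail[0], -detail[1], detail[2]
    d = (tx * ux + ty * uy + tz * uz) / tz
    rx, ry, rz = tx * d - ux, ty * d - uy, tz * d - uz
    # Renormalise to guard against quantisation in stored normals.
    length = math.sqrt(rx * rx + ry * ry + rz * rz)
    return (rx / length, ry / length, rz / length)

# A flat detail normal leaves the base normal unchanged:
print(rnm_blend((0.3, 0.4, math.sqrt(0.75)), (0.0, 0.0, 1.0)))
```

Conversely, blending a detail normal over a flat base returns the detail normal itself, which is exactly the behaviour you want from a detail-mapping operator.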

This is by no means a complete analysis – particularly as we focus on detail mapping – so we might return to the subject at a later date and tie up some loose ends. In the meantime, I hope you find the article useful. Please let us know in the comments!

Travelling Without Moving

Lately I’d been getting increasingly frustrated with the limitations of WordPress(.com), so I longed for a change. With the Easter weekend, I finally had a little extra time and energy to make the switch to Octopress, plus a dedicated web host. Hopefully that’ll encourage me to start posting again, or at least remove one major grumble. I’m also looking forward to such liberties as the ability to embed WebGL, though I can’t entirely promise that I’ll wield such power responsibly.

Existing post URLs remain the same, but if you’re one of the illustrious few who subscribe to the blog via RSS, I’m guessing that you’ll need to change over to the new feed. Update: I’m redirecting the old feed URL now, so everything should be back to normal! Speaking of RSS, as I’m now using MathJax for $\LaTeX$, it appears that I’ll need to implement a fallback there, in addition to tracking down a rendering issue with Chrome. Please let me know if you spot any other oddities.

Righting Wrap (Part 2)

Wrapping Paper

I first tinkered with SH wrap shading (as described in part 1) for Splinter Cell: Conviction, since we were using a couple of models [1][2] for some character-specific materials. Unfortunately, due to the way that indirect character lighting was performed, it would have required additional memory that we couldn’t really justify at that point in development. Consequently, this work was left on the cutting room floor and I only got as far as testing out Green’s model [1].

Recently, however, I spotted that Irradiance Rigs [3] covers similar ground. At the very end of the short paper, they briefly present a generalisation of Valve’s Half Lambert model [2] and the SH convolution terms for the first three bands.
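For reference, both wrap models, and the generalisation, can be written in a common parameterised form (my notation, not the paper’s), with $\mu = \mathbf{n} \cdot \mathbf{l}$, $w$ controlling the amount of wrap, and $n$ an exponent:

$$f_{\text{wrap}}(\mu) = \left( \frac{\mu + w}{1 + w} \right)^{n}$$

Green’s model [1] corresponds to $n = 1$, while Valve’s Half Lambert [2] is the fixed case $w = 1$, $n = 2$.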

Righting Wrap (Part 1)

A while back, Steve McAuley and I were discussing physically based rendering miscellanea over a quiet pint – hardly a stretch of the imagination, since we’re English 3D programmers after all! Anyway, it turned out that we both had plans to write up a few thoughts in relation to wrap shading, and, following some gentle arm-twisting, Steve has posted his. I suggest that you go and read that first if you haven’t already, then return here for a continuation of the subject.

Bad Wrap

Wrap shading has its uses when more accurate techniques are too expensive, or simply to achieve a certain aesthetic, but common models [1][2] have some deficiencies out of the box. Neither of these is energy conserving and they don’t really play well with shadows either. On top of that, Valve’s Half Lambert model [2] has a fixed amount of wrap, so it can’t be tuned to suit different materials (or, perhaps, to limit shadow oddities). I’ll come back to the point about flexibility in part 2, but first I’d like to discuss another factor that’s easily overlooked: environmental lighting.
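As an aside, the energy conservation point is easy to verify numerically. Here’s a quick Python sketch that integrates the basic linear wrap term over the sphere of light directions, alongside a normalised variant (the extra $1/(1+w)$ factor is the widely used fix, not something from either original model):

```python
import math

def wrap(mu, w):
    """Basic linear wrap term: not energy conserving."""
    return max(mu + w, 0.0) / (1.0 + w)

def wrap_normalised(mu, w):
    """Normalised variant: the extra 1/(1+w) keeps the integral over
    the sphere equal to plain Lambert's (pi), for any wrap amount."""
    return max(mu + w, 0.0) / ((1.0 + w) * (1.0 + w))

def integrate(f, w, steps=50000):
    """Midpoint-rule integral of f over light directions: 2*pi * int f(mu) dmu."""
    total = 0.0
    for i in range(steps):
        mu = -1.0 + 2.0 * (i + 0.5) / steps
        total += f(mu, w) * (2.0 / steps)
    return 2.0 * math.pi * total

for w in (0.0, 0.5, 1.0):
    print(w, integrate(wrap, w), integrate(wrap_normalised, w))
```

The basic term integrates to $\pi(1+w)$, so it emits more energy than it receives as soon as $w > 0$, whereas the normalised variant stays at $\pi$ throughout.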

Perpendicular Possibilities

Figure 1: Major axes for original (left), swizzle (mid) and perpendicular (right) vectors

Introduction

Two months ago, there was a question (and subsequent discussion) on Twitter as to how to go about generating a perpendicular unit vector, preferably without branching. It seemed about time that I finally post something more complete on the subject, since there are various ways to go about doing this, as well as a few traps awaiting the unwary programmer.
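To set the scene, here’s one of the standard approaches as a Python sketch (not the final method from this post): pick between two ‘swizzled’ candidates, each perpendicular by construction, choosing whichever avoids degeneracy. The comparison is written as an arithmetic select, which maps to branchless code on the GPU:

```python
import math

def perpendicular(v):
    """Return a unit vector perpendicular to the unit vector v.
    Illustrative sketch of the swizzle-and-select approach."""
    x, y, z = v
    # (-y, x, 0) is perpendicular but degenerates when x = y = 0;
    # (0, -z, y) is perpendicular but degenerates when y = z = 0.
    # At least one candidate is always safe for a unit vector.
    m = 1.0 if abs(x) > abs(z) else 0.0   # select weight
    px = m * -y
    py = m * x + (1.0 - m) * -z
    pz = (1.0 - m) * y
    length = math.sqrt(px * px + py * py + pz * pz)
    return (px / length, py / length, pz / length)

print(perpendicular((0.0, 0.0, 1.0)))  # perpendicular to +z
```

One of the traps alluded to above: skip the comparison and always use a single swizzle, and the normalisation blows up for the degenerate inputs noted in the comments.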