Marco Salvi of Heavenly Sword on High Dynamic Range Lighting


Ninja Theory's Marco Salvi sets the record straight on High Dynamic Range (HDR) lighting in an interview with PSINext's Carl Bender.

According to Salvi, the term has become very common and is often misused. He clarified that rendering an HDR image means storing all of the information needed to represent the amount of light passing through every pixel, capturing a scene in its full range of color and luminance.
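
To make the distinction concrete, here is a minimal sketch in Python with NumPy (the scene values are hypothetical) showing how an HDR buffer keeps per-pixel light intensity that a plain 8-bit LDR buffer simply clips away:

```python
import numpy as np

# A hypothetical 2x2 scene in linear luminance units: deep shadow,
# midtones, and one pixel looking straight at a bright light source.
hdr = np.array([[0.01, 0.5],
                [1.0, 60.0]], dtype=np.float32)

# HDR rendering keeps these floats as-is. A naive 8-bit LDR buffer can
# only represent scene values in [0, 1]; everything brighter clips, so
# a pixel at 60.0 and a pixel at 2.0 would become indistinguishable.
ldr = (np.clip(hdr, 0.0, 1.0) * 255).astype(np.uint8)
print(ldr)  # [[  2 127] [255 255]]: the highlight detail is gone
```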

“Unfortunately, current low-cost display technologies (common LCDs, CRTs, etc.) can’t properly display an HDR image, so we need to go through a process called ‘tone mapping’ to remap our image to an LDR (Low Dynamic Range) image in order to display it on a screen,” says Salvi. Tone mapping, he adds, can also be used to simulate the way our eyes slowly adapt to different light conditions.
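
The interview doesn't say which operator Heavenly Sword uses, but the classic Reinhard curve below illustrates the basic idea of tone mapping: compressing unbounded luminance into the displayable range. The exposure parameter is where a simple eye-adaptation effect could feed in over time:

```python
import numpy as np

def reinhard_tone_map(luminance, exposure=1.0):
    """Classic Reinhard operator, L / (1 + L): one common choice,
    not necessarily the operator Heavenly Sword actually uses."""
    scaled = luminance * exposure
    return scaled / (1.0 + scaled)

hdr = np.array([0.01, 0.5, 1.0, 60.0], dtype=np.float32)
# Easing `exposure` toward 1 / average_scene_luminance over several
# frames is one simple way to mimic the slow eye adaptation Salvi mentions.
ldr = (reinhard_tone_map(hdr) * 255).astype(np.uint8)
print(ldr)  # [  2  85 127 250]: bright values compress instead of clipping
```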

When asked what kind of scenery within a game would benefit from HDR, he quickly answered, “Compared to LDR rendering, every scene that requires a very low or very high global luminance image, or an image with a high contrast ratio, would benefit from HDR lighting.”

In other words, HDR is a good way to keep images from blowing out to white or crushing to black, either of which costs gamers the lighting detail that makes a scene pop off the screen.

If you want the full 101 on HDR, follow the read link below this article. In the meantime, here’s an excerpt:

PSINext: People have come to associate high dynamic range lighting with on-chip hardware support for either FP16 or FP32 HDR rendering. How does Ninja Theory implement NAO32, and can it be considered ‘real’ HDR?

Marco: The FP16 and FP32 rendering formats let a developer store high-precision per-pixel information (8 and 16 bytes per pixel, respectively), so they make it easy to render and store an HDR image. Unfortunately, these framebuffer formats are inherently slow because they require more memory bandwidth and more memory space: an FP16 720p image with 4x anti-aliasing requires about 30 MB of memory!
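
That figure checks out under the usual assumption of a four-channel (RGBA) FP16 target, as a quick back-of-the-envelope calculation in Python shows:

```python
width, height = 1280, 720   # 720p
samples = 4                 # 4x multisample anti-aliasing
bytes_per_pixel = 4 * 2     # assumed RGBA target, FP16 = 2 bytes per channel

total_bytes = width * height * samples * bytes_per_pixel
print(f"{total_bytes / 1e6:.1f} MB")  # 29.5 MB, roughly the 30 MB Salvi cites
```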

At the same time, it’s important to understand that it doesn’t matter how we store our HDR images, so long as we find a way to encode them without losing too much information.

The RGB color space is not very efficient at encoding HDR images, so after a bit of research we found another color space that is far better suited to representing them. Its name is CIE Luv, and it splits a color into three components: one is unbounded and represents how intense the color is (luminance), while the other two are normalized between 0 and 1.
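
As a rough sketch of that split (using the standard CIE 1976 u′, v′ chromaticity math from linear Rec.709 RGB, not Ninja Theory's exact code), note how the luminance component carries all of the dynamic range while the two chroma coordinates stay bounded no matter how bright the color gets:

```python
import numpy as np

# Linear Rec.709 RGB to CIE XYZ (D65 white point).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def rgb_to_y_uv(rgb):
    """Return (Y, u', v'): unbounded luminance plus two bounded
    CIE 1976 chromaticity coordinates."""
    x, y, z = RGB_TO_XYZ @ np.asarray(rgb, dtype=np.float64)
    denom = x + 15.0 * y + 3.0 * z + 1e-10  # guard against pure black
    return y, 4.0 * x / denom, 9.0 * y / denom

# The same hue at wildly different intensities: only Y changes, while
# u' and v' stay identical, which is what makes the split so friendly
# to storing chroma at low precision.
print(rgb_to_y_uv([60.0, 50.0, 40.0]))
print(rgb_to_y_uv([0.6, 0.5, 0.4]))
```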

Gregory Ward, a pioneer of HDR imaging, exploited this color space many years ago in a file format he called LogLuv, so we built upon that work and customized it for our purposes.
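
The excerpt doesn't give NAO32's exact bit layout, but a LogLuv-style encoder along these lines shows the core trick: spend two 8-bit channels on log-encoded luminance and the remaining two on the bounded chroma coordinates, so an HDR color fits in a conventional 32-bit render target. The luminance range constants here are illustrative assumptions:

```python
import numpy as np

Y_MIN, Y_MAX = 1e-6, 1e6  # illustrative range, not NAO32's actual constants
LOG_MIN, LOG_SPAN = np.log2(Y_MIN), np.log2(Y_MAX) - np.log2(Y_MIN)

def encode(y, u, v):
    """Pack (Y, u', v') into four 8-bit channels: 16 bits of
    log2-luminance split across two channels, with chroma (assumed
    pre-normalized to [0, 1]) in the other two."""
    t = (np.log2(np.clip(y, Y_MIN, Y_MAX)) - LOG_MIN) / LOG_SPAN
    q = int(t * 65535)
    return q >> 8, q & 0xFF, int(u * 255), int(v * 255)

def decode(hi, lo, u8, v8):
    t = ((hi << 8) | lo) / 65535.0
    return 2.0 ** (t * LOG_SPAN + LOG_MIN), u8 / 255.0, v8 / 255.0

packed = encode(60.0, 0.21, 0.47)
print(packed)           # four bytes: (165, 238, 53, 119)
print(decode(*packed))  # luminance round-trips to within a tiny fraction of a percent
```

Because luminance is stored logarithmically, the relative precision is uniform across the whole range, which is what lets 16 bits span many orders of magnitude without visible banding.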
