RGBE or Radiance HDR is an image format invented by Gregory Ward Larson for the Radiance rendering system. It stores pixels as one byte each for the RGB (red, green, and blue) values with a one-byte shared exponent. Thus it stores four bytes per pixel.
|Internet media type||
|Magic number||23 3f 52 41 44 49 41 4e 43 45 0a|
|Type of format||lossless image format|
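The magic number above is simply the ASCII header line "#?RADIANCE" followed by a line feed. A minimal format check could compare a file's first bytes against it; this is a sketch, and "is_radiance_hdr" is a hypothetical helper name, not part of any library:

```python
# b"#?RADIANCE\n" -- the magic number from the table above, decoded from hex
RADIANCE_MAGIC = bytes.fromhex("233f52414449414e43450a")

def is_radiance_hdr(path):
    """Return True if the file at 'path' starts with the Radiance HDR magic."""
    with open(path, "rb") as f:
        return f.read(len(RADIANCE_MAGIC)) == RADIANCE_MAGIC
```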
RGBE allows pixels to have the dynamic range and precision of floating-point values in a relatively compact data structure (32 bits per pixel). When images are generated from light simulations, the range of per-pixel color intensity values is often much greater than will fit into the standard 0..255 range of 8-bit-per-channel (24-bit) image formats. As a result, bright pixels are either clipped to 255, or, if the image is scaled down to avoid clipping, dimmer pixels lose all their precision.
By using a shared exponent, the RGBE format gains some of the advantages of floating-point values whilst using fewer than the 32 or 16 bits per color component that would be needed for single-precision or half-precision data in the IEEE floating-point format, and with a higher dynamic range than half precision. An exponent value of 128 maps integer colors [0..255] into [0..1) floating-point space.
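The shared-exponent conversion can be sketched in Python using the frexp/ldexp approach of Ward's reference code; the function names here are illustrative, not from any library:

```python
import math

def float_to_rgbe(r, g, b):
    """Encode linear RGB floats as four RGBE bytes (shared-exponent scheme)."""
    v = max(r, g, b)
    if v < 1e-32:                        # too dim to represent: all zeros
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)   # v = mantissa * 2**exponent, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / v         # brightest channel maps near the top of 0..255
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_to_float(rm, gm, bm, e):
    """Decode four RGBE bytes back to linear RGB floats."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))   # 2**(e - 128) / 256
    return (rm * f, gm * f, bm * f)
```

Note that with e = 128 the decode factor is 1/256, so mantissa bytes 0..255 land in [0..1), matching the statement above.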
A second variant of the format uses the XYZ color model with a shared exponent. The MIME type and file extension are identical, so applications reading this file format need to interpret the embedded information on the color model.
Greg Ward provides code to handle RGBE files in his Radiance renderer.
JPEG XT Part 2 (Dolby JPEG-HDR) and Part 7 Profile A are based on the RGBE format.
Half precision

In computing, half precision is a binary floating-point computer number format that occupies 16 bits (two bytes in modern computers) in computer memory.
In the IEEE 754-2008 standard, the 16-bit base-2 format is referred to as binary16. It is intended for storage of floating-point values in applications where higher precision is not essential for performing arithmetic computations.
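Python's struct module can pack binary16 directly via the "e" format code, which makes it easy to inspect the bit layout; a small sketch:

```python
import struct

def float_to_half_bits(x):
    """Round x to IEEE 754 binary16 and return its 16 raw bits as an int."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

# 1.0 encodes as sign 0, exponent 01111 (bias 15), mantissa 0 -> 0x3C00;
# the largest finite half-precision value is 65504 -> 0x7BFF.
```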
Although implementations of the IEEE half-precision floating point are relatively new, several earlier 16-bit floating-point formats have existed, including that of Hitachi's HD61810 DSP of 1982, Scott's WIF, and the 3dfx Voodoo Graphics processor.

Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. ILM was searching for an image format that could handle a wide dynamic range, but without the hard drive and memory cost of the floating-point representations that are commonly used for floating-point computation (single and double precision). The hardware-accelerated programmable shading group led by John Airey at SGI (Silicon Graphics) invented the s10e5 data type in 1997 as part of the 'bali' design effort. This is described in a SIGGRAPH 2000 paper (see section 4.3) and further documented in US patent 7518615.

This format is used in several computer graphics environments, including OpenEXR, JPEG XR, GIMP, OpenGL, Cg, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows more detail to be preserved in highlights and shadows. The advantage over 32-bit single-precision binary formats is that it requires half the storage and bandwidth (at the expense of precision and range).

The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats.

High-dynamic-range imaging
High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to the broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.
HDR images can represent a greater range of luminance levels than can be achieved using more traditional methods, such as real-world scenes ranging from very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.
The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.
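The merge of multiple exposures can be sketched as a weighted average of the radiance each exposure implies. This is a simplified, Debevec-style illustration that assumes a linear camera response; the function names are hypothetical:

```python
def weight(z):
    # hat weighting: trust mid-range pixel values most, distrust clipped ones
    return z if z <= 127 else 255 - z

def merge_exposures(images, times):
    """Merge LDR exposures (lists of 0..255 values) into one HDR radiance list.

    'images' holds one pixel list per exposure; 'times' the exposure times.
    Assumes a linear camera response, so radiance ~ pixel_value / time.
    """
    hdr = []
    for i in range(len(images[0])):
        num = den = 0.0
        for img, t in zip(images, times):
            z = img[i]
            w = weight(z)
            num += w * (z / t)
            den += w
        hdr.append(num / den if den > 0 else 0.0)
    return hdr
```

A pixel reading 64 in a 1-second exposure and 128 in a 2-second exposure both imply the same radiance, so the merge agrees with either alone while averaging out noise.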
Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).
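One common global tone-mapping curve is the simple Reinhard operator L/(1+L), which compresses open-ended luminance into [0, 1); as a sketch:

```python
def reinhard(luminance):
    """Map an open-ended HDR luminance into [0, 1) with the global Reinhard curve."""
    return luminance / (1.0 + luminance)
```

Bright values are compressed heavily (a luminance of 9 maps to 0.9) while dark values pass through nearly unchanged, which preserves shadow detail at the cost of highlight contrast.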
JPEG XT

JPEG XT (ISO/IEC 18477) is an image compression standard which specifies backward-compatible extensions of the base JPEG standard (ISO/IEC 10918-1 and ITU-T Rec. T.81).
JPEG XT extends JPEG with support for higher integer bit depths, high dynamic range imaging and floating-point coding, lossless coding, alpha channel coding, and an extensible file format based on JFIF. It also includes a reference software implementation and a conformance testing specification.
JPEG XT extensions are backward compatible with the base JPEG/JFIF file format: existing software is forward compatible and can read the JPEG XT binary stream, though it would only decode the base 8-bit lossy image.

Logluv TIFF
Logluv TIFF is an encoding used for storing high-dynamic-range imaging data inside a TIFF image. It was originally developed by Greg Ward for storing the HDR output of his Radiance renderer at a time when storage space was a crucial factor. Its implementation in TIFF also allowed it to be combined with image-compression algorithms without great programming effort. As such, it is a pragmatic compromise between the imposed limitations. It is related to RGBE, the most successful HDRI storage format and an earlier invention of Greg Ward.

RGBE
RGBE – RGB (Red, Green, Blue) + E, may refer to:
RGBE filter – RGB + Emerald
RGBE image format – RGB + Exponent

Radiance (software)
Radiance is a suite of tools for performing lighting simulation originally written by Greg Ward. It includes a renderer as well as many other tools for measuring the simulated light levels. It uses ray tracing to perform all lighting calculations, accelerated by the use of an octree data structure. It pioneered the concept of high dynamic range imaging, where light levels are (theoretically) open-ended values instead of a decimal proportion of a maximum (e.g. 0.0 to 1.0) or an integer fraction of a maximum (e.g. 0/255 to 255/255). It also implements global illumination using the Monte Carlo method to sample light falling on a point.
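The Monte Carlo sampling mentioned above can be illustrated with a toy estimator (a sketch, not Radiance's actual code): the irradiance at a point under a hemisphere of constant radiance L is pi * L analytically, and averaging random directions converges to that value.

```python
import math
import random

def irradiance_mc(sky_radiance, samples=100000, seed=1):
    """Monte Carlo estimate of irradiance under a constant-radiance hemisphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # for a direction uniform over the upper hemisphere, cos(theta) is uniform in [0, 1]
        cos_theta = rng.random()
        total += sky_radiance * cos_theta    # incoming radiance weighted by cosine
    # divide by the sample count and the uniform-hemisphere pdf 1/(2*pi)
    return total / samples * 2.0 * math.pi
```

With a constant sky of radiance 1, the estimate approaches pi; Radiance applies the same idea with real scene radiance in place of the constant sky.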
Greg Ward started developing Radiance in 1985 while at Lawrence Berkeley National Laboratory. The source code was distributed under a license forbidding further redistribution. In January 2002 Radiance 3.4 was relicensed under a less restrictive license.
One study found Radiance to be the most generally useful software package for architectural lighting simulation. The study also noted that Radiance often serves as the underlying simulation engine for many other packages.