Luminance

Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle. The SI unit for luminance is the candela per square metre (cd/m²). A non-SI term for the same unit is the nit. The CGS unit of luminance is the stilb, which is equal to one candela per square centimetre, or 10 kcd/m².

Explanation

Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. The luminance indicates how much luminous power will be detected by an eye looking at the surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil. Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m². The sun has a luminance of about 1.6×10⁹ cd/m² at noon.[1]

Luminance is invariant in geometric optics.[2] This means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive, optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle so the luminance comes out to be the same assuming there is no loss at the lens. The image can never be "brighter" than the source.
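
The invariance argument can be checked with a small numeric sketch (the values below are illustrative assumptions, and cosine factors are dropped, as for small angles about the axis):

```python
import math

flux_lm = 1.0               # luminous flux carried by the beam (lm)
src_area_m2 = 1e-4          # emitting patch area (m^2)
src_solid_angle_sr = 0.01   # solid angle the beam fills at the source (sr)

# Luminance at the source: flux per unit area per unit solid angle
L_source = flux_lm / (src_area_m2 * src_solid_angle_sr)

# Demagnify 2x linearly: image area shrinks 4x, so illuminance rises 4x,
# but conservation of etendue makes the beam's solid angle grow 4x.
m = 0.5
img_area_m2 = src_area_m2 * m**2
img_solid_angle_sr = src_solid_angle_sr / m**2
L_image = flux_lm / (img_area_m2 * img_solid_angle_sr)

print(math.isclose(L_source, L_image))  # True: the image is never "brighter"
```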

Definition

[Figure: Parameters for defining the luminance of a beam (étendue in free space)]

The luminance of a specified point of a light source, in a specified direction, is defined by the derivative

    L_v = \frac{d^2\Phi_v}{d\Sigma \, d\Omega_\Sigma \cos\theta_\Sigma}

where

  • Lv is the luminance (cd/m²),
  • d²Φv is the luminous flux (lm) leaving the area dΣ in any direction contained inside the solid angle dΩΣ,
  • dΣ is an infinitesimal area (m²) of the source containing the specified point,
  • dΩΣ is an infinitesimal solid angle (sr) containing the specified direction,
  • θΣ is the angle between the normal nΣ to the surface dΣ and the specified direction.[3]

If light travels through a lossless medium, the luminance does not change along a given light ray. As the ray crosses an arbitrary surface S, the luminance is given by

    L_v = \frac{d^2\Phi_v}{dS \, d\Omega_S \cos\theta_S}

where

  • dS is the infinitesimal area of S seen from the source inside the solid angle dΩΣ,
  • dΩS is the infinitesimal solid angle subtended by dΣ as seen from dS,
  • θS is the angle between the normal nS to dS and the direction of the light.

More generally, the luminance along a light ray can be defined as

    L_v = n^2 \frac{d\Phi_v}{dG}

where

  • dG is the etendue of an infinitesimally narrow beam containing the specified ray,
  • dΦv is the luminous flux carried by this beam,
  • n is the index of refraction of the medium.

Relation to illuminance

The luminance of a reflecting surface is related to the illuminance it receives:

    \int_{\Omega_\Sigma} L_v \, d\Omega_\Sigma \cos\theta_\Sigma = M_v = E_v R

where the integral covers all the directions of emission ΩΣ, and

  • Mv is the surface's luminous exitance,
  • Ev is the received illuminance,
  • R is the reflectance.

In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply

    L_v = \frac{E_v R}{\pi}
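
A minimal sketch of this last relation in Python; the 500 lx and 0.8 reflectance below are illustrative assumptions, not values from the article:

```python
import math

def lambertian_luminance(illuminance_lux: float, reflectance: float) -> float:
    """Luminance (cd/m^2) of a perfectly diffuse (Lambertian) reflector."""
    return illuminance_lux * reflectance / math.pi

# Example: white paper (R ~ 0.8) under 500 lx of office lighting
print(lambertian_luminance(500.0, 0.8))  # ~127.3 cd/m^2
```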

Units

A variety of units have been used for luminance, besides the candela per square metre.

One candela per square metre is equal to:

  • 10⁻⁴ stilb (sb), the CGS unit of luminance
  • π apostilbs (asb) ≈ 3.1416 asb
  • π×10⁻⁴ lamberts (L) ≈ 3.1416×10⁻⁴ L
  • ≈ 0.2919 foot-lambert (fL)
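
These factors translate directly into a small conversion helper; a sketch whose function name and structure are illustrative, not a standard API:

```python
import math

def from_cd_per_m2(l: float) -> dict:
    """Express a luminance given in cd/m^2 in the other common units."""
    return {
        "stilb": l * 1e-4,              # 1 sb = 10^4 cd/m^2
        "apostilb": l * math.pi,        # 1 asb = (1/pi) cd/m^2
        "lambert": l * math.pi * 1e-4,  # 1 L = (10^4/pi) cd/m^2
        "foot_lambert": l / 3.4263,     # 1 fL ~ 3.4263 cd/m^2
    }

print(from_cd_per_m2(1.0))
```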

Health effects

Retinal damage can occur when the eye is exposed to high luminance. Damage can occur due to local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.

Luminance meter

A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle. The simplest devices measure the luminance in a single direction, while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images.[4]

See also

SI photometry quantities

Quantity (symbol[nb 1])                       | Unit (symbol)                              | Dimension[nb 2] | Notes
Luminous energy (Qv[nb 3])                    | lumen second (lm⋅s)                        | TJ              | The lumen second is sometimes called the talbot.
Luminous flux, luminous power (Φv[nb 3])      | lumen = candela steradian (lm = cd⋅sr)     | J               | Luminous energy per unit time.
Luminous intensity (Iv)                       | candela = lumen per steradian (cd = lm/sr) | J               | Luminous flux per unit solid angle.
Luminance (Lv)                                | candela per square metre (cd/m²)           | L⁻²J            | Luminous flux per unit solid angle per unit projected source area. The candela per square metre is sometimes called the nit.
Illuminance (Ev)                              | lux = lumen per square metre (lx = lm/m²)  | L⁻²J            | Luminous flux incident on a surface.
Luminous exitance, luminous emittance (Mv)    | lux (lx)                                   | L⁻²J            | Luminous flux emitted from a surface.
Luminous exposure (Hv)                        | lux second (lx⋅s)                          | L⁻²TJ           | Time-integrated illuminance.
Luminous energy density (ωv)                  | lumen second per cubic metre (lm⋅s⋅m⁻³)    | L⁻³TJ           |
Luminous efficacy (η[nb 3])                   | lumen per watt (lm/W)                      | M⁻¹L⁻²T³J       | Ratio of luminous flux to radiant flux or power consumption, depending on context.
Luminous efficiency, luminous coefficient (V) | (dimensionless)                            | 1               | Luminous efficacy normalized by the maximum possible efficacy.
See also: SI · Photometry · Radiometry
  1. ^ Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities. For example: USA Standard Letter Symbols for Illuminating Engineering USAS Z7.1-1967, Y10.18-1967
  2. ^ The symbols in this column denote dimensions; "L", "T" and "J" are for length, time and luminous intensity respectively, not the symbols for the units litre, tesla and joule.
  3. ^ a b c Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.

References

  1. ^ "Luminance". Lighting Design Glossary. Retrieved Apr 13, 2009.
  2. ^ Dörband, Bernd; Gross, Herbert; Müller, Henriette (2012). Gross, Herbert, ed. Handbook of Optical Systems. 5, Metrology of Optical Components and Systems. Wiley. p. 326. ISBN 978-3-527-40381-3.
  3. ^ Chaves, Julio (2015). Introduction to Nonimaging Optics, Second Edition. CRC Press. p. 679. ISBN 978-1482206739. Archived from the original on 2016-02-18.
  4. ^ "e-ILV : Luminance meter". CIE. Retrieved 20 February 2013.

APEX system

APEX stands for Additive System of Photographic Exposure, which was proposed in the 1960 ASA standard for monochrome film speed, ASA PH2.5-1960, as a means of simplifying exposure computation.

Adobe RGB color space

The Adobe RGB (1998) color space is an RGB color space developed by Adobe Systems, Inc. in 1998. It was designed to encompass most of the colors achievable on CMYK color printers, but by using RGB primary colors on a device such as a computer display. The Adobe RGB (1998) color space encompasses roughly 50% of the visible colors specified by the CIELAB color space – improving upon the gamut of the sRGB color space, primarily in cyan-green hues.

Brightness

Brightness is an attribute of visual perception in which a source appears to be radiating or reflecting light. In other words, brightness is the perception elicited by the luminance of a visual target. It is not necessarily proportional to luminance. It is a subjective attribute of an observed object and one of the color appearance parameters of color appearance models. Brightness is an absolute term and should not be confused with lightness.

The adjective bright derives from the Old English beorht, with the same meaning, via metathesis giving Middle English briht. The word descends from Common Germanic *berhtaz, ultimately from a PIE root with a closely related meaning, *bhereg- "white, bright". "Brightness" was formerly used as a synonym for the photometric term luminance and (incorrectly) for the radiometric term radiance. As defined by the US Federal Glossary of Telecommunication Terms (FS-1037C), "brightness" should now be used only for non-quantitative references to physiological sensations and perceptions of light.

A given target luminance can elicit different perceptions of brightness in different contexts; see, for example, White's illusion.

In the RGB color space, brightness can be thought of as the arithmetic mean μ of the red, green, and blue color coordinates (although some of the three components make the light seem brighter than others, which, again, may be compensated by some display systems automatically):

    \mu = \frac{R + G + B}{3}
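
A minimal sketch of this mean, assuming 8-bit sRGB coordinates:

```python
def mean_brightness(r: int, g: int, b: int) -> float:
    """Naive brightness: the arithmetic mean of the RGB coordinates."""
    return (r + g + b) / 3

print(mean_brightness(255, 0, 0))  # 85.0 for pure red
print(mean_brightness(0, 255, 0))  # 85.0 for pure green, which looks brighter
```

The equal weighting is exactly the simplification noted above: the eye is more sensitive to green than to red or blue, which is why luma formulas weight the components unequally.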

Brightness is also a color coordinate in the HSL color space (hue, saturation, lightness), where lightness serves as the measure of brightness.

With regard to stars, brightness is quantified as apparent magnitude and absolute magnitude.

Brightness is, at least in some respects, the antonym of darkness.

Candela per square metre

The candela per square metre (cd/m²) is the derived SI unit of luminance. The unit is based on the candela, the SI unit of luminous intensity, and the square metre, the SI unit of area.

Nit (nt) is a non-SI name also used for this unit (1 nt = 1 cd/m²). The term nit is believed to come from the Latin word nitere, to shine. As a measure of light emitted per unit area, this unit is frequently used to specify the brightness of a display device. The sRGB spec for monitors targets 80 cd/m². Typically, calibrated monitors should have a brightness of 120 cd/m². Most consumer desktop liquid crystal displays have luminances of 200 to 300 cd/m². High-definition televisions range from 450 to about 1500 cd/m².

Chroma subsampling

Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance. It is used in many video encoding schemes – both analog and digital – and also in JPEG encoding.
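
A minimal sketch of the idea for 4:2:0 subsampling, assuming NumPy arrays holding planar Y, U, V channels of equal, even dimensions:

```python
import numpy as np

def subsample_420(y, u, v):
    """Keep luma at full resolution; average each 2x2 block of the chroma planes."""
    def down2(c):
        return (c[0::2, 0::2] + c[0::2, 1::2] +
                c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
    return y, down2(u), down2(v)

y = np.zeros((4, 4))
u = np.arange(16.0).reshape(4, 4)
_, u2, _ = subsample_420(y, u, u.copy())
print(u2.shape)  # (2, 2): a quarter of the chroma samples survive
```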

Chromaticity

Chromaticity is an objective specification of the quality of a color regardless of its luminance. Chromaticity consists of two independent parameters, often specified as hue (h) and colorfulness (s), where the latter is alternatively called saturation, chroma, intensity, or excitation purity. This number of parameters follows from trichromacy of vision of most humans, which is assumed by most models in color science.

Contrast (vision)

Contrast is the difference in luminance or colour that makes an object (or its representation in an image or display) distinguishable. In visual perception of the real world, contrast is determined by the difference in the color and brightness of the object and other objects within the same field of view. The human visual system is more sensitive to contrast than absolute luminance; we can perceive the world similarly regardless of the huge changes in illumination over the day or from place to place. The maximum contrast of an image is the contrast ratio or dynamic range.
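
The paragraph above does not fix a single formula, but one common quantitative definition is Michelson contrast, used here as an illustrative assumption:

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Contrast of a pattern from its extreme luminances (cd/m^2)."""
    return (l_max - l_min) / (l_max + l_min)

print(michelson_contrast(300.0, 3.0))  # ~0.98 for a 100:1 luminance ratio
```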

Dot crawl

Dot crawl is the popular name for a visual defect of color analog video standards when signals are transmitted as composite video, as in terrestrial broadcast television. It consists of animated checkerboard patterns which appear along horizontal color transitions (vertical edges). It results from intermodulation or crosstalk between chrominance and luminance components of the signal, which are imperfectly multiplexed in the frequency domain.

This takes two forms: chroma interference in luma (chroma being interpreted as luma), and luma interference in chroma.

Dot crawl is most visible when the chrominance is transmitted with a high bandwidth, so that its spectrum reaches well into the band of frequencies used by the luminance signal in the composite video signal. This causes high-frequency chrominance detail at color transitions to be interpreted as luminance detail.

Some (mostly older) video game consoles and computers use nonstandard color burst phases and may produce dot crawl quite different from that seen in broadcast NTSC or PAL.

The opposite problem, luminance interference in chroma, is the appearance of a colored noise in image areas with high levels of detail. This results from high-frequency luminance detail crossing into the frequencies used by the chrominance channel and producing false coloration, known as color bleed. Bleed can also make narrowly spaced text difficult to read. Some computers, such as the Apple II, utilized this to generate color.

Dot crawl has long been recognized as a problem by professionals since the creation of composite video, but was first widely noticed by the general public with the advent of Laserdiscs.

Dot crawl can be greatly reduced by using a good comb filter in the receiver to separate the encoded chrominance signal from the luminance signal. When the NTSC standard was adopted in the 1950s, TV engineers realized that it should theoretically be possible to design a filter to properly separate the luminance and chroma signals. However, the vacuum tube-based electronics of the time did not permit any cost-effective method of implementing a comb filter. Thus, the early color TVs used only notch filters, which cut the luminance off at 3.5 MHz. This effectively reduced the luminance bandwidth (normally 4 MHz) to that of the chroma, causing considerable color bleed. By the 1970s, TVs had begun using solid-state electronics and the first comb filters appeared. However, they were expensive and only high-end models used them, while most color sets continued to use notch filters.

By the 1990s, a further development took place with the advent of three-line digital ("3D") comb filters. This type of filter uses a computer to analyze the NTSC signal three scan lines at a time and determine the best place to put the chroma and luminance. During this period, digital filters became standard in high-end TVs while the older analog filter began appearing in cheaper models (although notch filters were still widely used).

However, no comb filter can totally eliminate NTSC artifacts. The only complete solutions to dot crawl are to avoid NTSC or PAL composite video altogether: keep the signals separate by using S-Video or component video connections, or encode the chrominance differently, as in SECAM or any modern digital video standard, provided the source video has never been processed by a system vulnerable to dot crawl.

Monochrome film recordings of color television programs may exhibit dot crawl; since 2008, this residual chroma patterning has been used to recover the original color information in a process called color recovery.

Exposure value

In photography, exposure value (EV) is a number that represents a combination of a camera's shutter speed and f-number, such that all combinations that yield the same exposure have the same EV (for any fixed scene luminance). Exposure value is also used to indicate an interval on the photographic exposure scale, with a difference of 1 EV corresponding to a standard power-of-2 exposure step, commonly referred to as a stop. The EV concept was developed by the German shutter manufacturer Friedrich Deckel in the 1950s (Gebele 1958; Ray 2000, 318). Its intent was to simplify choosing among equivalent camera exposure settings by replacing combinations of shutter speed and f-number (e.g., 1/125 s at f/16) with a single number (e.g., 15).
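
The defining relation is EV = log₂(N²/t) for f-number N and exposure time t in seconds; a minimal sketch reproducing the example above:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number**2 / shutter_s)

print(round(exposure_value(16.0, 1/125)))  # 15: "1/125 s at f/16" as one number
```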

On some lenses with leaf shutters, the process was further simplified by allowing the shutter and aperture controls to be linked such that, when one was changed, the other was automatically adjusted to maintain the same exposure. This was especially helpful to beginners with limited understanding of the effects of shutter speed and aperture and the relationship between them. But it was also useful for experienced photographers who might choose a shutter speed to stop motion or an f-number for depth of field, because it allowed for faster adjustment—without the need for mental calculations—and reduced the chance of error when making the adjustment.

The concept became known as the Light Value System (LVS) in Europe; it was generally known as the Exposure Value System (EVS) when the features became available on cameras in the United States (Desfor 1957).

Because of mechanical considerations, the coupling of shutter and aperture was limited to lenses with leaf shutters; however, various automatic exposure modes now work to somewhat the same effect in cameras with focal-plane shutters.

The proper EV was determined by the scene luminance and film speed; it was intended that the system also include adjustment for filters, exposure compensation, and other variables. With all of these elements included, the camera would be set by transferring the single number thus determined.

Exposure value has been indicated in various ways. The ASA and ANSI standards used the quantity symbol Ev, with the subscript v indicating the logarithmic value; this symbol continues to be used in ISO standards, but the acronym EV is more common elsewhere. The Exif standard uses Ev (CIPA 2016).

Although all camera settings with the same EV nominally give the same exposure, they do not necessarily give the same picture. The f-number (relative aperture) determines the depth of field, and the shutter speed (exposure time) determines the amount of motion blur, as illustrated by the two images at the right (and at long exposure times, as a second-order effect, the light-sensitive medium may exhibit reciprocity failure, which is a change of light sensitivity dependent on the irradiance at the film).

Gamma correction

Gamma correction, or often simply gamma, is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression:

    V_{out} = A \, V_{in}^{\gamma}

where the non-negative real input value V_in is raised to the power γ and multiplied by the constant A to get the output value V_out. In the common case of A = 1, inputs and outputs are typically in the range 0–1.

A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely, a gamma value γ > 1 is called a decoding gamma, and the application of the expansive power-law nonlinearity is called gamma expansion.
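
A minimal sketch of gamma compression and expansion with A = 1 (a pure power law; real transfer functions such as sRGB's add a linear segment near black, which is omitted here):

```python
def gamma_encode(v: float, gamma: float = 1 / 2.2) -> float:
    """Compress a linear value in [0, 1] with an encoding gamma < 1."""
    return v ** gamma

def gamma_decode(v: float, gamma: float = 2.2) -> float:
    """Expand an encoded value back to linear with a decoding gamma > 1."""
    return v ** gamma

x = 0.18  # mid-grey in linear light
print(gamma_encode(x))                # ~0.46: dark values get more code range
print(gamma_decode(gamma_encode(x)))  # ~0.18: the round trip recovers the input
```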

Glare (vision)

Glare is difficulty of seeing in the presence of bright light such as direct or reflected sunlight or artificial light such as car headlamps at night. Because of this, some cars include mirrors with automatic anti-glare functions.

Glare is caused by a significant ratio of luminance between the task (that which is being looked at) and the glare source. Factors such as the angle between the task and the glare source and eye adaptation have significant impacts on the experience of glare.

Grayscale

In digital photography, computer-generated imagery, and colorimetry, a grayscale or greyscale image is one in which the value of each pixel is a single sample representing only an amount of light, that is, it carries only intensity information. Grayscale images, a kind of black-and-white or gray monochrome, are composed exclusively of shades of gray. The contrast ranges from black at the weakest intensity to white at the strongest. Grayscale images are distinct from one-bit bi-tonal black-and-white images which, in the context of computer imaging, are images with only two colors: black and white (also called bilevel or binary images). Grayscale images have many shades of gray in between.

Grayscale images can be the result of measuring the intensity of light at each pixel according to a particular weighted combination of frequencies (or wavelengths), and in such cases they are monochromatic proper when only a single frequency (in practice, a narrow band of frequencies) is captured. The frequencies can in principle be from anywhere in the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.).

A colorimetric (or more specifically photometric) grayscale image is an image that has a defined grayscale colorspace, which maps the stored numeric sample values to the achromatic channel of a standard colorspace, which itself is based on measured properties of human vision.

If the original color image has no defined colorspace, or if the grayscale image is not intended to have the same human-perceived achromatic intensity as the color image, then there is no unique mapping from such a color image to a grayscale image.
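
A minimal sketch of a colorimetric grayscale conversion for linear-light RGB in a defined colorspace; the Rec. 709/sRGB luminance weights below are an assumption about that colorspace:

```python
def to_gray(r: float, g: float, b: float) -> float:
    """Relative luminance of a linear-light sRGB triple, each value in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(to_gray(1.0, 1.0, 1.0))  # 1.0: white keeps full intensity
print(to_gray(0.0, 1.0, 0.0))  # 0.7152: green contributes most to lightness
```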

HSL and HSV

HSL (hue, saturation, lightness) and HSV (hue, saturation, value) are alternative representations of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes. In these models, colors of each hue are arranged in a radial slice, around a central axis of neutral colors which ranges from black at the bottom to white at the top. The HSV representation models the way paints of different colors mix together, with the saturation dimension resembling various shades of brightly colored paint, and the value dimension resembling the mixture of those paints with varying amounts of black or white paint. The HSL model attempts to resemble more perceptual color models such as the Natural Color System (NCS) or Munsell color system, placing fully saturated colors around a circle at a lightness value of 1/2, where a lightness value of 0 or 1 is fully black or white, respectively.
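
A minimal sketch of the two vertical axes described above, for an RGB triple in [0, 1] (hue and saturation omitted for brevity):

```python
def value_and_lightness(r: float, g: float, b: float):
    mx, mn = max(r, g, b), min(r, g, b)
    return mx, (mx + mn) / 2  # HSV value, HSL lightness

print(value_and_lightness(1.0, 0.0, 0.0))  # (1.0, 0.5): saturated red sits at L = 1/2
```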

High-dynamic-range imaging

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

HDR images can represent a greater range of luminance levels than can be achieved using more traditional methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower range, exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).
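
A minimal sketch of global tone mapping using the simple Reinhard operator L/(1 + L), one common choice named here as an assumption (the text above does not prescribe a method):

```python
def reinhard(l: float) -> float:
    """Map scene luminance in [0, inf) into the display range [0, 1)."""
    return l / (1.0 + l)

for lum in (0.01, 1.0, 100.0, 10000.0):
    print(lum, "->", round(reinhard(lum), 4))  # highlights compress smoothly
```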

Luma (video)

In video, luma represents the brightness in an image (the "black-and-white" or achromatic portion of the image). Luma is typically paired with chrominance. Luma represents the achromatic image, while the chroma components represent the color information. Converting R′G′B′ sources (such as the output of a three-CCD camera) into luma and chroma allows for chroma subsampling: because human vision has finer spatial sensitivity to luminance ("black and white") differences than chromatic differences, video systems can store and transmit chromatic information at lower resolution, optimizing perceived detail at a particular bandwidth.
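
A minimal sketch of forming luma from gamma-corrected R′G′B′, using the Rec. 601 weights as an assumption (other standards, such as Rec. 709, use different coefficients):

```python
def luma_601(r_prime: float, g_prime: float, b_prime: float) -> float:
    return 0.299 * r_prime + 0.587 * g_prime + 0.114 * b_prime

print(luma_601(1.0, 1.0, 1.0))  # ~1.0: the weights sum to one
```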

Orders of magnitude (luminance)

This page lists examples of luminances, measured in candelas per square metre and grouped by order of magnitude.

Peripheral drift illusion

The peripheral drift illusion (PDI) refers to a motion illusion generated by the presentation of a sawtooth luminance grating in the visual periphery. This illusion was first described by Faubert and Herbert (1999), although a similar effect called the "escalator illusion" was reported by Fraser and Wilcox (1979). A variant of the PDI was created by Kitaoka Akiyoshi and Ashida (2003) who took the continuous sawtooth luminance change, and reversed the intermediate greys. Kitaoka has created numerous variants of the PDI, and one called "rotating snakes" has become very popular. The latter demonstration has kindled great interest in the PDI.

The illusion is easily seen when fixating off to the side of it, and then blinking as fast as possible. Most observers can see the illusion easily when reading text with the illusion figure in the periphery. The motion of such illusions is consistently perceived in a dark-to-light direction.

Two papers have been published examining the neural mechanisms involved in seeing the PDI (Backus & Oruç, 2005; Conway et al., 2005). Faubert and Herbert (1999) suggested the illusion was based on temporal differences in luminance processing producing a signal that tricks the motion system. Both of the articles from 2005 are broadly consistent with those ideas, although contrast appears to be an important factor (Backus & Oruç, 2005).

Relative luminance

Relative luminance follows the photometric definition of luminance, but with the values normalized to 1 or 100 for a reference white. Like the photometric definition, it is related to the luminous flux density in a particular direction, which is radiant flux density weighted by the luminosity function ȳ(λ) of the CIE Standard Observer.

The use of relative values is useful in systems where absolute reproduction is impractical. For example, in prepress for print media, the absolute luminance of light reflecting off the print depends on the illumination and therefore absolute reproduction cannot be assured.

YUV

YUV is a color encoding system typically used as part of a color image pipeline. It encodes a color image or video taking human perception into account, allowing reduced bandwidth for the chrominance components; this typically lets transmission errors or compression artifacts be masked more effectively by human perception than with a "direct" RGB representation. Other color encodings have similar properties, and the main reason to implement or investigate properties of Y′UV would be for interfacing with analog or digital television or photographic equipment that conforms to certain Y′UV standards.

The scope of the terms Y′UV, YUV, YCbCr, YPbPr, etc., is sometimes ambiguous and overlapping. Historically, the terms YUV and Y′UV were used for a specific analog encoding of color information in television systems, while YCbCr was used for digital encoding of color information suited for video and still-image compression and transmission such as MPEG and JPEG. Today, the term YUV is commonly used in the computer industry to describe file-formats that are encoded using YCbCr.

The Y′UV model defines a color space in terms of one luma component (Y′) and two chrominance (UV) components. The Y′UV color model is used in the PAL composite color video (excluding PAL-N) standard. Previous black-and-white systems used only luma (Y′) information. Color information (U and V) was added separately via a sub-carrier so that a black-and-white receiver would still be able to receive and display a color picture transmission in the receiver's native black-and-white format.

Y′ stands for the luma component (the brightness) and U and V are the chrominance (color) components; luminance is denoted by Y and luma by Y′ – the prime symbols (') denote gamma compression, with "luminance" meaning physical linear-space brightness, while "luma" is (nonlinear) perceptual brightness.

The YPbPr color model used in analog component video and its digital version YCbCr used in digital video are more or less derived from it, and are sometimes called Y′UV. (CB/PB and CR/PR are deviations from grey on blue–yellow and red–cyan axes, whereas U and V are blue–luminance and red–luminance differences respectively.) The Y′IQ color space used in the analog NTSC television broadcasting system is related to it, although in a more complex way. The YDbDr color space used in the analog SECAM and PAL-N television broadcasting systems is also related.

As for etymology, Y, Y′, U, and V are not abbreviations. The use of the letter Y for luminance can be traced back to the choice of XYZ primaries. This lends itself naturally to the usage of the same letter in luma (Y′), which approximates a perceptually uniform correlate of luminance. Likewise, U and V were chosen to differentiate the U and V axes from those in other spaces, such as the x and y chromaticity space. See the equations below or compare the historical development of the math.
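
As a concrete instance of the conversion math, here is a minimal sketch of the classic analog Y′UV encoding from gamma-corrected R′G′B′, with the BT.601-derived scale factors stated as assumptions:

```python
def rgb_to_yuv(r: float, g: float, b: float):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma (Y')
    u = 0.492 * (b - y)                    # scaled blue-luma difference
    v = 0.877 * (r - y)                    # scaled red-luma difference
    return y, u, v

print(rgb_to_yuv(1.0, 1.0, 1.0))  # white: roughly (1.0, 0.0, 0.0), no chroma
```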

This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.