Interferometry is a family of techniques in which waves, usually electromagnetic waves, are superimposed, causing the phenomenon of interference, which is used to extract information. Interferometry is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy (and its applications to chemistry), quantum mechanics, nuclear and particle physics, plasma physics, remote sensing, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, and optometry.
Interferometers are widely used in science and industry for the measurement of small displacements, refractive index changes and surface irregularities. In most interferometers, light from a single source is split into two beams that travel in different optical paths, which are then combined again to produce interference; however, under some circumstances, two incoherent sources can also be made to interfere. The resulting interference fringes give information about the difference in optical path lengths. In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision; they are the highest-precision length-measuring instruments in existence. In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope of diameter equal to the largest separation between its individual elements.
Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property that is diagnostic of the original state of the waves. This works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves—waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference. Waves which are not completely in phase nor completely out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference. Most interferometers use light or some other form of electromagnetic wave.
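The two-beam intensity relation described above can be sketched numerically. The snippet below is an illustrative calculation, not tied to any particular instrument: it evaluates I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ) for in-phase and out-of-phase beams of equal intensity.

```python
import numpy as np

# Intensity of two superposed monochromatic beams of the same frequency:
#   I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi)
# where dphi is the phase difference between the beams.
def combined_intensity(i1, i2, dphi):
    return i1 + i2 + 2 * np.sqrt(i1 * i2) * np.cos(dphi)

# Two equal beams of unit intensity:
print(combined_intensity(1.0, 1.0, 0.0))    # in phase: constructive, I = 4
print(combined_intensity(1.0, 1.0, np.pi))  # out of phase: destructive, I = 0
```

Intermediate phase differences give intensities between these extremes, which is why the fringe pattern encodes the relative phase.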
Typically (see Fig. 1, the well-known Michelson configuration) a single incoming beam of coherent light will be split into two identical beams by a beam splitter (a partially reflecting mirror). Each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them. It is this introduced phase difference that creates the interference pattern between the initially identical waves. If a single beam has been split along two paths, then the phase difference is diagnostic of anything that changes the phase along the paths. This could be a physical change in the path length itself or a change in the refractive index along the path.
As seen in Fig. 2a and 2b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M′2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S′1 and S′2 of the original source S. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 2a, the optical elements are oriented so that S′1 and S′2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M′2. If, as in Fig. 2b, M1 and M′2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if M1 and M′2 overlap, the fringes near the axis will be straight, parallel, and equally spaced. If S is an extended source rather than a point source as illustrated, the fringes of Fig. 2a must be observed with a telescope set at infinity, while the fringes of Fig. 2b will be localized on the mirrors.
Use of white light will result in a pattern of colored fringes (see Fig. 3). The central fringe representing equal path length may be light or dark depending on the number of phase inversions experienced by the two beams as they traverse the optical system. (See Michelson interferometer for a discussion of this.)
Interferometers and interferometric techniques may be categorized by a variety of criteria:
In homodyne detection, the interference occurs between two beams at the same wavelength (or carrier frequency). The phase difference between the two beams results in a change in the intensity of the light on the detector. The resulting intensity of the light after mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. Most of the interferometers discussed in this article fall into this category.
The heterodyne technique is used for (1) shifting an input signal into a new frequency range as well as (2) amplifying a weak input signal (assuming use of an active mixer). A weak input signal of frequency f1 is mixed with a strong reference frequency f2 from a local oscillator (LO). The nonlinear combination of the input signals creates two new signals, one at the sum f1 + f2 of the two frequencies, and the other at the difference f1 − f2. These new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer. The output signal will have an intensity proportional to the product of the amplitudes of the input signals.
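The generation of sum and difference frequencies by a nonlinear (multiplying) mixer can be demonstrated with a short simulation. The frequencies and sample rate below are arbitrary choices for illustration only.

```python
import numpy as np

# A multiplying mixer obeys sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)],
# so mixing two tones produces their sum and difference frequencies.
fs = 10_000                    # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples
f1, f2 = 1000.0, 1200.0        # input signal and local-oscillator frequencies
mixed = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

# The spectrum of the mixer output shows the two heterodynes.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(peaks)  # peaks at the difference f2 - f1 = 200 Hz and the sum f1 + f2 = 2200 Hz
```

In a receiver, a filter would then select one of the two heterodynes and reject the other.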
The most important and widely used application of the heterodyne technique is in the superheterodyne receiver (superhet), invented by U.S. engineer Edwin Howard Armstrong in 1918. In this circuit, the incoming radio frequency signal from the antenna is mixed with a signal from a local oscillator (LO) and converted by the heterodyne technique to a lower fixed frequency signal called the intermediate frequency (IF). This IF is amplified and filtered, before being applied to a detector which extracts the audio signal, which is sent to the loudspeaker.
While optical heterodyne interferometry is usually done at a single point, it is also possible to perform it over a wide field.
A double path interferometer is one in which the reference beam and sample beam travel along divergent paths. Examples include the Michelson interferometer, the Twyman–Green interferometer, and the Mach–Zehnder interferometer. After being perturbed by interaction with the sample under test, the sample beam is recombined with the reference beam to create an interference pattern which can then be interpreted.
A common-path interferometer is a class of interferometer in which the reference beam and sample beam travel along the same path. Fig. 4 illustrates the Sagnac interferometer, the fibre optic gyroscope, the point diffraction interferometer, and the lateral shearing interferometer. Other examples of common path interferometer include the Zernike phase-contrast microscope, Fresnel's biprism, the zero-area Sagnac, and the scatterplate interferometer.
A wavefront splitting interferometer divides a light wavefront emerging from a point or a narrow slit (i.e. spatially coherent light) and, after allowing the two parts of the wavefront to travel through different paths, allows them to recombine. Fig. 5 illustrates Young's interference experiment and Lloyd's mirror. Other examples of wavefront splitting interferometer include the Fresnel biprism, the Billet Bi-Lens, and the Rayleigh interferometer.
In 1803, Young's interference experiment played a major role in the general acceptance of the wave theory of light. If white light is used in Young's experiment, the result is a white central band of constructive interference corresponding to equal path length from the two slits, surrounded by a symmetrical pattern of colored fringes of diminishing intensity. In addition to continuous electromagnetic radiation, Young's experiment has been performed with individual photons, with electrons, and with buckyball molecules large enough to be seen under an electron microscope.
Lloyd's mirror generates interference fringes by combining direct light from a source (blue lines) and light from the source's reflected image (red lines) from a mirror held at grazing incidence. The result is an asymmetrical pattern of fringes. The band of equal path length, nearest the mirror, is dark rather than bright. In 1834, Humphrey Lloyd interpreted this effect as proof that the phase of a front-surface reflected beam is inverted.
An amplitude splitting interferometer uses a partial reflector to divide the amplitude of the incident wave into separate beams which are separated and recombined. Fig. 6 illustrates the Fizeau, Mach–Zehnder and Fabry–Pérot interferometers. Other examples of amplitude splitting interferometer include the Michelson, Twyman–Green, Laser Unequal Path, and Linnik interferometer.
The Fizeau interferometer is shown as it might be set up to test an optical flat. A precisely figured reference flat is placed on top of the flat being tested, separated by narrow spacers. The reference flat is slightly beveled (only a fraction of a degree of beveling is necessary) to prevent the rear surface of the flat from producing interference fringes. Separating the test and reference flats allows the two flats to be tilted with respect to each other. By adjusting the tilt, which adds a controlled phase gradient to the fringe pattern, one can control the spacing and direction of the fringes, so that one may obtain an easily interpreted series of nearly parallel fringes rather than a complex swirl of contour lines. Separating the plates, however, necessitates that the illuminating light be collimated. Fig 6 shows a collimated beam of monochromatic light illuminating the two flats and a beam splitter allowing the fringes to be viewed on-axis.
The Mach–Zehnder interferometer is a more versatile instrument than the Michelson interferometer. Each of the well separated light paths is traversed only once, and the fringes can be adjusted so that they are localized in any desired plane. Typically, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together. If it is decided to produce fringes in white light, then, since white light has a limited coherence length, on the order of micrometers, great care must be taken to equalize the optical paths or no fringes will be visible. As illustrated in Fig. 6, a compensating cell would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light traveling an equal optical path length in the test and reference beams produces a white light fringe of constructive interference.
The heart of the Fabry–Pérot interferometer is a pair of partially silvered glass optical flats spaced several millimeters to centimeters apart with the silvered surfaces facing each other. (Alternatively, a Fabry–Pérot etalon uses a transparent plate with two parallel reflecting surfaces.) As with the Fizeau interferometer, the flats are slightly beveled. In a typical system, illumination is provided by a diffuse source set at the focal plane of a collimating lens. A focusing lens produces what would be an inverted image of the source if the paired flats were not present; i.e. in the absence of the paired flats, all light emitted from point A passing through the optical system would be focused at point A'. In Fig. 6, only one ray emitted from point A on the source is traced. As the ray passes through the paired flats, it is multiply reflected to produce multiple transmitted rays which are collected by the focusing lens and brought to point A' on the screen. The complete interference pattern takes the appearance of a set of concentric rings. The sharpness of the rings depends on the reflectivity of the flats. If the reflectivity is high, resulting in a high Q factor (i.e. high finesse), monochromatic light produces a set of narrow bright rings against a dark background. In Fig. 6, the low-finesse image corresponds to a reflectivity of 0.04 (i.e. unsilvered surfaces) versus a reflectivity of 0.95 for the high-finesse image.
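The dependence of ring sharpness on reflectivity follows the Airy transmission function of an ideal lossless etalon, T = 1/(1 + F·sin²(δ/2)) with coefficient of finesse F = 4R/(1−R)². This sketch compares the two reflectivities quoted above; it assumes lossless mirrors and ignores geometry.

```python
import numpy as np

# Airy transmission of an ideal lossless Fabry-Perot etalon as a function
# of the round-trip phase delta and the mirror reflectivity R.
def airy_transmission(delta, R):
    F = 4 * R / (1 - R) ** 2  # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

delta = np.linspace(0, 4 * np.pi, 100_001)
for R in (0.04, 0.95):  # the two reflectivities cited in the text
    T = airy_transmission(delta, R)
    frac = np.mean(T > 0.5)  # fraction of the phase range above half maximum
    print(f"R={R}: fraction of pattern above half maximum = {frac:.3f}")
```

For R = 0.04 the transmission never drops below half maximum, giving broad, low-contrast fringes; for R = 0.95 only a small fraction of the pattern exceeds half maximum, giving the narrow bright rings on a dark background described above.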
Michelson and Morley (1887) and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even though the interferometer might be set up in a basement. Since the fringes would occasionally disappear due to vibrations by passing horse traffic, distant thunderstorms and the like, it would be easy for an observer to "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. This was an early example of the use of white light to resolve the "2 pi ambiguity".
In physics, one of the most important experiments of the late 19th century was the famous "failed experiment" of Michelson and Morley which provided evidence for special relativity. Recent repetitions of the Michelson–Morley experiment perform heterodyne measurements of beat frequencies of crossed cryogenic optical resonators. Fig 7 illustrates a resonator experiment performed by Müller et al. in 2003. Two optical resonators constructed from crystalline sapphire, controlling the frequencies of two lasers, were set at right angles within a helium cryostat. A frequency comparator measured the beat frequency of the combined outputs of the two resonators. As of 2009, the precision by which anisotropy of the speed of light can be excluded in resonator experiments is at the 10−17 level.
Figure 7. Michelson–Morley experiment with cryogenic optical resonators
Figure 8. Fourier transform spectroscopy
Figure 9. A picture of the solar corona taken with the LASCO C1 coronagraph
When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range and require use of prefilters which restrict transmittance.
Fig. 8 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. (A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum.
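The interferogram-to-spectrum step can be demonstrated with a toy simulation. The two spectral lines and the scan parameters below are arbitrary illustrative values, and the model ignores apodization and instrument line-shape effects.

```python
import numpy as np

# Simulated interferogram of a source with two spectral lines: as the moving
# mirror scans, each wavenumber sigma contributes cos(2*pi*sigma*OPD), where
# OPD is the optical path difference.
n = 4000
opd = np.arange(n) * 1e-4        # OPD samples in cm (hypothetical scan)
sigma1, sigma2 = 600.0, 900.0    # line positions in wavenumbers, cm^-1 (illustrative)
interferogram = np.cos(2 * np.pi * sigma1 * opd) + np.cos(2 * np.pi * sigma2 * opd)

# The Fourier transform of the interferogram recovers the spectrum.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n, d=1e-4)  # cm^-1
recovered = wavenumbers[spectrum > 0.5 * spectrum.max()]
print(recovered)  # the two lines at 600 and 900 cm^-1
```

The spectral resolution of such an instrument is set by the maximum optical path difference: here 0.4 cm of scan gives a resolution of 2.5 cm⁻¹.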
Fig. 9 shows a Doppler image of the solar corona made using a tunable Fabry–Pérot interferometer to recover scans of the solar corona at a number of wavelengths near the FeXIV green line. The picture is a color-coded image of the Doppler shift of the line, which may be associated with the coronal plasma velocity towards or away from the satellite camera.
Fabry–Pérot thin-film etalons are used in narrow bandpass filters capable of selecting a single spectral line for imaging; for example, the H-alpha line or the Ca-K line of the Sun or stars. Fig. 10 shows an Extreme ultraviolet Imaging Telescope (EIT) image of the Sun at 195 Ångströms, corresponding to a spectral line of multiply-ionized iron atoms. EIT used multilayer coated reflective mirrors that were coated with alternate layers of a light "spacer" element (such as silicon), and a heavy "scatterer" element (such as molybdenum). Approximately 100 layers of each type were placed on each mirror, with a thickness of around 10 nm each. The layer thicknesses were tightly controlled so that at the desired wavelength, reflected photons from each layer interfered constructively.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses two 4-km Michelson–Fabry–Pérot interferometers for the detection of gravitational waves. In this application, the Fabry–Pérot cavity is used to store photons for almost a millisecond while they bounce up and down between the mirrors. This increases the time a gravitational wave can interact with the light, which results in a better sensitivity at low frequencies. Smaller cavities, usually called mode cleaners, are used for spatial filtering and frequency stabilization of the main laser. The first observation of gravitational waves occurred on September 14, 2015.
The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels, and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.
An astronomical interferometer achieves high-resolution observations using the technique of aperture synthesis, mixing signals from a cluster of comparatively small telescopes rather than a single very expensive monolithic telescope.
Early radio telescope interferometers used a single baseline for measurement. Later astronomical interferometers, such as the Very Large Array illustrated in Fig 11, used arrays of telescopes arranged in a pattern on the ground. A limited number of baselines will result in insufficient coverage. This was alleviated by using the rotation of the Earth to rotate the array relative to the sky. Thus, a single baseline could measure information in multiple orientations by taking repeated measurements, a technique called Earth-rotation synthesis. Baselines thousands of kilometers long were achieved using very long baseline interferometry.
Astronomical optical interferometry has had to overcome a number of technical issues not shared by radio telescope interferometry. The short wavelengths of light necessitate extreme precision and stability of construction. For example, spatial resolution of 1 milliarcsecond requires 0.5 µm stability in a 100 m baseline. Optical interferometric measurements require high sensitivity, low noise detectors that did not become available until the late 1990s. Astronomical "seeing", the turbulence that causes stars to twinkle, introduces rapid, random phase changes in the incoming light, requiring kilohertz data collection rates to be faster than the rate of turbulence. Despite these technical difficulties, roughly a dozen astronomical optical interferometers are now in operation offering resolutions down to the fractional milliarcsecond range. This linked video shows a movie assembled from aperture synthesis images of the Beta Lyrae system, a binary star system approximately 960 light-years (290 parsecs) away in the constellation Lyra, as observed by the CHARA array with the MIRC instrument. The brighter component is the primary star, or the mass donor. The fainter component is the thick disk surrounding the secondary star, or the mass gainer. The two components are separated by 1 milli-arcsecond. Tidal distortions of the mass donor and the mass gainer are both clearly visible.
The wave character of matter can be exploited to build interferometers. The first examples of matter interferometers were electron interferometers, later followed by neutron interferometers. Around 1990 the first atom interferometers were demonstrated, later followed by interferometers employing molecules.
Electron holography is an imaging technique that photographically records the electron interference pattern of an object, which is then reconstructed to yield a greatly magnified image of the original object. This technique was developed to enable greater resolution in electron microscopy than is possible using conventional imaging techniques. The resolution of conventional electron microscopy is not limited by electron wavelength, but by the large aberrations of electron lenses.
Neutron interferometry has been used to investigate the Aharonov–Bohm effect, to examine the effects of gravity acting on an elementary particle, and to demonstrate a strange behavior of fermions that is at the basis of the Pauli exclusion principle: Unlike macroscopic objects, when fermions are rotated by 360° about any axis, they do not return to their original state, but develop a minus sign in their wave function. In other words, a fermion needs to be rotated 720° before returning to its original state.
Interferometers are used in atmospheric physics for high-precision measurements of trace gases via remote sounding of the atmosphere. There are several examples of interferometers that utilize either absorption or emission features of trace gases. A typical use would be in continual monitoring of the column concentration of trace gases such as ozone and carbon monoxide above the instrument.
Newton (test plate) interferometry is frequently used in the optical industry for testing the quality of surfaces as they are being shaped and figured. Fig. 13 shows photos of reference flats being used to check two test flats at different stages of completion, showing the different patterns of interference fringes. The reference flats are resting with their bottom surfaces in contact with the test flats, and they are illuminated by a monochromatic light source. The light waves reflected from both surfaces interfere, resulting in a pattern of bright and dark bands. The surface in the left photo is nearly flat, indicated by a pattern of straight parallel interference fringes at equal intervals. The surface in the right photo is uneven, resulting in a pattern of curved fringes. Each pair of adjacent fringes represents a difference in surface elevation of half a wavelength of the light used, so differences in elevation can be measured by counting the fringes. The flatness of the surfaces can be measured to millionths of an inch by this method. To determine whether the surface being tested is concave or convex with respect to the reference optical flat, any of several procedures may be adopted. One can observe how the fringes are displaced when one presses gently on the top flat. If one observes the fringes in white light, the sequence of colors becomes familiar with experience and aids in interpretation. Finally, one may compare the appearance of the fringes as one moves one's head from a normal to an oblique viewing position. These sorts of maneuvers, while common in the optical shop, are not suitable in a formal testing environment. When the flats are ready for sale, they will typically be mounted in a Fizeau interferometer for formal testing and certification.
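The fringe-counting arithmetic works out as follows; the wavelength and fringe count here are hypothetical example values, not measurements from Fig. 13.

```python
# Each adjacent pair of fringes in a Newton test corresponds to lambda/2 of
# elevation difference between the test and reference surfaces.
wavelength_nm = 632.8   # He-Ne laser line, a common test wavelength (assumed)
fringes_counted = 3     # hypothetical fringe count across the test flat

elevation_nm = fringes_counted * wavelength_nm / 2
print(f"Surface deviation: {elevation_nm:.1f} nm")

# Expressed in millionths of an inch (1 inch = 25.4e6 nm):
microinches = elevation_nm / 25.4e6 * 1e6
print(f"= {microinches:.1f} microinches")
```

Three fringes of curvature thus correspond to roughly a micrometer of surface deviation, which is why fringe counting reaches the "millionths of an inch" precision mentioned above.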
Fabry–Pérot etalons are widely used in telecommunications, lasers and spectroscopy to control and measure the wavelengths of light. Dichroic filters are multiple layer thin-film etalons. In telecommunications, wavelength-division multiplexing, the technology that enables the use of multiple wavelengths of light through a single optical fiber, depends on filtering devices that are thin-film etalons. Single-mode lasers employ etalons to suppress all optical cavity modes except the single one of interest.
The Twyman–Green interferometer, invented by Twyman and Green in 1916, is a variant of the Michelson interferometer widely used to test optical components. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman–Green configuration as being unsuitable for the testing of large optical components, since the light sources available at the time had limited coherence length. Michelson pointed out that constraints on geometry forced by limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman–Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. (A Twyman–Green interferometer using a laser light source and unequal path length is known as a Laser Unequal Path Interferometer, or LUPI.) Fig. 14 illustrates a Twyman–Green interferometer set up to test a lens. Light from a monochromatic point source is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis.
Mach–Zehnder interferometers are being used in integrated optical circuits, in which light interferes between two branches of a waveguide that are externally modulated to vary their relative phase. A slight tilt of one of the beam splitters will result in a path difference and a change in the interference pattern. Mach–Zehnder interferometers are the basis of a wide variety of devices, from RF modulators to sensors to optical switches.
The latest proposed extremely large astronomical telescopes, such as the Thirty Meter Telescope and the Extremely Large Telescope, will be of segmented design. Their primary mirrors will be built from hundreds of hexagonal mirror segments. Polishing and figuring these highly aspheric and non-rotationally symmetric mirror segments presents a major challenge. Traditional means of optical testing compares a surface against a spherical reference with the aid of a null corrector. In recent years, computer-generated holograms (CGHs) have begun to supplement null correctors in test setups for complex aspheric surfaces. Fig. 15 illustrates how this is done. Unlike the figure, actual CGHs have line spacing on the order of 1 to 10 µm. When laser light is passed through the CGH, the zero-order diffracted beam experiences no wavefront modification. The wavefront of the first-order diffracted beam, however, is modified to match the desired shape of the test surface. In the illustrated Fizeau interferometer test setup, the zero-order diffracted beam is directed towards the spherical reference surface, and the first-order diffracted beam is directed towards the test surface in such a way that the two reflected beams combine to form interference fringes. The same test setup can be used for the innermost mirrors as for the outermost, with only the CGH needing to be exchanged.
Ring laser gyroscopes (RLGs) and fibre optic gyroscopes (FOGs) are interferometers used in navigation systems. They operate on the principle of the Sagnac effect. The distinction between RLGs and FOGs is that in a RLG, the entire ring is part of the laser while in a FOG, an external laser injects counter-propagating beams into an optical fiber ring, and rotation of the system then causes a relative phase shift between those beams. In a RLG, the observed phase shift is proportional to the accumulated rotation, while in a FOG, the observed phase shift is proportional to the angular velocity.
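For a fiber-optic gyroscope, the Sagnac phase shift is Δφ = 8πNAΩ/(λc) for N fiber loops of enclosed area A rotating at angular velocity Ω. The coil parameters below are hypothetical, chosen only to show the order of magnitude of the effect for Earth's rotation rate.

```python
import math

# Sagnac phase shift for a FOG with n_loops turns of enclosed area area_m2,
# rotating at omega_rad_s, probed at wavelength_m:
#   dphi = 8 * pi * N * A * Omega / (lambda * c)
def sagnac_phase(n_loops, area_m2, omega_rad_s, wavelength_m):
    c = 299_792_458.0  # speed of light, m/s
    return 8 * math.pi * n_loops * area_m2 * omega_rad_s / (wavelength_m * c)

# Hypothetical FOG: 1000 turns of a 0.1 m diameter coil, 1550 nm light,
# sensing Earth's rotation rate.
area = math.pi * 0.05 ** 2
earth_rate = 7.292e-5  # rad/s
dphi = sagnac_phase(1000, area, earth_rate, 1.55e-6)
print(f"Sagnac phase shift: {dphi:.2e} rad")
```

The resulting phase shift is only tens of microradians, which is why FOGs multiply the effect with many fiber turns and use careful signal processing to resolve it.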
In telecommunication networks, heterodyning is used to move frequencies of individual signals to different channels which may share a single physical transmission line. This is called frequency division multiplexing (FDM). For example, a coaxial cable used by a cable television system can carry 500 television channels at the same time because each one is given a different frequency, so they don't interfere with one another. Continuous wave (CW) doppler radar detectors are basically heterodyne detection devices that compare transmitted and reflected beams.
Optical heterodyne detection is used for coherent Doppler lidar measurements capable of detecting very weak light scattered in the atmosphere and monitoring wind speeds with high accuracy. It has application in optical fiber communications, in various high resolution spectroscopic techniques, and the self-heterodyne method can be used to measure the linewidth of a laser.
Optical heterodyne detection is an essential technique used in high-accuracy measurements of the frequencies of optical sources, as well as in the stabilization of their frequencies. Until relatively recently, lengthy frequency chains were needed to connect the microwave frequency of a cesium or other atomic time source to optical frequencies. At each step of the chain, a frequency multiplier would be used to produce a harmonic of the frequency of that step, which would be compared by heterodyne detection with the next step (the output of a microwave source, far infrared laser, infrared laser, or visible laser). Each measurement of a single spectral line required several years of effort in the construction of a custom frequency chain. Today, optical frequency combs provide a much simpler method of measuring optical frequencies. If a mode-locked laser is modulated to form a train of pulses, its spectrum is seen to consist of the carrier frequency surrounded by a closely spaced comb of optical sideband frequencies with a spacing equal to the pulse repetition frequency (Fig. 16). The pulse repetition frequency is locked to that of the frequency standard, and the frequencies of the comb elements at the red end of the spectrum are doubled and heterodyned with the frequencies of the comb elements at the blue end of the spectrum, thus allowing the comb to serve as its own reference. In this manner, locking of the frequency comb output to an atomic standard can be performed in a single step. To measure an unknown frequency, the frequency comb output is dispersed into a spectrum. The unknown frequency is overlapped with the appropriate spectral segment of the comb and the frequency of the resultant heterodyne beats is measured.
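The self-referencing step can be checked with simple arithmetic: writing comb mode n as f_n = f_ceo + n·f_rep, doubling a red-end mode and beating it against the mode at twice its index isolates the carrier-envelope offset f_ceo. The repetition rate, offset, and mode index below are illustrative values.

```python
# f-2f self-referencing of an octave-spanning frequency comb:
#   2*(f_ceo + n*f_rep) - (f_ceo + 2n*f_rep) = f_ceo
f_rep = 250e6   # pulse repetition rate, Hz (illustrative)
f_ceo = 20e6    # carrier-envelope offset frequency, Hz (illustrative)

n = 800_000                       # red-end mode index (~200 THz mode)
f_red = f_ceo + n * f_rep         # red-end comb mode
f_blue = f_ceo + 2 * n * f_rep    # blue-end mode at twice the index

beat = 2 * f_red - f_blue         # heterodyne beat of the doubled red mode
print(beat)                       # recovers f_ceo = 20 MHz
```

Because the beat equals f_ceo regardless of n, measuring it (together with f_rep) fully determines every comb line, which is what lets the comb serve as its own reference.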
One of the most common industrial applications of optical interferometry is as a versatile measurement tool for the high-precision examination of surface topography. Popular interferometric measurement techniques include Phase Shifting Interferometry (PSI) and Vertical Scanning Interferometry (VSI), also known as scanning white light interferometry (SWLI) or by the ISO term Coherence Scanning Interferometry (CSI); CSI exploits coherence to extend the range of capabilities for interference microscopy. These techniques are widely used in micro-electronic and micro-optic fabrication. PSI uses monochromatic light and provides very precise measurements; however, it is only usable for surfaces that are very smooth. CSI often uses white light and high numerical apertures, and rather than looking at the phase of the fringes, as does PSI, looks for the best position of maximum fringe contrast or some other feature of the overall fringe pattern. In its simplest form, CSI provides less precise measurements than PSI but can be used on rough surfaces. Some configurations of CSI, variously known as Enhanced VSI (EVSI), high-resolution SWLI or Frequency Domain Analysis (FDA), use coherence effects in combination with interference phase to enhance precision.
Phase Shifting Interferometry addresses several issues associated with the classical analysis of static interferograms. Classically, one measures the positions of the fringe centers. As seen in Fig. 13, fringe deviations from straightness and equal spacing provide a measure of the aberration. Errors in determining the location of the fringe centers provide the inherent limit to precision of the classical analysis, and any intensity variations across the interferogram will also introduce error. There is a trade-off between precision and number of data points: closely spaced fringes provide many data points of low precision, while widely spaced fringes provide a low number of high precision data points. Since fringe center data is all that one uses in the classical analysis, all of the other information that might theoretically be obtained by detailed analysis of the intensity variations in an interferogram is thrown away. Finally, with static interferograms, additional information is needed to determine the polarity of the wavefront: In Fig. 13, one can see that the tested surface on the right deviates from flatness, but one cannot tell from this single image whether this deviation from flatness is concave or convex. Traditionally, this information would be obtained using non-automated means, such as by observing the direction that the fringes move when the reference surface is pushed.
Phase shifting interferometry overcomes these limitations by not relying on finding fringe centers, but rather by collecting intensity data from every point of the CCD image sensor. As seen in Fig. 17, multiple interferograms (at least three) are analyzed with the reference optical surface shifted by a precise fraction of a wavelength between each exposure using a piezoelectric transducer (PZT). Alternatively, precise phase shifts can be introduced by modulating the laser frequency. The captured images are processed by a computer to calculate the optical wavefront errors. The precision and reproducibility of PSI is far greater than possible in static interferogram analysis, with measurement repeatabilities of a hundredth of a wavelength being routine. Phase shifting technology has been adapted to a variety of interferometer types such as Twyman–Green, Mach–Zehnder, laser Fizeau, and even common path configurations such as point diffraction and lateral shearing interferometers. More generally, phase shifting techniques can be adapted to almost any system that uses fringes for measurement, such as holographic and speckle interferometry.
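The per-pixel phase calculation can be illustrated with the standard four-step algorithm, one of several common phase-shifting recipes; the signal model below is a generic textbook form, not tied to any particular instrument named here:

```python
# Four-step phase-shifting calculation: four interferograms are recorded with
# reference phase shifts of 0, pi/2, pi and 3*pi/2, and the wavefront phase at
# each pixel follows from an arctangent of intensity differences.
import math

def psi_phase(i1, i2, i3, i4):
    """Recover phase from four intensity samples shifted by 90 degrees each,
    assuming I_k = A + B*cos(phi + k*pi/2)."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel: background A, modulation B, true phase 0.7 rad.
A, B, phi = 1.0, 0.5, 0.7
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = psi_phase(*samples)
assert abs(recovered - phi) < 1e-12
```

The arctangent form cancels both the background intensity A and the fringe modulation B, which is why PSI is insensitive to intensity variations across the interferogram, and the sign of the recovered phase resolves the concave-versus-convex ambiguity of a single static interferogram.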
In coherence scanning interferometry, interference is only achieved when the path length delays of the interferometer are matched within the coherence time of the light source. CSI monitors the fringe contrast rather than the phase of the fringes. Fig. 17 illustrates a CSI microscope using a Mirau interferometer in the objective; other forms of interferometer used with white light include the Michelson interferometer (for low-magnification objectives, where the reference mirror in a Mirau objective would interrupt too much of the aperture) and the Linnik interferometer (for high-magnification objectives with limited working distance). The sample (or alternatively, the objective) is moved vertically over the full height range of the sample, and the position of maximum fringe contrast is found for each pixel. The chief benefit of coherence scanning interferometry is that systems can be designed that do not suffer from the 2π ambiguity of coherent interferometry, and as seen in Fig. 18, which scans a 180 μm × 140 μm × 10 μm volume, it is well suited to profiling steps and rough surfaces. The axial resolution of the system is determined in part by the coherence length of the light source. Industrial applications include in-process surface metrology, roughness measurement, 3D surface metrology in hard-to-reach spaces and in hostile environments, profilometry of surfaces with high-aspect-ratio features (grooves, channels, holes), and film thickness measurement (semiconductor and optical industries, etc.).
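A toy sketch of the per-pixel search CSI performs: as the sample is scanned in z, each pixel's fringe signal is modulated by a coherence envelope, and the surface height is taken where fringe contrast peaks. The Gaussian-envelope signal model and the windowed-contrast estimator are illustrative simplifications, not any instrument's actual algorithm:

```python
# Per-pixel height search in coherence scanning interferometry (toy model).
import math

def fringe_signal(z, height, wavelength=0.6, coherence_length=2.0):
    """Interference intensity at scan position z (lengths in micrometres):
    a cosine fringe under a Gaussian coherence envelope."""
    delta = z - height
    envelope = math.exp(-(delta / coherence_length) ** 2)
    return 1.0 + envelope * math.cos(4 * math.pi * delta / wavelength)

def estimate_height(zs, signal, half_window=7):
    """Return the scan position where windowed fringe 'energy' peaks,
    a crude stand-in for the maximum-fringe-contrast search."""
    mean = sum(signal) / len(signal)
    def energy(i):
        return sum(abs(signal[j] - mean)
                   for j in range(i - half_window, i + half_window + 1))
    best = max(range(half_window, len(zs) - half_window), key=energy)
    return zs[best]

zs = [i * 0.05 for i in range(201)]                 # 0 to 10 um vertical scan
sig = [fringe_signal(z, height=3.0) for z in zs]
assert abs(estimate_height(zs, sig) - 3.0) < 0.1    # recovers the surface height
```

Because the estimator keys on the envelope rather than the fringe phase, it has no 2π ambiguity, which is the point made above about profiling steps and rough surfaces.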
Fig. 19 illustrates a Twyman–Green interferometer set up for white light scanning of a macroscopic object.
Holographic interferometry is a technique which uses holography to monitor small deformations in single-wavelength implementations. In multi-wavelength implementations, it is used to perform dimensional metrology of large parts and assemblies and to detect larger surface defects.
Holographic interferometry was discovered by accident as a result of mistakes committed during the making of holograms. Early lasers were relatively weak and photographic plates were insensitive, necessitating long exposures during which vibrations or minute shifts might occur in the optical system. The resultant holograms, which showed the holographic subject covered with fringes, were considered ruined.
Eventually, several independent groups of experimenters in the mid-1960s realized that the fringes encoded important information about dimensional changes occurring in the subject, and began intentionally producing holographic double exposures. Disputes over priority of discovery arose during the issuance of the patent for this method.
Double- and multi-exposure holography is one of three methods used to create holographic interferograms. A first exposure records the object in an unstressed state. Subsequent exposures on the same photographic plate are made while the object is subjected to some stress. The composite image depicts the difference between the stressed and unstressed states.
Real-time holography is a second method of creating holographic interferograms. A hologram of the unstressed object is created. This hologram is illuminated with a reference beam to generate a hologram image of the object directly superimposed over the original object itself while the object is being subjected to some stress. The object waves from this hologram image will interfere with new waves coming from the object. This technique allows real-time monitoring of shape changes.
The third method, time-average holography, involves creating a hologram while the object is subjected to a periodic stress or vibration. This yields a visual image of the vibration pattern.
Interferometric synthetic aperture radar (InSAR) is a radar technique used in geodesy and remote sensing. Satellite synthetic aperture radar images of a geographic feature are taken on separate days, and changes that have taken place between radar images taken on the separate days are recorded as fringes similar to those obtained in holographic interferometry. The technique can monitor centimeter- to millimeter-scale deformation resulting from earthquakes, volcanoes and landslides, and also has uses in structural engineering, in particular for the monitoring of subsidence and structural stability. Fig. 20 shows Kilauea, an active volcano in Hawaii. Data acquired using the space shuttle Endeavour's X-band Synthetic Aperture Radar on April 13, 1994 and October 4, 1994 were used to generate interferometric fringes, which were overlaid on the X-SAR image of Kilauea.
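The fringe formation in InSAR can be sketched as complex-conjugate multiplication of the two SAR acquisitions: a path change of twice the surface displacement (the signal travels out and back) appears as an interferometric phase. The wavelength and displacement values below are illustrative assumptions:

```python
# Toy sketch of how an InSAR interferogram pixel is formed.
import cmath, math

def interferogram_phase(s1, s2):
    """Per-pixel interferometric phase from two complex SAR pixels."""
    return cmath.phase(s1 * s2.conjugate())

wavelength = 0.031            # X-band radar wavelength, ~3.1 cm
deformation = 0.005           # 5 mm of ground motion toward the radar

# A round-trip path change of 2*deformation produces this phase shift:
expected = 4 * math.pi * deformation / wavelength

s_before = cmath.exp(1j * 0.3)                 # arbitrary scene phase
s_after = cmath.exp(1j * (0.3 - expected))     # same pixel after the motion
measured = interferogram_phase(s_before, s_after)
assert abs(measured - expected) < 1e-9
```

One full fringe (2π of phase) therefore corresponds to half a wavelength of range change, which is why a ~3 cm radar can resolve millimetre-scale deformation.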
Electronic speckle pattern interferometry (ESPI), also known as TV holography, uses video detection and recording to produce an image of the object upon which is superimposed a fringe pattern representing the displacement of the object between recordings (see Fig. 21). The fringes are similar to those obtained in holographic interferometry.
When lasers were first invented, laser speckle was considered to be a severe drawback in using lasers to illuminate objects, particularly in holographic imaging because of the grainy image produced. It was later realized that speckle patterns could carry information about the object's surface deformations. Butters and Leendertz developed the technique of speckle pattern interferometry in 1970, and since then, speckle has been exploited in a variety of other applications. A photograph is made of the speckle pattern before deformation, and a second photograph is made of the speckle pattern after deformation. Digital subtraction of the two images results in a correlation fringe pattern, where the fringes represent lines of equal deformation. Short laser pulses in the nanosecond range can be used to capture very fast transient events. A phase problem exists: In the absence of other information, one cannot tell the difference between contour lines indicating a peak versus contour lines indicating a trough. To resolve the issue of phase ambiguity, ESPI may be combined with phase shifting methods.
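The digital subtraction step can be sketched with a simplified speckle model: pixels whose deformation-induced phase change is a multiple of 2π remain correlated (dark in the difference image), while pixels shifted by an odd multiple of π decorrelate (bright), producing the correlation fringes. The intensity model is deliberately idealized:

```python
# Idealized ESPI correlation-fringe demonstration.
import math, random

random.seed(0)

def speckle_intensity(phase):
    """Object + reference beam interference at one speckle pixel."""
    return 2 + 2 * math.cos(phase)

n = 1000
speckle = [random.uniform(0, 2 * math.pi) for _ in range(n)]  # random speckle phases
before = [speckle_intensity(p) for p in speckle]

# Pixels where deformation adds a pi phase shift change completely;
# pixels with a 2*pi shift are unchanged.
after_pi  = [speckle_intensity(p + math.pi) for p in speckle]
after_2pi = [speckle_intensity(p + 2 * math.pi) for p in speckle]

diff_pi  = sum(abs(a - b) for a, b in zip(before, after_pi)) / n
diff_2pi = sum(abs(a - b) for a, b in zip(before, after_2pi)) / n

assert diff_2pi < 1e-9      # correlated: dark fringe in the subtraction image
assert diff_pi > 1.0        # decorrelated: bright fringe in the subtraction image
```

As the text notes, the subtraction alone cannot distinguish a peak from a trough (the phase problem), which is why phase-shifting methods are combined with ESPI in practice.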
A method of establishing precise geodetic baselines, invented by Yrjö Väisälä, exploited the low coherence length of white light. Initially, white light was split in two, with the reference beam "folded", bouncing back-and-forth six times between a mirror pair spaced precisely 1 m apart. Only if the test path was precisely 6 times the reference path would fringes be seen. Repeated applications of this procedure allowed precise measurement of distances up to 864 meters. Baselines thus established were used to calibrate geodetic distance measurement equipment, leading to a metrologically traceable scale for geodetic networks measured by these instruments. (This method has been superseded by GPS.)
Interferometers have also been used to study the dispersion of materials, to measure complex indices of refraction, and to characterize thermal properties. They are also used for three-dimensional motion mapping, including mapping the vibrational patterns of structures.
Optical interferometry, applied to biology and medicine, provides sensitive metrology capabilities for the measurement of biomolecules, subcellular components, cells and tissues. Many forms of label-free biosensors rely on interferometry because the direct interaction of electromagnetic fields with local molecular polarizability eliminates the need for fluorescent tags or nanoparticle markers. At a larger scale, cellular interferometry shares aspects with phase-contrast microscopy, but comprises a much larger class of phase-sensitive optical configurations that rely on optical interference among cellular constituents through refraction and diffraction. At the tissue scale, partially-coherent forward-scattered light propagation through the micro aberrations and heterogeneity of tissue structure provides opportunities to use phase-sensitive gating (optical coherence tomography) as well as phase-sensitive fluctuation spectroscopy to image subtle structural and dynamical properties.
Figure 22. Typical optical setup of single point OCT
Figure 23. Central serous retinopathy, imaged using optical coherence tomography
Optical coherence tomography (OCT) is a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 22, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, an interferometric signal is observed only over a limited depth of the sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry.
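The depth gating follows directly from the source's coherence length: for a Gaussian spectrum the axial resolution is commonly written as Δz = (2 ln 2/π)·λ₀²/Δλ. A quick sketch with illustrative source parameters (the 840 nm / 50 nm values are typical assumptions, not taken from the text):

```python
# Axial resolution of a low-coherence (OCT) source with a Gaussian spectrum.
import math

def oct_axial_resolution(center_wavelength_nm, bandwidth_nm):
    """Axial (depth) resolution in nanometres: dz = (2*ln2/pi) * l0^2 / dl."""
    return (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm

# A representative near-infrared OCT source: 840 nm centre, 50 nm bandwidth.
dz = oct_axial_resolution(840, 50)
assert 6000 < dz < 6500     # roughly 6 micrometres of depth resolution
```

The inverse dependence on bandwidth is why broadband sources (superluminescent diodes, supercontinuum lasers) are used: doubling Δλ halves the thickness of the optical slice selected at each reference-mirror position.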
Phase contrast and differential interference contrast (DIC) microscopy are important tools in biology and medicine. Most animal cells and single-celled organisms have very little color, and their intracellular organelles are almost totally invisible under simple bright field illumination. These structures can be made visible by staining the specimens, but staining procedures are time-consuming and kill the cells. As seen in Figs. 24 and 25, phase contrast and DIC microscopes allow unstained, living cells to be studied. DIC also has non-biological applications, for example in the analysis of planar silicon semiconductor processing.
Angle-resolved low-coherence interferometry (a/LCI) uses scattered light to measure the sizes of subcellular objects, including cell nuclei. This allows interferometry depth measurements to be combined with density measurements. Various correlations have been found between the state of tissue health and the measurements of subcellular objects. For example, it has been found that as tissue changes from normal to cancerous, the average cell nuclei size increases.
Phase-contrast X-ray imaging (Fig. 26) refers to a variety of techniques that use phase information of a coherent X-ray beam to image soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the X-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, moiré-based far-field interferometry, refraction-enhanced imaging, and X-ray interferometry. These methods provide higher contrast compared with normal absorption-contrast X-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high-resolution X-ray detectors.
White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings.
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection. At each separation and orientation, the lobe pattern of the interferometer produces an output which is one component of the Fourier transform of the spatial distribution of the brightness of the observed object. The image (or "map") of the source is produced from these measurements. Astronomical interferometers are commonly used for high-resolution optical, infrared, submillimetre and radio astronomy observations. For example, the Event Horizon Telescope project derived the first image of a black hole using aperture synthesis.

Astronomical interferometer
An astronomical interferometer is an array of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array.

Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI), radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution which would be given by a hypothetical single dish with an aperture thousands of kilometers in diameter. At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics.
Astronomical interferometers can produce higher resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fractional milliarcsecond have been achieved at visible and infrared wavelengths.
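These figures are consistent with the diffraction estimate θ ≈ λ/B for a baseline B; a back-of-the-envelope check with assumed wavelengths and baselines (the specific values are illustrative, not drawn from the text):

```python
# Diffraction-limit estimate of interferometer angular resolution.
import math

def resolution_microarcsec(wavelength_m, baseline_m):
    """theta ~ lambda / B, converted from radians to microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180 / math.pi) * 3600 * 1e6

# Millimetre-wave VLBI over an Earth-scale baseline (~10,000 km):
vlbi = resolution_microarcsec(1.3e-3, 1.0e7)
# An optical interferometer at 550 nm over a 300 m baseline:
optical = resolution_microarcsec(550e-9, 300)

assert 20 < vlbi < 30          # tens of microarcseconds
assert 300 < optical < 450     # a fraction of a milliarcsecond
```

The same formula shows why a modest optical baseline rivals a continent-sized radio array: the five-orders-of-magnitude shorter wavelength compensates for the much shorter baseline.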
One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a partially complete reflecting telescope but with a "sparse" or "dilute" aperture. In fact the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner (focus) are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, and Labeyrie's hypertelescope will use a spherical geometry.

Atom interferometer
An atom interferometer is an interferometer which uses the wave character of atoms. Similar to optical interferometers, atom interferometers measure the difference in phase between atomic matter waves along different paths. Atom interferometers have many uses in fundamental physics, including measurements of the gravitational constant, the fine-structure constant, and the universality of free fall, and have been proposed as a method to detect gravitational waves. They also have applied uses as accelerometers, rotation sensors, and gravity gradiometers.

Dual-polarization interferometry
Dual-polarization interferometry (DPI) is an analytical technique that probes molecular layers adsorbed to the surface of a waveguide using the evanescent wave of a laser beam. It is used to measure the conformational change in proteins, or other biomolecules, as they function (referred to as the conformation-activity relationship).

Infrared Processing and Analysis Center
The Infrared Processing and Analysis Center (IPAC) provides science operations, data management, data archives and community support for astronomy and planetary science missions. IPAC has a historical emphasis on infrared-submillimeter astronomy and exoplanet science. IPAC has supported NASA, NSF and privately funded projects and missions. It is located on the campus of the California Institute of Technology in Pasadena, California.

IPAC was established in 1986 to provide support for the joint European-American orbiting infrared telescope, the Infrared Astronomical Satellite, or IRAS. The IRAS mission performed an unbiased, sensitive all-sky survey at 12, 25, 60 and 100 µm during 1983. After the mission ended, IPAC started the Infrared Science Archive (IRSA) to make the data available to anyone who needed it.
Later, NASA designated IPAC as the U.S. science support center for the European Infrared Space Observatory (ISO), which ceased operations in 1998. About that same time, IPAC was designated as the science center for the Space Infrared Telescope Facility (SIRTF), renamed the Spitzer Space Telescope after launch. IPAC also assumed the lead role in various other infrared space missions, including the Wide-field Infrared Explorer (WIRE) and the Midcourse Space Experiment (MSX). IPAC also expanded its support to include ground-based missions with the assumption of science support responsibilities for the Two-Micron All-Sky Survey (2MASS), a near-infrared survey of the entire sky conducted by twin observatories in the Northern and Southern hemispheres.
In 1999, IPAC formed an interferometry science center, originally called the Michelson Science Center (MSC) after interferometry pioneer Albert A. Michelson. MSC was renamed the NASA Exoplanet Science Institute (NExScI) in 2008.
Today, the greater IPAC includes the Spitzer Science Center, the NASA Exoplanet Science Institute and the NASA Herschel Science Center. In 2014, NASA established the Euclid NASA Science Center at IPAC (ENSCI) in order to support US-based investigations using Euclid data. The combined efforts of these centers support more than a dozen science missions and archives. IPAC is also a participating organization in the Virtual Astronomical Observatory (VAO).

Interferometric synthetic-aperture radar
Interferometric synthetic aperture radar, abbreviated InSAR (or the deprecated IfSAR), is a radar technique used in geodesy and remote sensing. This geodetic method uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft. The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability.

John E. Baldwin
John Evan Baldwin FRS (6 December 1931 – 7 December 2010) was a British astronomer who worked at the Cavendish Astrophysics Group (formerly the Mullard Radio Astronomy Observatory) from 1954. He played a role in the development of interferometry in radio astronomy, and later in astronomical optical interferometry and lucky imaging. He made the first maps of the radio emission from the Andromeda Galaxy and the Perseus Cluster, and measured the properties of many active galaxies. In 1985 he performed the first aperture-masking interferometry observations, then led the construction and operation of the Cambridge Optical Aperture Synthesis Telescope and helped develop the lucky imaging method. In 2001 he was awarded the Jackson-Gwilt Medal for his technical contributions to the fields of interferometry and aperture synthesis. He matriculated as a member of Queens' College, Cambridge in 1949 and was a Life Fellow of the College from 1999.

Laser Interferometer Space Antenna
The Laser Interferometer Space Antenna (LISA) is a European Space Agency mission designed to detect and accurately measure gravitational waves—tiny ripples in the fabric of space-time—from astronomical sources. LISA would be the first dedicated space-based gravitational wave detector. It aims to measure gravitational waves directly by using laser interferometry. The LISA concept has a constellation of three spacecraft arranged in an equilateral triangle with sides 2.5 million km long, flying along an Earth-like heliocentric orbit. The distance between the satellites is precisely monitored to detect a passing gravitational wave.

The LISA project started out as a joint effort between the United States space agency NASA and the European Space Agency ESA. However, in 2011, NASA announced that it would be unable to continue its LISA partnership with the European Space Agency due to funding limitations. A scaled-down design initially known as the New Gravitational-wave Observatory (NGO) was proposed for ESA's Cosmic Vision L1 mission selection. In 2013, ESA selected 'The Gravitational Universe' as the theme for its L3 mission in the early 2030s, whereby it committed to launch a space-based gravitational wave observatory.
In January 2017, LISA was proposed as the candidate mission. On June 20, 2017 the suggested mission received its clearance goal for the 2030s and was approved as one of the main research missions of ESA.

The LISA mission is designed for direct observation of gravitational waves, which are distortions of space-time travelling at the speed of light. Passing gravitational waves alternately squeeze and stretch objects by a tiny amount. Gravitational waves are caused by energetic events in the universe and, unlike any other radiation, can pass unhindered by intervening mass. Launching LISA will add a new sense to scientists' perception of the universe and enable them to study phenomena that are invisible in normal light. Potential sources for signals are merging massive black holes at the centres of galaxies, massive black holes orbited by small compact objects, known as extreme mass ratio inspirals, binaries of compact stars in our Galaxy, and possibly other sources of cosmological origin, such as the very early phase of the Big Bang, and speculative astrophysical objects like cosmic strings and domain boundaries.

Neutron interferometer
In physics, a neutron interferometer is an interferometer capable of diffracting neutrons, allowing the wave-like nature of neutrons, and other related phenomena, to be explored.

Optical coherence tomography
Optical coherence tomography (OCT) is an imaging technique that uses low-coherence light to capture micrometer-resolution, two- and three-dimensional images from within optical scattering media (e.g., biological tissue). It is used for medical imaging and industrial nondestructive testing (NDT). Optical coherence tomography is based on low-coherence interferometry, typically employing near-infrared light. The use of relatively long wavelength light allows it to penetrate into the scattering medium. Confocal microscopy, another optical technique, typically penetrates less deeply into the sample but with higher resolution.
Depending on the properties of the light source (superluminescent diodes, ultrashort pulsed lasers, and supercontinuum lasers have been employed), optical coherence tomography has achieved sub-micrometer resolution (with very wide-spectrum sources emitting over a ~100 nm wavelength range). Optical coherence tomography is one of a class of optical tomographic techniques. Commercially available optical coherence tomography systems are employed in diverse applications, including art conservation and diagnostic medicine, notably in ophthalmology and optometry, where it can be used to obtain detailed images from within the retina. Recently, it has also begun to be used in interventional cardiology to help diagnose coronary artery disease, and in dermatology to improve diagnosis. A relatively recent implementation of optical coherence tomography, frequency-domain optical coherence tomography, provides advantages in signal-to-noise ratio, permitting faster signal acquisition.

Radio astronomy
Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The first detection of radio waves from an astronomical object was in 1932, when Karl Jansky at Bell Telephone Laboratories observed radiation coming from the Milky Way. Subsequent observations have identified a number of different sources of radio emission. These include stars and galaxies, as well as entirely new classes of objects, such as radio galaxies, quasars, pulsars, and masers. The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy.
Radio astronomy is conducted using large radio antennas referred to as radio telescopes, which are either used singly or as multiple linked telescopes utilizing the techniques of radio interferometry and aperture synthesis. The use of interferometry allows radio astronomy to achieve high angular resolution, as the resolving power of an interferometer is set by the distance between its components, rather than the size of its components.

Radio telescope
A radio telescope is a specialized antenna and radio receiver used to receive radio waves from astronomical radio sources in the sky. Radio telescopes are the main observing instrument used in radio astronomy, which studies the radio frequency portion of the electromagnetic spectrum emitted by astronomical objects, just as optical telescopes are the main observing instrument used in traditional optical astronomy which studies the light wave portion of the spectrum coming from astronomical objects. Radio telescopes are typically large parabolic ("dish") antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used singly or linked together electronically in an array. Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night. Since astronomical radio sources such as planets, stars, nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television, radar, motor vehicles, and other man-made electronic devices.
Radio waves from space were first detected by engineer Karl Guthe Jansky in 1932 at Bell Telephone Laboratories in Holmdel, New Jersey, using an antenna built to study noise in radio receivers. The first purpose-built radio telescope was a 9-meter parabolic dish constructed by radio amateur Grote Reber in his back yard in Wheaton, Illinois in 1937. The sky survey he did with it is often considered the beginning of the field of radio astronomy.

Scintillometer
A scintillometer is a scientific device used to measure small fluctuations of the refractive index of air caused by variations in temperature, humidity, and pressure. It consists of an optical or radio wave transmitter and a receiver at opposite ends of an atmospheric propagation path. The receiver detects and evaluates the intensity fluctuations of the transmitted signal, called scintillation.
The magnitude of the refractive index fluctuations is usually measured in terms of Cn², the structure constant of refractive index fluctuations, which is the spectral amplitude of refractive index fluctuations in the inertial subrange of turbulence. Some types of scintillometers, such as displaced-beam scintillometers, can also measure the inner scale of refractive index fluctuations, which is the smallest size of eddies in the inertial subrange.
Scintillometers also allow measurements of the transfer of heat between the Earth's surface and the air above, called the sensible heat flux. Inner-scale scintillometers can also measure the dissipation rate of turbulent kinetic energy and the momentum flux.

Space Interferometry Mission
The Space Interferometry Mission, or SIM, also known as SIM Lite (formerly known as SIM PlanetQuest), was a planned space telescope proposed by the U.S. National Aeronautics and Space Administration (NASA), in conjunction with contractor Northrop Grumman. One of the main goals of the mission was the hunt for Earth-sized planets orbiting in the habitable zones of nearby stars other than the Sun. SIM was postponed several times and finally cancelled in 2010. In addition to detecting extrasolar planets, SIM would have helped astronomers construct a map of the Milky Way galaxy. Other important tasks would have included collecting data to help pinpoint stellar masses for specific types of stars, assisting in the determination of the spatial distribution of dark matter in the Milky Way and in the local group of galaxies and using the gravitational microlensing effect to measure the mass of stars. The spacecraft would have used optical interferometry to accomplish these and other scientific goals.
The initial contracts for SIM Lite were awarded in 1998, totaling US$200 million. Work on the SIM project required scientists and engineers to move through eight specific new technology milestones, and by November 2006, all eight had been completed. SIM Lite was originally proposed for a 2005 launch aboard an Evolved Expendable Launch Vehicle (EELV). As a result of continued budget cuts, the launch date was pushed back at least five times, and NASA eventually set a preliminary launch date of 2015. As of February 2007, many of the engineers working on the SIM program had moved on to other areas and projects, and NASA directed the project to allocate its resources toward engineering risk reduction. However, the preliminary NASA budget for 2008 included no funding for SIM. In 2007, Congress restored funding for fiscal year 2008 as part of an omnibus appropriations bill, which the President later signed; at the same time, Congress directed NASA to move the mission forward to the development phase. In 2009, the project continued its risk reduction work while awaiting the findings and recommendations of the Astronomy and Astrophysics Decadal Survey (Astro2010), performed by the National Academy of Sciences, which would determine the project's future.
In 2010, the Astro2010 Decadal Report was released and did not recommend that NASA continue the development of the SIM Lite Astrometric Observatory. This prompted NASA Astronomy and Physics Director Jon Morse to issue a letter on 24 September 2010 to the SIM Lite project manager, informing him that NASA was discontinuing its sponsorship of the SIM Lite mission and directing the project to discontinue Phase B activities immediately or as soon as practical. Accordingly, all SIM Lite activities were closed down by the end of calendar year 2010.

Speckle imaging
Speckle imaging describes a range of high-resolution astronomical imaging techniques based on the analysis of large numbers of short exposures that freeze the variation of atmospheric turbulence. They can be divided into the shift-and-add ("image stacking") method and the speckle interferometry methods. These techniques can dramatically increase the resolution of ground-based telescopes, but are limited to bright targets.

Spectral phase interferometry for direct electric-field reconstruction
In ultrafast optics, spectral phase interferometry for direct electric-field reconstruction (SPIDER) is an ultrashort pulse measurement technique originally developed by Chris Iaconis and Ian Walmsley.

Synthetic-aperture radar
Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side-looking airborne radar (SLAR). The distance the SAR device travels over a target in the time taken for the radar pulses to return to the antenna creates the large synthetic antenna aperture (the size of the antenna). Typically, the larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical (a large antenna) or synthetic (a moving antenna) – this allows SAR to create high-resolution images with comparatively small physical antennas.
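The trade-off between antenna size and resolution can be made concrete with the classic stripmap-SAR result: the synthetic aperture is the along-track beam footprint L = λR/D, and the two-way synthetic beam then gives an azimuth resolution of λR/(2L) = D/2, independent of range. A sketch under these textbook assumptions (the example numbers are hypothetical):

```python
def sar_azimuth_resolution(wavelength_m, range_m, antenna_length_m):
    """Classic stripmap-SAR geometry: the synthetic aperture equals the
    along-track beam footprint L = lambda * R / D, and the two-way
    synthetic beam yields an azimuth resolution of lambda * R / (2L),
    which simplifies to D / 2 regardless of range."""
    synthetic_aperture = wavelength_m * range_m / antenna_length_m
    resolution = wavelength_m * range_m / (2 * synthetic_aperture)
    return synthetic_aperture, resolution

# Hypothetical C-band (5.6 cm) spaceborne case: 800 km slant range,
# 10 m physical antenna
aperture, res = sar_azimuth_resolution(0.056, 800e3, 10.0)
# res equals antenna_length / 2 = 5.0 m at any range
```

Note the counterintuitive consequence: a *shorter* physical antenna illuminates a longer footprint, producing a longer synthetic aperture and therefore finer azimuth resolution.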
To create a SAR image, successive pulses of radio waves are transmitted to "illuminate" a target scene, and the echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single beam-forming antenna, with wavelengths of a meter down to several millimeters. As the SAR device on board the aircraft or spacecraft moves, the antenna location relative to the target changes with time. Signal processing of the successive recorded radar echoes allows the combining of the recordings from these multiple antenna positions. This process forms the synthetic antenna aperture and allows the creation of higher-resolution images than would otherwise be possible with a given physical antenna. As of 2010, airborne systems provide resolutions of about 10 cm, ultra-wideband systems provide resolutions of a few millimeters, and experimental terahertz SAR has provided sub-millimeter resolution in the laboratory.

Very-long-baseline interferometry
Very-long-baseline interferometry (VLBI) is a type of astronomical interferometry used in radio astronomy. In VLBI a signal from an astronomical radio source, such as a quasar, is collected at multiple radio telescopes on Earth. The distance between the radio telescopes is then calculated using the time difference between the arrivals of the radio signal at different telescopes. This allows observations of an object that are made simultaneously by many radio telescopes to be combined, emulating a telescope with a size equal to the maximum separation between the telescopes.
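The emulated telescope's diffraction-limited resolution is roughly λ/B for baseline B, and the raw observable correlated between stations is the geometric delay, the extra wavefront travel time to the farther antenna. A small sketch of both relations (function names and example numbers are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def vlbi_resolution_rad(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution ~ lambda / baseline."""
    return wavelength_m / baseline_m

def geometric_delay_s(baseline_m, source_angle_rad):
    """Extra wavefront travel time to the second antenna, for a source
    offset by the given angle from the baseline's perpendicular."""
    return baseline_m * math.sin(source_angle_rad) / C

# Hypothetical 1.3 cm (K-band) observation on an 8000 km
# intercontinental baseline:
theta = vlbi_resolution_rad(0.013, 8.0e6)  # radians
theta_mas = math.degrees(theta) * 3.6e6    # milliarcseconds
delay = geometric_delay_s(8.0e6, math.radians(30.0))  # seconds
```

The resolution comes out at a fraction of a milliarcsecond, far beyond any single dish, while the delay is of order ten milliseconds, which is why each station needs a hydrogen-maser clock to timestamp its data precisely enough for later correlation.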
Data received at each antenna in the array include arrival times from a local atomic clock, such as a hydrogen maser. At a later time, the data are correlated with data from other antennas that recorded the same radio signal, to produce the resulting image. The resolution achievable using interferometry is proportional to the observing frequency. The VLBI technique enables the distance between telescopes to be much greater than that possible with conventional interferometry, which requires antennas to be physically connected by coaxial cable, waveguide, optical fiber, or other type of transmission line. The greater telescope separations are possible in VLBI due to the development of the closure phase imaging technique by Roger Jennison in the 1950s, allowing VLBI to produce images with superior resolution. VLBI is best known for imaging distant cosmic radio sources, spacecraft tracking, and for applications in astrometry. However, since the VLBI technique measures the time differences between the arrival of radio waves at separate antennas, it can also be used "in reverse" to perform earth rotation studies, map movements of tectonic plates very precisely (within millimetres), and perform other types of geodesy. Using VLBI in this manner requires large numbers of time difference measurements from distant sources (such as quasars) observed with a global network of antennas over a period of time.

Wave interference
In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves. The resulting images or graphs are called interferograms.
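For two coherent waves of intensities I₁ and I₂ with phase difference Δφ, superposition gives a time-averaged intensity I = I₁ + I₂ + 2√(I₁I₂)·cos(Δφ), which spans the full range from constructive to destructive interference. A minimal illustration (function name is ours):

```python
import math

def interference_intensity(i1, i2, phase_diff_rad):
    """Time-averaged intensity of two superposed coherent waves:
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi)."""
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(phase_diff_rad)

# Two equal unit-intensity beams: fully constructive at zero phase
# difference, fully destructive at a half-wave (pi) phase difference
bright = interference_intensity(1.0, 1.0, 0.0)      # 4x the single-beam intensity
dark = interference_intensity(1.0, 1.0, math.pi)    # complete cancellation
```

Sweeping the phase difference traces out exactly the fringe pattern an interferometer records, which is why the fringes encode the optical path difference between its two arms.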