Super-resolution imaging

Super-resolution imaging (SR) is a class of techniques that enhance the resolution of an imaging system. In some SR techniques—termed optical SR—the diffraction limit of the system is transcended, while in others—geometrical SR—the resolution of digital imaging sensors is enhanced.

In some radar and sonar imaging applications, as well as in medical imaging such as magnetic resonance imaging (MRI) and high-resolution computed tomography, subspace decomposition-based methods (e.g., MUSIC[1]) and compressed sensing-based algorithms (e.g., SAMV[2]) are employed to achieve SR over the standard periodogram algorithm.
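As a rough illustration of why such estimators are attractive, the standard periodogram cannot separate two sinusoidal sources spaced more closely than about 1/N cycles per sample. The sketch below (plain NumPy, noise-free, with made-up frequencies) shows two such tones merging into a single spectral peak; this is the resolution barrier that MUSIC- and SAMV-type methods are designed to overcome.

```python
import numpy as np

# Sketch: the periodogram's frequency resolution is about 1/N cycles
# per sample. Two tones closer than that merge into one peak, which is
# the limitation that subspace (MUSIC) and sparse (SAMV) methods address.
# All values here are illustrative.
N = 64
n = np.arange(N)
f1, f2 = 0.200, 0.205              # separation 0.005 < 1/N ~ 0.0156
x = np.cos(2 * np.pi * f1 * n) + np.cos(2 * np.pi * f2 * n)

# Standard periodogram: squared magnitude of the FFT.
P = np.abs(np.fft.rfft(x)) ** 2 / N
freqs = np.fft.rfftfreq(N)

# Count local maxima in the band around the two sources.
band = (freqs > 0.15) & (freqs < 0.25)
Pb = P[band]
peaks = sum(1 for i in range(1, len(Pb) - 1)
            if Pb[i] > Pb[i - 1] and Pb[i] > Pb[i + 1])
print(peaks)  # the two tones appear as a single merged peak
```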

Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy.

Basic concepts

Because some of the ideas surrounding super-resolution raise fundamental issues, there is a need at the outset to examine the relevant physical and information-theoretical principles.

Diffraction Limit The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light[3] or the Uncertainty Principle for photons in quantum mechanics.[4] Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it.[5] One does not so much “break” as “run around” the diffraction limit. New procedures probing electromagnetic disturbances at the molecular level (in the so-called near field)[6] remain fully consistent with Maxwell's equations.

A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands;[7][8] disentangling them in the received image requires the assumption that the object remained invariant during the multiple exposures, i.e., the substitution of one kind of uncertainty for another.

Information When the term super-resolution is used in techniques of inferring object details from statistical treatment of the image within standard resolution limits, for example, averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant).

Resolution and localization True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution was proposed for this process[9] but did not catch on, and the high-precision localization procedure is typically referred to as super-resolution.

In summary: The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution make the fullest use of, but always remain within, the bounds imposed by the laws of physics and information theory.

Techniques to which the term “super resolution” has been applied

Optical or diffractive super-resolution

Substituting spatial-frequency bands. Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.

Structured Illumination Superresolution
The "structured illumination" technique of super-resolution is related to moiré patterns. The target, a band of fine fringes (top row), is beyond the diffraction limit. When a band of somewhat coarser resolvable fringes (second row) is artificially superimposed, the combination (third row) features moiré components that are within the diffraction limit and hence contained in the image (bottom row), allowing the presence of the fine fringes to be inferred even though they are not themselves represented in the image.
Multiplexing spatial-frequency bands such as structured illumination (see figure to left)
An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that is also within the passband, is superimposed on the target.[8] The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple, unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed.
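The frequency mixing at the heart of structured illumination can be sketched numerically. In the illustrative 1-D model below (NumPy; all frequencies are made up and chosen to be FFT-bin-aligned), multiplying an out-of-band target fringe by a known in-band illumination fringe creates a moiré component at the difference frequency, which falls inside the passband:

```python
import numpy as np

# Illustrative 1-D sketch of structured-illumination frequency mixing.
# A target fringe beyond the passband, multiplied by a known coarser
# illumination fringe, yields a moire component at the *difference*
# frequency, which an ideal low-pass imaging system does transmit.
N = 512
x = np.arange(N)
f_target, f_illum, f_cutoff = 0.3125, 0.25, 0.10   # cycles/sample (bin-aligned)

target = 1 + 0.5 * np.cos(2 * np.pi * f_target * x)
illum = 1 + 0.5 * np.cos(2 * np.pi * f_illum * x)
product = target * illum      # contains f_target - f_illum = 0.0625

spectrum = np.abs(np.fft.rfft(product - product.mean()))
freqs = np.fft.rfftfreq(N)
inside = freqs < f_cutoff     # the diffraction-limited passband

# The strongest in-band component sits at the difference frequency.
k = np.argmax(spectrum * inside)
print(freqs[k])  # -> 0.0625, i.e. f_target - f_illum
```

The original fine-fringe frequency is recovered from the moiré component only because the illumination frequency is known, which is the disentangling step described above.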
Multiple parameter use within traditional diffraction limit
If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would utilize normal passband transmission but are then separately decoded to reconstitute the target structure with extended resolution.
Probing near-field electromagnetic disturbance
The usual discussion of super-resolution involves conventional imaging of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source,[6] which has superior resolution properties; see also evanescent waves and the development of the new superlens.

Geometrical or image-processing super-resolution

Super-resolution example closeup
Compared to a single image marred by noise during its acquisition or transmission (left), the signal-to-noise ratio is improved by suitable combination of several separately obtained images (right). This can be achieved only within the intrinsic resolution capability of the imaging process for revealing such detail.
Multi-exposure image noise reduction
When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right.
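A minimal sketch of this effect (NumPy, with a synthetic 1-D "scene" and Gaussian noise; all numbers illustrative): averaging M exposures of a static scene shrinks the noise standard deviation by roughly √M while leaving the underlying detail untouched.

```python
import numpy as np

# Sketch: averaging M noisy exposures of a static scene improves the
# signal-to-noise ratio by about sqrt(M), without extending the
# diffraction-limited passband. Values are illustrative.
rng = np.random.default_rng(0)
scene = np.linspace(0.0, 1.0, 256)         # stand-in "true" image row
M, sigma = 64, 0.2

frames = scene + rng.normal(0.0, sigma, size=(M, scene.size))
single_err = np.std(frames[0] - scene)             # about sigma
stacked_err = np.std(frames.mean(axis=0) - scene)  # about sigma / sqrt(M)

print(single_err / stacked_err)  # close to sqrt(M) = 8
```

The improvement presumes the target has remained invariant across exposures, which is exactly the information exchange discussed in the Basic concepts section.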
Single-frame deblurring
Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
Localization Resolution
Both features extend over 3 pixels but in different amounts, enabling them to be localized with precision superior to pixel dimension.
Sub-pixel image localization
The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, much better than the pixel width of the detecting apparatus and than the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. STORM, where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.[10]
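A minimal numerical sketch of centroid localization (NumPy, 1-D, noise-free Gaussian blur spot; all parameters invented for illustration): the centroid recovers a sub-pixel position even though the spot itself spans several pixels.

```python
import numpy as np

def centroid(intensity):
    """Center of gravity of a 1-D intensity profile, in pixel units."""
    pixels = np.arange(intensity.size)
    return np.sum(pixels * intensity) / np.sum(intensity)

# A blur spot (Gaussian PSF, sigma = 1.5 px) centered at 10.3 px,
# sampled on a coarse integer pixel grid.
true_pos = 10.3
px = np.arange(21)
spot = np.exp(-0.5 * ((px - true_pos) / 1.5) ** 2)

# The centroid recovers the position far more precisely than 1 px.
print(round(centroid(spot), 3))  # -> 10.3
```

In practice photon shot noise, background, and pixelation set the achievable precision, which scales roughly with the spot width divided by the square root of the number of collected photons.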
Bayesian induction beyond traditional diffraction limit
Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object.[11] The classical example is Toraldo di Francia's proposition[12] of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function and that the function values in some interval can be known exactly. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging.[13] More recently, a fast single-image super-resolution algorithm based on a closed-form analytical solution has been proposed and demonstrated to significantly accelerate most of the existing Bayesian super-resolution methods.[14]


Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed.[15]

In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion[16]), the presence of aliasing is still a necessary condition for SR reconstruction.
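A toy 1-D sketch of the aliasing requirement (NumPy; sub-pixel registration assumed perfectly known, noise-free, and the shifts chosen as exactly one high-resolution sample): four under-sampled, shifted "exposures" are fused by interleaving into the full-resolution signal.

```python
import numpy as np

# Toy 1-D model of geometrical SR from aliased, sub-pixel-shifted frames.
# Decimating without a low-pass filter aliases the signal - exactly the
# condition that makes SR reconstruction possible.
hi = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 64, endpoint=False))  # scene
factor = 4                                                          # SR factor

# Each low-res frame: shift by k high-res samples, then decimate.
frames = [hi[k::factor] for k in range(factor)]

# Shift-and-interleave fusion (sub-pixel registration assumed known).
recon = np.empty_like(hi)
for k, frame in enumerate(frames):
    recon[k::factor] = frame

print(np.allclose(recon, hi))  # -> True
```

Real shift-add fusion must additionally estimate the shifts, handle non-integer offsets by interpolation, and deblur the sensor's point spread function; this sketch isolates only the aliasing argument.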

Technical implementations

There are both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low-resolution images of the same scene. It fuses information from all the low-resolution images to create an improved-resolution image that better describes the scene. Single-frame SR methods attempt to magnify the image without introducing blur. These methods use other parts of the low-resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images,[17] but researchers have found methods to adapt them to color camera images.[16] Recently, the use of super-resolution for 3D data has also been shown.[18]


There is promising research on using deep convolutional networks to perform super-resolution.[19]

References


  1. ^ Schmidt, R.O., "Multiple Emitter Location and Signal Parameter Estimation," IEEE Trans. Antennas Propagation, Vol. AP-34 (March 1986), pp. 276–280.
  2. ^ Abeida, Habti; Zhang, Qilin; Li, Jian; Merabtine, Nadjim (2013). "Iterative Sparse Asymptotic Minimum Variance Based Approaches for Array Processing" (PDF). IEEE Transactions on Signal Processing. 61 (4): 933–944. doi:10.1109/tsp.2012.2231676. ISSN 1053-587X.
  3. ^ Born M, Wolf E, Principles of Optics, Cambridge Univ. Press, any edition
  4. ^ Fox M, 2007 Quantum Optics Oxford
  5. ^ Zalevsky Z, Mendlovic D. 2003 Optical Superresolution Springer
  6. ^ a b Betzig, E; Trautman, JK (1992). "Near-field optics: microscopy, spectroscopy, and surface modification beyond the diffraction limit". Science. 257 (5067): 189–195. Bibcode:1992Sci...257..189B. doi:10.1126/science.257.5067.189. PMID 17794749.
  7. ^ Lukosz, W., 1966. Optical systems with resolving power exceeding the classical limit. J. opt. soc. Am. 56, 1463–1472.
  8. ^ a b Gustafsson, M., 2000. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microscopy 198, 82–87.
  9. ^ Cox, I.J., Sheppard, C.J.R., 1986. Information capacity and resolution in an optical system. J.opt. Soc. Am. A 3, 1152–1158
  10. ^ Westheimer, G (2012). "Optical superresolution and visual hyperacuity". Prog Retin Eye Res. 31 (5): 467–80. doi:10.1016/j.preteyeres.2012.05.001. PMID 22634484.
  11. ^ Harris, J.L., 1964. Resolving power and decision making. J. opt. soc. Am. 54, 606–611.
  12. ^ Toraldo di Francia, G., 1955. Resolving power and information. J. opt. soc. Am. 45, 497–501.
  13. ^ D. Poot, B. Jeurissen, Y. Bastiaensen, J. Veraart, W. Van Hecke, P. M. Parizel, and J. Sijbers, "Super-Resolution for Multislice Diffusion Tensor Imaging", Magnetic Resonance in Medicine, (2012)
  14. ^ N. Zhao, Q. Wei, A. Basarab, N. Dobigeon, D. Kouamé and J-Y. Tourneret, "Fast single image super-resolution using a new analytical solution for problems", IEEE Trans. Image Process., 2016, to appear.
  15. ^ J. Simpkins, R.L. Stevenson, "An Introduction to Super-Resolution Imaging." Mathematical Optics: Classical, Quantum, and Computational Methods, Ed. V. Lakshminarayanan, M. Calvo, and T. Alieva. CRC Press, 2012. 539-564.
  16. ^ a b S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Fast and Robust Multi-frame Super-resolution", IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327–1344, October 2004.
  17. ^ P. Cheeseman, B. Kanefsky, R. Kraft, and J. Stutz, 1994
  18. ^ S. Schuon, C. Theobalt, J. Davis, and S. Thrun, "LidarBoost: Depth Superresolution for ToF 3D Shape Scanning", In Proceedings of IEEE CVPR 2009
  19. ^ Johnson, Justin; Alahi, Alexandre; Fei-Fei, Li (2016-03-26). "Perceptual Losses for Real-Time Style Transfer and Super-Resolution". arXiv:1603.08155 [cs.CV].

Other related work

Breakthrough Prize in Life Sciences

The Breakthrough Prize in Life Sciences is a scientific award, funded by internet entrepreneurs: Mark Zuckerberg and Priscilla Chan of Facebook; Sergey Brin of Google; entrepreneur and venture capitalist Yuri Milner; and Anne Wojcicki, one of the founders of the genetics company 23andMe. The Chairman of the Board is Arthur D. Levinson of Apple. The award of $3 million, the largest award in the sciences, is given to researchers who have made discoveries that extend human life. The Prize has been awarded annually since 2013, with six awards given in each subsequent year. Winners are expected to give public lectures and form the committee to decide future winners.


FtsZ

FtsZ is a protein encoded by the ftsZ gene that assembles into a ring at the future site of the septum of bacterial cell division. It is the prokaryotic homologue of the eukaryotic protein tubulin. FtsZ is named after "Filamenting temperature-sensitive mutant Z": the hypothesis was that cell division mutants of E. coli would grow as filaments due to the inability of the daughter cells to separate from one another.


IMAX

IMAX is a system of high-resolution cameras, film formats and film projectors. Graeme Ferguson, Roman Kroitor, Robert Kerr, and William C. Shaw developed the first IMAX cinema projection standards in the late 1960s and early 1970s in Canada. Unlike conventional projectors, the film runs horizontally so that the image width is greater than the width of the film. Since 2002, some feature films have been converted into IMAX format for displaying in IMAX theatres, and some have also been (partially) shot in IMAX. IMAX is the most widely used system for special-venue film presentations. By late 2017, 1,302 IMAX theatre systems were installed in 1,203 commercial multiplexes, 13 commercial destinations, and 86 institutional settings in 75 countries.


Immunofluorescence

Immunofluorescence is a technique used for light microscopy with a fluorescence microscope and is used primarily on microbiological samples. It exploits the specificity of antibodies for their antigen to direct fluorescent dyes to specific biomolecule targets within a cell, and therefore allows visualization of the distribution of the target molecule through the sample. The specific region an antibody recognizes on an antigen is called an epitope. There have been efforts in epitope mapping, since many antibodies can bind the same epitope and levels of binding between antibodies that recognize the same epitope can vary. Additionally, the binding of the fluorophore to the antibody itself must not interfere with the immunological specificity of the antibody or the binding capacity of its antigen. Immunofluorescence is a widely used example of immunostaining (using antibodies to stain proteins) and is a specific example of immunohistochemistry (the use of the antibody-antigen relationship in tissues). The technique primarily makes use of fluorophores to visualize the location of the antibodies. Immunofluorescence can be used on tissue sections, cultured cell lines, or individual cells, and may be used to analyze the distribution of proteins, glycans, and small biological and non-biological molecules. It can even be used to visualize structures such as intermediate-sized filaments. If the topology of a cell membrane has yet to be determined, epitope insertion into proteins can be used in conjunction with immunofluorescence to determine structures. Immunofluorescence can also be used as a "semi-quantitative" method to gain insight into the levels and localization patterns of DNA methylation, although it is more time-consuming than true quantitative methods and there is some subjectivity in the analysis of the levels of methylation.
Immunofluorescence can be used in combination with other, non-antibody methods of fluorescent staining, for example, the use of DAPI to label DNA. Several microscope designs can be used for analysis of immunofluorescence samples; the simplest is the epifluorescence microscope, and the confocal microscope is also widely used. Various super-resolution microscope designs that are capable of much higher resolution can also be used.

Jennifer Lippincott-Schwartz

Jennifer Lippincott-Schwartz is a Senior Group Leader at Howard Hughes Medical Institute's Janelia Research Campus and a founding member of the Neuronal Cell Biology Program at Janelia. Previously, she was the Chief of the Section on Organelle Biology in the Cell Biology and Metabolism Program, in the Division of Intramural Research in the Eunice Kennedy Shriver National Institute of Child Health and Human Development at the National Institutes of Health from 1993 to 2016. Lippincott-Schwartz received her Ph.D. from Johns Hopkins University, and performed post-doctoral training with Dr. Richard Klausner at the NICHD, NIH in Bethesda, Maryland. Lippincott-Schwartz's research revealed that the organelles of eukaryotic cells are dynamic, self-organized structures that constantly regenerate themselves through intracellular vesicle traffic, rather than static structures. She is also a pioneer in developing live-cell imaging techniques to study the dynamic interactions of molecules in cells, including photobleaching and photoactivation techniques which allow investigation of subcellular localization, mobility, transport routes, and turnover of important cellular proteins related to membrane trafficking and compartmentalization. Lippincott-Schwartz's lab also tests mechanistic hypotheses related to protein and organelle functions and dynamics by utilizing quantitative measurements through kinetic modeling and simulation experiments. Along with Dr. Craig Blackstone, Lippincott-Schwartz utilized advanced imaging techniques to reveal a more accurate picture of how the peripheral endoplasmic reticulum is structured. Their findings may yield new insights for genetic diseases affecting proteins that help shape the endoplasmic reticulum.
Additionally, Lippincott-Schwartz's laboratory demonstrated that Golgi enzymes constitutively recycle back to the endoplasmic reticulum and that such recycling plays a central role in the maintenance, biogenesis, and inheritance of the Golgi apparatus in mammalian cells. Within the Lippincott-Schwartz lab, current projects span several areas of cell biology, for example protein transport and cytoskeleton interaction, organelle assembly and disassembly, and the generation of cell polarity. There are also projects analyzing the dynamics of fluorescently labeled proteins using several live-cell imaging techniques such as FRAP, FCS, and photoactivation. Lippincott-Schwartz has dedicated her most recent lab research to photoactivated localization microscopy (PALM), which allows the viewing of molecular distributions of high densities at the nano-scale.

Multifocal plane microscopy

Multifocal plane microscopy (MUM), also called multiplane or biplane microscopy, is a form of light microscopy that allows the tracking of 3D dynamics in live cells at high temporal and spatial resolution by simultaneously imaging different focal planes within the specimen. In this methodology, the light collected from the sample by an infinity-corrected objective lens is split into two paths. In each path the split light is focused onto a detector which is placed at a specific calibrated distance from the tube lens. In this way, each detector images a distinct plane within the sample. The first MUM setup was capable of imaging two distinct planes within the sample. However, the setup can be modified to image more than two planes by further splitting the light in each light path and focusing it onto detectors placed at specific calibrated distances. Another technique called multifocus microscopy (MFM) uses diffractive Fourier optics to image up to 25 focal planes. Presently, MUM setups are implemented that can image up to four distinct planes.

Nano/Bio Interface Center

The Nano/Bio Interface Center is a Nanoscale Science and Engineering Center at the University of Pennsylvania. It specializes in bionanotechnology, combining aspects of life sciences and engineering, with a particular focus in biomolecular optoelectronics and molecular motions, including developing new scanning probe microscopy techniques. It offers a master's degree in nanotechnology. The center was established in 2004 with a US$11.6 million grant from the National Science Foundation, and received an additional $11.9 million grant in 2009. By 2013 it had constructed a new facility, the Krishna P. Singh Center for Nanotechnology.


NANOPSIS M

The NANOPSIS M is the world's first super-resolution wide-field optical microscope that can resolve objects down to 70 nm. Powered by SMAL (Super Resolution Microsphere Amplifying Lens), it is the first commercial system to use microspheres with no need for direct contact with the sample. The NANOPSIS M was launched on 29 June 2017 in Manchester, United Kingdom, at Citylabs 1.0. The concept was discovered by Prof Lin Li and Dr Wei Guo at The University of Manchester and published for the first time in Nature Science & Applications in 2013, and was developed by a team of researchers led by Dr. Sorin Laurentiu Stanescu at LIG Nanowise in Manchester.

Peyman Milanfar

Peyman Milanfar is a professor of Electrical Engineering at University of California Santa Cruz, where he directs the Multi-Dimensional Signal Processing group. He was also Associate Dean for Research and Graduate Studies from 2010 to 2012. He is currently on leave from his professorship, working as a visiting scientist in Google X Lab, where he is working on Google's Project Glass. His work includes the development of fast and robust methods for super-resolution, statistical analysis of performance limits for inverse problems in imaging, and the development of adaptive non-parametric techniques (kernel regression) for image and video processing. He holds 7 US patents in the field of image and video processing. Milanfar did his undergraduate studies at University of California, Berkeley, graduating in 1988, with a joint degree in Mathematics and Electrical Engineering. Milanfar received his Ph.D. in Electrical Engineering and Computer Sciences from MIT in 1993, under the supervision of Alan S. Willsky. He was a research scientist at SRI International from 1994 to 1999 before moving to UC Santa Cruz.

Photoactivated localization microscopy

Photo-activated localization microscopy (PALM or FPALM) and stochastic optical reconstruction microscopy (STORM) are widefield (as opposed to point-scanning techniques such as laser scanning confocal microscopy) fluorescence microscopy imaging methods that allow images to be obtained with a resolution beyond the diffraction limit. The methods were proposed in 2006 in the wake of a general emergence of optical super-resolution microscopy methods, and were featured as Methods of the Year for 2008 by the journal Nature Methods.

The development of PALM as a targeted biophysical imaging method was largely prompted by the discovery of new species and the engineering of mutants of fluorescent proteins displaying a controllable photochromism, such as photo-activatable GFP. However, the concomitant development of STORM, sharing the same fundamental principle, originally made use of paired cyanine dyes.

One molecule of the pair (called activator), when excited near its absorption maximum, serves to reactivate the other molecule (called reporter) to the fluorescent state.

A growing number of dyes are used for PALM, STORM and related techniques, both organic fluorophores and fluorescent proteins. Some are compatible with live cell imaging, others allow faster acquisition or denser labeling. The choice of a particular fluorophore ultimately depends on the application and on its underlying photophysical properties. Both techniques have undergone significant technical developments, in particular allowing multicolor imaging and the extension to three dimensions, with the best current axial resolution of 10 nm in the third dimension obtained using an interferometric approach with two opposing objectives collecting the fluorescence from the sample.

Protein G

Protein G is an immunoglobulin-binding protein expressed in group C and G Streptococcal bacteria much like Protein A but with differing binding specificities. It is a 65-kDa (G148 protein G) and a 58-kDa (C40 protein G) cell surface protein that has found application in purifying antibodies through its binding to the Fab and Fc regions. The native molecule also binds albumin, but because serum albumin is a major contaminant of antibody sources, the albumin binding site has been removed from recombinant forms of Protein G. This recombinant Protein G, labeled either with a fluorophore or with single-stranded DNA, has been used as a replacement for secondary antibodies in immunofluorescence and super-resolution imaging.

Subhasis Chaudhuri

Subhasis Chaudhuri (born 1963) is a Bengali electrical engineer and director at IIT Bombay. He is a former K. N. Bajaj Chair Professor at the Department of Electrical Engineering of the Indian Institute of Technology, Bombay. He is known for his pioneering studies on Computer vision and is an elected fellow of all the three major Indian science academies viz. the National Academy of Sciences, India, Indian Academy of Sciences, and Indian National Science Academy. He is also a fellow of Institute of Electrical and Electronics Engineers, and the Indian National Academy of Engineering. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards for his contributions to Engineering Sciences in 2004.

Super-resolution microscopy

Super-resolution microscopy, in light microscopy, is a collective term for several techniques that allow images to be taken with a higher resolution than the one imposed by the diffraction limit. Due to the diffraction of light, the resolution in conventional light microscopy is limited, as stated (for the special case of widefield illumination) by Ernst Abbe in 1873. In this context, a diffraction-limited microscope with numerical aperture N.A. and light with wavelength λ reaches a lateral resolution of d = λ/(2 N.A.); a similar formalism can be followed for the axial resolution (along the optical axis, z-resolution, depth resolution). The resolution for a standard optical microscope in the visible light spectrum is about 200 nm laterally and 600 nm axially. Experimentally, the attained resolution can be measured from the full width at half maximum (FWHM) of the point spread function (PSF) using images of point-like objects. Although the resolving power of a microscope is not well defined, it is generally considered that a super-resolution microscopy technique offers a resolution better than the one stipulated by Abbe.
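As a quick numerical check of the figures above (using illustrative values: green light and a high-NA oil-immersion objective), the Abbe formula reproduces the often-quoted ~200 nm lateral limit:

```python
# Abbe's lateral diffraction limit d = lambda / (2 NA), evaluated with
# illustrative values for green light and an oil-immersion objective.
wavelength_nm = 550   # green light
na = 1.4              # numerical aperture of an oil-immersion objective

d_lateral_nm = wavelength_nm / (2 * na)
print(round(d_lateral_nm))  # -> 196, i.e. the ~200 nm figure quoted above
```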

Super-resolution imaging techniques include single-molecule localization methods, photon tunneling microscopy as well as those that utilize the Pendry superlens and near-field scanning optical microscopy, the 4Pi microscope, the confocal microscope (with closed pinhole), or confocal microscopy aided by computational methods such as deconvolution or detector-based pixel reassignment (e.g., re-scan microscopy), and also structured illumination microscopy technologies like SIM and SMI.

There are two major groups of methods for functional super-resolution microscopy:

Deterministic super-resolution: The most commonly used emitters in biological microscopy, fluorophores, show a nonlinear response to excitation, and this nonlinear response can be exploited to enhance resolution. These methods include STED, GSD, RESOLFT and SSIM.

Stochastic super-resolution: The chemical complexity of many molecular light sources gives them a complex temporal behavior, which can be used to make several close-by fluorophores emit light at separate times and thereby become resolvable in time. These methods include super-resolution optical fluctuation imaging (SOFI) and all single-molecule localization methods (SMLM) such as SPDM, SPDMphymod, PALM, FPALM, STORM and dSTORM. On October 8, 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, W.E. Moerner and Stefan Hell for "the development of super-resolved fluorescence microscopy," which brings "optical microscopy into the nanodimension".

Super-resolution photoacoustic imaging

Super-resolution photoacoustic imaging is a set of techniques used to enhance spatial resolution in photoacoustic imaging; specifically, techniques that break the optical diffraction limit of the photoacoustic imaging system. Super-resolution can be achieved through a variety of mechanisms, such as blind structured illumination, multi-speckle illumination, or photo-imprint photoacoustic microscopy.


Superlens

A superlens, or super lens, is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them.

Vanishing valentine experiment

The vanishing valentine experiment is a type of chemical reaction demonstration related to the blue bottle experiment. The reaction occurs when water, glucose, sodium hydroxide, and resazurin are mixed in a flask. When the solution is shaken, it turns from light blue to a reddish color, and it returns to light blue after being left to stand for a while. The cycle can be repeated several times. After mixing, the shaken solution turns red or pink depending on the amount of resazurin present; more resazurin produces a more intense red color and a longer time for the solution to revert.

William E. Moerner

William Esco Moerner (born June 24, 1953) is an American physical chemist and chemical physicist whose current work concerns the biophysics and imaging of single molecules. He is credited with achieving the first optical detection and spectroscopy of a single molecule in condensed phases, along with his postdoc, Lothar Kador. Optical study of single molecules has subsequently become a widely used technique in chemistry, physics and biology. In 2014, he was awarded the Nobel Prize in Chemistry.

Xiaowei Zhuang

Xiaowei Zhuang (simplified Chinese: 庄小威; traditional Chinese: 莊小威; pinyin: Zhuāng Xiǎowēi; born January 1972) is a Chinese-American biophysicist, and the David B. Arnold Jr. Professor of Science, Professor of Chemistry and Chemical Biology and Professor of Physics at Harvard University, and an Investigator at the Howard Hughes Medical Institute. She is best known for her work in the development of Stochastic Optical Reconstruction Microscopy (STORM), a super-resolution fluorescence microscopy method, and for the discovery of novel cellular structures using STORM. She received a 2019 Breakthrough Prize in Life Sciences for developing super-resolution imaging techniques that get past the diffraction limits of traditional light microscopes, allowing scientists to visualize small structures within living cells.

Yucel Altunbasak

Professor Yucel Altunbasak was born in Kayseri, Turkey in 1971. He attended Izmir Science High School in Izmir, Turkey. He received his B.S. degree with high honors from the Department of Electrical and Electronics Engineering at Bilkent University, Ankara, in 1992. Afterward, he moved to the USA and studied at the Department of Electrical and Computer Engineering at the University of Rochester, New York, where he received his M.S. and Ph.D. degrees, in 1993 and 1996, respectively.

In 1997 and 1998, while employed at Hewlett-Packard's Palo Alto Research Laboratories in Silicon Valley, California, as a research engineer, he also worked as a consulting assistant professor at Stanford University and a lecturer at San Jose State University. After three years in Silicon Valley, he returned to academic life as an assistant professor in the Georgia Institute of Technology Department of Electrical and Computer Engineering, where he became an associate professor with tenure in 2004 and a full professor in 2009. Prof. Altunbasak has supervised 19 Ph.D. students and is the author of more than 200 papers and 50 patents/patent applications.

Prof. Altunbasak has served as an editor for several leading research journals and chaired many industrial associations. He was an associate editor for IEEE Transactions on Image Processing, IEEE Transactions on Signal Processing, Signal Processing: Image Communication, and the Journal of Circuits, Systems, and Signal Processing. He also served as a guest editor for the "Wireless video" and "Video networking" special issues of Signal Processing: Image Communication, the "Network-aware multimedia processing and communications" special issue of the IEEE Journal of Selected Topics in Signal Processing, and the "Realizing the Vision of Immersive Communications" special issue of IEEE Signal Processing Magazine. He was elected to the IEEE Signal Processing Society's Image and Multi-dimensional Signal Processing (IMDSP), Bio-Imaging and Signal Processing, and Multimedia Signal Processing (MMSP) Technical Committees, and served as vice-president of the IEEE Communications Society Multimedia Communications Technical Committee. He was the technical program chair for the IEEE International Conference on Image Processing (ICIP'06), co-chaired the "Advanced Signal Processing for Communications" symposium at the IEEE International Conference on Communications (ICC'03), and chaired the Multimedia Networking technical tracks at the IEEE International Conference on Multimedia and Expo (ICME'04), panel sessions at the International Conference on Information Technology: Research and Education (ITRE'03), and the Video Networking special session at the IEEE International Conference on Image Processing (ICIP'04).

In addition to his academic work, Prof. Altunbasak has continuously collaborated with industry. He licensed and successfully prototyped an MPEG video compression device for a satellite and cable TV company. He initiated and was the driving force behind an image processing technology called 'Pixellence', which received the Special Jury Award of the Turkish Industry and Business Association, while working as a senior advisor to the company Vestel.

Between 2009 and 2011, he served as the Rector of TOBB University of Economics and Technology in Turkey. He was instrumental in founding six new departments (Psychology, Architecture, Political Science, Biomedical Engineering, Materials Science and Nanotechnology, and English Literature) as well as a College of Law, hired 70 new faculty members, and improved the attractiveness of the university as measured by National University Entrance Exam department scores.

Prof. Altunbasak served as President of TÜBİTAK from 2011 to 2015, appointed by the Minister of Science, Industry, and Technology, the Prime Minister, and the President of Turkey. During his tenure, several initiatives to bolster the research, development, innovation, and entrepreneurship ecosystems in Turkey were instituted, including the 1003, 1004, 1511, 1512, 1513, 1514, 1515, 4003, and 4006 programs and technology roadmaps. For the first time in Turkey, special funds were designed for priority areas and entrepreneurs. The total budget of externally supported projects carried out at TÜBİTAK institutes increased from 1.3 billion TL to 4.8 billion TL. He led the institutes to complete and deploy projects such as the first domestic high-resolution LEO satellite (GÖKTÜRK-2), the first domestic locomotive (E1000), the national digital ID card, the national fuel marker, the first cruise missile (SOM), the first ground-penetrating munition (NEB), and the precision guidance kit (HGK). He also initiated several externally funded projects of national importance, e.g., the national communications satellite (TURKSAT 6A); national hydro (MILHES), thermal (MILTES), wind (MILRES2), and solar (MILGES) power plants; and several defense projects (for example BOZOK, GÖKTUĞ, and GEZGİN). He helped raise the number of academic project applications from 5,030 to 9,613 per year and the number of industrial project applications from 1,725 to 3,072 per year. He introduced innovative incentives for faculty members (e.g., project performance awards) as well as for universities (performance-based overhead rates of up to 50%). He successfully negotiated Turkey's association to the Horizon 2020 program, initiated bilateral R&D agreements with 33 countries, and promoted the public understanding of science by organizing science fairs at all high schools.

Prof. Altunbasak served on the executive boards of ROKETSAN (2012-2016), KOSGEB (2011-2015), and the Vocational Qualification Corporation (2011-2012). He also served on the advisory board of the Department of Electrical Engineering, Bilkent University, Ankara, Turkey, between 2003 and 2009, and represented TOBB ETU as rector in the Interuniversity Council (UAK) between 2009 and 2011.

Prof. Altunbasak has received numerous awards. He received the National Science Foundation (NSF) CAREER award (2002). He co-authored the article that received the "most cited paper award" of Signal Processing: Image Communication in 2008, two conference papers that received "best student paper" awards at VCIP 2006 and ICIP 2003, and a conference paper that received the second-place award in the EMBS'04 Design Competition. He received the "Outstanding Junior Faculty Award" of the School of Electrical and Computer Engineering, Georgia Tech (2003). He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2012 for contributions to super-resolution imaging, color filter array interpolation, and error-resilient video communications.

This page is based on a Wikipedia article.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.