When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers.
A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape. An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology.
Classical and quantum mechanics are distinguished in a subtly different way than by the macroscopic–microscopic division. At first glance one might think of them as differing simply in the size of the objects they describe, classical objects being far larger in mass and geometrical size than quantum objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of Planck's constant. Roughly speaking, classical mechanics considers particles in mathematically idealized terms, even as fine as geometrical points with no magnitude, while still having their finite masses. Classical mechanics also considers mathematically idealized extended materials as geometrically continuously substantial. Such idealizations are useful for most everyday calculations, but may fail entirely for molecules, atoms, photons, and other elementary particles. In many ways, classical mechanics can be considered a mainly macroscopic theory. On the much smaller scale of atoms and molecules, classical mechanics may fail, and the interactions of particles are then described by quantum mechanics. Near the absolute minimum of temperature, the Bose–Einstein condensate exhibits effects on a macroscopic scale that demand description by quantum mechanics.
The term "megascopic" is a synonym. No word exists that specifically refers to features commonly portrayed at reduced scales for better understanding, such as geographic areas or astronomical objects. "Macroscopic" may also refer to a "larger view", namely a view available only from a large perspective. A macroscopic position could be considered the "big picture".
Particle physics, dealing with the smallest physical systems, is also known as high energy physics. Physics of larger length scales, including the macroscopic scale, is also known as low energy physics. Intuitively, it might seem incorrect to associate "high energy" with the physics of very small, low mass-energy systems, like subatomic particles. By comparison, one gram of hydrogen, a macroscopic system, has ~ 6×10²³ times the mass-energy of a single proton, a central object of study in high energy physics. Even an entire beam of protons circulated in the Large Hadron Collider, a high energy physics experiment, contains ~ 3.23×10¹⁴ protons, each with 6.5×10¹² eV of energy, for a total beam energy of ~ 2.1×10²⁷ eV or ~ 336.4 MJ, which is still ~ 2.7×10⁵ times lower than the mass-energy of a single gram of hydrogen. Yet, the macroscopic realm is "low energy physics", while that of quantum particles is "high energy physics".
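The comparison above can be checked with a few lines of arithmetic. This is only a sketch using the figures quoted in the text (proton count, energy per proton, approximate constants), not precise experimental values.

```python
# Check the energy comparison between one gram of hydrogen and an LHC beam,
# using the approximate figures quoted above.
AVOGADRO = 6.022e23             # protons in ~1 g of hydrogen
PROTON_MASS_ENERGY_EV = 9.38e8  # proton rest mass-energy, ~938 MeV
EV_TO_JOULE = 1.602e-19

def hydrogen_gram_energy_ev():
    """Total rest mass-energy of one gram of hydrogen, in eV."""
    return AVOGADRO * PROTON_MASS_ENERGY_EV

def lhc_beam_energy_ev(n_protons=3.23e14, ev_per_proton=6.5e12):
    """Total kinetic energy of one LHC proton beam, in eV."""
    return n_protons * ev_per_proton

beam_ev = lhc_beam_energy_ev()               # ~2.1e27 eV
beam_mj = beam_ev * EV_TO_JOULE / 1e6        # ~336 MJ
ratio = hydrogen_gram_energy_ev() / beam_ev  # ~2.7e5
```

Running the numbers reproduces the figures in the text: the whole beam carries roughly a third of a gigajoule, yet a single gram of hydrogen holds hundreds of thousands of times more mass-energy.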
The reason for this is that "high energy" refers to energy at the quantum particle level. While macroscopic systems indeed have a larger total energy content than any of their constituent quantum particles, there can be no experiment or other observation of this total energy without extracting the respective amount of energy from each of the quantum particles, which is exactly the domain of high energy physics. Daily experiences of matter and the Universe are characterized by very low energy. For example, the photon energy of visible light is about 1.8 to 3.2 eV. Similarly, the bond-dissociation energy of a carbon–carbon bond is about 3.6 eV. This is the energy scale manifesting at the macroscopic level, such as in chemical reactions. Even photons with far higher energy, gamma rays of the kind produced in radioactive decay, have photon energy that is almost always between 10⁵ eV and 10⁷ eV, still two orders of magnitude lower than the mass-energy of a single proton. Radioactive decay gamma rays are considered part of nuclear physics, rather than high energy physics.
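The visible-light figure quoted above follows from the Planck relation E = hc/λ. The sketch below evaluates it at the red and violet ends of the visible spectrum; the 700 nm and 400 nm wavelengths are conventional illustrative endpoints.

```python
# Photon energy E = h*c/lambda, converted to electronvolts.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s
EV_TO_JOULE = 1.602e-19

def photon_energy_ev(wavelength_m):
    """Energy of a photon of the given wavelength, in eV."""
    return PLANCK_H * LIGHT_C / wavelength_m / EV_TO_JOULE

red = photon_energy_ev(700e-9)     # red edge of visible light, ~1.8 eV
violet = photon_energy_ev(400e-9)  # violet edge, ~3.1 eV
```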
Finally, when reaching the quantum particle level, the high energy domain is revealed. The proton has a mass-energy of ~ 9.4×10⁸ eV; some other massive quantum particles, both elementary and hadronic, have yet higher mass-energies. Quantum particles with lower mass-energies are also part of high energy physics: some, such as electrons, still have a mass-energy far higher than the energy scale manifesting at the macroscopic level, while others, such as neutrinos, are equally involved in reactions at the particle level. Relativistic effects, as in particle accelerators and cosmic rays, can further increase the accelerated particles' energy by many orders of magnitude, as well as the total energy of the particles emanating from their collision and annihilation.
One textbook definition reads: "we shall call a system 'macroscopic' (i.e., 'large scale') when it is large enough to be visible in the ordinary sense (say greater than 1 micron, so that it can at least be observed with a microscope using ordinary light)."
Condensation or condensed may refer to:
Condensation, the change in matter of a substance to a denser phase
DNA condensation, the process of compacting DNA molecules
Cloud condensation nuclei, airborne particles required for cloud formation
Condensation (aerosol dynamics), a phase transition from gas to liquid
Condensation cloud, observable at large explosions in humid air
Condensation reaction, in chemistry, a chemical reaction between two molecules or moieties
Condensation algorithm, in computer science, a computer vision algorithm
Condensation (graph theory), in mathematics, a directed acyclic graph formed by contracting the strongly connected components of another graph
Dodgson condensation, in mathematics, a method invented by Lewis Carroll for computing the determinants of square matrices
Condensed font, a typeface drawn narrower than normal width
Bose–Einstein condensation, a state of matter of a dilute gas in which quantum effects become apparent on a macroscopic scale
Condensation (psychology), a concept in psychoanalytic dream interpretation

Contact area
When two objects touch, only a certain portion of their surface areas will be in contact with each other. This area of true contact most often constitutes only a very small fraction of the apparent or nominal contact area. In relation to two contacting objects, the term contact area refers to the fraction of the nominal area that consists of atoms of one object in true contact with the atoms of the other object. Because objects are never perfectly flat due to asperities, the actual contact area (on a microscopic scale) is usually much smaller than the contact area apparent on a macroscopic scale. The contact area may depend on the normal force between the two objects due to deformation, and more generally on the geometry of the contacting bodies, the load, and the material properties. The contact area between two parallel cylinders is a narrow rectangle; two non-parallel cylinders have an elliptical contact area, unless the cylinders are crossed at 90 degrees, in which case the contact area is circular. Two spheres also have a circular contact area.

Contact force
A contact force is any force that requires contact to occur. Contact forces are ubiquitous and are responsible for most visible interactions between macroscopic collections of matter. Moving a couch across a floor, pushing a car up a hill, kicking a ball or pushing a desk across a room are everyday examples of contact forces at work. In pushing the car, the force is continuously applied by the person, while in kicking the ball the force is delivered in a short impulse. Contact forces are often decomposed into orthogonal components: one perpendicular to the surface(s) in contact, called the normal force, and one parallel to the surface(s) in contact, called the friction force. In the Standard Model of modern physics, the four fundamental forces of nature are known to be non-contact forces. The strong and weak interactions primarily deal with forces within atoms, while gravitational effects are only obvious on an ultra-macroscopic scale. Molecular and quantum physics show that the electromagnetic force is the fundamental interaction responsible for contact forces. The interaction between macroscopic objects can be roughly described as resulting from the electromagnetic interactions between the protons and electrons of the atomic constituents of these objects. Everyday objects do not actually touch; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects.

Entropy (arrow of time)
Entropy is the only quantity in the physical sciences (apart from certain rare interactions in particle physics; see below) that requires a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Hence, from one perspective, entropy measurement is a way of distinguishing the past from the future. However, in thermodynamic systems that are not closed, entropy can decrease with time: many systems, including living systems, reduce local entropy at the expense of an environmental increase, resulting in a net increase in entropy. Examples of such systems and phenomena include the formation of crystals, the workings of a refrigerator, and living organisms.
Much like temperature, entropy is an abstract concept, yet everyone has an intuitive sense of its effects. For example, it is often very easy to tell the difference between a video being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case the vast majority of the laws of physics are not broken by these processes, a notable exception being the second law of thermodynamics. A law of physics that applies equally when time is reversed is said to show T-symmetry. Entropy is what allows one to decide whether the video described above is playing forwards or in reverse: intuitively, we identify that only when played forwards is the entropy of the scene increasing. Because of the second law of thermodynamics, entropy prevents macroscopic processes from showing T-symmetry.
At the microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear whether a video was playing forwards or in reverse, and in fact it would not be possible to tell, since the laws that apply at this scale show T-symmetry: whether the particle drifts left or right, qualitatively it looks no different. It is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable. On average, one would expect the smoke particles around a struck match to drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted.
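The smoke example can be illustrated with a toy simulation. The random walk below is a deliberately simplified, hypothetical model (not a physical simulation of smoke): each particle takes unbiased left/right steps, a rule that is itself time-symmetric, yet the ensemble as a whole spreads out, so the spread of positions distinguishes earlier times from later ones.

```python
import random

def diffuse(n_particles=1000, n_steps=200, seed=42):
    """1-D random walk: all particles start at the origin and take
    unbiased +1/-1 steps. Returns the final positions."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    for _ in range(n_steps):
        positions = [x + rng.choice((-1, 1)) for x in positions]
    return positions

def spread(positions):
    """Mean squared displacement: a proxy for how far the cloud has spread."""
    return sum(x * x for x in positions) / len(positions)

final = diffuse()
# Each individual step is reversible, but the cloud as a whole spreads:
# the mean squared displacement grows with the number of steps taken.
```

No single particle's trajectory reveals the direction of time, but comparing the spread of the whole cloud at two moments does, which is exactly the macroscopic/microscopic distinction drawn above.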
By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time, nor has anything to do with our daily experience of time irreversibility.

Frictional contact mechanics
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. This can be divided into compressive and adhesive forces in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, whereas frictionless contact mechanics assumes the absence of such effects.
Frictional contact mechanics is concerned with a large range of different scales.
At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies (see Contact dynamics). For instance the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern.
At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area, for instance to derive or validate contact models at the macroscopic scale, or to investigate wear and damage of the contacting bodies' surfaces. Application areas of this scale are tire–pavement interaction, railway wheel–rail interaction, roller bearing analysis, etc.
Finally, at the microscopic and nano-scales, contact mechanics is used to increase our understanding of tribological systems (e.g., to investigate the origin of friction) and for the engineering of advanced devices like atomic force microscopes and MEMS devices. This page is mainly concerned with the second scale: getting basic insight into the stresses and deformations in and near the contact patch, without paying too much attention to the detailed mechanisms by which they come about.

Gravity
Gravity (from Latin gravitas, meaning 'weight'), or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light—are brought toward (or gravitate toward) one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing, forming stars – and for the stars to group together into galaxies – so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker on farther objects.
Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915) which describes gravity not as a force, but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force which causes any two bodies to be attracted to each other, with the force proportional to the product of their masses and inversely proportional to the square of the distance between them.
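Newton's law from the previous paragraph, F = G·m₁·m₂/r², is simple to evaluate. The sketch below uses standard approximate Earth–Moon figures purely for illustration.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: force in newtons between
    two point masses m1, m2 (kg) separated by distance r (m)."""
    return G * m1 * m2 / (r * r)

# Approximate Earth-Moon values (for illustration only):
earth_mass = 5.972e24   # kg
moon_mass = 7.342e22    # kg
distance = 3.844e8      # mean Earth-Moon distance, m
force = gravitational_force(earth_mass, moon_mass, distance)  # ~2e20 N
```

The inverse-square dependence means doubling the separation cuts the force to a quarter, which the function makes explicit.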
Gravity is the weakest of the four fundamental forces of physics, approximately 10³⁸ times weaker than the strong force, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak force. As a consequence, it has no significant influence at the level of subatomic particles. In contrast, it is the dominant force at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies. For example, gravity causes the Earth and the other planets to orbit the Sun and the Moon to orbit the Earth, and it drives the formation of tides and the formation and evolution of the Solar System, stars and galaxies.
The earliest instance of gravity in the Universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the Universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner. Attempts to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three forces of physics, are a current area of research.

Hydrodynamic quantum analogs
The hydrodynamic quantum analogs refer to experimentally observed phenomena involving bouncing fluid droplets over a vibrating fluid bath that behave analogously to several quantum mechanical systems. A droplet can be made to bounce indefinitely in a stationary position on a vibrating fluid surface. This is possible due to a pervading air layer that prevents the drop from coalescing into the bath. For certain combinations of bath surface acceleration, droplet size, and vibration frequency, a bouncing droplet will cease to stay in a stationary position, but instead "walk" in a rectilinear motion on top of the fluid bath. Walking droplet systems have been found to mimic several quantum mechanical phenomena including particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. Besides being an interesting means to visualise phenomena that are typical of the quantum mechanical world, floating droplets on a vibrating bath have interesting analogies with the pilot wave theory, also known as the De Broglie–Bohm theory, or the causal interpretation, one of the many interpretations of quantum mechanics in its early stages of conception and development. The theory was initially proposed by Louis de Broglie in 1927,
and later developed by David Bohm. It suggests that all particles in motion are actually borne on a wave-like motion, similar to how an object moves on a tide. In this theory, it is the evolution of the carrier wave that is given by the Schrödinger equation. It is a deterministic theory and is entirely nonlocal. It is an example of a hidden variable theory, and all non-relativistic quantum mechanics can be accounted for in this theory. The theory was abandoned by de Broglie in 1932, and it gave way to the Copenhagen interpretation. The Copenhagen interpretation does not use the concept of the carrier wave or that a particle moves in definite paths until a measurement is made.

John Daniel Rogers
John Daniel Rogers, Ph.D. (born October 30, 1954) is a Curator of Archaeology in the Department of Anthropology at the National Museum of Natural History (NMNH) at the Smithsonian Institution in Washington, DC. He is well known for his archaeological work with the Spiro Mounds in Oklahoma and other sites in the southeastern United States, and has studied the rise of chiefdoms and empires across the world.
His work has often focused on households as a bridge to understanding the structure of complex societies and the interrelatedness of settlement, subsistence and political structures on a macroscopic scale. He has also done significant research on interpreting the processes of culture contact and colonization at the edges of empires by comparing data from a variety of areas, including the Great Plains, Central Mexico, the Caribbean, and Inner Asia.
His recent work explores the human impact on the environment as evidenced by archaeology. Through National Science Foundation grants, Dr. Rogers and collaborators at George Mason University are using agent-based simulations to model the rise and fall of Inner Asian empires. Eventually, the team will explore long-term human impacts on the environment, especially the sustainability and resilience of different social systems.

kT (energy)
kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy to kT, that is, on E/kT (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in the canonical ensemble, the probability of the system being in a state with energy E is proportional to e^(−E/kT).
More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system, in natural units, by one nat. E / kT therefore represents an amount of entropy per molecule, measured in natural units.
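A quick numeric sketch of kT and the Boltzmann factor; the constants are the CODATA values of k and R, and the 300 K temperature is just a room-temperature example.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
R_GAS = 8.314462618     # molar gas constant, J/(mol*K)
EV_TO_JOULE = 1.602176634e-19

def kT_ev(temperature):
    """Thermal energy scale kT at the given temperature (K), in eV."""
    return K_B * temperature / EV_TO_JOULE

def boltzmann_factor(energy_ev, temperature):
    """Relative probability weight e^(-E/kT) for a state of energy E (eV)."""
    return math.exp(-energy_ev / kT_ev(temperature))

room = kT_ev(300.0)   # ~0.026 eV at room temperature
rt = R_GAS * 300.0    # RT = kT * N_A, ~2.5 kJ/mol
weight = boltzmann_factor(0.5, 300.0)  # a 0.5 eV state is rarely occupied
```

The last line shows why kT sets the scale: an energy only twenty times kT already carries a vanishingly small Boltzmann weight.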
In macroscopic scale systems, with large numbers of molecules, the value RT is commonly used; its SI units are joules per mole (J/mol): RT = kT · NA, where NA is the Avogadro constant.

Macroscopic quantum phenomena
Macroscopic quantum phenomena refer to processes showing quantum behavior at the macroscopic scale, rather than at the atomic scale where quantum effects are prevalent. The best-known examples of macroscopic quantum phenomena are superfluidity and superconductivity; other examples include the quantum Hall effect. Since 2000 there has been extensive experimental work on quantum gases, particularly Bose–Einstein condensates.
Between 1996 and 2003 four Nobel Prizes were given for work related to macroscopic quantum phenomena. Macroscopic quantum phenomena can be observed in superfluid helium and in superconductors, but also in dilute quantum gases, dressed photons such as polaritons and in laser light. Although these media are very different, their behavior is very similar as they all show macroscopic quantum behavior and to such extent they all can be referred to as quantum fluids.
Quantum phenomena are generally classified as macroscopic when the quantum states are occupied by a large number of particles (typically of the order of Avogadro's number) or the quantum states involved are macroscopic in size (up to km size in superconducting wires).

Maxwell–Wagner–Sillars polarization
In dielectric spectroscopy, large frequency-dependent contributions to the dielectric response, especially at low frequencies, may come from build-ups of charge. This Maxwell–Wagner–Sillars polarization (often just Maxwell–Wagner polarization) occurs either at inner dielectric boundary layers on a mesoscopic scale, or at the external electrode–sample interface on a macroscopic scale. In both cases this leads to a separation of charges (such as through a depletion layer). The charges are often separated over a considerable distance (relative to atomic and molecular sizes), and the contribution to dielectric loss can therefore be orders of magnitude larger than the dielectric response due to molecular fluctuations.

Microscopic scale
The microscopic scale (from Greek: μικρός, mikrós, "small" and σκοπέω, skopéō "look") is the scale of objects and events smaller than those that can easily be seen by the naked eye, requiring a lens or microscope to see them clearly. In physics, the microscopic scale is sometimes regarded as the scale between the macroscopic scale and the quantum scale. Microscopic units and measurements are used to classify and describe very small objects. One common microscopic length scale unit is the micrometre (also called a micron) (symbol: μm), which is one millionth of a metre.

Morchella semilibera
Morchella semilibera, commonly called the half-free morel, is a species of fungus in the family Morchellaceae native to Europe and Asia. DNA analysis has shown that the half-free morels, which appear nearly identical on a macroscopic scale, are a cryptic species complex consisting of at least three geographically isolated species. Because de Candolle originally described the species based on specimens from Europe, the scientific name M. semilibera should be restricted to the European species. In 2012, Morchella populiphila was described from western North America, while Peck's 1903 species name Morchella punctipes was reaffirmed for eastern North American half-free morels. M. semilibera and the other half-free morels are closely related to the black morels (M. elata and others). A proposal has been made to conserve the name Morchella semilibera against several earlier synonyms, including Phallus crassipes, P. gigas and P. undosus. These names, sanctioned by Elias Magnus Fries, have since been shown to refer to the same species as M. semilibera.

Phased-array optics
Phased array optics is the technology of controlling the phase and amplitude of light waves transmitted, reflected, or captured (received) by a two-dimensional surface using adjustable surface elements. An optical phased array (OPA) is the optical analog of a radio wave phased array. By dynamically controlling the optical properties of a surface on a microscopic scale, it is possible to steer the direction of light beams (in an OPA transmitter), or the view direction of sensors (in an OPA receiver), without any moving parts. Phased array beam steering is used for optical switching and multiplexing in optoelectronic devices, and for aiming laser beams on a macroscopic scale.
Complicated patterns of phase variation can be used to produce diffractive optical elements, such as dynamic virtual lenses, for beam focusing or splitting in addition to aiming. Dynamic phase variation can also produce real-time holograms. Devices permitting detailed addressable phase control over two dimensions are a type of spatial light modulator (SLM).

Quantum mechanics
Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.
Quantum mechanics differs from classical physics in that energy, momentum, angular momentum and other quantities of a bound system are restricted to discrete values (quantization); objects have characteristics of both particles and waves (wave-particle duality); and there are limits to the precision with which quantities can be measured (uncertainty principle). Quantum mechanics gradually arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and from the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle.
Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

Quantum realm
The quantum realm (or quantum scale) in physics is the scale where quantum mechanical effects become important when studied as an isolated system. Typically, this means distances of 100 nanometers (10⁻⁷ meters) or less, or very low temperatures. More precisely, it is where the action or angular momentum is quantized.
While originating on the nanometer scale, such effects can operate on a macro level generating some paradoxes like in the Schrödinger's cat thought experiment. Two classical examples are electron tunneling and the double-slit experiment. Most fundamental processes in molecular electronics, organic electronics, and organic semiconductors also originate in the quantum realm.
The quantum realm can also sometimes involve actions at long distances. A well-known example is David Bohm's (1951) version of the famous thought experiment that Albert Einstein, Boris Podolsky, and Nathan Rosen proposed in 1935, the EPR paradox. Pairs of particles are emitted from a source in the so-called spin singlet state and rush in opposite directions. When the particles are widely separated from each other, they each encounter a measuring apparatus that can be set to measure their spin components along various directions. Although the measurement events are distant from each other, so that no slower-than-light or light signal can travel between them in time, the measurement outcomes are nonetheless entangled.

Self-assembly
Self-assembly is a process in which a disordered system of pre-existing components forms an organized structure or pattern as a consequence of specific, local interactions among the components themselves, without external direction. When the constitutive components are molecules, the process is termed molecular self-assembly.
Self-assembly can be classified as either static or dynamic. In static self-assembly, the ordered state forms as a system approaches equilibrium, reducing its free energy. However, in dynamic self-assembly, patterns of pre-existing components organized by specific local interactions are not commonly described as "self-assembled" by scientists in the associated disciplines. These structures are better described as "self-organized", although these terms are often used interchangeably.

Transparency and translucency
In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without being scattered. On a macroscopic scale (one where the dimensions investigated are much larger than the wavelength of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) is a superset of transparency: it allows light to pass through, but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces, or internally, where there is a change in index of refraction. In other words, a translucent medium allows the transport of light while a transparent medium not only allows the transport of light but allows for image formation. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity.
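Snell's law, mentioned above, relates the angles on either side of an interface via n₁·sin θ₁ = n₂·sin θ₂. The sketch below solves for the refracted angle and flags total internal reflection; the indices 1.0 and 1.5 are typical illustrative values for air and glass.

```python
import math

def refracted_angle(n1, n2, incident_deg):
    """Solve Snell's law n1*sin(t1) = n2*sin(t2) for the refracted angle.
    Returns the angle in degrees, or None on total internal reflection."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.degrees(math.asin(s))

air_to_glass = refracted_angle(1.0, 1.5, 30.0)  # ~19.5 degrees, bent toward the normal
glass_to_air = refracted_angle(1.5, 1.0, 45.0)  # None: total internal reflection
```

Going into the denser medium the ray bends toward the normal; going out, beyond the critical angle, no refracted ray exists at all.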
When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission.
Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission.
Materials which do not transmit light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies. They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering. Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly-lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent.

UUNET
UUNET, founded in 1987, was one of the largest Internet service providers and one of the early Tier 1 networks. It was based in Northern Virginia and was one of the first commercial Internet service providers. Today, UUNET is an internal brand of Verizon Business (formerly MCI).