# Overline

An overline, overscore, or overbar is a typographical feature consisting of a horizontal line drawn immediately above the text. In mathematical notation, an overline has long been used as a vinculum, a way of showing that certain symbols belong together. The original use in Ancient Greek was to indicate compositions of Greek letters as Greek numerals.[1] In Latin, it indicates Roman numerals multiplied by a thousand, and it forms medieval abbreviations (sigla). Marking one or more words with a continuous line above the characters is sometimes called overstriking, though overstriking more commonly refers to printing one character on top of an already-printed character.

An overline, that is, a single line above a chunk of text, should not be confused with the macron, a diacritical mark placed above (or sometimes below) individual letters. The macron is narrower than the character box.[2] Since the ISO and the Unicode Consortium assign names to characters in their own fashion, often ignoring established typographical terminology, Unicode includes two characters, U+00AF ¯ MACRON (formerly SPACING MACRON) and U+203E OVERLINE, that both look like an overlined space in most fonts, similar to a mirrored underscore. An overline proper can be encoded as a Unicode combining diacritic; see below.

| Description | Sample | Unicode | CSS/HTML |
|---|---|---|---|
| Overline (markup) | Xx | N/A | `text-decoration: overline;` |
| Overline (character) | ‾ | U+203E | `&oline;`, `&#8254;` |
| Overline (combining) | X̅x̅ | U+0305 | `&#773;` (may fail to render on Android) |
| Macron (character) | ¯ | U+00AF | `&macr;`, `&#175;` |
| Double overline (markup) | Xx | N/A | `text-decoration: overline; text-decoration-style: double;` |
| Double overline (combining) | X̿x̿ | U+033F | `&#831;` (may fail to render on Android) |

## Uses

### Medicine

An overbar over a letter is a traditional way of writing certain Latin abbreviations. For example, s̅ (s overbar) stands for the Latin sine ("without"), c̅ (c overbar) for cum ("with"), ā (a overbar) for ante ("before"), and p̄ (p overbar) for post ("after").

### Math and science

#### Vinculum

In mathematics, an overline can be used as a vinculum.

The vinculum can indicate a line segment:

• ${\displaystyle {\overline {\rm {AB}}}}$

The vinculum can indicate a repeating decimal value:

• ${\displaystyle {\tfrac {1}{7}}=0.{\overline {142857}}=0.142857142857142857\ldots}$

When it is not possible to format the number so that the overline is over the digit(s) that repeat, one overline character is placed to the left of the digit(s) that repeat:

• 3.¯3 = 3.3̅ = 3.333333333333...
• 3.12¯34 = 3.123̅4̅ = 3.123434343434...

Historically, the vinculum was used to group together symbols so that they could be treated as a unit. Today, parentheses are more commonly used for this purpose.
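The repeating block (repetend) of a fraction can be found mechanically by long division, stopping when a remainder recurs. A minimal Python sketch; the function name `decimal_expansion` is an illustrative choice, not a standard API:

```python
def decimal_expansion(numerator, denominator):
    """Long division: return (non-repeating digits, repeating digits) after the point."""
    seen = {}          # remainder -> position where it first appeared
    digits = []
    remainder = numerator % denominator
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    if not remainder:                      # expansion terminates, no repetend
        return "".join(digits), ""
    start = seen[remainder]                # repetend begins where the remainder recurred
    return "".join(digits[:start]), "".join(digits[start:])

# 1/7 = 0.(142857): no fixed part, repetend "142857"
fixed, repeat = decimal_expansion(1, 7)
```

For 1/6 the same routine yields a fixed part "1" and a repetend "6", matching 0.16̅.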

#### Statistics

The overline is used to indicate a sample mean:

• ${\displaystyle {\overline {x}}}$ is the average value of ${\displaystyle x_{i}}$

Survival functions, or complementary cumulative distribution functions, are often denoted by placing an overline over the symbol for the cumulative distribution function: ${\displaystyle {\overline {F}}(x)=1-F(x)}$
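As a small illustration of the relation F̄(x) = 1 − F(x), here is a Python sketch using an exponential distribution as an assumed example:

```python
import math

def exponential_cdf(x, rate=1.0):
    """CDF F(x) of an exponential distribution (an assumed example distribution)."""
    return 1.0 - math.exp(-rate * x) if x >= 0 else 0.0

def survival(x, rate=1.0):
    """F-bar(x) = 1 - F(x): the probability of exceeding x."""
    return 1.0 - exponential_cdf(x, rate)
```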

#### Negation

In set theory and some electrical engineering contexts, negation operators can be written as an overline above the term or expression to be negated, for example:

Common set theory notation:

${\displaystyle {\begin{aligned}{\overline {A\cup B}}&\equiv {\overline {A}}\cap {\overline {B}}\\{\overline {A\cap B}}&\equiv {\overline {A}}\cup {\overline {B}}\end{aligned}}}$

Electrical engineering notation:

${\displaystyle {\begin{aligned}{\overline {A\cdot B}}&\equiv {\overline {A}}+{\overline {B}}\\{\overline {A+B}}&\equiv {\overline {A}}\cdot {\overline {B}}\end{aligned}}}$

in which implied multiplication, the times sign (×), and the dot all mean logical AND, and the plus sign means logical OR.

Both illustrate De Morgan's laws and their mnemonic, "break the line, change the sign".
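The laws can be checked exhaustively over the two Boolean values; a minimal Python sketch:

```python
from itertools import product

# Exhaustively verify De Morgan's laws over all Boolean inputs.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))   # break the AND, change to OR
    assert (not (a or b)) == ((not a) and (not b))   # break the OR, change to AND
```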

#### Negative

In common logarithms a bar over the characteristic indicates that it is negative whilst the mantissa remains positive. This notation avoids the need for separate tables to convert positive and negative logarithms back to their original numbers.

${\displaystyle \log _{10}0.012\approx -2+0.07918={\bar {2}}.07918}$
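The split into a negative characteristic and a positive mantissa can be reproduced numerically; a short Python sketch of the example above:

```python
import math

# Split log10(0.012) into a negative characteristic and a positive mantissa,
# the quantity written "bar-2 . 07918" in bar notation.
value = math.log10(0.012)           # ≈ -1.9208
characteristic = math.floor(value)  # -2
mantissa = value - characteristic   # ≈ 0.07918, always in [0, 1)
```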

#### Reciprocal

Rarely, a bar over a number or expression denotes its multiplicative inverse,[3] which is more commonly written as a fraction or with a negative exponent:

${\displaystyle {\overline {2l}}=1/(2l)=(2l)^{-1}}$

#### Complex numbers

The overline notation can indicate a complex conjugate and analogous operations.

• if ${\displaystyle x=a+ib}$, then ${\displaystyle {\overline {x}}=a-ib.}$
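Python's built-in complex type exposes this operation directly; for instance:

```python
# Python's complex type provides the conjugate directly.
x = 3 + 4j
assert x.conjugate() == 3 - 4j
# Multiplying a number by its conjugate gives the squared magnitude:
assert (x * x.conjugate()).real == abs(x) ** 2
```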

#### Vector

In physics, an overline sometimes indicates a vector, although boldface and arrows are also commonly used:

• ${\displaystyle {\overline {x}}=|x|{\hat {x}}}$

#### Improper rotation

In crystallography, an overline indicates an improper rotation or a negative number:

• ${\displaystyle {\overline {3}}}$ is the Hermann–Mauguin notation for a threefold rotoinversion, used in crystallography.
• ${\displaystyle [{\overline {1}}1{\overline {2}}]}$ is the direction with Miller indices ${\displaystyle h=-1}$, ${\displaystyle k=1}$, ${\displaystyle l=-2}$.

#### Maximal conductance

In computational neuroscience, an overline is used to indicate the maximal conductances in Hodgkin–Huxley models. This usage goes back at least to the landmark 1952 paper by Nobel Prize winners Alan Lloyd Hodgkin and Andrew Fielding Huxley.[4]

${\displaystyle I_{\mathrm {Na} }(t)={\bar {g}}_{\mathrm {Na} }m(V_{m})^{3}h(V_{m})(V_{m}-E_{\mathrm {Na} })}$
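As an illustration of this single formula (not of the full Hodgkin–Huxley model), here is a Python sketch; the function name `sodium_current` and the parameter values are illustrative assumptions:

```python
def sodium_current(g_na_max, m, h, v_m, e_na):
    """I_Na = g-bar_Na * m^3 * h * (V_m - E_Na), with g-bar_Na the maximal conductance.
    Gating variables m and h are dimensionless values in [0, 1] (assumed given)."""
    return g_na_max * m**3 * h * (v_m - e_na)

# Illustrative numbers (not a fitted model): g-bar_Na = 120 mS/cm^2, E_Na = 50 mV
i_na = sodium_current(120.0, 0.1, 0.6, -65.0, 50.0)
```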

#### Antiparticles

Overlines are used in subatomic particle physics to denote antiparticles of some particles (the alternative being to distinguish them by electric charge). For example, the proton is denoted p, and its corresponding antiparticle is denoted p̅.

### Engineering

An active-low signal is designated by an overline, e.g. R̅E̅S̅E̅T̅, representing logical negation.

### Morse (CW)

Some Morse code prosigns can be expressed as two or three characters run together, and an overline is often used to signify this. The most famous is the distress signal, S̅O̅S̅.

### Writing

An overline-like symbol is traditionally used in Syriac text to mark abbreviations and numbers. It has dots at each end and in the center. In German it is occasionally used to mark an omitted doubled letter, typically m or n, when there is not enough space to set both.[5][6]

When Morse code is written out as text, overlines are used to distinguish prosigns and other concatenated character groups from strings of individual characters.

### Linguistics

X-bar theory makes use of overbar notation to indicate differing levels of syntactic structure. Certain structures are represented by adding an overbar to the unit, as in X̅. Due to the difficulty of typesetting the overbar, the prime symbol is often used instead, as in X′. Contemporary typesetting software, such as LaTeX, has made typesetting overbars considerably simpler; both prime and overbar markers are accepted usages. Some variants of X-bar notation use a double bar (or double prime) to represent phrasal-level units.

X-bar theory derives its name from the overbar. One of the core proposals of the theory was the creation of an intermediate syntactic node between phrasal (XP) and unit (X) levels; rather than introduce a different label, the intermediate unit was marked with a bar.

## Implementations

### HTML with CSS

In HTML with CSS, the overline is implemented via the text-decoration property: <span style="text-decoration: overline">text</span>. The same property also supports the other horizontal-line decorations: underline (a line below the text) and line-through (a line through the text).

### Unicode

As mentioned above, Unicode includes two graphic characters, U+00AF ¯ MACRON and U+203E OVERLINE. They are compatibility equivalent to the U+0020   SPACE with non-spacing diacritics U+0304 ◌̄ COMBINING MACRON and U+0305 ◌̅ COMBINING OVERLINE respectively; the latter allows an overline to be placed over any character. As with any combining character, it appears in the same character box as the character that logically precedes it: for example, x̅, compared to x‾. A series of overlined characters usually results in an unbroken line, depending on the font (for example, 1̅2̅3̅).
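In code, an overline can be applied programmatically by appending U+0305 after each character; a Python sketch (the helper name `overline` is illustrative):

```python
OVERLINE = "\u0305"   # COMBINING OVERLINE

def overline(text):
    """Place U+0305 after each character so the whole string renders overlined."""
    return "".join(ch + OVERLINE for ch in text)

marked = overline("RESET")   # an active-low signal name, as in the Engineering section
```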

For East Asian (CJK) computing, U+FFE3 FULLWIDTH MACRON is available. Despite the name, Unicode maps this character to both U+203E and U+00AF.[7]

Unicode maps the overline-like character from ISO/IEC 8859-1 and code page 850 to the U+00AF ¯ MACRON symbol mentioned above. Despite its official name (and compatibility decomposition), it is much wider than an actual macron diacritic over most letters, and in most fonts wider even than U+203E OVERLINE. In Microsoft Windows, U+00AF can be entered with the keystrokes Alt+0175 (with the digits entered on the numeric keypad). In GTK/GTK+, the symbol can be added by pressing Ctrl+⇧ Shift+U to activate Unicode input, then typing "00AF" as the code for the character. On a Mac, with the ABC Extended keyboard, use ⌥ Option+A.

The Unicode character U+070F SYRIAC ABBREVIATION MARK is used to mark Syriac abbreviations and numbers. However, several computer environments do not render this line correctly or at all.

### Word processors

In Microsoft Word, overlining of text can be accomplished with the EQ \O() field code. The field code {EQ \O(x,¯)} produces an overlined x, and {EQ \O(xyz,¯¯¯)} produces an overlined xyz. (This does not work in Word 2010, where it is necessary to insert an MS Equation object instead.) On Windows, a combining overline can be entered with Alt+0773 (once before the character, and once more after it).

LibreOffice has direct support for several styles of overline in its "Format / Character / Font Effects" dialog.

Overstriking of longer sections of text, such as 1̅2̅3̅, can also be produced in many word processors as text markup, analogously to underlining.

### TeX

In LaTeX, a text <text> can be overlined with $\overline{\mbox{<text>}}$. The inner \mbox{} is necessary because \overline{} demands math mode (here invoked by the dollar signs), while \mbox{} sets its argument back in ordinary text style.
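A sketch of the difference between the stretching \overline command and the fixed-width \bar accent, both standard LaTeX math commands:

```latex
$\overline{AB}$                 % a line segment
$0.\overline{142857}$           % a repeating decimal
$\bar{x}$ vs. $\overline{x}$    % short macron accent vs. stretching overline
$\overline{\mbox{RESET}}$       % upright text overlined inside math mode
```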

## References

1. ^ Smith, T. P. (2013). How Big is Big and How Small is Small: The Sizes of Everything and Why.
2. ^ Wells, J.C. (2001). "Orthographic diacritics and multilingual computing". University College London. Retrieved 23 March 2014.
3. ^ Mansfield, Daniel; Wildberger, N. J. (2017). "Plimpton 322 is Babylonian exact sexagesimal trigonometry". Historia Mathematica. doi:10.1016/j.hm.2017.08.001.
4. ^ Hodgkin, A. L.; Huxley, A. F. (1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of Physiology. 117 (4): 500–544. doi:10.1113/jphysiol.1952.sp004764. PMC 1392413. PMID 12991237.
5. ^ Hardwig, Florian. "Gräfinnen". Flickr. Retrieved 26 December 2017.
6. ^ Hardwig, Florian. "Lieder zur Weihnachtszeit (1940)". Fonts in Use. Retrieved 26 December 2017. It used to be common to mark omitted double letters with an overbar, especially for 'mm' and 'nn'. These abbreviations come in handy when lyrics have to match the musical notes, see 'da kom[m]t er her'.
7. ^ The Unicode Consortium (2012), "Halfwidth and Fullwidth Forms" (PDF), The Unicode Standard 6.1, ISBN 978-1-936213-02-3, FULLWIDTH MACRON • sometimes treated as fullwidth overline
# Annuity

An annuity is a series of payments made at equal intervals. Examples of annuities are regular deposits to a savings account, monthly home mortgage payments, monthly insurance payments and pension payments. Annuities can be classified by the frequency of payment dates. The payments (deposits) may be made weekly, monthly, quarterly, yearly, or at any other regular interval of time.

An annuity which provides for payments for the remainder of a person's lifetime is a life annuity.

# Autocorrelation

Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.

Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.

Unit root processes, trend stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.

# Bias of an estimator

In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. Otherwise the estimator is said to be biased. In statistics, "bias" is an objective property of an estimator, and while not a desired property, it is not pejorative, unlike the ordinary English use of the term "bias".

Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes median-unbiased from the usual mean-unbiasedness property. Bias is related to consistency in that consistent estimators are convergent and asymptotically unbiased (hence converge to the correct value as the number of data points grows arbitrarily large), though individual estimators in a consistent sequence may be biased (so long as the bias converges to zero); see bias versus consistency.

All else being equal, an unbiased estimator is preferable to a biased estimator, but in practice all else is not equal, and biased estimators are frequently used, generally with small bias. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population or is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful. Further, mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see § Effect of transformations); for example, the sample variance is an unbiased estimator for the population variance, but its square root, the sample standard deviation, is a biased estimator for the population standard deviation. These are all illustrated below.
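The final point can be illustrated numerically: dividing by n gives the biased (maximum-likelihood) variance estimator, while dividing by n − 1 gives the unbiased one. A Python sketch, with `variance` as an illustrative helper name:

```python
def variance(xs, ddof=0):
    """Plain variance estimator: ddof=0 gives the biased (MLE) form,
    ddof=1 the unbiased (Bessel-corrected) form."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

data = [2.0, 4.0, 6.0]
biased = variance(data)            # divides by n   -> 8/3
unbiased = variance(data, ddof=1)  # divides by n-1 -> 4.0
```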

# Circular uniform distribution

In probability theory and directional statistics, a circular uniform distribution is a probability distribution on the unit circle whose density is uniform for all angles.

# Complex conjugate

In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. For example, the complex conjugate of 3 + 4i is 3 − 4i.

In polar form, the conjugate of ${\displaystyle re^{i\varphi }}$ is ${\displaystyle re^{-i\varphi }}$. This can be shown using Euler's formula.

Complex conjugates are important for finding roots of polynomials. According to the complex conjugate root theorem, if a complex number is a root to a polynomial in one variable with real coefficients (such as the quadratic equation or the cubic equation), so is its conjugate.
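A quick numerical check of the theorem in Python, using the assumed example polynomial z² − 2z + 5, whose roots are 1 ± 2i:

```python
# For the real-coefficient polynomial p(z) = z^2 - 2z + 5, the roots are 1 ± 2i;
# check that the conjugate of one root is also a root.
def p(z):
    return z * z - 2 * z + 5

root = 1 + 2j
assert p(root) == 0
assert p(root.conjugate()) == 0
```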

# Coxeter–Dynkin diagram

In geometry, a Coxeter–Dynkin diagram (or Coxeter diagram, Coxeter graph) is a graph with numerically labeled edges (called branches) representing the spatial relations between a collection of mirrors (or reflecting hyperplanes). It describes a kaleidoscopic construction: each graph "node" represents a mirror (domain facet) and the label attached to a branch encodes the dihedral angle order between two mirrors (on a domain ridge). An unlabeled branch implicitly represents order-3.

Each diagram represents a Coxeter group, and Coxeter groups are classified by their associated diagrams.

Dynkin diagrams are closely related objects, which differ from Coxeter diagrams in two respects: firstly, branches labeled "4" or greater are directed, while Coxeter diagrams are undirected; secondly, Dynkin diagrams must satisfy an additional (crystallographic) restriction, namely that the only allowed branch labels are 2, 3, 4, and 6. Dynkin diagrams correspond to and are used to classify root systems and therefore semisimple Lie algebras.

# Cross-correlation

In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology.

The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
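The zero-lag property can be verified directly on a finite discrete signal; a minimal Python sketch:

```python
def autocorrelation(x, lag):
    """Cross-correlation of a finite discrete signal with itself at the given lag."""
    n = len(x)
    return sum(x[i] * x[i + lag] for i in range(n - lag))

signal = [1.0, -2.0, 3.0]
energy = sum(v * v for v in signal)          # 1 + 4 + 9 = 14
assert autocorrelation(signal, 0) == energy  # peak at zero lag equals signal energy
assert abs(autocorrelation(signal, 1)) <= energy
```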

In probability and statistics, the term cross-correlations is used for referring to the correlations between the entries of two random vectors ${\displaystyle \mathbf {X} }$ and ${\displaystyle \mathbf {Y} }$, while the correlations of a random vector ${\displaystyle \mathbf {X} }$ are considered to be the correlations between the entries of ${\displaystyle \mathbf {X} }$ itself, those forming the correlation matrix (matrix of correlations) of ${\displaystyle \mathbf {X} }$. If each of ${\displaystyle \mathbf {X} }$ and ${\displaystyle \mathbf {Y} }$ is a scalar random variable which is realized repeatedly in temporal sequence (a time series), then the correlations of the various temporal instances of ${\displaystyle \mathbf {X} }$ are known as autocorrelations of ${\displaystyle \mathbf {X} }$, and the cross-correlations of ${\displaystyle \mathbf {X} }$ with ${\displaystyle \mathbf {Y} }$ across time are temporal cross-correlations.

Furthermore, in probability and statistics the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1.

If ${\displaystyle X}$ and ${\displaystyle Y}$ are two independent random variables with probability density functions ${\displaystyle f}$ and ${\displaystyle g}$, respectively, then the probability density of the difference ${\displaystyle Y-X}$ is formally given by the cross-correlation (in the signal-processing sense) ${\displaystyle f\star g}$; however this terminology is not used in probability and statistics. In contrast, the convolution ${\displaystyle f*g}$ (equivalent to the cross-correlation of ${\displaystyle f(t)}$ and ${\displaystyle g(-t)}$) gives the probability density function of the sum ${\displaystyle X+Y}$.

# De Morgan's laws

In propositional logic and boolean algebra, De Morgan's laws are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.

The rules can be expressed in English as:

the negation of a disjunction is the conjunction of the negations; and
the negation of a conjunction is the disjunction of the negations;

or

the complement of the union of two sets is the same as the intersection of their complements; and
the complement of the intersection of two sets is the same as the union of their complements.

or

not (A or B) = not A and not B; and
not (A and B) = not A or not B

In set theory and Boolean algebra, these are written formally as

${\displaystyle {\begin{aligned}{\overline {A\cup B}}&={\overline {A}}\cap {\overline {B}},\\{\overline {A\cap B}}&={\overline {A}}\cup {\overline {B}},\end{aligned}}}$

where A and B are sets, the overline denotes set complement, ∪ is union, and ∩ is intersection.

In formal language, the rules are written as

${\displaystyle \neg (P\lor Q)\iff (\neg P)\land (\neg Q),}$

and

${\displaystyle \neg (P\land Q)\iff (\neg P)\lor (\neg Q)}$

where P and Q are propositions, ¬ is the negation operator (NOT), ∧ is conjunction (AND), ∨ is disjunction (OR), and ⇔ denotes logical equivalence.

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality.

# Directional statistics

Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Rn), axes (lines through the origin in Rn) or rotations in Rn. More generally, directional statistics deals with observations on compact Riemannian manifolds.

The fact that 0 degrees and 360 degrees are identical angles, so that for example 180 degrees is not a sensible mean of 2 degrees and 358 degrees, provides one illustration that special statistical methods are required for the analysis of some types of data (in this case, angular data). Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), compass directions, dihedral angles in molecules, orientations, rotations and so on.

# Fitness (biology)

Fitness (often denoted ${\displaystyle w}$ or ω in population genetics models) is the quantitative representation of natural and sexual selection within evolutionary biology. It can be defined either with respect to a genotype or to a phenotype in a given environment. In either case, it describes individual reproductive success and is equal to the average contribution to the gene pool of the next generation that is made by individuals of the specified genotype or phenotype. The fitness of a genotype is manifested through its phenotype, which is also affected by the developmental environment. The fitness of a given phenotype can also be different in different selective environments.

With asexual reproduction, it is sufficient to assign fitnesses to genotypes. With sexual reproduction, genotypes are scrambled every generation. In this case, fitness values can be assigned to alleles by averaging over possible genetic backgrounds. Natural selection tends to make alleles with higher fitness more common over time, resulting in Darwinian evolution.

The term "Darwinian fitness" can be used to make clear the distinction with physical fitness. Fitness does not include a measure of survival or life-span; Herbert Spencer's well-known phrase "survival of the fittest" should be interpreted as: "Survival of the form (phenotypic or genotypic) that will leave the most copies of itself in successive generations."

Inclusive fitness differs from individual fitness by including the ability of an allele in one individual to promote the survival and/or reproduction of other individuals that share that allele, in preference to individuals with a different allele. One mechanism of inclusive fitness is kin selection.

# Gyromagnetic ratio

In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol γ, gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1).

The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the g-factor. The g-factor, unlike the gyromagnetic ratio, is dimensionless. For more on the g-factor, see below, or see the article g-factor.

# Hermitian matrix

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose: that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

${\displaystyle a_{ij}={\overline {a_{ji}}}}$

or in matrix form:

${\displaystyle A{\text{ Hermitian}}\quad \iff \quad A={\overline {A^{\mathsf {T}}}}}$.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

If the conjugate transpose of a matrix ${\displaystyle A}$ is denoted by ${\displaystyle A^{\mathsf {H}}}$, then the Hermitian property can be written concisely as

${\displaystyle A^{\mathsf {H}}=A.}$

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are ${\displaystyle A^{\mathsf {H}}=A^{\dagger }=A^{\ast }}$, although note that in quantum mechanics, ${\displaystyle A^{\ast }}$ typically means the complex conjugate only, and not the conjugate transpose.
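The element-wise condition is easy to check programmatically; a Python sketch with `is_hermitian` as an illustrative helper name:

```python
def is_hermitian(a):
    """Check a[i][j] == conj(a[j][i]) for a square matrix given as nested lists."""
    n = len(a)
    return all(a[i][j] == a[j][i].conjugate() for i in range(n) for j in range(n))

h = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
assert is_hermitian(h)               # real diagonal, conjugate-symmetric off-diagonal
assert not is_hermitian([[0, 1j], [1j, 0]])
```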

# Hotelling's T-squared distribution

In statistics, Hotelling's T-squared distribution (T2) is a multivariate distribution proportional to the F-distribution; it arises as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying Student's t-distribution.

Hotelling's t-squared statistic (t2) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.

# Karnaugh map

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953 as a refinement of Edward Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram (also known as the Marquand diagram), but with the focus now set on its utility for switching circuits. Veitch charts are therefore also known as Marquand–Veitch diagrams, and Karnaugh maps as Karnaugh–Veitch maps (KV maps).

The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the rapid identification and elimination of potential race conditions.

The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code, and each cell position represents one combination of input conditions, while each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic.

Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate. Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators. Diagrammatic and mechanical methods for minimizing simple logic expressions have existed since at least medieval times. More systematic methods for minimizing complex expressions began to be developed in the early 1950s, but until the mid-to-late 1980s the Karnaugh map was the most commonly used in practice.
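The correctness of a map-minimized expression can always be confirmed by exhaustive comparison with the original truth table. A Python sketch with an assumed example function:

```python
from itertools import product

# Assumed example: the canonical form f = A'B + AB + AB' covers three truth-table rows;
# a Karnaugh map would group them into the minimal form f = A + B.
def f_canonical(a, b):
    return ((not a) and b) or (a and b) or (a and (not b))

def f_minimal(a, b):
    return a or b

assert all(f_canonical(a, b) == f_minimal(a, b)
           for a, b in product([False, True], repeat=2))
```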

# Kinetic theory of gases

The kinetic theory of gases describes a gas as a large number of submicroscopic particles (atoms or molecules), all of which are in constant, rapid, random motion. The randomness arises from the particles' many collisions with each other and with the walls of the container.

Kinetic theory of gases explains the macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure results from particles' collisions with the walls of a container at different velocities.

Kinetic molecular theory defines temperature in its own way, in contrast with the thermodynamic definition. Under an optical microscope, the molecules making up a liquid are too small to be visible. However, the jittery motion of pollen grains or dust particles in the liquid is visible. Known as Brownian motion, the motion of the pollen or dust results from their collisions with the liquid's molecules.

# Modular arithmetic

In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value—the modulus (plural moduli). The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.

A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Usual addition would suggest that the later time should be 7 + 8 = 15, but this is not the answer because clock time "wraps around" every 12 hours. Because the hour number starts over after it reaches 12, this is arithmetic modulo 12. According to the definition below, 12 is congruent not only to 12 itself, but also to 0, so the time called "12:00" could also be called "0:00", since 12 is congruent to 0 modulo 12.
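The clock example can be written directly as arithmetic modulo 12; a Python sketch, where the shift by 1 handles the 1–12 labeling (mapping a result of 0 back to 12):

```python
# 12-hour clock arithmetic: hours are labeled 1..12, so shift to 0..11,
# reduce modulo 12, then shift back.
def clock_add(hour, delta):
    return (hour + delta - 1) % 12 + 1

assert clock_add(7, 8) == 3    # 8 hours after 7:00 is 3:00
assert clock_add(12, 12) == 12 # 12 is congruent to 0 modulo 12
```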

# Perpendicular

In elementary geometry, the property of being perpendicular (perpendicularity) is the relationship between two lines which meet at a right angle (90 degrees). The property extends to other related geometric objects.

A line is said to be perpendicular to another line if the two lines intersect at a right angle. Explicitly, a first line is perpendicular to a second line if (1) the two lines meet; and (2) at the point of intersection the straight angle on one side of the first line is cut by the second line into two congruent angles. Perpendicularity can be shown to be symmetric, meaning if a first line is perpendicular to a second line, then the second line is also perpendicular to the first. For this reason, we may speak of two lines as being perpendicular (to each other) without specifying an order.

Perpendicularity easily extends to segments and rays. For example, a line segment ${\displaystyle {\overline {AB}}}$ is perpendicular to a line segment ${\displaystyle {\overline {CD}}}$ if, when each is extended in both directions to form an infinite line, these two resulting lines are perpendicular in the sense above. In symbols, ${\displaystyle {\overline {AB}}\perp {\overline {CD}}}$ means line segment AB is perpendicular to line segment CD. For information regarding the perpendicular symbol see Up tack.
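In coordinates, perpendicularity of segments reduces to a zero dot product of their direction vectors; a Python sketch with illustrative helper names:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def perpendicular(p, q, r, s):
    """Segments PQ and RS are perpendicular iff their direction vectors
    have zero dot product (exact for integer coordinates)."""
    u = (q[0] - p[0], q[1] - p[1])
    v = (s[0] - r[0], s[1] - r[1])
    return dot(u, v) == 0

assert perpendicular((0, 0), (2, 0), (1, -1), (1, 3))     # horizontal vs. vertical
assert not perpendicular((0, 0), (1, 1), (0, 0), (2, 3))
```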

A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects. This definition depends on the definition of perpendicularity between lines.

Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle (90 degrees).

Perpendicularity is one particular instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word "perpendicular" is sometimes used to describe much more complicated geometric orthogonality conditions, such as that between a surface and its normal.

# Radiation stress

In fluid dynamics, the radiation stress is the depth-integrated – and thereafter phase-averaged – excess momentum flux caused by the presence of the surface gravity waves, which is exerted on the mean flow. The radiation stresses behave as a second-order tensor.

The radiation stress tensor describes the additional forcing due to the presence of the waves, which changes the mean depth-integrated horizontal momentum in the fluid layer. As a result, varying radiation stresses induce changes in the mean surface elevation (wave setup) and the mean flow (wave-induced currents).

The radiation stress tensor is also important for the dynamics of the mean energy density in the oscillatory part of the fluid motion, in the case of an inhomogeneous mean-flow field.

The radiation stress tensor, as well as several of its implications on the physics of surface gravity waves and mean flows, were formulated in a series of papers by Longuet-Higgins and Stewart in 1960–1964.

# XOR gate

The algebraic expressions ${\displaystyle A\cdot {\overline {B}}+{\overline {A}}\cdot B}$ and ${\displaystyle (A+B)\cdot ({\overline {A}}+{\overline {B}})}$ both represent the XOR gate with inputs A and B. The behavior of XOR is summarized in its truth table.
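The equivalence of the two algebraic forms of XOR can be checked exhaustively in Python:

```python
from itertools import product

# Verify the two algebraic forms of XOR agree on every input pair.
for a, b in product([False, True], repeat=2):
    sop = (a and not b) or (not a and b)        # sum-of-products form
    pos = (a or b) and ((not a) or (not b))     # product-of-sums form
    assert sop == pos == (a != b)
```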