Arithmetic (from the Greek ἀριθμός arithmos, "number" and τική [τέχνη], tiké [téchne], "art") is a branch of mathematics that consists of the study of numbers, especially the properties of the traditional operations on them—addition, subtraction, multiplication and division. Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra, geometry, and analysis. The terms arithmetic and higher arithmetic were used until the beginning of the 20th century as synonyms for number theory and are sometimes still used to refer to a wider part of number theory.^{[1]}
The prehistory of arithmetic is limited to a small number of artifacts which may indicate the conception of addition and subtraction, the best-known being the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed.^{[2]}
The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board or the Roman abacus to obtain the results.
Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of this place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation.
The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece, although it originated much later than the Babylonian and Egyptian examples. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers, and their relationships to each other, in his Introduction to Arithmetic.
Greek numerals were used by Archimedes, Diophantus and others in a positional notation not very different from ours. The ancient Greeks lacked a symbol for zero until the Hellenistic period, and they used three separate sets of symbols as digits: one set for the units place, one for the tens place, and one for the hundreds. For the thousands place they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to ours, and their multiplication algorithm was only very slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm, popularly used as recently as the 20th century, was known to Archimedes, who may have invented it. He preferred it to Hero's method of successive approximation because, once computed, a digit doesn't change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, they used negative powers of 60 instead of negative powers of 10 for the fractional part 0.934.^{[3]}
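Hero's method of successive approximation, mentioned above, can be sketched in a few lines of Python; the function name and stopping tolerance are illustrative, not from the text:

```python
def heros_sqrt(n, tolerance=1e-10):
    """Approximate the square root of n by repeatedly averaging a guess with n/guess."""
    guess = n / 2.0
    while abs(guess * guess - n) > tolerance * n:
        guess = (guess + n / guess) / 2.0  # each step roughly doubles the accuracy
    return guess

# The perfect square from the text: 2736 * 2736 == 7485696
assert round(heros_sqrt(7485696)) == 2736
```

Unlike the digit-by-digit algorithm, each iteration here revises the whole approximation rather than fixing one digit at a time.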
The ancient Chinese had advanced arithmetic studies dating from the Shang Dynasty and continuing through the Tang Dynasty, from basic numbers to advanced algebra. The ancient Chinese used a positional notation similar to that of the Greeks. Since they also lacked a symbol for zero, they had one set of symbols for the units place, and a second set for the tens place. For the hundreds place they then reused the symbols for the units place, and so on. Their symbols were based on the ancient counting rods. It is a complicated question to determine exactly when the Chinese started calculating with positional representation, but it was definitely before 400 BC.^{[4]} The ancient Chinese were the first to meaningfully discover, understand, and apply negative numbers, as explained in the Nine Chapters on the Mathematical Art (Jiuzhang Suanshu), to which Liu Hui later wrote an influential commentary.
The gradual development of the Hindu–Arabic numeral system independently devised the place-value concept and positional notation, which combined the simpler methods for computations with a decimal base and the use of a digit representing 0. This allowed the system to consistently represent both large and small integers. This approach eventually replaced all other systems. In the early 6th century AD, the Indian mathematician Aryabhata incorporated an existing version of this system in his work, and experimented with different notations. In the 7th century, Brahmagupta established the use of 0 as a separate number and determined the results for multiplication, division, addition and subtraction of zero and all other numbers, except for the result of division by 0. His contemporary, the Syriac bishop Severus Sebokht (650 AD) said, "Indians possess a method of calculation that no word can praise enough. Their rational system of mathematics, or of their method of calculation. I mean the system using nine symbols."^{[5]} The Arabs also learned this new method and called it hesab.
Although the Codex Vigilanus described an early form of Arabic numerals (omitting 0) by 976 AD, Leonardo of Pisa (Fibonacci) was primarily responsible for spreading their use throughout Europe after the publication of his book Liber Abaci in 1202. He wrote, "The method of the Indians (Latin modus Indorum) surpasses any known method to compute. It's a marvelous method. They do their computations using nine figures and symbol zero".^{[6]}
In the Middle Ages, arithmetic was one of the seven liberal arts taught in universities.
The flourishing of algebra in the medieval Islamic world and in Renaissance Europe was an outgrowth of the enormous simplification of computation through decimal notation.
Various types of tools have been invented and widely used to assist in numeric calculations. Before the Renaissance, these were various types of abaci. More recent examples include slide rules, nomograms and mechanical calculators, such as Pascal's calculator. At present, they have been supplanted by electronic calculators and computers.
The basic arithmetic operations are addition, subtraction, multiplication and division, although this subject also includes more advanced operations, such as manipulations of percentages, square roots, exponentiation, logarithmic functions, and even trigonometric functions, in the same vein as logarithms (prosthaphaeresis). Arithmetic expressions must be evaluated according to the intended sequence of operations. There are several methods to specify this: the most common, together with infix notation, is to use parentheses explicitly and rely on precedence rules; alternatively, prefix or postfix notation uniquely fixes the order of execution by itself. Any set of objects upon which all four arithmetic operations (except division by 0) can be performed, and where these four operations obey the usual laws (including distributivity), is called a field.^{[7]}
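As an illustration of how postfix notation fixes the order of execution by itself, here is a minimal postfix (reverse Polish) evaluator in Python; the function name and the space-separated token format are assumptions of the sketch:

```python
def eval_postfix(expression):
    """Evaluate a space-separated postfix expression, e.g. '2 3 + 4 *' for (2 + 3) * 4."""
    stack = []
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for token in expression.split():
        if token in ops:
            b = stack.pop()              # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))   # operands are pushed as they arrive
    return stack.pop()

assert eval_postfix("2 3 + 4 *") == 20.0   # infix: (2 + 3) * 4
```

No parentheses or precedence rules are needed: the position of each operator alone determines when it applies.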
Addition is the most basic operation of arithmetic. In its simple form, addition combines two numbers, the addends or terms, into a single number, the sum of the numbers (such as 2 + 2 = 4 or 3 + 5 = 8).
Adding finitely many numbers can be viewed as repeated simple addition; this procedure is known as summation, a term also used to denote the definition for "adding infinitely many numbers" in an infinite series. Repeated addition of the number 1 is the most basic form of counting; the result of adding 1 is usually called the successor of the original number.
Addition is commutative and associative, so the order in which finitely many terms are added does not matter. The identity element for a binary operation is the number that, when combined with any number, yields the same number as result. According to the rules of addition, adding 0 to any number yields that same number, so 0 is the additive identity. The inverse of a given number with respect to a binary operation is the number that, when combined with the given number, yields the identity with respect to this operation. So the inverse of a number with respect to addition (its additive inverse, or the opposite number) is the number that yields the additive identity, 0, when added to the original number; it is immediate that this is the negative of the original number. For example, the additive inverse of 7 is −7, since 7 + (−7) = 0.
Addition can also be interpreted geometrically: for example, combining two rods of lengths 2 and 5 end to end produces a rod of length 7, since 2 + 5 = 7.
Subtraction is the inverse operation to addition. Subtraction finds the difference between two numbers, the minuend minus the subtrahend: D = M − S. Resorting to the previously established addition, this is to say that the difference is the number that, when added to the subtrahend, results in the minuend: D + S = M.
For positive arguments M and S: if the minuend is larger than the subtrahend, the difference D is positive; if the minuend is smaller than the subtrahend, the difference D is negative.
In any case, if minuend and subtrahend are equal, the difference D = 0.
Subtraction is neither commutative nor associative. For that reason, modern algebra often discards subtraction as a separate operation in favor of the concept of inverse elements, as sketched under Addition: subtraction is then viewed as adding the additive inverse of the subtrahend to the minuend, that is, a − b = a + (−b). The price of discarding the binary operation of subtraction is the introduction of the (trivial) unary operation delivering the additive inverse of any given number, and the loss of immediate access to the notion of difference, which is potentially misleading anyhow when negative arguments are involved.
For any representation of numbers there are methods for calculating results, and some of these methods let a procedure that exists for one operation serve, with small alterations, for another. For example, digital computers can reuse existing adding circuitry, and save additional circuits for implementing subtraction, by employing the method of two's complement to represent additive inverses, which is extremely easy to implement in hardware (negation). The trade-off is the halving of the number range for a fixed word length.
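The two's-complement technique can be sketched as follows for a hypothetical 8-bit word length; the helper names are illustrative:

```python
BITS = 8
MASK = (1 << BITS) - 1          # 0xFF: keeps results within the 8-bit word

def twos_complement(x):
    """Additive inverse of x in 8-bit two's-complement representation."""
    return ((~x & MASK) + 1) & MASK   # invert every bit, then add 1

def subtract(m, s):
    """Compute m - s using only addition plus the complement operation."""
    result = (m + twos_complement(s)) & MASK
    # reinterpret the top bit as a sign to read the answer back out
    return result - (1 << BITS) if result >> (BITS - 1) else result

assert subtract(100, 58) == 42
assert subtract(5, 9) == -4     # negative results work on the same circuitry
```

The same adder handles both operations; the halved range shows up in that valid signed values now run only from −128 to 127.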
A formerly widespread method of producing a correct amount of change, knowing the due and given amounts, is the counting-up method, which does not explicitly generate the value of the difference. Suppose an amount P is given in order to pay the required amount Q, with P greater than Q. Rather than explicitly performing the subtraction P − Q = C and counting out that amount C in change, money is counted out starting with the successor of Q, and continuing in the steps of the currency, until P is reached. Although the amount counted out must equal the result of the subtraction P − Q, the subtraction was never really done and the value of P − Q is not supplied by this method.
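A rough simulation of the counting-up method, assuming US-style coin denominations purely for illustration, is one possible way to count out the change:

```python
def count_up_change(due, paid, denominations=(25, 10, 5, 1)):
    """Count from `due` up to `paid` in coin steps; return the coins handed over."""
    handed_over = []
    total = due
    for coin in denominations:
        while total + coin <= paid:   # never overshoot the amount paid
            handed_over.append(coin)
            total += coin
    return handed_over

# Paying 100 cents for a 63-cent item: the coins sum to 37 cents,
# although the subtraction 100 - 63 is never performed anywhere.
assert sum(count_up_change(63, 100)) == 37
```

As in the manual procedure, the value of the difference is never computed; it simply emerges as the total of the coins counted out.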
Multiplication is the second basic operation of arithmetic. Multiplication also combines two numbers into a single number, the product. The two original numbers are called the multiplier and the multiplicand, though both are often simply called factors.
Multiplication may be viewed as a scaling operation. If the numbers are imagined as lying in a line, multiplication by a number, say x, greater than 1 is the same as stretching everything away from 0 uniformly, in such a way that the number 1 itself is stretched to where x was. Similarly, multiplying by a number less than 1 can be imagined as squeezing towards 0 (again, in such a way that 1 goes to where the multiplying number was).
Another view of multiplication of integers, extendable to rationals but not very accessible for real numbers, is to consider it as repeated addition. So 3 × 4 corresponds either to adding 4 three times or to adding 3 four times, giving the same result. There are different opinions on the advantages of these paradigms in mathematics education.
Multiplication is commutative and associative; further, it is distributive over addition and subtraction. The multiplicative identity is 1, since multiplying any number by 1 yields that same number (no stretching or squeezing). The multiplicative inverse for any number except 0 is the reciprocal of this number, because multiplying the reciprocal of any number by the number itself yields the multiplicative identity, 1. 0 is the only number without a multiplicative inverse, and the result of multiplying any number by 0 is again 0. One says that 0 is not contained in the multiplicative group of the numbers.
The product of a and b is written as a × b or a·b. When a or b are expressions not written simply with digits, it is also written by simple juxtaposition: ab. In computer programming languages and software packages in which one can only use characters normally found on a keyboard, it is often written with an asterisk: a * b.
Algorithms implementing the operation of multiplication for various representations of numbers are far more costly and laborious than those for addition. Those accessible for manual computation either rely on breaking down the factors to single place values and applying repeated addition, or employ tables or slide rules, thereby mapping the multiplication to addition and back. These manual methods are outdated and have been superseded by electronic devices. Computers utilize diverse sophisticated and highly optimized algorithms to implement multiplication and division for the various number formats supported in their systems.
Division is essentially the inverse operation to multiplication. Division finds the quotient of two numbers, the dividend divided by the divisor. Any dividend divided by 0 is undefined. For distinct positive numbers, if the dividend is larger than the divisor, the quotient is greater than 1, otherwise it is less than 1 (a similar rule applies for negative numbers). The quotient multiplied by the divisor always yields the dividend.
Division is neither commutative nor associative. As explained for subtraction, modern algebra discards the construction of division in favor of constructing inverse elements with respect to multiplication: division becomes multiplication by the reciprocal of the divisor, that is, a ÷ b = a × 1/b.
Within the natural numbers there is also a different but related notion, Euclidean division, which gives two results for "dividing" a natural N (numerator) by a natural D (denominator): first, a natural Q (quotient), and second, a natural R (remainder), such that N = D×Q + R and R < D.
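Euclidean division maps directly onto Python's built-in divmod; this small sketch simply checks the defining identity:

```python
def euclidean_division(n, d):
    """Quotient and remainder of dividing natural n by natural d, with 0 <= r < d."""
    q, r = divmod(n, d)               # equivalent to (n // d, n % d)
    assert n == d * q + r and 0 <= r < d   # the defining property N = D*Q + R
    return q, r

assert euclidean_division(17, 5) == (3, 2)   # 17 = 5*3 + 2
```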
Decimal representation refers exclusively, in common use, to the written numeral system employing Arabic numerals as the digits for a radix 10 ("decimal") positional notation; however, any numeral system based on powers of 10, e.g., Greek, Cyrillic, Roman, or Chinese numerals, may conceptually be described as "decimal notation" or "decimal representation".
Modern methods for the four fundamental operations (addition, subtraction, multiplication and division) were first devised by Brahmagupta of India, and were known in medieval Europe as "modus Indorum" or the Method of the Indians. Positional notation (also known as "place-value notation") refers to the representation or encoding of numbers using the same symbol for the different orders of magnitude (e.g., the "ones place", "tens place", "hundreds place") and, with a radix point, using those same symbols to represent fractions (e.g., the "tenths place", "hundredths place"). For example, 507.36 denotes 5 hundreds (10^{2}), plus 0 tens (10^{1}), plus 7 units (10^{0}), plus 3 tenths (10^{−1}) plus 6 hundredths (10^{−2}).
The concept of 0 as a number comparable to the other basic digits is essential to this notation, as is the concept of 0's use as a placeholder, and as is the definition of multiplication and addition with 0. The use of 0 as a placeholder and, therefore, the use of a positional notation is first attested in the Jain text from India entitled the Lokavibhâga, dated 458 AD. It was only in the early 13th century that these concepts, transmitted via the scholarship of the Arabic world, were introduced into Europe by Fibonacci^{[8]} using the Hindu–Arabic numeral system.
Algorism comprises all of the rules for performing arithmetic computations using this type of written numeral. For example, addition produces the sum of two arbitrary numbers. The result is calculated by the repeated addition of single digits from each number that occupies the same position, proceeding from right to left. An addition table with ten rows and ten columns displays all possible values for each sum. If an individual sum exceeds the value 9, the result is represented with two digits. The rightmost digit is the value for the current position, and the result for the subsequent addition of the digits to the left increases by the value of the second (leftmost) digit, which is always one. This adjustment is termed a carry of the value 1.
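The digit-by-digit addition algorithm with carries can be sketched as follows; representing numbers as lists of digits, least significant first, is an assumption of the sketch:

```python
def add_digits(a, b):
    """Add two digit lists (units digit first), carrying whenever a column exceeds 9."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        column = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(column % 10)   # rightmost digit stays in the current position
        carry = column // 10         # for addition the carry is always 0 or 1
    if carry:
        result.append(carry)
    return result

# 478 + 356 = 834, stored least significant digit first
assert add_digits([8, 7, 4], [6, 5, 3]) == [4, 3, 8]
```

Proceeding from right to left with a carry of at most 1 mirrors exactly the written procedure described above.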
The process for multiplying two arbitrary numbers is similar to the process for addition. A multiplication table with ten rows and ten columns lists the results for each pair of digits. If an individual product of a pair of digits exceeds 9, the carry adjustment increases the result of any subsequent multiplication from digits to the left by a value equal to the second (leftmost) digit, which is any value from 1 to 8 (9 × 9 = 81). Additional steps define the final result.
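A matching sketch of schoolbook long multiplication, using the same least-significant-first digit lists; the function name is illustrative:

```python
def multiply_digits(a, b):
    """Multiply two digit lists (units first) the schoolbook way, digit pair by digit pair."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10          # here the carry can range from 0 up to 8
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()                     # trim any leading zero
    return result

# 47 * 36 = 1692, stored least significant digit first
assert multiply_digits([7, 4], [6, 3]) == [2, 9, 6, 1]
```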
Similar techniques exist for subtraction and division.
The creation of a correct process for multiplication relies on the relationship between values of adjacent digits. The value for any single digit in a numeral depends on its position. Also, each position to the left represents a value ten times larger than the position to the right. In mathematical terms, the exponent for the radix (base) of 10 increases by 1 (to the left) or decreases by 1 (to the right). Therefore, the value for any arbitrary digit is multiplied by a value of the form 10^{n} with integer n. The list of values corresponding to all possible positions for a single digit is written as {..., 10^{2}, 10, 1, 10^{−1}, 10^{−2}, ...}.
Repeated multiplication of any value in this list by 10 produces another value in the list. In mathematical terminology, this characteristic is defined as closure, and the previous list is described as closed under multiplication. It is the basis for correctly finding the results of multiplication using the previous technique. This outcome is one example of the uses of number theory.
Compound^{[9]} unit arithmetic is the application of arithmetic operations to mixed radix quantities such as feet and inches, gallons and pints, pounds, shillings and pence, and so on. Prior to the use of decimal-based systems of money and units of measure, the use of compound unit arithmetic formed a significant part of commerce and industry.
The techniques used for compound unit arithmetic were developed over many centuries and are well-documented in many textbooks in many different languages.^{[10]}^{[11]}^{[12]}^{[13]} In addition to the basic arithmetic functions encountered in decimal arithmetic, compound unit arithmetic employs three more functions: reduction, in which a compound quantity is reduced to a single quantity (for example, a distance expressed in yards, feet and inches converted entirely to inches); expansion, the inverse function, in which a single quantity is converted to a compound quantity; and normalization, the conversion of a set of compound units to a standard form (for example, rewriting "1 ft 13 in" as "2 ft 1 in").
Knowledge of the relationship between the various units of measure, their multiples and their submultiples forms an essential part of compound unit arithmetic.
There are two basic approaches to compound unit arithmetic: the reduction–expansion method, in which all the compound unit variables are reduced to single unit variables, the calculation performed, and the result expanded back to compound units; and the on-going normalization method, in which each unit is treated separately and the problem is continuously normalized as the solution develops.
UK pre-decimal currency: 4 farthings = 1 penny (d); 12 pence = 1 shilling (s); 20 shillings = 1 pound (£).
The addition operation is carried out from right to left; in this case, pence are processed first, then shillings followed by pounds. The numbers below the "answer line" are intermediate results.
The total in the pence column is 25. Since there are 12 pennies in a shilling, 25 is divided by 12 to give 2 with a remainder of 1. The value "1" is then written to the answer row and the value "2" carried forward to the shillings column. This operation is repeated using the values in the shillings column, with the additional step of adding the value that was carried forward from the pennies column. The intermediate total is divided by 20 as there are 20 shillings in a pound. The pound column is then processed, but as pounds are the largest unit that is being considered, no values are carried forward from the pounds column.
For the sake of simplicity, the example chosen did not have farthings.
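A sketch of the pre-decimal addition just described, carrying with divisors of 12 and 20 rather than 10; the tuple representation of amounts is an assumption:

```python
def add_lsd(a, b):
    """Add two (pounds, shillings, pence) amounts, carrying right to left."""
    carry, pence = divmod(a[2] + b[2], 12)              # 12 pence = 1 shilling
    carry, shillings = divmod(a[1] + b[1] + carry, 20)  # 20 shillings = 1 pound
    pounds = a[0] + b[0] + carry                        # nothing carries out of pounds
    return pounds, shillings, pence

# £3 15s 11d + £2 17s 9d: pence total 20 = 1*12 + 8, so 8d is written
# down and 1s carried; shillings total 33 = 1*20 + 13; result £6 13s 8d.
assert add_lsd((3, 15, 11), (2, 17, 9)) == (6, 13, 8)
```

The structure is identical to decimal column addition; only the carry divisors change from column to column.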
During the 19th and 20th centuries various aids were developed to aid the manipulation of compound units, particularly in commercial applications. The most common aids were mechanical tills which were adapted in countries such as the United Kingdom to accommodate pounds, shillings, pennies and farthings and "Ready Reckoners"—books aimed at traders that catalogued the results of various routine calculations such as the percentages or multiples of various sums of money. One typical booklet^{[15]} that ran to 150 pages tabulated multiples "from one to ten thousand at the various prices from one farthing to one pound".
The cumbersome nature of compound unit arithmetic has been recognized for many years—in 1586, the Flemish mathematician Simon Stevin published a small pamphlet called De Thiende ("the tenth")^{[16]} in which he declared the universal introduction of decimal coinage, measures, and weights to be merely a question of time. In the modern era, many conversion programs, such as that included in the Microsoft Windows 7 operating system calculator, display compound units in a reduced decimal format rather than using an expanded format (i.e. "2.5 ft" is displayed rather than "2 ft 6 in").
Until the 19th century, number theory was a synonym of "arithmetic". The addressed problems were directly related to the basic operations and concerned primality, divisibility, and the solution of equations in integers, such as Fermat's last theorem. Most of these problems, although very elementary to state, turned out to be very difficult, and their solution required very deep mathematics involving concepts and methods from many other branches of mathematics. This led to new branches of number theory such as analytic number theory, algebraic number theory, Diophantine geometry and arithmetic algebraic geometry. Wiles' proof of Fermat's Last Theorem is a typical example of the necessity of sophisticated methods, which go far beyond the classical methods of arithmetic, for solving problems that can be stated in elementary arithmetic.
Primary education in mathematics often places a strong focus on algorithms for the arithmetic of natural numbers, integers, fractions, and decimals (using the decimal placevalue system). This study is sometimes known as algorism.
The difficulty and unmotivated appearance of these algorithms has long led educators to question this curriculum, advocating the early teaching of more central and intuitive mathematical ideas. One notable movement in this direction was the New Math of the 1960s and 1970s, which attempted to teach arithmetic in the spirit of axiomatic development from set theory, an echo of the prevailing trend in higher mathematics.^{[17]}
Arithmetic was also used by Islamic scholars to teach the application of the rulings related to Zakat and Irth. This was done in a book entitled The Best of Arithmetic by Abd-al-Fattah al-Dumyati.^{[18]}
The book begins with the foundations of mathematics and proceeds to its application in the later chapters.
An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floatingpoint unit (FPU), which operates on floating point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU or GPU may contain multiple ALUs.
The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers.
Arithmetic mean

In mathematics and statistics, the arithmetic mean (stress on the third syllable of "arithmetic"), or simply the mean or average when the context is clear, is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results of an experiment or an observational study, or frequently a set of results from a survey. The term "arithmetic mean" is preferred in some contexts in mathematics and statistics because it helps distinguish it from other means, such as the geometric mean and the harmonic mean.
In addition to mathematics and statistics, the arithmetic mean is used frequently in many diverse fields such as economics, anthropology, and history, and it is used in almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.
While the arithmetic mean is often used to report central tendencies, it is not a robust statistic, meaning that it is greatly influenced by outliers (values that are very much larger or smaller than most of the values). Notably, for skewed distributions, such as the distribution of income for which a few people's incomes are substantially greater than most people's, the arithmetic mean may not coincide with one's notion of "middle", and robust statistics, such as the median, may be a better description of central tendency.
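The robustness point can be demonstrated with Python's statistics module; the income figures are invented purely for illustration:

```python
from statistics import mean, median

incomes = [30_000, 32_000, 35_000, 38_000, 40_000]
assert mean(incomes) == 35_000 and median(incomes) == 35_000

incomes_with_outlier = incomes + [1_000_000]    # one very large income
assert mean(incomes_with_outlier) > 195_000     # the mean jumps dramatically
assert median(incomes_with_outlier) == 36_500   # the median barely moves
```

The single outlier shifts the mean by more than 160,000 while moving the median by only 1,500, which is why the median is preferred for skewed distributions such as income.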
Arithmetic progression

In mathematics, an arithmetic progression (AP) or arithmetic sequence is a sequence of numbers such that the difference between consecutive terms is constant. Difference here means the second minus the first. For instance, the sequence 5, 7, 9, 11, 13, 15, … is an arithmetic progression with a common difference of 2.
If the initial term of an arithmetic progression is a_1 and the common difference of successive members is d, then the nth term of the sequence (a_n) is given by:

a_n = a_1 + (n − 1)d,

and in general

a_n = a_m + (n − m)d.
A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just called an arithmetic progression. The sum of a finite arithmetic progression is called an arithmetic series.
The behavior of the arithmetic progression depends on the common difference d. If the common difference is positive, the terms grow towards positive infinity; if it is negative, the terms grow towards negative infinity.
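The nth-term formula can be checked against the sequence 5, 7, 9, 11, … from the text (a_1 = 5, d = 2); the function names are illustrative:

```python
def nth_term(a1, d, n):
    """nth term of an arithmetic progression: a_n = a_1 + (n - 1) d."""
    return a1 + (n - 1) * d

def series_sum(a1, d, n):
    """Sum of the first n terms: n times the average of first and last term."""
    return n * (a1 + nth_term(a1, d, n)) // 2

assert [nth_term(5, 2, n) for n in range(1, 5)] == [5, 7, 9, 11]
assert series_sum(5, 2, 6) == 5 + 7 + 9 + 11 + 13 + 15   # = 60
```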
Average

In colloquial language, an average is a single number taken as representative of a list of numbers. Different concepts of average are used in different contexts. Often "average" refers to the arithmetic mean, the sum of the numbers divided by how many numbers are being averaged. In statistics, mean, median, and mode are all known as measures of central tendency, and in colloquial usage any of these might be called an average value.
Binary number

In mathematics and digital electronics, a binary number is a number expressed in the base-2 numeral system or binary numeral system, which uses only two symbols: typically "0" (zero) and "1" (one).
The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices.
Division by zero

In mathematics, division by zero is division where the divisor (denominator) is zero. Such a division can be formally expressed as a/0, where a is the dividend (numerator). In ordinary arithmetic, the expression has no meaning, as there is no number which, when multiplied by 0, gives a (assuming a ≠ 0), and so division by zero is undefined. Since any number multiplied by zero is zero, the expression 0/0 is also undefined; when it is the form of a limit, it is an indeterminate form. Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a/0 is contained in George Berkeley's criticism of infinitesimal calculus in 1734 in The Analyst ("ghosts of departed quantities").

There are mathematical structures in which a/0 is defined for some a, such as the Riemann sphere and the projectively extended real line; however, such structures cannot satisfy every ordinary rule of arithmetic (the field axioms).
In computing, a program error may result from an attempt to divide by zero. Depending on the programming environment and the type of number (e.g. floating point, integer) being divided by zero, it may generate positive or negative infinity according to the IEEE 754 floating-point standard, generate an exception, generate an error message, cause the program to terminate, result in a special not-a-number value, a freeze via infinite loop, or a crash.
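From Python, for example, both integer and floating-point division by zero take the "generate an exception" path, while the IEEE 754 special values can still be produced and inspected directly:

```python
import math

try:
    result = 1 / 0
except ZeroDivisionError:
    result = None            # the division never produced a value

assert result is None
assert math.inf > 10 ** 100  # IEEE 754 positive infinity exceeds every finite number
assert math.nan != math.nan  # the not-a-number value compares unequal even to itself
```

Other environments make other choices from the same list: in C, for instance, floating-point division by zero typically yields an infinity rather than raising an exception.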
Floating-point arithmetic

In computing, floating-point arithmetic (FP) is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:

significand × base^{exponent},

where significand is an integer, base is an integer greater than or equal to two, and exponent is also an integer. For example: 1.2345 = 12345 × 10^{−4}.
The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers grows with the chosen scale.
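The significand-and-exponent view, and the spacing that grows with scale, can both be observed directly in Python (math.ulp requires Python 3.9 or later):

```python
import math

# Decompose a float into significand * 2**exponent: 6.0 = 0.75 * 2**3
m, e = math.frexp(6.0)
assert (m, e) == (0.75, 3) and m * 2 ** e == 6.0

# math.ulp gives the gap to the next representable double
assert math.ulp(1.0) == 2 ** -52    # spacing near 1.0 in IEEE 754 double precision
assert math.ulp(2.0 ** 53) == 2.0   # near 2**53 the gap is a whole 2
assert 2.0 ** 53 + 1 == 2.0 ** 53   # adding 1 falls below the local spacing
```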
Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.

The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.

A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Geometric mean

In mathematics, the geometric mean is a mean or average which indicates the central tendency or typical value of a set of numbers by using the product of their values (as opposed to the arithmetic mean, which uses their sum). The geometric mean is defined as the nth root of the product of n numbers, i.e., for a set of numbers x_{1}, x_{2}, ..., x_{n}, the geometric mean is defined as

(x_{1}x_{2}⋯x_{n})^{1/n}.
For instance, the geometric mean of two numbers, say 2 and 8, is just the square root of their product, that is, √(2 × 8) = √16 = 4. As another example, the geometric mean of the three numbers 4, 1, and 1/32 is the cube root of their product (1/8), which is 1/2, that is, ∛(4 × 1 × 1/32) = ∛(1/8) = 1/2.
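The definition translates directly into a short function; the name is illustrative:

```python
from math import prod

def geometric_mean(values):
    """The nth root of the product of the n values."""
    return prod(values) ** (1 / len(values))

# The two examples from the text, compared within floating-point tolerance
assert abs(geometric_mean([2, 8]) - 4.0) < 1e-12         # sqrt(16) = 4
assert abs(geometric_mean([4, 1, 1/32]) - 0.5) < 1e-12   # cube root of 1/8
```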
A geometric mean is often used when comparing different items—finding a single "figure of merit" for these items—when each item has multiple properties that have different numeric ranges. For example, the geometric mean can give a meaningful value to compare two companies which are each rated at 0 to 5 for their environmental sustainability, and are rated at 0 to 100 for their financial viability. If an arithmetic mean were used instead of a geometric mean, the financial viability would have greater weight because its numeric range is larger. That is, a small percentage change in the financial rating (e.g. going from 80 to 90) makes a much larger difference in the arithmetic mean than a large percentage change in environmental sustainability (e.g. going from 2 to 5). The use of a geometric mean normalizes the differently-ranged values, meaning a given percentage change in any of the properties has the same effect on the geometric mean. So, a 20% change in environmental sustainability from 4 to 4.8 has the same effect on the geometric mean as a 20% change in financial viability from 60 to 72.
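This normalizing effect can be checked numerically. The following is an illustrative Python sketch (the company ratings are the assumed figures from the scenario above, not real data):

```python
import math

def geometric_mean(values):
    """Nth root of the product of n positive numbers."""
    if any(v <= 0 for v in values):
        raise ValueError("geometric mean is defined only for positive numbers")
    return math.prod(values) ** (1 / len(values))

# Hypothetical companies rated on sustainability (0-5) and viability (0-100):
company_a = geometric_mean([4.0, 60.0])
company_b = geometric_mean([4.8, 60.0])  # 20% better sustainability
company_c = geometric_mean([4.0, 72.0])  # 20% better viability

# A 20% change in either property scales the geometric mean by the same
# factor (sqrt(1.2)), regardless of the property's numeric range:
print(abs(company_b - company_c) < 1e-9)  # True
```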
The geometric mean can be understood in terms of geometry. The geometric mean of two numbers, a and b, is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths a and b. Similarly, the geometric mean of three numbers, a, b, and c, is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers.
The geometric mean applies only to positive numbers. It is also often used for a set of numbers whose values are meant to be multiplied together or are exponential in nature, such as data on the growth of the human population or interest rates of a financial investment.
The geometric mean is also one of the three classical Pythagorean means, together with the aforementioned arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means).
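The ordering of the three Pythagorean means can be verified numerically; this Python sketch (not part of the source) uses the data pair 2 and 8 from the earlier example:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

# For positive data with at least one unequal pair: HM < GM < AM.
data = [2.0, 8.0]
assert harmonic_mean(data) < geometric_mean(data) < arithmetic_mean(data)
print(harmonic_mean(data), geometric_mean(data), arithmetic_mean(data))  # 3.2 4.0 5.0
```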
IEEE 754
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.
The standard defines:
arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)

The current version, IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic.
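Because Python's float type is an IEEE 754 binary64 value on virtually all platforms, the standard's special values can be observed directly. This is an illustrative sketch, not part of the standard itself:

```python
import math

pos_inf = float("inf")   # infinity
nan = float("nan")       # "not a number"
neg_zero = -0.0          # signed zero

print(math.isinf(pos_inf))           # True
print(math.isnan(nan))               # True
print(nan == nan)                    # False: NaN compares unequal to everything
print(neg_zero == 0.0)               # True: +0.0 and -0.0 compare equal...
print(math.copysign(1.0, neg_zero))  # -1.0: ...but the sign bit is preserved

# Subnormal numbers fill the gap between zero and the smallest normal number;
# dividing the smallest positive subnormal by 2 underflows to zero:
smallest_subnormal = 5e-324
print(smallest_subnormal / 2)        # 0.0
```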
ISO/IEC 10967
ISO/IEC 10967, Language independent arithmetic (LIA), is a series of standards on computer arithmetic. It is compatible with ISO/IEC/IEEE 60559:2011, better known as IEEE 754-2008, and much of the specification concerns IEEE 754 special values (though such values are not required by LIA itself, unless the parameter iec559 is true). It was developed by the working group ISO/IEC JTC1/SC22/WG11, which was disbanded in 2011.

LIA currently consists of three parts:
Part 1: Integer and floating point arithmetic, second edition published 2012.
Part 2: Elementary numerical functions, first edition published 2001.
Part 3: Complex integer and floating point arithmetic and complex elementary numerical functions, first edition published 2006.
Instruction set architecture
An instruction set architecture (ISA) is an abstract model of a computer. It is also referred to as architecture or computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost (among other things) because the ISA serves as the interface between software and hardware: software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, as well as the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.
An ISA defines everything a machine language programmer needs to know in order to program a computer. What an ISA defines differs between ISAs; in general, ISAs define the supported data types, what state there is (such as the main memory and registers) and their semantics (such as the memory consistency and addressing modes), the instruction set (the set of machine instructions that comprises a computer's machine language), and the input/output model.
Mean
There are several kinds of means in various branches of mathematics (especially statistics).
For a data set, the arithmetic mean, also called the mathematical expectation or average, is the central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x_{1}, x_{2}, ..., x_{n} is typically denoted by x̄, pronounced "x bar". If the data set were based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is the sample mean (denoted x̄) to distinguish it from the mean of the underlying distribution, the population mean (denoted μ or μ_{x}, pronounced "mew" /ˈmjuː/).
In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In the case of a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving μ = Σ x·p(x). An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean; see the Cauchy distribution for an example. Moreover, for some distributions the mean is infinite.
For a finite population, the population mean of a property is equal to the arithmetic mean of the given property while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.
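The discrete expected value and the distinction between a population mean and a sample mean can be illustrated in Python (the die and the height figures below are assumed example data, not from the source):

```python
from fractions import Fraction

# Expected value of a discrete random variable: the sum of x * p(x).
# Example: a fair six-sided die.
die = {face: Fraction(1, 6) for face in range(1, 7)}
mean = sum(x * p for x, p in die.items())
print(mean)  # 7/2

# Population mean vs. sample mean: the mean of a small sample drawn from
# the population generally differs from the population mean.
population = [150, 160, 170, 180, 190]              # heights in cm, say
population_mean = sum(population) / len(population)  # 170.0
sample = population[:2]
sample_mean = sum(sample) / len(sample)              # 155.0
print(population_mean, sample_mean)
```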
Outside probability and statistics, a wide range of other notions of "mean" are often used in geometry and analysis; examples are given below.
Modular arithmetic
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value—the modulus (plural moduli). The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.
A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Usual addition would suggest that the later time should be 7 + 8 = 15, but this is not the answer because clock time "wraps around" every 12 hours. Because the hour number starts over after it reaches 12, this is arithmetic modulo 12. According to the definition below, 12 is congruent not only to 12 itself, but also to 0, so the time called "12:00" could also be called "0:00", since 12 is congruent to 0 modulo 12.
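The clock example can be sketched in Python, whose % operator computes the remainder needed for arithmetic modulo 12 (the clock_hour helper below is a hypothetical name, introduced only for illustration):

```python
# Clock arithmetic is arithmetic modulo 12; the result 0 is displayed as
# "12" because clocks label that hour 12 rather than 0.
def clock_hour(start, elapsed):
    h = (start + elapsed) % 12
    return 12 if h == 0 else h

print(clock_hour(7, 8))    # 3: eight hours after 7:00 is 3:00
print(clock_hour(12, 12))  # 12: since 12 is congruent to 0 modulo 12

# Congruence: a is congruent to b (mod n) exactly when n divides a - b.
print((15 - 3) % 12 == 0)  # True: 15 is congruent to 3 (mod 12)
```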
Multiplication
Multiplication (often denoted by the cross symbol "×", by a point "⋅", by juxtaposition, or, on computers, by an asterisk "∗") is one of the four elementary mathematical operations of arithmetic, the others being addition, subtraction and division.
The multiplication of whole numbers may be thought of as a repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, the multiplicand, as the value of the other one, the multiplier. The multiplier can be written first and multiplicand second (though the custom can vary by culture); both can be called factors.
For example, 4 multiplied by 3 (often written as 3 × 4 and spoken as "3 times 4") can be calculated by adding 3 copies of 4 together:
3 × 4 = 4 + 4 + 4 = 12
Here 3 and 4 are the factors and 12 is the product.
One of the main properties of multiplication is the commutative property: adding 3 copies of 4 gives the same result as adding 4 copies of 3:
3 × 4 = 4 + 4 + 4 = 12 = 3 + 3 + 3 + 3 = 4 × 3
Thus the designation of multiplier and multiplicand does not affect the result of the multiplication.
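The repeated-addition definition and the commutative property can be sketched as follows (an illustrative Python helper, not a standard function):

```python
# Multiplication of whole numbers as repeated addition: the multiplicand
# is added to itself as many times as the value of the multiplier.
def multiply(multiplier, multiplicand):
    total = 0
    for _ in range(multiplier):
        total += multiplicand
    return total

print(multiply(3, 4))  # 12: 4 + 4 + 4
print(multiply(4, 3))  # 12: 3 + 3 + 3 + 3 -- the commutative property
```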
The multiplication of integers (including negative numbers), rational numbers (fractions) and real numbers is defined by a systematic generalization of this basic definition.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement; for instance, multiplying the lengths of the two sides of a rectangle gives its area. This is the subject of dimensional analysis.
The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number (since the division of a number other than 0 by itself equals 1).
Multiplication is also defined for other types of numbers, such as complex numbers, and more abstract constructs, like matrices. For some of these more abstract constructs, the order in which the operands are multiplied together matters. A listing of the many different kinds of products that are used in mathematics is given in the product (mathematics) page.
Number theory
Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of objects made out of integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers).
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects (for example, the Riemann zeta function) that encode properties of the integers, primes or other numbertheoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, for example, as approximated by the latter (Diophantine approximation).
The older term for number theory is arithmetic. By the early twentieth century, it had been superseded by "number theory". (The word "arithmetic" is used by the general public to mean "elementary calculations"; it has also acquired other meanings in mathematical logic, as in Peano arithmetic, and computer science, as in floating point arithmetic.) The use of the term arithmetic for number theory regained some ground in the second half of the 20th century, arguably in part due to French influence. In particular, arithmetical is preferred as an adjective to numbertheoretic.
Peano axioms
In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete.
The need to formalize arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction. In 1881, Charles Sanders Peirce provided an axiomatization of naturalnumber arithmetic. In 1888, Richard Dedekind proposed another axiomatization of naturalnumber arithmetic, and in 1889, Peano published a simplified version of them as a collection of axioms in his book, The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita).
The Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema.
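Grassmann's observation that addition and multiplication reduce to the successor operation and recursion can be sketched in Python (an illustrative rendering of the usual recursion equations, not code from the source):

```python
# Natural-number arithmetic built only from successor and recursion.
def successor(n):
    return n + 1

def add(m, n):
    # m + 0 = m;  m + S(n) = S(m + n)
    return m if n == 0 else successor(add(m, n - 1))

def mul(m, n):
    # m * 0 = 0;  m * S(n) = (m * n) + m
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(2, 3))  # 5
print(mul(2, 3))  # 6
```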
Significant figures
The significant figures (also known as the significant digits) of a number are digits that carry meaning contributing to its measurement resolution. This includes all digits except:
All leading zeros;
Trailing zeros when they are merely placeholders to indicate the scale of the number (exact rules are explained at identifying significant figures); and
Spurious digits introduced, for example, by calculations carried out to greater precision than that of the original data, or measurements reported to a greater precision than the equipment supports.

Significance arithmetic comprises approximate rules for roughly maintaining significance throughout a computation. The more sophisticated scientific rules are known as propagation of uncertainty.
Numbers are often rounded to avoid reporting insignificant figures. For example, it would create false precision to express a measurement as 12.34525 kg (which has seven significant figures) if the scales only measured to the nearest gram and gave a reading of 12.345 kg (which has five significant figures). Numbers can also be rounded merely for simplicity rather than to indicate a given precision of measurement, for example, to make them faster to pronounce in news broadcasts.
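A common textbook recipe for rounding to a given number of significant figures can be sketched in Python (the round_sig helper is an assumed name introduced for illustration, not a standard function):

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the number's order of magnitude.
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

print(round_sig(12.3456789, 4))  # 12.35
print(round_sig(0.00123456, 3))  # 0.00123: leading zeros are not significant
```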
Two's complement
Two's complement is a mathematical operation on binary numbers, and is an example of a radix complement. It is used in computing as a method of signed number representation.
The two's complement of an N-bit number is defined as its complement with respect to 2^N. For instance, for the three-bit number 010, the two's complement is 110, because 010 + 110 = 1000.
Two's complement is the most common method of representing signed integers on computers, and more generally, fixed point binary values. In this scheme, if the binary number 010₂ encodes the signed integer 2₁₀, then its two's complement, 110₂, encodes the inverse: −2₁₀. In other words, to reverse the sign of any integer in this scheme, you can take the two's complement of its binary representation. The tables at right illustrate this property.
Compared to other systems for representing signed numbers (e.g., ones' complement), two's complement has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits as the output, and any overflow beyond those bits is discarded from the result). This property makes the system simpler to implement, especially for higher-precision arithmetic. Unlike ones' complement systems, two's complement has no representation for negative zero, and thus does not suffer from its associated difficulties.
Conveniently, another way of finding the two's complement of a number is to take its ones' complement and add one: the sum of a number and its ones' complement is all '1' bits, or 2^N − 1; and by definition, the sum of a number and its two's complement is 2^N.
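Both constructions (the complement with respect to 2^N, and the ones' complement plus one) can be sketched in Python for small bit widths (these helpers are illustrative, not library functions):

```python
# Two's complement of an N-bit value: its complement with respect to 2**N.
def twos_complement(value, bits):
    return (2**bits - value) % 2**bits

# Ones' complement: flip every bit, i.e. subtract from 2**N - 1.
def ones_complement(value, bits):
    return (2**bits - 1) - value

print(format(twos_complement(0b010, 3), "03b"))            # 110
print(twos_complement(2, 3) == ones_complement(2, 3) + 1)  # True

# Decoding: in 3-bit two's complement, 110 represents -2.
def decode(value, bits):
    return value - 2**bits if value >= 2**(bits - 1) else value

print(decode(0b110, 3))  # -2
```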
Weighted arithmetic mean
The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics.
If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox.
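A minimal Python sketch of the weighted arithmetic mean (the grades and weights below are assumed example data):

```python
# Weighted arithmetic mean: the sum of w_i * x_i divided by the sum of w_i.
def weighted_mean(values, weights):
    return sum(w * x for x, w in zip(values, weights)) / sum(weights)

grades = [80, 90]
weights = [1, 3]  # the second grade counts three times as much
print(weighted_mean(grades, weights))  # 87.5

# With equal weights it reduces to the ordinary arithmetic mean:
print(weighted_mean(grades, [1, 1]))   # 85.0
```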
This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.