Large numbers

Large numbers are numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts.

Very large numbers often occur in fields such as mathematics, cosmology, cryptography, and statistical mechanics. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.

Large numbers in the everyday world

Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10^9, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10^−9 means one billionth, or 0.000 000 001. Writing 10^9 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.
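
The same convention carries over directly to calculators and programming languages, which accept an "e" exponent form mirroring the 10^n notation. A minimal Python sketch (purely illustrative):

```python
# Scientific notation in Python: the "e" form mirrors the 10^n convention.
one_billion = 1.0e9        # 1.0 × 10^9
one_billionth = 1.0e-9     # 1.0 × 10^-9

print(one_billion)                # 1000000000.0
print(one_billionth)              # 1e-09
print(f"{123456789000:.3e}")      # 1.235e+11; the exponent shows the magnitude at a glance
```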

Examples of large numbers describing everyday real-world objects include:

  • The number of bits on a computer hard disk (as of 2010, typically about 10^13, 500–1000 GB)
  • The estimated number of atoms in the observable universe (10^80)
  • The Earth consists of about 4 × 10^51 nucleons
  • The number of cells in the human body (estimated at 3.72 × 10^13)[1]
  • The number of neuronal connections in the human brain (estimated at 10^14)
  • The lower bound on the game-tree complexity of chess, also known as the "Shannon number" (estimated at around 10^120)[2]
  • The Avogadro constant is the number of "elementary entities" (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 – approximately 6.022 × 10^23.

Large numbers and governments

Large numbers have been central to “statistics-driven thinking”, which has become “ubiquitous in modern society.” Beginning with 17th-century probability theory, statistics have evolved and become integral to both governmental knowledge and power. There is a complex "reciprocity between modern governments and the mathematical artifacts that both dictate the duties of the state and measure its successes". These tools include economics, mathematical statistics, medical statistics, probability, psychology, sociology, and surveys. They have led to applied econometrics in modern times.[3]

Illinois Senator Everett Dirksen is noted as saying, "A billion here, a billion there, pretty soon, you're talking real money." Although there is no direct record of the remark,[4] he is believed to have made it during an appearance on The Tonight Show Starring Johnny Carson. (See wikiquotes of Everett Dirksen.)

Astronomically large numbers

Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model suggests that the universe is 13.8 billion years (4.355 × 10^17 seconds) old, and that the observable universe is 93 billion light years across (8.8 × 10^26 metres), and contains about 5 × 10^22 stars, organized into around 125 billion (1.25 × 10^11) galaxies, according to Hubble Space Telescope observations. There are about 10^80 atoms in the observable universe, by rough estimation.[5]

According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is about

10^10^10^10^10^1.1 years,
which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10−6 Planck masses.[6][7] This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where the universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.

Combinatorial processes rapidly generate even larger numbers. The factorial function, which defines the number of permutations on a set of fixed objects, grows very rapidly with the number of objects. Stirling's formula gives a precise asymptotic expression for this rate of growth.
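
Stirling's formula, n! ≈ √(2πn) · (n/e)^n, can be checked directly for moderate n. A short Python sketch (standard library only, purely illustrative) compares the exact factorial with the approximation on a logarithmic scale:

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (10, 50, 100):
    exact = math.factorial(n)
    # Compare orders of magnitude via base-10 logarithms.
    print(f"{n}! has {len(str(exact))} digits; "
          f"log10 exact = {math.log10(exact):.3f}, "
          f"log10 Stirling = {math.log10(stirling(n)):.3f}")
```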

Combinatorial processes generate very large numbers in statistical mechanics. These numbers are so large that they are typically only referred to using their logarithms.

Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions.

Logician Harvey Friedman has done work related to very large numbers, such as with Kruskal's tree theorem and the Robertson–Seymour theorem.

"Billions and billions"

To help viewers of Cosmos distinguish between "millions" and "billions", astronomer Carl Sagan wrote a book and discoursed, stressing the "b". Sagan never did, however, say "billions and billions". The public's association of the phrase and Sagan came from a Tonight Show skit. Parodying Sagan's affect, Johnny Carson quipped "billions and billions".[8] The phrase has, however, now become a humorous fictitious number—the Sagan. Cf., Sagan Unit.

Computers and computational complexity

Between 1980 and 2000, personal computer hard disk sizes increased from about 10 megabytes (10^7 bytes) to over 100 gigabytes (10^11 bytes).[9] A 100-gigabyte disk could store the favorite color of all of Earth's seven billion inhabitants without using data compression (storing 14 bytes times 7 billion inhabitants would equal 98 GB used). But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about 2^320 such passwords, which is about 2 × 10^96. In his paper Computational capacity of the universe,[10] Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about 10^90 bits, less than one millionth of the size such a dictionary would require. However, storing information on hard disk and computing it are very different functions. On the one hand storage currently has limitations as stated, but computational speed is a different matter. It is quite conceivable that the stated limitations regarding storage have no bearing on the limitations of actual computational capacity, especially if the current research into quantum computers results in a "quantum leap" (but see holographic principle).

Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output 1 billion strings per second, it would take one billionth of 2 × 10^96 seconds, or 2 × 10^87 seconds, to complete its task, which is about 6 × 10^79 years. By contrast, the universe is estimated to be 13.8 billion (1.38 × 10^10) years old. Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than 10^120 operations since the Big Bang. This is trillions of times more computation than is required for displaying all 40-character passwords, but computing all 50-character passwords would outstrip the estimated computational potential of the entire universe.
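
The back-of-the-envelope figures above are easy to reproduce with Python's arbitrary-precision integers (a sketch; the rate of one billion strings per second is the assumption used in the text):

```python
# All byte strings of length 40 (one byte per character); shorter strings add
# comparatively little to the total.
passwords = 256 ** 40                  # = 2^320, about 2.1 × 10^96
rate = 10 ** 9                         # assumed output rate: 10^9 strings per second
seconds = passwords // rate            # about 2.1 × 10^87 seconds
years = seconds // (60 * 60 * 24 * 365)

print(f"{passwords:.2e}")              # 2.14e+96
print(f"{seconds:.2e}")                # 2.14e+87
print(f"{years:.2e}")                  # 6.77e+79, versus a universe age of ~1.4e10 years
```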

Problems like this grow exponentially in the number of computations they require, and they are one reason why exponentially difficult problems are called "intractable" in computer science: for even small numbers like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between programs that do and do not require exponentially increasing resources to execute.

Such limits are an advantage in cryptography, since any cipher-breaking technique that requires more than, say, the 10^120 operations mentioned before will never be feasible. Such ciphers must be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor between two 1000-digit numbers is to compute all their factors by trial division. This will take up to 2 × 10^500 division operations, far too large to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD for even huge numbers such as these.
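
The contrast is easy to see in Python, whose integers have arbitrary precision: a GCD of two random 1000-digit numbers via the Euclidean algorithm finishes essentially instantly (a minimal sketch using only the standard library; Python also ships math.gcd for the same job):

```python
import random

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Two random 1000-digit integers.
x = random.randrange(10 ** 999, 10 ** 1000)
y = random.randrange(10 ** 999, 10 ** 1000)

print(gcd(x, y))   # returns in a fraction of a second; trial division never would
```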

As a general rule, then, PCs in 2005 can perform 2^40 calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring 2^64 calculations, but no amount of traditional computing power will solve a problem requiring 2^128 operations (which is about what would be required to brute-force the encryption keys in 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computing might allow certain problems that require an exponential amount of calculations to become feasible. However, it has practical and theoretical challenges that may never be overcome, such as the mass production of qubits, the fundamental building block of quantum computing.

Examples of large numbers

  • googol = 10^100
  • centillion = 10^303 or 10^600, depending on number naming system
  • The largest known Smith number = (10^1031 − 1) × (10^4594 + 3 × 10^2297 + 1)^1476 × 10^3913210
  • The largest known Mersenne prime = 2^82,589,933 − 1 (as of December 21, 2018)
  • googolplex = 10^googol = 10^(10^100)
  • Skewes' numbers: the first is approximately 10^10^10^34, the second 10^10^10^964
  • Graham's number, larger than what can be represented even using power towers (tetration). However, it can be represented using Knuth's up-arrow notation.
  • googolplexian = 10^googolplex = 10^(10^(10^100))

Systematically creating ever-faster-increasing sequences

Given a strictly increasing integer sequence/function f_0(n) (n ≥ 1), we can produce a faster-growing sequence f_1(n) = f_0^n(n) (where the superscript n denotes the nth functional power). This can be repeated any number of times by letting f_(k+1)(n) = f_k^n(n), each sequence growing much faster than the one before it. Then we could define f_ω(n) = f_n(n), which grows much faster than any f_k for finite k (here ω is the first infinite ordinal number, representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals.

For example, starting with f_0(n) = n + 1 (a small computational sketch of the first few levels follows the list below):

  • f_1(n) = f_0^n(n) = n + n = 2n
  • f_2(n) = f_1^n(n) = 2^n · n > 2^n = 2 ↑ n for n ≥ 2 (using Knuth up-arrow notation)
  • f_3(n) = f_2^n(n) > (2 ↑)^n n ≥ 2 ↑↑ n for n ≥ 2.
  • f_(k+1)(n) > 2 ↑^k n for n ≥ 2, k < ω.
  • f_ω(n) = f_n(n) > 2 ↑^(n − 1) n > 2 ↑^(n − 2) (n + 3) − 3 = A(n, n) for n ≥ 2, where A is the Ackermann function (of which f_ω is a unary version).
  • f_(ω+1)(64) > f_ω^64(6) > Graham's number (= g_64 in the sequence defined by g_0 = 4, g_(k+1) = 3 ↑^(g_k) 3).
    • This follows by noting f_ω(n) > 2 ↑^(n − 1) n > 3 ↑^(n − 2) 3 + 2, and hence f_ω(g_k + 2) > g_(k+1) + 2.
  • f_ω(n) > 2 ↑^(n − 1) n = (2 → n → n − 1) = (2 → n → n − 1 → 1) (using Conway chained arrow notation)
  • f_(ω+1)(n) = f_ω^n(n) > (2 → n → n − 1 → 2) (because if g_k(n) = X → n → k then X → n → k + 1 = g_k^n(1))
  • f_(ω+k)(n) > (2 → n → n − 1 → k + 1) > (n → n → k)
  • f_(ω·2)(n) = f_(ω+n)(n) > (n → n → n) = (n → n → n → 1)
  • f_(ω·2+k)(n) > (n → n → n → k)
  • f_(ω·3)(n) > (n → n → n → n)
  • f_(ω·k)(n) > (n → n → ... → n → n) (chain of k + 1 n's)
  • f_(ω²)(n) = f_(ω·n)(n) > (n → n → ... → n → n) (chain of n + 1 n's)
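
The first few finite levels can be computed directly, but only for tiny arguments, since the values explode almost immediately. A Python sketch (illustrative only) of f_(k+1)(n) = f_k^n(n) starting from f_0(n) = n + 1:

```python
def f(k, n):
    """Finite levels of the fast-growing hierarchy:
    f_0(n) = n + 1, and f_k(n) applies f_(k-1) to n, n times."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):          # n-fold functional power of f_(k-1)
        result = f(k - 1, result)
    return result

print(f(1, 5))   # f_1(5) = 2*5 = 10
print(f(2, 5))   # f_2(5) = 5 * 2^5 = 160
print(f(3, 2))   # f_3(2) = 2048
# f(3, 3) already has more than 120 million digits, and f(4, n) is out of reach.
```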

Notations

Some notations for extremely large numbers include Knuth's up-arrow notation (and the equivalent hyper operators), Conway chained arrow notation, and Steinhaus–Moser notation.

These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever-faster-increasing functions can easily be constructed recursively by applying these functions with large integers as argument.

A function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal.

Standardized system of writing very large numbers

A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one.

To compare numbers in scientific notation, say 5 × 10^4 and 2 × 10^5, compare the exponents first: in this case 5 > 4, so 2 × 10^5 > 5 × 10^4. If the exponents are equal, the mantissa (or coefficient) should be compared, thus 5 × 10^4 > 2 × 10^4 because 5 > 2.
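
The rule is mechanical enough to state as code. A minimal Python sketch (the (coefficient, exponent) pair representation is just an assumption of this example):

```python
def less_than(a, b):
    """a, b are (coefficient, exponent) pairs with 1 <= coefficient < 10."""
    coeff_a, exp_a = a
    coeff_b, exp_b = b
    return (exp_a, coeff_a) < (exp_b, coeff_b)   # exponent first, then coefficient

print(less_than((5, 4), (2, 5)))   # True:  5 × 10^4 < 2 × 10^5
print(less_than((2, 4), (5, 4)))   # True:  2 × 10^4 < 5 × 10^4
```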

Tetration with base 10 gives the sequence 10 ↑↑ n = 10 → n → 2 = (10 ↑)^n 1, the power towers of numbers 10, where (10 ↑)^n denotes a functional power of the function f(n) = 10^n (the function also expressed by the suffix "-plex" as in googolplex, see the Googol family).

These are very round numbers, each representing an order of magnitude in a generalized sense. A crude way of specifying how large a number is, is to specify between which two numbers in this sequence it lies.

More accurately, numbers in between can be expressed in the form (10 ↑)^n x, i.e., with a power tower of 10s and a number x at the top, possibly in scientific notation; such a number lies between 10 ↑↑ n and 10 ↑↑ (n + 1) whenever 1 < x < 10. (See also extension of tetration to real heights.)

Thus googolplex = 10^(10^100) = (10 ↑)^2 100 = (10 ↑)^3 2.

Another example:

2 ↑↑↑ 4 = 2 ↑↑ 65,536 ≈ (10 ↑)^65,533 4.3 (between 10 ↑↑ 65,533 and 10 ↑↑ 65,534)

Thus the "order of magnitude" of a number (on a larger scale than usually meant), can be characterized by the number of times (n) one has to take the to get a number between 1 and 10. Thus, the number is between and . As explained, a more accurate description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and 1010, or the next, between 0 and 1.

Note that

(10 ↑)^n x = (10 ↑)^(n+1) log₁₀ x.

I.e., if a number x is too large for a representation (10 ↑)^n x we can make the power tower one higher, replacing x by log₁₀ x, or find x from the lower-tower representation of the log₁₀ of the whole number. If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10).
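
For towers small enough to fit in floating point, the identity can be checked numerically. A small Python sketch (purely illustrative):

```python
import math

def tower(n, x):
    """(10 ↑)^n x : apply t -> 10**t to x, n times."""
    for _ in range(n):
        x = 10 ** x
    return x

x = 2.4
print(tower(2, x))                    # (10 ↑)^2 2.4  ≈ 1.5e251
print(tower(3, math.log10(x)))        # (10 ↑)^3 log10(2.4), the same value up to rounding
```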

If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so we can use the double-arrow notation, e.g. 10 ↑↑ n with n given only approximately. If the value after the double arrow is a very large number itself, the above can recursively be applied to that value.

Similarly to the above, if the exponent of (10 ↑) is not exactly given, then giving a value at the right does not make sense, and instead of using the power notation of (10 ↑) we can simply add 1 to the exponent of 10 ↑↑, so that the number is written as 10 ↑↑ n with an approximate n.

If the exponent of 10 ↑↑ is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and we can, instead of using the power notation of 10 ↑↑, use the triple arrow operator, e.g. 10 ↑↑↑ n.

If the right-hand argument of the triple arrow operator is itself large, the above applies to it, giving e.g. a number of the form 10 ↑↑↑ (10 ↑↑ n). This can be done recursively, so we can have a power of the triple arrow operator.

We can proceed with operators with higher numbers of arrows, written 10 ↑^n b, where ↑^n denotes n arrows.

Compare this notation with the hyper operator and the Conway chained arrow notation:

a ↑^n b = ( a → b → n ) = hyper(a, n + 2, b)

An advantage of the first is that when considered as a function of b, there is a natural notation for powers of this function (just like when writing out the n arrows): (a ↑^n)^k b. For example:

( 10 ↑↑ )^3 b = ( 10 → ( 10 → ( 10 → b → 2 ) → 2 ) → 2 )

and only in special cases the long nested chain notation is reduced; for b = 1 we get:

( 10 ↑↑ )^3 1 = 10 ↑↑↑ 3 = ( 10 → 3 → 3 )

Since the b can also be very large, in general a number is written as a sequence of powers (10 ↑^n)^(k_n) with decreasing values of n (with exactly given integer exponents k_n), with at the end a number in ordinary scientific notation. Whenever a k_n is too large to be given exactly, the value of k_(n+1) is increased by 1 and everything to the right of (10 ↑^(n+1))^(k_(n+1)) is rewritten.

For describing numbers approximately, deviations from the decreasing order of values of n are not needed. Thus we have the somewhat counterintuitive result that a number x can be so large that, in a way, x and 10^x are "almost equal" (for arithmetic of large numbers see also below).

If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or in adjusting the value on which it acts; we can simply use a standard value at the right, say 10, and the expression reduces to 10 ↑^n 10 with an approximate n. For such numbers the advantage of using the upward arrow notation no longer applies, and we can also use the chain notation.

The above can be applied recursively for this n, so we get a notation like 10 ↑^(10 ↑^n 10) 10, with the inner expression in the superscript of the first arrow, etc., or we have a nested chain notation, e.g.:

(10 → 10 → (10 → 10 → n) ) = 10 ↑^(10 ↑^n 10) 10

If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function f(n) = 10 ↑^n 10 = (10 → 10 → n), these levels become functional powers of f, allowing us to write a number in the form f^m(n) where m is given exactly and n is an integer which may or may not be given exactly. If n is large we can use any of the above for expressing it. The "roundest" of these numbers are those of the form f^m(1) = (10 → 10 → m → 2).

Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus Graham's number lies between f^64(1) = (10 → 10 → 64 → 2) and f^65(1) = (10 → 10 → 65 → 2), as reflected in the list of examples below.

If m in f^m(n) is too large to give exactly, we can use a fixed n, e.g. n = 1, and apply the above recursively to m, i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f. Introducing a function g(n) = f^n(1), these levels become functional powers of g, allowing us to write a number in the form g^m(n) where m is given exactly and n is an integer which may or may not be given exactly. We have (10 → 10 → m → 3) = g^m(1). If n is large we can use any of the above for expressing it. Similarly we can introduce a function h, etc. If we need many such functions we can better number them instead of using a new letter every time, e.g. as a subscript, so we get numbers of the form f_k^m(n) where k and m are given exactly and n is an integer which may or may not be given exactly. Using k = 1 for the f above, k = 2 for g, etc., we have (10 → 10 → n → k) = f_k(n). If n is large we can use any of the above for expressing it. Thus we get a nesting of forms f_k^m where going inward the k decreases, and with as inner argument a sequence of powers (10 ↑^n)^(k_n) with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.

When k is too large to be given exactly, the number concerned can be expressed as (10 → 10 → 10 → n) with an approximate n. Note that the process of going from the sequence 10^n = (10 → n) to the sequence f(n) = (10 → 10 → n) is very similar to going from the latter to the sequence (10 → 10 → 10 → n): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function, a number can be described using nested functions, ordered lexicographically with the number of added 10s the most significant, but with decreasing order for that number and for k; as inner argument we have a sequence of powers with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.

For a number too large to write down in the Conway chained arrow notation we can describe how large it is by the length of that chain, for example only using elements 10 in the chain; in other words, we specify its position in the sequence 10, 10 → 10, 10 → 10 → 10, ... If even the position in the sequence is a large number we can apply the same techniques again for that.
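
For the very smallest arguments the chain notation used above can even be evaluated mechanically from its defining rules. A Python sketch of the reduction rules (illustrative only; it terminates only when the resulting value is small, and something like 10 → 10 → 10 is hopelessly out of reach):

```python
def conway(chain):
    """Evaluate a Conway chained-arrow expression given as a list of positive integers."""
    chain = list(chain)
    if len(chain) == 1:
        return chain[0]
    if len(chain) == 2:
        return chain[0] ** chain[1]            # p → q = p^q
    if chain[-1] == 1:                         # X → 1 = X
        return conway(chain[:-1])
    if chain[-2] == 1:                         # X → 1 → q = X
        return conway(chain[:-2])
    *x, p, q = chain                           # X → p → q = X → (X → (p-1) → q) → (q-1)
    return conway(x + [conway(x + [p - 1, q]), q - 1])

print(conway([2, 3, 2]))     # 2 ↑↑ 3 = 16
print(conway([3, 3, 2]))     # 3 ↑↑ 3 = 7,625,597,484,987
print(conway([2, 2, 2, 2]))  # 2 → 2 → 2 → 2 = 4
```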

Examples of numbers in numerical order

Numbers expressible in decimal notation:

  • 2^2 = 4
  • 2^2^2 = 2 ↑↑ 3 = 16
  • 3^3 = 27
  • 4^4 = 256
  • 5^5 = 3,125
  • 6^6 = 46,656
  • 2^2^2^2 = 2 ↑↑ 4 = 2 ↑↑↑ 3 = 65,536
  • 7^7 = 823,543
  • 10^6 = 1,000,000 = 1 million
  • 8^8 = 16,777,216
  • 9^9 = 387,420,489
  • 10^9 = 1,000,000,000 = 1 billion
  • 10^10 = 10,000,000,000
  • 10^12 = 1,000,000,000,000 = 1 trillion
  • 3^3^3 = 3 ↑↑ 3 = 7,625,597,484,987 ≈ 7.63 × 10^12
  • 10^15 = 1,000,000,000,000,000 = 1 million billion = 1 quadrillion

Numbers expressible in scientific notation:

  • Approximate number of atoms in the observable universe = 10^80 = 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
  • googol = 10^100 = 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
  • 4^4^4 = 4 ↑↑ 3 ≈ 1.34 × 10^154 ≈ (10 ↑)^2 2.2
  • Approximate number of Planck volumes composing the volume of the observable universe = 8.5 × 10^184
  • 5^5^5 = 5 ↑↑ 3 ≈ 1.91 × 10^2184 ≈ (10 ↑)^2 3.3
  • 6^6^6 = 6 ↑↑ 3 ≈ 2.66 × 10^36,305 ≈ (10 ↑)^2 4.6
  • 7^7^7 = 7 ↑↑ 3 ≈ 3.76 × 10^695,974 ≈ (10 ↑)^2 5.8
  • 8^8^8 = 8 ↑↑ 3 ≈ 6.01 × 10^15,151,335 ≈ (10 ↑)^2 7.2
  • 2^77,232,917 − 1 ≈ 4.67 × 10^23,249,424, the 50th and as of January 2018 the largest known Mersenne prime.
  • 9^9^9 = 9 ↑↑ 3 ≈ 4.28 × 10^369,693,099 ≈ (10 ↑)^2 8.6
  • 10^10^10 = 10 ↑↑ 3 = 10^10,000,000,000 = (10 ↑)^3 1

Numbers expressible in (10 ↑)^n k notation:

  • googolplex = 10^(10^100) = (10 ↑)^2 100 = (10 ↑)^3 2
  • 10 ↑↑ 5 = (10 ↑)^5 1
  • 3 ↑↑ 6 ≈ (10 ↑)^5 1.10
  • 2 ↑↑ 8 ≈ (10 ↑)^5 4.3
  • 10 ↑↑ 6 = (10 ↑)^6 1
  • 10 ↑↑↑ 2 = 10 ↑↑ 10 = (10 ↑)^10 1
  • 2 ↑↑↑↑ 3 = 2 ↑↑↑ 4 = 2 ↑↑ 65,536 ≈ (10 ↑)^65,533 4.3 is between 10 ↑↑ 65,533 and 10 ↑↑ 65,534

Bigger numbers:

  • 3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3) ≈ 3 ↑↑ 7.6 × 10^12 ≈ 10 ↑↑ 7.6 × 10^12 is between (10 ↑↑)^2 2 and (10 ↑↑)^2 3
  • 10 ↑↑↑ 3 = (10 ↑↑)^3 1 = ( 10 → 3 → 3 )
  • 10 ↑↑↑ 4 = (10 ↑↑)^4 1 = ( 10 → 4 → 3 )
  • 10 ↑↑↑ 5 = (10 ↑↑)^5 1 = ( 10 → 5 → 3 )
  • 10 ↑↑↑ 6 = (10 ↑↑)^6 1 = ( 10 → 6 → 3 )
  • 10 ↑↑↑ 7 = (10 ↑↑)^7 1 = ( 10 → 7 → 3 )
  • 10 ↑↑↑ 8 = (10 ↑↑)^8 1 = ( 10 → 8 → 3 )
  • 10 ↑↑↑ 9 = (10 ↑↑)^9 1 = ( 10 → 9 → 3 )
  • 10 ↑↑↑↑ 2 = 10 ↑↑↑ 10 = ( 10 → 2 → 4 ) = ( 10 → 10 → 3 )
  • The first term in the definition of Graham's number, g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) ≈ 3 ↑↑↑ (10 ↑↑ 7.6 × 10^12) ≈ 10 ↑↑↑ (10 ↑↑ 7.6 × 10^12) is between (10 ↑↑↑)^2 2 and (10 ↑↑↑)^2 3 (See Graham's number#Magnitude)
  • 10 ↑↑↑↑ 3 = (10 ↑↑↑)^3 1 = ( 10 → 3 → 4 )
  • 4 ↑↑↑↑ 4 = ( 4 → 4 → 4 )
  • 10 ↑↑↑↑ 4 = (10 ↑↑↑)^4 1 = ( 10 → 4 → 4 )
  • 10 ↑↑↑↑ 5 = ( 10 → 5 → 4 )
  • 10 ↑↑↑↑ 6 = ( 10 → 6 → 4 )
  • 10 ↑↑↑↑ 7 = ( 10 → 7 → 4 )
  • 10 ↑↑↑↑ 8 = ( 10 → 8 → 4 )
  • 10 ↑↑↑↑ 9 = ( 10 → 9 → 4 )
  • 10 ↑↑↑↑↑ 2 = 10 ↑↑↑↑ 10 = ( 10 → 2 → 5 ) = ( 10 → 10 → 4 )
  • ( 2 → 3 → 2 → 2 ) = ( 2 → 3 → 8 )
  • ( 3 → 2 → 2 → 2 ) = ( 3 → 2 → 9 ) = ( 3 → 3 → 8 )
  • ( 10 → 10 → 10 ) = ( 10 → 2 → 11 )
  • ( 10 → 2 → 2 → 2 ) = ( 10 → 2 → 100 )
  • ( 10 → 10 → 2 → 2 ) = ( 10 → 10 → 10^10 ) = 10 ↑^(10^10) 10
  • The second term in the definition of Graham's number, g2 = 3 ↑^(g1) 3 > 10 ↑^(g1 − 1) 10.
  • ( 10 → 10 → 3 → 2 ) = ( 10 → 10 → ( 10 → 10 → 10^10 ) )
  • g3 = (3 → 3 → g2) > (10 → 10 → g2 – 1) > (10 → 10 → 3 → 2)
  • g4 = (3 → 3 → g3) > (10 → 10 → g3 – 1) > (10 → 10 → 4 → 2)
  • ...
  • g9 = (3 → 3 → g8) is between (10 → 10 → 9 → 2) and (10 → 10 → 10 → 2)
  • ( 10 → 10 → 10 → 2 )
  • g10 = (3 → 3 → g9) is between (10 → 10 → 10 → 2) and (10 → 10 → 11 → 2)
  • ...
  • g63 = (3 → 3 → g62) is between (10 → 10 → 63 → 2) and (10 → 10 → 64 → 2)
  • ( 10 → 10 → 64 → 2 )
  • Graham's number, g64[11]
  • ( 10 → 10 → 65 → 2 )
  • ( 10 → 10 → 10 → 3 )
  • ( 10 → 10 → 10 → 4 )
  • ( 10 → 10 → 10 → 10 )
  • ( 10 → 10 → 10 → 10 → 10 )
  • ( 10 → 10 → 10 → 10 → 10 → 10 )
  • ( 10 → 10 → 10 → 10 → 10 → 10 → 10 → ... → 10 → 10 → 10 → 10 → 10 → 10 → 10 → 10 ) where there are ( 10 → 10 → 10 ) "10"s

Comparison of base values

The following illustrates the effect of a base different from 10, base 100. It also illustrates representations of numbers and the arithmetic.

, with base 10 the exponent is doubled.

, ditto.

, the highest exponent is very little more than doubled (increased by log₁₀ 2).

  • (thus if n is large it seems fair to say that is "approximately equal to" )
  • (compare ; thus if n is large it seems fair to say that is "approximately equal to" )
  • (compare )
  • (compare )
  • (compare ; if n is large this is "approximately" equal)

Accuracy

Note that for a number 10^n, one unit change in n changes the result by a factor 10. In a number like 10^(6.2 × 10^3), with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor 10^50 too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable).

Accuracy for very large numbers

In the case of an approximation of an extremely large number, the relative error may be large, yet there may still be a sense in which we want to consider the numbers as "close in magnitude". For example, consider

10^10 and 10^9.

The relative error is

1 − 10^9 / 10^10 = 1 − 0.1 = 90%,

a large relative error. However, we can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%.

The point is that exponential functions magnify relative errors greatly – if a and b have a small relative error, then for

10^a and 10^b

the relative error is larger, and

10^10^a and 10^10^b

will have an even larger relative error. The question then becomes: on which level of iterated logarithms do we wish to compare two numbers? There is a sense in which we may want to consider

10^10^10 and 10^10^9

to be "close in magnitude". The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small:

log₁₀(log₁₀(10^10^10)) = 10 and log₁₀(log₁₀(10^10^9)) = 9.
Such comparisons of iterated logarithms are common, e.g., in analytic number theory.
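
These comparisons are easy to reproduce once the numbers are handled through their logarithms rather than directly. A small Python sketch using the values from the discussion above (illustrative only):

```python
def rel_err(a, b):
    """Relative error |a - b| / max(a, b)."""
    return abs(a - b) / max(a, b)

# 10^10^10 and 10^10^9 cannot be stored directly, so work with their logarithms.
log_a, log_b = 10.0 ** 10, 10.0 ** 9      # first iterated logarithms (base 10)
loglog_a, loglog_b = 10.0, 9.0            # second iterated logarithms

print(rel_err(log_a, log_b))        # 0.9  : still a 90% relative error
print(rel_err(loglog_a, loglog_b))  # 0.1  : only 10% at the second level
```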

Approximate arithmetic for very large numbers

There are some general rules relating to the usual arithmetic operations performed on very large numbers:

  • The sum and the product of two very large numbers are both "approximately" equal to the larger one.

Hence:

  • A very large number raised to a very large power is "approximately" equal to the larger of the following two values: the first value and 10 to the power the second (see e.g. the computation of mega).
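
Both rules are easy to check at the level of logarithms, which is how such numbers are handled in practice anyway. A short Python sketch (illustrative only; x = 10^100 and y = 10^90 are arbitrary choices):

```python
import math

log_x, log_y = 100.0, 90.0                    # log10 of x = 10^100 and y = 10^90

# Sum and product: the sum is indistinguishable from the larger term, and the
# product only adds the exponents (190 is still tiny next to 10^90).
print(math.log10(10.0 ** 100 + 10.0 ** 90))   # 100.00000000004343
print(log_x + log_y)                          # 190.0

# Power: log10(x^y) = y * log10(x) = 10^90 * 100 = 10^92, so x^y = 10^(10^92);
# in the loose sense used here this is governed by 10^y = 10^(10^90), not by x.
print(math.log10(10.0 ** 90 * 100.0))         # ≈ 92 = log10(log10(x^y))
```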

Large numbers in some noncomputable sequences

The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ(n) for n = 1, 2, 3, 4 are 1, 4, 6, 13 (sequence A028444 in the OEIS). Σ(5) is not known but is definitely ≥ 4098. Σ(6) is at least 3.5 × 10^18267.

Infinite numbers

Although all the numbers discussed above are very large, they are all still decidedly finite. Certain fields of mathematics define infinite and transfinite numbers. For example, aleph-null (ℵ₀) is the cardinality of the infinite set of natural numbers, and aleph-one (ℵ₁) is the next larger cardinal number. 2^ℵ₀ is the cardinality of the real numbers. The proposition that 2^ℵ₀ = ℵ₁ is known as the continuum hypothesis.

See also

References

  1. ^ Bianconi, Eva; Piovesan, Allison; Facchin, Federica; Beraudi, Alina; Casadei, Raffaella; Frabetti, Flavia; Vitale, Lorenza; Pelleri, Maria Chiara; Tassani, Simone (Nov–Dec 2013). "An estimation of the number of cells in the human body". Annals of Human Biology. 40 (6): 463–471. doi:10.3109/03014460.2013.807878. ISSN 1464-5033. PMID 23829164.
  2. ^ Shannon, Claude (March 1950). "XXII. Programming a Computer for Playing Chess" (PDF). Philosophical Magazine. Series 7. 41 (314).
  3. ^ Desrosières, Alain; Naish, Camille, Translator (September 15, 2002). The Politics of Large Numbers: A History of Statistical Reasoning (Paperback). Cambridge, Massachusetts: Harvard University Press. ISBN 9780674009691.
  4. ^ "A Billion Here, A Billion There...", The Dirksen Center. (archived from the original on 2004-08-16)
  5. ^ Atoms in the Universe. Universe Today. 30-07-2009. Retrieved 02-03-13.
  6. ^ Information Loss in Black Holes and/or Conscious Beings?, Don N. Page, Heat Kernel Techniques and Quantum Gravity (1995), S. A. Fulling (ed), p. 461. Discourses in Mathematics and its Applications, No. 4, Texas A&M University Department of Mathematics. arXiv:hep-th/9411193. ISBN 0-9630728-3-8.
  7. ^ How to Get A Googolplex
  8. ^ Carl Sagan takes questions more from his 'Wonder and Skepticism' CSICOP 1994 keynote, Skeptical Inquirer Archived December 21, 2016, at the Wayback Machine
  9. ^ "History of Information Storage". August 30, 2012. Retrieved October 9, 2014.
  10. ^ Lloyd, Seth (2002). "Computational capacity of the universe". Phys. Rev. Lett. 88 (23): 237901. arXiv:quant-ph/0110141. Bibcode:2002PhRvL..88w7901L. doi:10.1103/PhysRevLett.88.237901. PMID 12059399.
  11. ^ Regarding the comparison with the previous value: starting the 64 steps with 1 instead of 4 more than compensates for replacing the numbers 3 by 10.
1,000,000

1,000,000 (one million), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one. It is commonly abbreviated as m (not to be confused with the metric prefix for 1×10^−3) or M; further MM ("thousand thousands", from Latin "Mille"; not to be confused with the Roman numeral MM = 2,000), mm, or mn in financial contexts. In scientific notation, it is written as 1×10^6 or 10^6. Physical quantities can also be expressed using the SI prefix mega (M), when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts.

The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems.

The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked the million-dollar question".

Billion

A billion is a number with two distinct definitions:

1,000,000,000, i.e. one thousand million, or 10^9 (ten to the ninth power), as defined on the short scale. This is now the meaning in both British and American English.

1,000,000,000,000, i.e. one million million, or 10^12 (ten to the twelfth power), as defined on the long scale. This is one thousand times larger than the short scale billion, and equivalent to the short scale trillion. This is the historic definition of a billion in British English. American English has always used the short scale definition in living memory, but British English once employed both versions. Historically, the United Kingdom used the long scale billion, but since 1974 official UK statistics have used the short scale. Since the 1950s, the short scale has been increasingly used in technical writing and journalism, although the long scale definition still enjoys some limited usage. Other countries use the word billion (or words cognate to it) to denote either the long scale or short scale billion. For details, see Long and short scales – Current usage.

Another word for one thousand million is milliard, but this is used much less often in English than billion. Most other European languages — including Bulgarian, Croatian, Czech, Danish, Dutch, Finnish, French, Georgian, German, Hungarian, Italian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish and Swedish — use milliard (or a related word) for the short scale billion, and billion (or a related word) for the long scale billion. Thus for these languages billion is a thousand times larger than the modern English billion. However, in Russian, milliard (миллиард) is used for the short scale billion, and trillion (триллион) is used for the long scale billion.

Bowers's operators

Bowers's operators were created by Jonathan Bowers to help represent very large numbers; the notation was first published to the web in 2002.

Dirac large numbers hypothesis

The Dirac large numbers hypothesis (LNH) is an observation made by Paul Dirac in 1937 relating ratios of size scales in the Universe to those of force scales. The ratios constitute very large, dimensionless numbers: some 40 orders of magnitude in the present cosmological epoch. According to Dirac's hypothesis, the apparent similarity of these ratios might not be a mere coincidence but instead could imply a cosmology with unusual features.

Indefinite and fictitious numbers

Many languages have words expressing indefinite and fictitious numbers—inexact terms of indefinite size, used for comic effect, for exaggeration, as placeholder names, or when precision is unnecessary or undesirable. One technical term for such words is "non-numerical vague quantifier". Such words designed to indicate large quantities can be called "indefinite hyperbolic numerals".

Japanese numerals

The system of Japanese numerals is the system of number names used in the Japanese language. The Japanese numerals in writing are entirely based on the Chinese numerals, and the grouping of large numbers follows the Chinese tradition of grouping by 10,000. Two sets of pronunciations for the numerals exist in Japanese: one is based on Sino-Japanese (on'yomi) readings of the Chinese characters and the other is based on the Japanese yamato kotoba (native words, kun'yomi readings).

Law of large numbers

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the law only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).

Names of large numbers

This article lists and discusses the usage and derivation of names of large numbers, together with their possible extensions.

The following table lists those names of large numbers that are found in many English dictionaries and thus have a claim to being "real words." The "Traditional British" values shown are unused in American English and are obsolete in British English, but their other-language variants are dominant in many non-English-speaking areas, including continental Europe and Spanish-speaking countries in Latin America; see Long and short scales.

Indian English does not use millions, but has its own system of large numbers including lakhs and crores.

English also has many words, such as "zillion", used informally to mean large but unspecified amounts; see indefinite and fictitious numbers.

Order of magnitude

An order of magnitude is an approximate measure of the number of digits that a number has in the commonly used base-ten number system. It is equal to the floor of the base-10 logarithm. For example, the order of magnitude of 1500 is 3, because 1500 = 1.5 × 10^3.

Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten). Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).

Probability theory

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.

Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion.

Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.

As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.

Roman numerals

Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet. Modern usage employs seven symbols, each with a fixed integer value: I (1), V (5), X (10), L (50), C (100), D (500) and M (1,000).

The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced in most contexts by the more convenient Arabic numerals; however, this process was gradual, and the use of Roman numerals persists in some minor applications to this day.

One place they are often seen is on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as:

I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII

The notations IV and IX can be read as "one before five" (4) and "one before ten" (9). On most Roman numeral clock faces, however, 4 is traditionally written IIII.

Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of movies and television programs. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For this century, MM indicates 2000. Thus the current year is MMXIX.
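
The additive and subtractive composition described above fits in a few lines of code. A minimal Python sketch (an illustration, not a complete validator):

```python
def to_roman(n):
    """Convert an integer 1..3999 to a Roman numeral using subtractive notation."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

print(to_roman(1912))   # MCMXII
print(to_roman(2000))   # MM
print(to_roman(2019))   # MMXIX
```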

The Sand Reckoner

The Sand Reckoner (Greek: Ψαμμίτης, Psammites) is a work by Archimedes in which he set out to determine an upper bound for the number of grains of sand that fit into the Universe. In order to do this, he had to estimate the size of the universe according to the contemporary model, and invent a way to talk about extremely large numbers. The work, also known in Latin as Archimedis Syracusani Arenarius & Dimensio Circuli, which is about 8 pages long in translation, is addressed to the Syracusan king Gelo II (son of Hiero II), and is probably the most accessible work of Archimedes; in some sense, it is the first research-expository paper.
