Large numbers are numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts.
Very large numbers often occur in fields such as mathematics, cosmology, cryptography, and statistical mechanics. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.
Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10⁹, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10⁻⁹ means one billionth, or 0.000 000 001. Writing 10⁹ instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.
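As a concrete illustration, most programming languages can emit and parse scientific notation directly; the following Python sketch (illustrative values, not from the original text) lets the exponent do the zero-counting:

```python
import math

one_billion = 1_000_000_000   # 1 followed by nine zeros
one_billionth = 1e-9

# Python's "e" format writes the power of ten explicitly,
# so nobody has to count a long series of zeros.
print(f"{one_billion:.1e}")    # 1.0e+09
print(f"{one_billionth:.1e}")  # 1.0e-09

# The exponent can also be recovered directly from the logarithm.
print(math.log10(one_billion))  # 9.0
```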
Examples of large numbers describing everyday real-world objects include:
Large numbers have been central to “statistics-driven thinking”, which has become “ubiquitous in modern society.” Beginning with 17th-century probability theory, statistics have evolved and become integral to both governmental knowledge and power. There is a complex "reciprocity between modern governments and the mathematical artifacts that both dictate the duties of the state and measure its successes". These tools include economics, mathematical statistics, medical statistics, probability, psychology, sociology, and surveys. These have led to applied econometrics in modern times.
Illinois Senator Everett Dirksen is noted as saying, "A billion here, a billion there, pretty soon, you're talking real money." Although there is no direct record of the remark, he is believed to have made it during an appearance on The Tonight Show Starring Johnny Carson. (See wikiquotes of Everett Dirksen.)
Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model suggests that the universe is 13.8 billion years (4.355 × 10¹⁷ seconds) old, and that the observable universe is 93 billion light years across (8.8 × 10²⁶ metres), and contains about 5 × 10²² stars, organized into around 125 billion (1.25 × 10¹¹) galaxies, according to Hubble Space Telescope observations. There are about 10⁸⁰ atoms in the observable universe, by rough estimation.
According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is 10^10^10^10^10^1.1 years,
which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10⁻⁶ Planck masses. This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where the universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.
Combinatorial processes rapidly generate even larger numbers. The factorial function, which defines the number of permutations on a set of fixed objects, grows very rapidly with the number of objects. Stirling's formula gives a precise asymptotic expression for this rate of growth.
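A small Python sketch (with n = 20 chosen arbitrarily for illustration) shows both the explosive growth of the factorial and how closely Stirling's formula √(2πn)(n/e)ⁿ tracks it:

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation: n! ≈ sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

n = 20
exact = math.factorial(n)      # 2432902008176640000
approx = stirling(n)
rel_err = abs(approx - exact) / exact
# The relative error shrinks roughly like 1/(12n); for n = 20 it is ~0.4%.
print(exact, approx, rel_err)
```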
Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions.
To help viewers of Cosmos distinguish between "millions" and "billions", astronomer Carl Sagan stressed the "b" in his book and broadcasts. Sagan never did, however, say "billions and billions". The public's association of the phrase with Sagan came from a Tonight Show sketch: parodying Sagan's affect, Johnny Carson quipped "billions and billions". The phrase has, however, now become a humorous fictitious number, the Sagan. Cf. Sagan Unit.
Between 1980 and 2000, personal computer hard disk sizes increased from about 10 megabytes (10⁷ bytes) to over 100 gigabytes (10¹¹ bytes). A 100-gigabyte disk could store the favorite color of all of Earth's seven billion inhabitants without using data compression (storing 14 bytes times 7 billion inhabitants would equal 98 GB used). But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about 2³²⁰ such passwords, which is about 2 × 10⁹⁶. In his paper Computational capacity of the universe, Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about 10⁹⁰ bits, less than one millionth of the size such a dictionary would require. However, storing information on hard disk and computing it are very different functions. On the one hand storage currently has limitations as stated, but computational speed is a different matter. It is quite conceivable that the stated limitations regarding storage have no bearing on the limitations of actual computational capacity, especially if the current research into quantum computers results in a "quantum leap" (but see holographic principle).
Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output 1 billion strings per second, it would take one billionth of 2 × 10⁹⁶ seconds, or 2 × 10⁸⁷ seconds to complete its task, which is about 6 × 10⁷⁹ years. By contrast, the universe is estimated to be 13.8 billion (1.38 × 10¹⁰) years old. Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than 10¹²⁰ operations since the Big Bang. This is trillions of times more computation than is required for displaying all 40-character passwords, but computing all 50-character passwords would outstrip the estimated computational potential of the entire universe.
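Python's arbitrary-precision integers make it easy to reproduce this back-of-envelope arithmetic exactly (a sketch of the article's own numbers, not new data):

```python
import math

passwords = 256 ** 40                     # one byte per character, 40 characters
seconds = passwords // 10**9              # at 10^9 strings per second
years = seconds // (3600 * 24 * 365)

print(f"passwords ≈ 10^{math.log10(passwords):.1f}")  # ≈ 10^96.3
print(f"years     ≈ 10^{math.log10(years):.1f}")      # ≈ 10^79.8
```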
Problems like this grow exponentially in the number of computations they require, and they are one reason why exponentially difficult problems are called "intractable" in computer science: for even small numbers like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between programs that do and do not require exponentially increasing resources to execute.
Such limits are an advantage in cryptography, since any cipher-breaking technique that requires more than, say, the 10¹²⁰ operations mentioned before will never be feasible. Such ciphers must be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor between two 1000-digit numbers is to compute all their factors by trial division. This will take up to 2 × 10⁵⁰⁰ division operations, far too large to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD for even huge numbers such as these.
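The contrast is easy to demonstrate: the Euclidean algorithm needs only a few thousand remainder operations even for 1000-digit inputs. A minimal Python sketch (random inputs, purely illustrative):

```python
import math
import random

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Two random 1000-digit numbers: the GCD appears almost instantly,
# with no factoring at all.
a = random.randrange(10**999, 10**1000)
b = random.randrange(10**999, 10**1000)
assert gcd(a, b) == math.gcd(a, b)
print(gcd(a, b))
```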
As a general rule, then, PCs in 2005 can perform 2⁴⁰ calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring 2⁶⁴ calculations, but no amount of traditional computing power will solve a problem requiring 2¹²⁸ operations (which is about what would be required to brute-force the encryption keys in 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computing might allow certain problems that require an exponential amount of calculations to become feasible. However, it has practical and theoretical challenges that may never be overcome, such as the mass production of qubits, the fundamental building block of quantum computing.
Given a strictly increasing integer sequence/function f_0(n) (n ≥ 1), we can produce a faster-growing sequence f_1(n) = f_0^n(n) (where the superscript n denotes the nth functional power). This can be repeated any number of times by letting f_{k+1}(n) = f_k^n(n), each sequence growing much faster than the one before it. Then we could define f_ω(n) = f_n(n), which grows much faster than any f_k for finite k (here ω is the first infinite ordinal number, representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals.
For example, starting with f_0(n) = n + 1, we get f_1(n) = f_0^n(n) = n + n = 2n, then f_2(n) = f_1^n(n) = 2^n · n, and already f_3(n) = f_2^n(n) exceeds the power tower 2↑↑n for n ≥ 2.
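The first few levels can be computed directly for small arguments; a Python sketch (the helper names iterate and lift are ours, for illustration):

```python
def iterate(f, n, x):
    """The nth functional power: apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

def f0(n):
    return n + 1

def lift(f):
    """From f_k build f_{k+1}(n) = f_k^n(n)."""
    return lambda n: iterate(f, n, n)

f1 = lift(f0)   # f1(n) = 2n
f2 = lift(f1)   # f2(n) = 2^n * n
print(f1(5), f2(5))   # 10 160
```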
Some notations for extremely large numbers include Knuth's up-arrow notation (and the equivalent hyperoperators), Conway chained arrow notation, and Steinhaus–Moser notation.
These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever-faster-increasing functions can easily be constructed recursively by applying these functions with large integers as argument.
A function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal.
A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one.
To compare numbers in scientific notation, say 5 × 10⁴ and 2 × 10⁵, compare the exponents first, in this case 5 > 4, so 2 × 10⁵ > 5 × 10⁴. If the exponents are equal, the mantissa (or coefficient) should be compared, thus 5 × 10⁴ > 2 × 10⁴ because 5 > 2.
Tetration with base 10 gives the sequence 10↑↑n = (10↑)^n 1, the power towers of numbers 10, where (10↑)^n denotes a functional power of the function f(n) = 10^n (the function also expressed by the suffix "-plex" as in googolplex, see the Googol family).
These are very round numbers, each representing an order of magnitude in a generalized sense. A crude way of specifying how large a number is, is to specify between which two numbers in this sequence it lies.
More accurately, numbers in between can be expressed in the form (10↑)^n a, i.e., with a power tower of 10s and a number at the top, possibly in scientific notation, e.g. 10^10^10^10^10^4.829 = (10↑)^5 4.829, a number between 10↑↑5 and 10↑↑6 (note that 10↑↑n < (10↑)^n a < 10↑↑(n + 1) if 1 < a < 10). (See also extension of tetration to real heights.)
Thus googolplex is 10^10^100 = (10↑)^2 100 = (10↑)^3 2.
Thus the "order of magnitude" of a number (on a larger scale than usually meant), can be characterized by the number of times (n) one has to take the to get a number between 1 and 10. Thus, the number is between and . As explained, a more accurate description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and 1010, or the next, between 0 and 1.
I.e., if a number x is too large for a representation (10↑)^n x, we can make the power tower one higher, replacing x by log₁₀x, or find x from the lower-tower representation of the log₁₀ of the whole number. If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10).
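This generalized order of magnitude is easy to compute for numbers that still fit in machine floats; a Python sketch (the helper name is ours):

```python
import math

def tower_level(x: float) -> int:
    """Count how many times log10 must be taken before x lands in [1, 10)."""
    n = 0
    while x >= 10:
        x = math.log10(x)
        n += 1
    return n

# 1500 is between 10↑↑1 = 10 and 10↑↑2 = 10^10,
# and a googol is between 10↑↑2 and 10↑↑3.
print(tower_level(5), tower_level(1500), tower_level(1e100))  # 0 1 2
```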
If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so we can use the double-arrow notation, e.g. 10↑↑(7.21 × 10⁸). If the value after the double arrow is a very large number itself, the above can recursively be applied to that value.
Similarly to the above, if the exponent of (10↑) is not exactly given then giving a value at the right does not make sense, and we can, instead of using the power notation of (10↑), add 1 to the exponent of 10↑↑, so we get e.g. 10↑↑(7.21 × 10⁸).
If the exponent of 10↑↑ is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and we can, instead of using the power notation of 10↑↑, use the triple arrow operator, e.g. 10↑↑↑(7.21 × 10⁸).
If the right-hand argument of the triple arrow operator is large the above applies to it, so we have e.g. 10↑↑↑(10↑↑(7.21 × 10⁸)) (between 10↑↑↑↑2 and 10↑↑↑↑3). This can be done recursively, so we can have a power of the triple arrow operator.
We can proceed with operators with higher numbers of arrows, written a↑ⁿb, where n denotes the number of arrows. Compare this notation with the hyper operator and the Conway chained arrow notation: a↑ⁿb = (a → b → n).
An advantage of the first is that when considered as function of b, there is a natural notation for powers of this function (just like when writing out the n arrows): (a↑ⁿ)^k b. For example (10↑²)³b = 10↑²(10↑²(10↑²b)), and only in special cases the long nested chain notation is reduced; for b = 1 we get 10↑³3 = (10↑²)³1.
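The up-arrow operators are straightforward to define recursively, though they overflow any computer for all but the smallest arguments; a Python sketch:

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b: one arrow is exponentiation,
    and a ↑^(n+1) b = a ↑^n (a ↑^(n+1) (b - 1)), with a ↑^n 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))   # 27 = 3^3
print(arrow(3, 2, 3))   # 7625597484987 = 3^(3^3)
print(arrow(2, 3, 3))   # 65536 = 2↑↑4
```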
Since the b can also be very large, in general we write a number with a sequence of powers (10↑ⁿ)^{k_n} with decreasing values of n (with exactly given integer exponents k_n), with at the end a number in ordinary scientific notation. Whenever a k_n is too large to be given exactly, the value of k_{n+1} is increased by 1 and everything to the right of (10↑ⁿ⁺¹)^{k_{n+1}} is rewritten.
For describing numbers approximately, deviations from the decreasing order of values of n are not needed. For example, 10↑(10↑↑n) = 10↑↑(n + 1), so if n is only known approximately, say n = 7.21 × 10⁸, applying one more 10↑ does not change the description at all. Thus we have the somewhat counterintuitive result that a number x can be so large that, in a way, x and 10^x are "almost equal" (for arithmetic of large numbers see also below).
If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or to adjust the value on which it acts. We can simply use a standard value at the right, say 10, and the expression reduces to 10↑ⁿ10 = (10 → 10 → n) with an approximate n. For such numbers the advantage of using the upward arrow notation no longer applies, and we can also use the chain notation.
The above can be applied recursively for this n, so we get e.g. 10↑^(10↑ⁿ10)10, with the upward-arrow notation appearing in the superscript of the first arrow, etc., or we have a nested chain notation, e.g.: (10 → 10 → (10 → 10 → 3 × 10⁵)).
If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function f(n) = 10↑ⁿ10 = (10 → 10 → n), these levels become functional powers of f, allowing us to write a number in the form f^m(n) where m is given exactly and n is an integer which may or may not be given exactly (for the example: f²(3 × 10⁵) = (10 → 10 → (10 → 10 → 3 × 10⁵))). If n is large we can use any of the above for expressing it. The "roundest" of these numbers are those of the form f^m(1) = (10 → 10 → m → 2). For example, f³(1) = 10↑^(10↑^(10¹⁰)10)10 = (10 → 10 → 3 → 2).
Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus G = h⁶⁴(4) where h(n) = 3↑ⁿ3, so that G < 3 → 3 → 65 → 2, but also G < f⁶⁴(4) < f⁶⁵(1) = (10 → 10 → 65 → 2).
If m in f^m(n) is too large to give exactly, we can use a fixed n, e.g. n = 1, and apply the above recursively to m, i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f. Introducing a function g(n) = f^n(1), these levels become functional powers of g, allowing us to write a number in the form g^m(n) where m is given exactly and n is an integer which may or may not be given exactly. We have (10 → 10 → m → 3) = g^m(1). If n is large we can use any of the above for expressing it. Similarly we can introduce a function h, etc. If we need many such functions we can better number them instead of using a new letter every time, e.g. as a subscript, so we get numbers of the form f_k^m(n) where k and m are given exactly and n is an integer which may or may not be given exactly. Using k = 1 for the f above, k = 2 for g, etc., we have (10 → 10 → n → k) = f_k(n). If n is large we can use any of the above for expressing it. Thus we get a nesting of forms f_k^m where going inward the k decreases, and with as inner argument a sequence of powers (10↑ⁿ)^{k_n} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.
When k is too large to be given exactly, the number concerned can be expressed as f_n(10) = (10 → 10 → 10 → n) with an approximate n. Note that the process of going from the sequence 10ⁿ = (10 → n) to the sequence 10↑ⁿ10 = (10 → 10 → n) is very similar to going from the latter to the sequence f_n(10) = (10 → 10 → 10 → n): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function, a number can be described using functions f_{q,k}(n), nested in lexicographical order with q the most significant number, but with decreasing order for q and for k; as inner argument we have a sequence of powers (10↑ⁿ)^{k_n} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.
For a number too large to write down in the Conway chained arrow notation we can describe how large it is by the length of that chain, for example only using elements 10 in the chain; in other words, we specify its position in the sequence 10, 10 → 10, 10 → 10 → 10, … If even the position in the sequence is a large number we can apply the same techniques again for that.
Numbers expressible in decimal notation:
Numbers expressible in scientific notation:
Numbers expressible in (10↑)ⁿ k notation:
The following illustrates the effect of a base different from 10, base 100. It also illustrates representations of numbers and the arithmetic.
100¹² = 10²⁴: with base 10 the exponent is doubled.
100^(100¹²) = 10^(2 × 10²⁴) = 10^(10^24.3): the highest exponent is very little more than doubled (increased by log₁₀2).
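These base conversions can be checked exactly with integer arithmetic; a short Python sketch:

```python
import math

# 100^12 in base 10: the exponent is doubled.
assert 100 ** 12 == 10 ** 24

# 100^(100^12) = 10^(2 * 10^24) = 10^(10^(24 + log10 2)):
# the highest exponent goes from 12 to about 24.3.
print(24 + math.log10(2))   # 24.301...
```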
Note that for a number 10ⁿ, one unit change in n changes the result by a factor 10. In a number like 10^(6.2 × 10³), with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor 10⁵⁰ too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable).
In the case of an approximation of an extremely large number, the relative error may be large, yet there may still be a sense in which we want to consider the numbers as "close in magnitude". For example, consider 10¹⁰ and 10⁹.
The relative error is
1 − 10⁹ / 10¹⁰ = 1 − 0.1 = 90%,
a large relative error. However, we can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%.
The point is that exponential functions magnify relative errors greatly: if a and b have a small relative error,
10^a and 10^b
have a larger relative error, and
10^10^a and 10^10^b
will have an even larger relative error. The question then becomes: on which level of iterated logarithms do we wish to compare two numbers? There is a sense in which we may want to consider
10^10^10 and 10^10^9
to be "close in magnitude". The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small:
log₁₀(log₁₀(10^10^10)) = 10 and log₁₀(log₁₀(10^10^9)) = 9,
a relative error of only 10%.
Such comparisons of iterated logarithms are common, e.g., in analytic number theory.
There are some general rules relating to the usual arithmetic operations performed on very large numbers:
The sum and the product of two very large numbers are both "approximately" equal to the larger one.
(10^a)^(10^b) = 10^(a × 10^b) = 10^(10^(b + log₁₀a))
The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ(n) for n = 1, 2, 3, 4 are 1, 4, 6, 13 (sequence A028444 in the OEIS). Σ(5) is not known but is definitely ≥ 4098. Σ(6) is at least 3.5 × 10¹⁸²⁶⁷.
Although all the numbers discussed above are very large, they are all still decidedly finite. Certain fields of mathematics define infinite and transfinite numbers. For example, aleph-null (ℵ₀) is the cardinality of the infinite set of natural numbers, and aleph-one (ℵ₁) is the next greatest cardinal number. 2^ℵ₀ is the cardinality of the reals. The proposition that 2^ℵ₀ = ℵ₁ is known as the continuum hypothesis.
Million
1,000,000 (one million), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one. It is commonly abbreviated as m (not to be confused with the metric prefix for 1 × 10⁻³) or M; further MM ("thousand thousands", from Latin mille; not to be confused with the Roman numeral MM = 2,000), mm, or mn in financial contexts. In scientific notation, it is written as 1 × 10⁶ or 10⁶. Physical quantities can also be expressed using the SI prefix mega (M) when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts.
The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems.
The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked the million-dollar question".
Billion
A billion is a number with two distinct definitions:
1,000,000,000, i.e. one thousand million, or 10⁹ (ten to the ninth power), as defined on the short scale. This is now the meaning in both British and American English.
1,000,000,000,000, i.e. one million million, or 10¹² (ten to the twelfth power), as defined on the long scale. This is one thousand times larger than the short scale billion, and equivalent to the short scale trillion. This is the historic definition of a billion in British English.
American English has always used the short scale definition in living memory, but British English once employed both versions. Historically, the United Kingdom used the long scale billion, but since 1974 official UK statistics have used the short scale. Since the 1950s, the short scale has been increasingly used in technical writing and journalism, although the long scale definition still enjoys some limited usage.
Other countries use the word billion (or words cognate to it) to denote either the long scale or short scale billion. For details, see Long and short scales – Current usage.
Another word for one thousand million is milliard, but this is used much less often in English than billion. Most other European languages — including Bulgarian, Croatian, Czech, Danish, Dutch, Finnish, French, Georgian, German, Hungarian, Italian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish and Swedish — use milliard (or a related word) for the short scale billion, and billion (or a related word) for the long scale billion. Thus for these languages billion is a thousand times larger than the modern English billion. However, in Russian, milliard (миллиард) is used for the short scale billion, and trillion (триллион) is used for the long scale billion.
Bowers's operators
Bowers's operators is a notation for representing very large numbers created by Jonathan Bowers; it was first published on the web in 2002.
Dirac large numbers hypothesis
The Dirac large numbers hypothesis (LNH) is an observation made by Paul Dirac in 1937 relating ratios of size scales in the Universe to those of force scales. The ratios constitute very large, dimensionless numbers: some 40 orders of magnitude in the present cosmological epoch. According to Dirac's hypothesis, the apparent similarity of these ratios might not be a mere coincidence but instead could imply a cosmology with several unusual features.
Indefinite and fictitious numbers
Many languages have words expressing indefinite and fictitious numbers—inexact terms of indefinite size, used for comic effect, for exaggeration, as placeholder names, or when precision is unnecessary or undesirable. One technical term for such words is "non-numerical vague quantifier". Such words designed to indicate large quantities can be called "indefinite hyperbolic numerals".
Japanese numerals
The system of Japanese numerals is the system of number names used in the Japanese language. The Japanese numerals in writing are entirely based on the Chinese numerals, and the grouping of large numbers follows the Chinese tradition of grouping by 10,000. Two sets of pronunciations for the numerals exist in Japanese: one is based on Sino-Japanese (on'yomi) readings of the Chinese characters and the other is based on the Japanese yamato kotoba (native words, kun'yomi readings).
Law of large numbers
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
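A quick simulation illustrates the law (a Python sketch, with an arbitrary seed for reproducibility):

```python
import random

random.seed(0)

def mean_of_rolls(trials: int) -> float:
    """Average of `trials` fair six-sided die rolls; expected value 3.5."""
    return sum(random.randint(1, 6) for _ in range(trials)) / trials

for n in (10, 1_000, 100_000):
    print(n, mean_of_rolls(n))
# The averages drift toward 3.5 as n grows, as the law predicts.
```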
The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the law only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
Monomer
A monomer (MON-ə-mər; mono-, "one" + -mer, "part") is a molecule that "can undergo polymerization thereby contributing constitutional units to the essential structure of a macromolecule". Large numbers of monomers combine to form polymers in a process called polymerization.
Movement (music)
A movement is a self-contained part of a musical composition or musical form. While individual or selected movements from a composition are sometimes performed separately, a performance of the complete work requires all the movements to be performed in succession. A movement is a section, "a major structural unit perceived as the result of the coincidence of relatively large numbers of structural phenomena".
A unit of a larger work that may stand by itself as a complete composition. Such divisions are usually self-contained. Most often the sequence of movements is arranged fast-slow-fast or in some other order that provides contrast.
Names of large numbers
This article lists and discusses the usage and derivation of names of large numbers, together with their possible extensions.
The following table lists those names of large numbers that are found in many English dictionaries and thus have a claim to being "real words." The "Traditional British" values shown are unused in American English and are obsolete in British English, but their other-language variants are dominant in many non-English-speaking areas, including continental Europe and Spanish-speaking countries in Latin America; see Long and short scales.
Indian English does not use millions, but has its own system of large numbers including lakhs and crores.
English also has many words, such as "zillion", used informally to mean large but unspecified amounts; see indefinite and fictitious numbers.
Order of magnitude
An order of magnitude is an approximate measure of the number of digits that a number has in the commonly used base-ten number system. It is equal to the floor of the base-10 logarithm of the number. For example, the order of magnitude of 1500 is 3, because 1500 = 1.5 × 10³.
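Expressed as code (a Python sketch; the helper name is ours):

```python
import math

def order_of_magnitude(x: float) -> int:
    """The floor of the base-10 logarithm of x."""
    return math.floor(math.log10(x))

print(order_of_magnitude(1500))   # 3, because 1500 = 1.5 * 10^3
print(order_of_magnitude(9.99))   # 0
print(order_of_magnitude(100))    # 2
```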
Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten). Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
Peruvians
Peruvians (Spanish: Peruanos) are the citizens of the Republic of Peru or their descendants abroad. Peru is a multiethnic country formed by the combination of different groups over five centuries, so people in Peru usually treat their nationality as a citizenship rather than an ethnicity. Indigenous nations inhabited Peruvian territory for several millennia before the Spanish Conquest in the 16th century; according to historian David N. Cook, their population decreased from an estimated 5–9 million in the 1520s to around 600,000 in 1620, mainly because of infectious diseases. Spaniards and Africans arrived in large numbers under colonial rule, mixing widely with each other and with indigenous peoples. During the Republic, there has been a gradual immigration of European people (especially from Spain and Italy, and to a lesser extent from France, the Balkans, Portugal, Great Britain and Germany). Japanese and Chinese arrived in large numbers at the end of the nineteenth century.
With 31.2 million inhabitants according to the 2017 Census, Peru is the fifth most populous country in South America. Its demographic growth rate declined from 2.6% to 1.6% between 1950 and 2000; the population is expected to reach approximately 46–51 million in 2050. As of 2017, 79.3% lived in urban areas and 20.7% in rural areas. Major cities include Lima, home to over 9.5 million people, Arequipa, Trujillo, Chiclayo, Piura, Iquitos, Huancayo, Cusco and Pucallpa, all of which reported more than 250,000 inhabitants.
The largest expatriate Peruvian communities are in the United States (Peruvian Americans), South America (Argentina, Chile, Venezuela and Brazil), Europe (Spain, Italy, France and the United Kingdom), Japan, Australia and Canada.
Plantations in the American South
Plantations are an important aspect of the history of the American South, particularly the antebellum (pre-American Civil War) era. The mild subtropical climate, plentiful rainfall, and fertile soils of the southeastern United States allowed the flourishing of large plantations, where large numbers of workers, usually Africans held captive for slave labor, were required for agricultural production.
Probability theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion.
Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.
Roman numerals
Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet. Modern usage employs seven symbols, each with a fixed integer value: I = 1, V = 5, X = 10, L = 50, C = 100, D = 500, and M = 1000.
The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced in most contexts by the more convenient Arabic numerals; however, this process was gradual, and the use of Roman numerals persists in some minor applications to this day.
One place they are often seen is on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as:
I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII

The notations IV and IX can be read as "one before five" (4) and "one before ten" (9). On most Roman numeral clock faces, however, 4 is traditionally written IIII.
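The subtractive pairs such as IV and IX can be produced mechanically by always writing the largest value that still fits. The following is a minimal illustrative converter, not part of any standard library; it assumes the conventional range 1–3999 and emits IV rather than the clock-face IIII.

```python
# Symbol values in descending order, with the subtractive pairs
# (CM, CD, XC, XL, IX, IV) treated as two-letter "symbols".
VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Convert an integer to Roman numerals, largest symbol first."""
    if not 1 <= n <= 3999:
        raise ValueError("standard notation covers 1-3999")
    out = []
    for value, symbol in VALUES:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

print(to_roman(1912))  # MCMXII, as in the text
print(to_roman(2000))  # MM
```

Ranking the subtractive pairs alongside the plain symbols is what makes a simple greedy pass correct: 1912 consumes M, then CM, then X, X, I, I, yielding MCMXII.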
Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of movies and television programs. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For this century, MM indicates 2000; thus the year 2019 is written MMXIX.

Scraptiidae
The family Scraptiidae is a small group of beetles sometimes called false flower beetles. There are about 400 species in 30 genera with a world-wide distribution. The adults are found on flowers, sometimes in large numbers. These beetles are very common and easily confused with members of the related family Mordellidae.

Serac
A serac (originally from Swiss French sérac) is a block or column of glacial ice, often formed by intersecting crevasses on a glacier. Commonly house-sized or larger, they are dangerous to mountaineers, since they may topple with little warning. Even when stabilized by persistent cold weather, they can be an impediment to glacier travel.
Seracs are found within an icefall, often in large numbers, or on ice faces on the lower edge of a hanging glacier. Notable examples of the overhanging glacier edge type are well-known obstacles on some of the world's highest mountains, including K2 at "The Bottleneck" and Kanchenjunga on the border of India and Nepal. Significant seracs in the Alps are found on the northeast face of Piz Roseg, the north face of the Dent d'Hérens, and the north face of Lyskamm.

Smack (ship)
A smack was a traditional fishing boat used off the coast of Britain and the Atlantic coast of America for most of the 19th century and, in small numbers, up to the Second World War. Many larger smacks were originally cutter-rigged sailing boats until about 1865, when smacks had become so large that cutter main booms were unhandy. The smaller smacks retained the gaff cutter rig. The larger smacks were lengthened and re-rigged, and new ketch-rigged smacks were built, but boats varied from port to port. Some boats had a topsail on the mizzen mast, while others had a bowsprit carrying a jib.
Large numbers of smacks operated in fleets from ports in the UK such as Brixham, Grimsby and Lowestoft, as well as at locations along the Thames Estuary. In England the sails were white cotton until a proofing coat was applied, usually after the sail was a few years old. This gave the sails their distinctive red ochre colour, which made them a picturesque sight in large numbers. Smacks were often rebuilt into steam boats in the 1950s.

The Sand Reckoner
The Sand Reckoner (Greek: Ψαμμίτης, Psammites) is a work by Archimedes in which he set out to determine an upper bound for the number of grains of sand that fit into the Universe. In order to do this, he had to estimate the size of the universe according to the contemporary model, and invent a way to talk about extremely large numbers. The work, also known in Latin as Archimedis Syracusani Arenarius & Dimensio Circuli, which is about 8 pages long in translation, is addressed to the Syracusan king Gelo II (son of Hiero II), and is probably the most accessible work of Archimedes; in some sense, it is the first research-expository paper.
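The core of Archimedes' argument is a volume ratio: a sphere of grains contains at most roughly (R/r)³ grains, where R is the radius of the universe and r that of a grain. The toy recomputation below uses modern illustrative assumptions, not Archimedes' own figures or his contemporary astronomical model, purely to show how quickly such ratios produce enormous numbers.

```python
# Illustrative assumptions only (NOT Archimedes' values):
GRAIN_RADIUS_M = 5e-5      # a grain of sand about 0.1 mm across
UNIVERSE_RADIUS_M = 2e16   # an assumed "sphere of the fixed stars", ~2 light-years

# Treating both as spheres, the 4/3*pi factors cancel in the ratio,
# so the grain count is bounded by the cube of the radius ratio.
grains = (UNIVERSE_RADIUS_M / GRAIN_RADIUS_M) ** 3
print(f"{grains:.1e}")  # on the order of 10**61 with these assumptions
```

Even with these modest assumed sizes the bound lands in the region of 10⁶¹, illustrating why Archimedes had to invent a notation for numbers far beyond the myriad-myriad (10⁸) that Greek numerals could express.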