In computer architecture, 60-bit integers, memory addresses, or other data units are those that are 60 bits wide. Also, 60-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size.
Computers designed with 60-bit words are rare; Control Data Corporation (CDC) was perhaps the only manufacturer to build machines of this size. Examples include the CDC 6000 series, the CDC 7600, and the CDC Cyber 70 and 170 series.
Museum examples of 60-bit CDC machines exist, and an emulator for the series can simulate the 60-bit CDC machines on commodity hardware and operating systems.
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2.
The computer industry has historically used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in at least two slightly different measurement systems. In citations of main memory (RAM) capacity, gigabyte customarily means 1,073,741,824 bytes. As this is a power of 1024, and 1024 is a power of two (2¹⁰), this usage is referred to as a binary measurement.
In most other contexts, the industry uses the multipliers kilo, mega, giga, etc., in a manner consistent with their meaning in the International System of Units (SI), namely as powers of 1000. For example, a 500 gigabyte hard disk holds 500,000,000,000 bytes, and a 1 Gbit/s (gigabit per second) Ethernet connection transfers data at 1,000,000,000 bit/s. In contrast with the binary prefix usage, this use is described as a decimal prefix, as 1000 is a power of 10 (10³).
The use of the same unit prefixes with two different meanings has caused confusion. Starting around 1998, the International Electrotechnical Commission (IEC) and several other standards and trade organizations addressed the ambiguity by publishing standards and recommendations for a set of binary prefixes that refer exclusively to powers of 1024. Accordingly, the US National Institute of Standards and Technology (NIST) requires that SI prefixes be used only in the decimal sense: kilobyte and megabyte denote one thousand bytes and one million bytes respectively (consistent with SI), while new terms such as kibibyte, mebibyte, and gibibyte, with the symbols KiB, MiB, and GiB, denote 1024 bytes, 1,048,576 bytes, and 1,073,741,824 bytes, respectively. In 2008, the IEC prefixes were incorporated into the international standard system of units used alongside the International System of Quantities (see ISO/IEC 80000).
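As a quick illustration of the two conventions, here is a minimal sketch (plain Python; the prefix tables are abbreviated to keep it short) that formats the same byte count under SI and IEC prefixes, showing why a disk sold as "500 GB" is reported as roughly 465.7 GiB by binary-based tools:

```python
# Format a byte count under SI (powers of 1000) and IEC (powers of 1024)
# prefixes. The tables stop at tera/tebi purely for brevity.

SI_PREFIXES = ["B", "kB", "MB", "GB", "TB"]
IEC_PREFIXES = ["B", "KiB", "MiB", "GiB", "TiB"]

def format_bytes(n: int, base: int, prefixes) -> str:
    value = float(n)
    for prefix in prefixes[:-1]:
        if value < base:
            return f"{value:.1f} {prefix}"
        value /= base
    return f"{value:.1f} {prefixes[-1]}"

print(format_bytes(500_000_000_000, 1000, SI_PREFIXES))   # 500.0 GB
print(format_bytes(500_000_000_000, 1024, IEC_PREFIXES))  # 465.7 GiB
```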
Bit
The bit is a basic unit of information in information theory, computing, and digital communications. The name is a portmanteau of binary digit. In information theory, one bit is typically defined as the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information, the bit has also been called a shannon, named after Claude Shannon.
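As a worked example of this definition, the sketch below evaluates the binary entropy function H(p) = −p log₂ p − (1 − p) log₂(1 − p); at p = 0.5 it yields exactly one bit (one shannon), and for any biased variable it yields less:

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy, in bits, of a binary random variable that is 1 with probability p."""
    if p in (0.0, 1.0):   # a certain outcome conveys no information
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 -- exactly one bit
print(binary_entropy(0.9))   # ~0.469 -- a biased variable carries less
```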
As a binary digit, the bit represents a logical value, having only one of two values. It may be physically implemented with a two-state device. These state values are most commonly represented as either 0 or 1, but other representations such as true/false, yes/no, +/−, or on/off are possible. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program.
The symbol for the binary digit is either simply bit per recommendation by the IEC 80000-13:2008 standard, or the lowercase character b, as recommended by the IEEE 1541-2002 and IEEE Std 260.1-2004 standards. A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.
Byte
The byte is a unit of digital information that most commonly consists of eight bits, representing a binary number. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.
The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size – byte sizes from 1 to 48 bits are known to have been used in the past. Early character encoding systems often used six bits, and machines using six-bit and nine-bit bytes were common into the 1960s. These machines most commonly had memory words of 12, 24, 36, 48, or 60 bits, corresponding to two, four, six, eight, or ten six-bit bytes. In this era, bytes in the instruction stream were often referred to as syllables, before the term byte became common.
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the values 0 through 255 for one byte (2⁸ = 256, counting zero as one of the values). The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the eight-bit size. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes.
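The arithmetic behind these sizes is simple: n bits can represent 2ⁿ distinct values. A brief sketch for some of the widths mentioned above:

```python
# Number of distinct values representable in n bits.
for bits in (6, 8, 12, 60):
    print(f"{bits:2d} bits -> {2**bits} values (0 .. {2**bits - 1})")
```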
The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE) in contrast to the bit, whose IEEE symbol is a lower-case b. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte.
CDC 6000 series
The CDC 6000 series was a family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600, and CDC 6700 computers, all of which were extremely fast and efficient for their time. Each was a large, solid-state, general-purpose digital computer that performed scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there was also a time-sharing-oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series.
CDC 6600
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. The first CDC 6600s were delivered in 1965 to Livermore and Los Alamos. They quickly became a must-have system in scientific and mathematical computing circles, with systems delivered to the Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. Approximately 50 were delivered in total. A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs.
CDC 7600
The CDC 7600 was the Seymour Cray-designed successor to the CDC 6600, extending Control Data's dominance of the supercomputer field into the 1970s. The 7600 ran at 36.4 MHz (27.5 ns clock cycle) and had a 65 Kword primary memory (with a 60-bit word size) using magnetic core and variable-size (up to 512 Kword) secondary memory (depending on site). It was generally about ten times as fast as the CDC 6600 and could deliver about 10 MFLOPS on hand-compiled code, with a peak of 36 MFLOPS. In addition, in benchmark tests in early 1970 it was shown to be slightly faster than its IBM rival, the IBM System/360, Model 195. When the system was released in 1969, it sold for around $5 million in base configurations, and considerably more as options and features were added.
Among the 7600's notable state-of-the-art contributions, beyond extensive pipelining, was the physical C-shape, which both reduced floor space and dramatically increased performance by reducing the distance that signals needed to travel.
CDC Cyber
The CDC Cyber range of mainframe-class supercomputers comprised the primary products of Control Data Corporation (CDC) during the 1970s and 1980s. In their day, they were the computer architecture of choice for scientific and mathematically intensive computing. They were used for modeling fluid flow, material-science stress analysis, electrochemical machining analysis, probabilistic analysis, energy and academic computing, radiation shielding modeling, and other applications. The lineup also included the Cyber 18 and Cyber 1000 minicomputers. Like their predecessor, the CDC 6600, they were unusual in using the ones' complement binary representation.
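Ones' complement is easy to sketch: negation is a bitwise complement of the whole word, which gives the representation two zeros, +0 (all bits clear) and −0 (all bits set). A minimal illustration, assuming the 60-bit word of these machines:

```python
WORD_BITS = 60
MASK = (1 << WORD_BITS) - 1   # 60 one-bits

def ones_complement_negate(word: int) -> int:
    """Negate a 60-bit ones' complement value by flipping every bit."""
    return ~word & MASK

minus_zero = ones_complement_negate(0)
print(oct(minus_zero))                   # twenty octal 7s: "negative zero"
print(oct(ones_complement_negate(5)))    # 0o7777...72: the pattern for -5
```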
CDC Kronos
Kronos is an operating system with time-sharing capabilities, written by Control Data Corporation in the 1970s. Kronos ran on the 60-bit CDC 6000 series mainframe computers and their successors. CDC replaced Kronos with the NOS operating system in the late 1970s, which was in turn succeeded by the NOS/VE operating system in the mid-1980s. The MACE operating system and APEX were forerunners to KRONOS. It was written by Control Data systems programmers Greg Mansfield, Dave Cahlander, Bob Tate, and three others.
CDC display code
Display code is the six-bit character code used by many computer systems manufactured by Control Data Corporation, notably the CDC 3000 series and the subsequent CDC 6000 series introduced in 1964. The CDC 6000 series and its follow-ons had 60-bit words; as such, typical usage packed 10 characters per word.
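A minimal sketch of this packing, with the first character placed in the high-order bits of the word; the code values below are arbitrary illustrations, not the real display-code table:

```python
def pack_word(codes):
    """Pack ten 6-bit character codes into one 60-bit word."""
    assert len(codes) == 10 and all(0 <= c < 64 for c in codes)
    word = 0
    for code in codes:
        word = (word << 6) | code
    return word

def unpack_word(word):
    """Recover the ten 6-bit codes from a 60-bit word."""
    return [(word >> shift) & 0o77 for shift in range(54, -1, -6)]

codes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert unpack_word(pack_word(codes)) == codes
```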
There were several variations of display code, notably the 63-character set and the 64-character set. There were also 'CDC graphic' and 'ASCII graphic' variants of both. The choice between the 63- and 64-character sets, and between CDC and ASCII graphics, was site-selectable. Generally, early CDC customers started out with the 63-character set and CDC graphic print trains on their line printers. As time-sharing became prevalent, almost all sites used the ASCII variant, so that line printer output would match interactive usage. Later CDC customers were also more likely to use the 64-character set.
A later variation, called 6/12 display code, was used in the Kronos and NOS timesharing systems in order to support full ASCII capabilities. In 6/12 mode, an escape character (the circumflex, octal 76) would indicate that the following letter was lower case. Thus, upper case and other characters were 6 bits in length, and lower case characters were 12 bits in length.
The PLATO system used a further variant of 6/12 display code. Because lower-case letters were most common in typical PLATO usage, the roles were reversed: lower-case letters were the norm, and the escape character preceded upper-case letters.
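A sketch of the 6/12 decoding rule described above: a 6-bit code of octal 76 escapes the next code into the lower-case range. The two tiny mapping tables are placeholders rather than the full character tables:

```python
ESCAPE = 0o76   # circumflex: the following code is lower case
UPPER = {0o01: "A", 0o02: "B", 0o03: "C"}   # illustrative entries only
LOWER = {0o01: "a", 0o02: "b", 0o03: "c"}

def decode_6_12(codes):
    out, i = [], 0
    while i < len(codes):
        if codes[i] == ESCAPE and i + 1 < len(codes):
            out.append(LOWER.get(codes[i + 1], "?"))   # 12-bit lower case
            i += 2
        else:
            out.append(UPPER.get(codes[i], "?"))       # 6-bit upper case
            i += 1
    return "".join(out)

print(decode_6_12([0o01, 0o76, 0o02, 0o03]))   # "AbC"
```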
The typical text file format used a zero-byte terminator to signify the end of each record. The zero-byte terminator was indicated by, at a minimum, the final twelve bits of a 60-bit word being set to zero. The terminator could actually be anywhere from 12 to 66 bits long, depending on the length of the record. This caused an ambiguity in the 64-character set when a colon character needed to be the final character in a record; in such cases, a blank character was typically appended to the record after the trailing colon.
COMPASS
COMPASS is an acronym for COMPrehensive ASSembler. COMPASS is any of a family of macro assembly languages on Control Data Corporation's 3000 series, and on the 60-bit CDC 6000 series, 7600, and Cyber 70 and 170 series mainframe computers. While the architectures are very different, the macro and conditional assembly facilities are similar.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, control, and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form, design, and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory) and execution of instructions by directing the coordinated operations of the ALU, registers, and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC). Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called "cores"; in that context, one can speak of such single chips as "sockets". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources.
KeeLoq
KeeLoq is a proprietary hardware-dedicated block cipher that uses a non-linear feedback shift register (NLFSR). The uni-directional command transfer protocol was designed by Frederick Bruwer of Nanoteq (Pty) Ltd., the cryptographic algorithm was created by Gideon Kuhn at the University of Pretoria, and the silicon implementation was by Willem Smit at Nanoteq (Pty) Ltd. (South Africa) in the mid-1980s. KeeLoq was sold to Microchip Technology Inc. in 1995 for $10 million. It is used in "code hopping" encoders and decoders such as NTQ105/106/115/125D/129D, HCS101/2XX/3XX/4XX/5XX and MCS31X2. KeeLoq is or was used in many remote keyless entry systems by such companies as Chrysler, Daewoo, Fiat, GM, Honda, Toyota, Volvo, Volkswagen Group, Clifford, Shurlok, and Jaguar.
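The round structure of the cipher has been described in the open cryptanalytic literature: a 32-bit NLFSR clocked 528 times under a 64-bit key. The sketch below follows that published description, with the commonly cited NLF truth table 0x3A5C742E and taps at state bits 31, 26, 20, 9, and 1; it is an illustration of that account, not vendor code:

```python
NLF = 0x3A5C742E   # 32-entry truth table of the 5-input non-linear function

def nlf(a, b, c, d, e):
    # Look up the output bit for the 5 input bits (a is the high-order index bit).
    return (NLF >> (16 * a + 8 * b + 4 * c + 2 * d + e)) & 1

def keeloq_encrypt(block: int, key: int) -> int:
    """One KeeLoq encryption: 528 shift-register rounds over a 32-bit block."""
    x = block & 0xFFFFFFFF
    for r in range(528):
        fb = (nlf((x >> 31) & 1, (x >> 26) & 1, (x >> 20) & 1,
                  (x >> 9) & 1, (x >> 1) & 1)
              ^ (x >> 16) & 1 ^ x & 1 ^ (key >> (r % 64)) & 1)
        x = (x >> 1) | (fb << 31)   # shift right, feedback enters at the top
    return x
```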
List of computer system emulators
This article lists software and hardware that emulate computing platforms.
The host in this article is the system running the emulator, and the guest is the system being emulated.
The list is organized by guest operating system (the system being emulated), grouped by bitness. Each section contains a list of emulators capable of emulating the specified guest, details of the range of guest systems able to be emulated, and the required host environment and licensing.
NOS (software)
NOS (Network Operating System) is a discontinued operating system with time-sharing capabilities, written by Control Data Corporation in the 1970s. NOS ran on the 60-bit CDC 6000 series of mainframe computers and their successors. NOS replaced the earlier CDC Kronos operating system of the 1970s. NOS was intended to be the sole operating system for all CDC machines, a fact CDC promoted heavily. NOS was replaced with NOS/VE on the 64-bit Cyber-180 systems in the mid-1980s.
Version 1 of NOS continued to be updated until about 1981; NOS version 2 was released in early 1982.
Organizationally unique identifier
An organizationally unique identifier (OUI) is a 24-bit number that uniquely identifies a vendor, manufacturer, or other organization.
OUIs are purchased from the Institute of Electrical and Electronics Engineers (IEEE) Registration Authority by the assignee (the IEEE term for the vendor, manufacturer, or other organization). They are used to uniquely identify a particular piece of equipment through derived identifiers such as MAC addresses, Subnetwork Access Protocol protocol identifiers, and World Wide Names for Fibre Channel devices. In MAC addresses, the OUI is combined with a 24-bit number (assigned by the assignee of the OUI) to form the address; the first three octets of the address are the OUI.
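A minimal sketch of that composition, using a deliberately fabricated OUI value:

```python
def make_mac(oui: int, nic: int) -> str:
    """Join a 24-bit OUI and a 24-bit NIC-specific number into a MAC address."""
    assert 0 <= oui < 2**24 and 0 <= nic < 2**24
    addr = (oui << 24) | nic          # the OUI forms the first three octets
    octets = [(addr >> shift) & 0xFF for shift in range(40, -1, -8)]
    return ":".join(f"{o:02x}" for o in octets)

print(make_mac(0xAABBCC, 0x112233))   # aa:bb:cc:11:22:33 (fake OUI)
```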
Time-sharing system evolution
This article covers the evolution of time-sharing systems, providing links to major early time-sharing operating systems and showing their subsequent evolution.
Universally unique identifier
A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems. The term globally unique identifier (GUID) is also used, typically in software created by Microsoft.
When generated according to the standard methods, UUIDs are for practical purposes unique, without depending for their uniqueness on a central registration authority or coordination between the parties generating them, unlike most other numbering schemes. While the probability that a UUID will be duplicated is not zero, it is close enough to zero to be negligible.
Thus, anyone can create a UUID and use it to identify something with near certainty that the identifier does not duplicate one that has already been, or will be, created to identify something else. Information labeled with UUIDs by independent parties can therefore be later combined into a single database or transmitted on the same channel, with a negligible probability of duplication.
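For illustration, the sketch below generates a version 4 (random) UUID with Python's standard uuid module and estimates the duplication probability with the usual birthday bound, assuming the 122 random bits of a version 4 UUID:

```python
import math
import uuid

print(uuid.uuid4())   # e.g. 'f47ac10b-58cc-4372-a567-0e02b2c3d479'

def collision_probability(n: int, random_bits: int = 122) -> float:
    """Birthday-bound estimate: P ~ 1 - exp(-n(n-1) / (2 * 2**random_bits))."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * 2.0**random_bits))

print(collision_probability(10**12))   # ~9.4e-14 even after a trillion UUIDs
```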
Adoption of UUIDs and GUIDs is widespread, with many computing platforms providing support for generating them and for parsing their textual representation.
Word (computer architecture)
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor. The number of bits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word sized and the largest piece of data that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
Modern processors, including those in embedded systems, usually have a word size of 8, 16, 24, 32, or 64 bits; those in modern general-purpose computers in particular usually use 32 or 64 bits. Special-purpose digital processors, such as digital signal processors, may use other sizes, and many other sizes have been used historically, including 9, 12, 18, 24, 26, 36, 39, 40, 48, and 60 bits. Several of the earliest computers (and a few modern ones as well) used binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers had no fixed word length at all.
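The native word size is easy to probe from a high-level language. A sketch; note that these calls report the platform's pointer width, which usually, but not always, matches the processor's natural word size:

```python
import ctypes
import struct
import sys

print(ctypes.sizeof(ctypes.c_void_p) * 8)   # pointer width in bits, e.g. 64
print(struct.calcsize("P") * 8)             # the same width, via struct
print(sys.maxsize.bit_length() + 1)         # 64 on a 64-bit CPython build
```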
The size of a word can sometimes differ from what is expected because of backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).