In computer architecture, 26-bit integers, memory addresses, or other data units are those that are 26 bits wide, and can thus represent 2^26 (64 Mi, about 67 million) distinct values. Two examples of processors that featured 26-bit memory addressing are certain second-generation IBM System/370 mainframe models introduced in 1981 (and several subsequent models), which had 26-bit physical addresses but retained the 24-bit virtual addresses of earlier models, and the first generations of ARM processors.
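The 64 Mi figure follows directly from the width, as a quick arithmetic sketch shows (plain Python, nothing architecture-specific):

```python
# A 26-bit quantity spans 2**26 distinct values, i.e. 64 Mi.
ADDR_BITS = 26
num_values = 1 << ADDR_BITS       # 2**26 = 67,108,864
max_value = num_values - 1        # highest representable value

print(num_values)                 # 67108864
print(num_values == 64 * 2**20)   # True: exactly 64 Mi
print(hex(max_value))             # 0x3ffffff
```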


IBM System/370

As data processing needs continued to grow, IBM and its customers faced challenges in directly addressing larger memory sizes. As a short-term "emergency" solution, a pair of IBM's second wave of System/370 models, the 3033 and 3081, introduced 26-bit real memory addressing, quadrupling the amount of physical memory that could be attached from the previous 24-bit limit of 16 MB to 64 MB. IBM referred to 26-bit addressing as "extended real addressing", and some subsequent models also included 26-bit support. Only two years later, however, IBM introduced 31-bit memory addressing with its System/370-XA models, expanding both physical and virtual addresses to 31 bits; even the popular 3081 was upgradeable to the XA standard.

Given 26-bit addressing's brief stint as the state of the art in IBM's model range, and given that virtual addresses were still limited to 24 bits, software exploitation of 26-bit mode was limited. The few customers that did exploit it eventually adjusted their applications to support 31-bit addressing, and IBM dropped 26-bit support after several years of producing models that supported 24-bit, 26-bit, and 31-bit modes. The 26-bit mode is the only addressing mode that IBM has removed from its line of mainframe computers descended from the System/360; all the other addressing modes, now including 64-bit mode, are supported on current models.

Early ARM processors

In the ARM processor architecture, 26-bit refers to the design used in the original ARM processors, in which the Program Counter (PC) and Processor Status Register (PSR) were combined into a single 32-bit register (R15): the status flags fill the high 6 bits and the Program Counter occupies the lower 26 bits.

In fact, because the program counter is always word-aligned, its lowest two bits are always zero, which allowed the designers to reuse those two bits to hold the processor's mode as well. The four modes were USR26, SVC26, IRQ26, and FIQ26; contrast this with the 32 possible modes available once the program status was separated from the program counter in later ARM architectures.
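The combined layout can be made concrete with a small decoding sketch. The field positions follow the description above (flags in the high 6 bits, word-aligned PC in bits 2-25, mode in the low 2 bits); the mode-number assignment shown is the conventional early-ARM encoding, and the helper itself is illustrative Python, not ARM code:

```python
# Decode the early-ARM combined PC/PSR register (R15).
# Bits 31-26: status flags (N, Z, C, V, I, F)
# Bits 25-2:  word-aligned Program Counter
# Bits 1-0:   processor mode (reusing the always-zero PC bits)
MODES = {0b00: "USR26", 0b01: "FIQ26", 0b10: "IRQ26", 0b11: "SVC26"}

def decode_r15(r15: int) -> dict:
    return {
        "flags": (r15 >> 26) & 0x3F,   # six status flags
        "pc":    r15 & 0x03FFFFFC,     # bits 2-25, low bits masked off
        "mode":  MODES[r15 & 0b11],
    }

# Example: PC = 0x8000 in supervisor mode, all flags clear.
state = decode_r15(0x8000 | 0b11)
print(state)   # {'flags': 0, 'pc': 32768, 'mode': 'SVC26'}
```

Because the whole state lives in one register, saving or restoring it is a single 32-bit move, which is exactly the efficiency argument made below.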

This design enabled more efficient program execution, since the Program Counter and status flags could be saved and restored in a single operation. This resulted in faster subroutine calls and interrupt response than traditional designs, which had to perform two register loads or stores when calling or returning from a subroutine.

Despite having a 32-bit ALU and word length, processors based on ARM architecture versions 1 and 2 had only a 26-bit PC and address bus, and were consequently limited to 64 MiB of addressable memory. This was still a vast amount of memory at the time, but because of this limitation, subsequent architectures have taken various steps away from the original 26-bit design.

ARM architecture version 3 introduced a 32-bit PC and a separate PSR, as well as a 32-bit address bus, allowing 4 GiB of memory to be addressed. Because the change in the PC/PSR layout broke compatibility with code written for previous versions, the processor also included a 26-bit compatibility mode that used the old PC/PSR combination. The processor could still address 4 GiB in this mode, but could not execute anything above address 0x3FFFFFC (the 64 MiB limit). This mode was used by RISC OS running on the Acorn Risc PC to exploit the new processors while retaining compatibility with existing software.

ARM architecture version 4 made the support of the 26-bit addressing modes optional, and ARM architecture version 5 onwards has removed them entirely.


In computer architecture, 31-bit integers, memory addresses, or other data units are those that are 31 bits wide.

In 1983, IBM introduced 31-bit addressing with the System/370-XA mainframe architecture, superseding both the 24-bit physical and virtual addressing of earlier models and the transitional 24-bit-virtual/26-bit-physical scheme. The enhancement made address spaces 128 times larger, permitting programs to address memory above 16 MB (referred to as "above the line"). Support was included for COBOL, FORTRAN, and later Linux/390.

ARM architecture

ARM, previously Advanced RISC Machine, originally Acorn RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. Arm Holdings develops the architecture and licenses it to other companies, who design their own products implementing one of those architectures, including systems-on-chips (SoC) and systems-on-modules (SoM) that incorporate memory, interfaces, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate them into their own products.

Processors with a RISC architecture typically require fewer transistors than those with a complex instruction set computing (CISC) architecture (such as the x86 processors found in most personal computers), which improves cost, power consumption, and heat dissipation. These characteristics are desirable for light, portable, battery-powered devices, including smartphones, laptops, tablet computers, and other embedded systems. For supercomputers, which consume large amounts of electricity, ARM can also be a power-efficient solution.

ARM Holdings periodically releases updates to the architecture. Architecture versions ARMv3 to ARMv7 support a 32-bit address space (pre-ARMv3 chips, made before ARM Holdings was formed and used in the Acorn Archimedes, had a 26-bit address space) and 32-bit arithmetic; most architectures have 32-bit fixed-length instructions. The Thumb version supports a variable-length instruction set that provides both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware execution of Java bytecodes. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.

With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture and the one produced in the largest quantity. The widely used Cortex cores, older "classic" cores, and specialized SecurCore variants are currently available, each with options to include or exclude optional capabilities.

Access badge

An access badge is a credential used to gain entry to an area having automated access control entry points. Entry points may be doors, turnstiles, parking gates or other barriers.

Access badges use various technologies to identify the holder of the badge to an access control system. The most common technologies are magnetic stripe, proximity, barcode, smart cards, and various biometric devices. The magnetic stripe ID card was invented by Forrest Parry in 1960.

The access badge contains a number that is read by a card reader. This number is usually called the facility code and is programmed by the administrator. The number is sent to an access control system, a computer system that makes access control decisions based on information about the credential. If the credential is included in an access control list, the access control system unlocks the controlled access point. The transaction is stored in the system for later retrieval; reports can be generated showing the date and time the card was used to enter the controlled access point.

The Wiegand effect was used in early access cards. This method was abandoned in favor of other proximity technologies, which retained the Wiegand upstream data format so that the new readers were compatible with old systems. Readers are still called Wiegand but no longer use the Wiegand effect. A Wiegand reader radiates a 1" to 5" electrical field around itself. Cards use a simple LC circuit: when a card is presented to the reader, the reader's electrical field excites a coil in the card; the coil charges a capacitor, which in turn powers an integrated circuit; and the integrated circuit outputs the card number to the coil, which transmits it to the reader. The transmission of the card number happens in the clear; it is not encrypted. With a basic understanding of radio technology and of card formats, Wiegand proximity cards can be hacked.

A common proximity format is 26-bit Wiegand. This format uses a facility code, also called a site code. The facility code is a unique number common to all of the cards in a particular set. The idea is that an organization has its own facility code and a set of cards numbered incrementally from 1; another organization has a different facility code, and its card set also increments from 1. Different organizations can thus have card sets with the same card numbers, but since the facility codes differ, the cards work at only one organization. This scheme worked well for a while, but because there is no governing body controlling card numbers, different manufacturers can supply cards with identical facility codes and identical card numbers to different organizations, creating a problem of duplicate cards. To counteract this problem, some manufacturers have created formats beyond 26-bit Wiegand that they control and issue to organizations.

In the 26-bit Wiegand format, bit 1 is an even parity bit, bits 2-9 are the facility code, bits 10-25 are the card number, and bit 26 is an odd parity bit. Other formats have a similar structure: a leading facility code followed by the card number, with parity bits for error checking.
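The layout above can be sketched in code. Field positions follow the text; the parity coverage (the leading even-parity bit covering the first 12 data bits, the trailing odd-parity bit covering the last 12) is the common convention for this format and is assumed here:

```python
# Sketch of the 26-bit Wiegand frame: even parity | 8-bit facility
# code | 16-bit card number | odd parity. Illustrative, not vendor code.

def encode_wiegand26(facility: int, card: int) -> int:
    assert 0 <= facility < 256 and 0 <= card < 65536
    body = (facility << 16) | card              # the 24 data bits
    first12 = (body >> 12) & 0xFFF              # bits 2-13
    last12 = body & 0xFFF                       # bits 14-25
    even = bin(first12).count("1") & 1          # make first 13 bits even
    odd = (bin(last12).count("1") & 1) ^ 1      # make last 13 bits odd
    return (even << 25) | (body << 1) | odd

def decode_wiegand26(frame: int):
    """Extract (facility, card); parity checking omitted for brevity."""
    body = (frame >> 1) & 0xFFFFFF
    return (body >> 16) & 0xFF, body & 0xFFFF

facility, card = 118, 4302
frame = encode_wiegand26(facility, card)
print(decode_wiegand26(frame) == (facility, card))   # True
```

With only 8 facility-code bits and 16 card-number bits, the duplicate-card problem described above is easy to see: there are only 256 possible facility codes and 65,536 card numbers per code.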

Smart cards can be used to counteract the problems of transmitting card numbers in the clear and control of the card numbers by manufacturers. Smart cards can be encoded by organizations with unique numbers and the communication between card and reader can be encrypted.

Acorn Archimedes

The Acorn Archimedes is a family of personal computers designed by Acorn Computers Ltd in Cambridge, England, and sold from the late 1980s to the mid-1990s. It was Acorn's first general-purpose home computer line based on its own ARM architecture; the first Archimedes was launched in 1987.

ARM's RISC design, a 32-bit CPU (using 26-bit addressing) running at 8 MHz, was stated as achieving 4.5+ MIPS, a significant upgrade over 8-bit home computers such as Acorn's previous machines. Claims of being the fastest micro in the world, running at 18 MIPS, were also made during tests.

The models in the family omitted either the Acorn or the Archimedes part of the name; the first models were named "BBC Archimedes", yet the name "Acorn Archimedes" is commonly used to describe any of Acorn's contemporary designs based on the same architecture. Archimedes machines are no longer sold, but computers such as the Raspberry Pi can still run its operating system, RISC OS (at least later versions), as they use ARM chips that are (mostly) compatible.


In computing, Aemulor is an emulator of the earlier 26-bit addressing-mode ARM microprocessors. It runs on ARM processors under 32-bit addressing-mode versions of RISC OS. It was written by Adrian Lees and released in 2003. An enhanced version is available under the name Aemulor Pro.

The software allows Raspberry Pi, Iyonix PC and A9home computers running RISC OS to make use of some software written for older hardware. As of 2012, compatibility with the BeagleBoard single-board computer was under development.

Card reader

A card reader is a data input device that reads data from a card-shaped storage medium. The first were punched card readers, which read the paper or cardboard punched cards used during the first several decades of the computer industry to store information and programs for computer systems. Modern card readers are electronic devices that can read plastic cards embedded with a barcode, magnetic stripe, computer chip, or other storage medium.

A memory card reader is a device used for communication with a smart card or a memory card.

A magnetic card reader is a device used to read magnetic stripe cards, such as credit cards.

A business card reader is a device used to scan and electronically save printed business cards.


The DLX (pronounced "Deluxe") is a RISC processor architecture designed by John L. Hennessy and David A. Patterson, the principal designers of the Stanford MIPS and the Berkeley RISC designs, respectively, the two benchmark examples of RISC design.

The DLX is essentially a cleaned-up (and modernized) simplified MIPS CPU. It has a simple 32-bit load/store architecture, somewhat unlike that of the modern MIPS CPU. As the DLX was intended primarily for teaching purposes, the design is widely used in university-level computer architecture courses.

There are two known implementations: ASPIDA and VAMP. The ASPIDA project resulted in a core with many desirable features: it is open source, supports Wishbone, has an asynchronous design, supports multiple ISAs, and is ASIC-proven. VAMP is a DLX variant that was mathematically verified as part of the Verisoft project; it was specified with PVS, implemented in Verilog, and runs on a Xilinx FPGA. A full stack from compiler to kernel to TCP/IP was built on it.

E0 (cipher)

E0 is a stream cipher used in the Bluetooth protocol. It generates a sequence of pseudorandom numbers and combines it with the data using the XOR operator. The key length may vary, but is generally 128 bits.
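The XOR-combining step common to all stream ciphers, E0 included, can be sketched as follows. Note the keystream generator below is a stand-in built from SHA-256 in counter mode for illustration only; it is NOT E0's LFSR-based keystream generator:

```python
import hashlib

# Stream-cipher principle: generate pseudorandom keystream bytes and XOR
# them with the data. XORing again with the same keystream recovers the
# original, so encryption and decryption are the same operation.

def keystream(key: bytes, n: int) -> bytes:
    """Stand-in keystream: SHA-256 in counter mode (not E0's generator)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_combine(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"\x00" * 16                       # 128-bit key, as in Bluetooth
ct = xor_combine(b"hello bluetooth", key)
print(xor_combine(ct, key))              # b'hello bluetooth'
```

The security of such a scheme rests entirely on the keystream generator; E0's actual generator combines four linear-feedback shift registers with a small finite-state machine.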

Intel 80386EX

The Intel 80386EX (386EX) is a variant of the Intel 386 microprocessor designed for embedded systems. It was introduced in August 1994 and was successful in the market, being used aboard several orbiting satellites and microsatellites.

Intel did not manufacture another integrated x86 processor until 2007, when it confirmed the Enhanced Pentium M-based Tolapai (EP80579).

Iyonix PC

The Iyonix PC was an Acorn-clone personal computer sold by Castle Technology and Iyonix Ltd between 2002 and 2008. According to news site Slashdot, it was the first personal computer to use Intel's XScale processor. It ran RISC OS 5.

List of ARM microarchitectures

This is a list of microarchitectures based on the ARM family of instruction sets, designed by ARM Holdings and third parties, sorted by version of the ARM instruction set, release, and name. ARM provides a summary of the numerous vendors who implement ARM cores in their designs. Keil also provides a somewhat newer summary of vendors of ARM-based processors. ARM further provides a chart giving an overview of the ARM processor lineup, with performance and functionality versus capabilities, for the more recent ARM core families.

MIPS architecture

MIPS (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA) developed by MIPS Computer Systems (an American company that is now called MIPS Technologies).

There are multiple versions of MIPS, including MIPS I, II, III, IV, and V, as well as five releases of MIPS32/64 (for 32- and 64-bit implementations, respectively). The early MIPS architectures were 32-bit only; 64-bit versions were developed later. As of April 2017, the current version is MIPS32/64 Release 6. MIPS32/64 primarily differs from MIPS I–V by defining the privileged kernel-mode System Control Coprocessor in addition to the user-mode architecture.

The MIPS architecture has several optional extensions: MIPS-3D, a simple set of floating-point SIMD instructions dedicated to common 3D tasks; MDMX (MaDMaX), a more extensive integer SIMD instruction set using the 64-bit floating-point registers; MIPS16e, which adds compression to the instruction stream to make programs take up less room; and MIPS MT, which adds multithreading capability.

Computer architecture courses in universities and technical schools often study the MIPS architecture. The architecture greatly influenced later RISC architectures such as the Alpha.

As of April 2017, MIPS processors are used in embedded systems such as residential gateways and routers. Originally, MIPS was designed for general-purpose computing. During the 1980s and 1990s, MIPS processors for personal, workstation, and server computers were used by many companies, such as Digital Equipment Corporation, MIPS Computer Systems, NEC, Pyramid Technology, SiCortex, Siemens Nixdorf, Silicon Graphics, and Tandem Computers. Historically, video game consoles such as the Nintendo 64, Sony PlayStation, PlayStation 2, and PlayStation Portable used MIPS processors. MIPS processors were also popular in supercomputers during the 1990s, but all such systems have since dropped off the TOP500 list. These uses were complemented by embedded applications at first, but during the 1990s MIPS became a major presence in the embedded processor market, and by the 2000s most MIPS processors were for these applications. In the mid-to-late 1990s, it was estimated that one in three RISC microprocessors produced was a MIPS processor.

MIPS is a modular architecture supporting up to four coprocessors (CP0/1/2/3). In MIPS terminology, CP0 is the System Control Coprocessor (an essential part of the processor that is implementation-defined in MIPS I–V), CP1 is an optional floating-point unit (FPU), and CP2/3 are optional implementation-defined coprocessors (MIPS III removed CP3 and reused its opcodes for other purposes). For example, in the PlayStation video game console, CP2 is the Geometry Transformation Engine (GTE), which accelerates the processing of geometry in 3D computer graphics.

In December 2018, Wave Computing, the new owner of the MIPS architecture (see MIPS Technologies), announced that the MIPS ISA would be open-sourced in a program dubbed the MIPS Open initiative. The program, planned for 2019, is intended to open up access to the most recent versions of both the 32-bit and 64-bit designs, making them available without licensing or royalty fees and granting participants licenses to existing MIPS patents.

Processor register

In computer architecture, a processor register is a quickly accessible location available to a computer's central processing unit (CPU). Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions and may be read-only or write-only. Registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, as in the DEC PDP-10 and ICT 1900.

Almost all computers, whether load/store architecture or not, load data from a larger memory into registers where it is used for arithmetic operations and is manipulated or tested by machine instructions. Manipulated data is then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic RAM as main memory, with the latter usually accessed via one or more cache levels.

Processor registers are normally at the top of the memory hierarchy, and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of Pentium Pro, Cyrix 6x86, Nx586, and AMD K5.

A common property of computer programs is locality of reference: the same values tend to be accessed repeatedly, so holding frequently used values in registers improves performance. This is what makes fast registers and caches meaningful. Allocating frequently used variables to registers can be critical to a program's performance; this register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer.

Project Gemini

Project Gemini was NASA's second human spaceflight program. Conducted between projects Mercury and Apollo, Gemini started in 1961 and concluded in 1966. The Gemini spacecraft carried a two-astronaut crew. Ten Gemini crews flew low Earth orbit (LEO) missions during 1965 and 1966, putting the United States in the lead during the Cold War Space Race against the Soviet Union.

Gemini's objective was the development of space travel techniques to support the Apollo mission to land astronauts on the Moon. It performed missions long enough for a trip to the Moon and back, perfected working outside the spacecraft with extra-vehicular activity (EVA), and pioneered the orbital maneuvers necessary to achieve space rendezvous and docking. With these new techniques proven by Gemini, Apollo could pursue its prime mission without doing these fundamental exploratory operations.

All Gemini flights were launched from Launch Complex 19 (LC-19) at Cape Kennedy Air Force Station in Florida. Their launch vehicle was the Gemini–Titan II, a modified intercontinental ballistic missile (ICBM). Gemini was the first program to use the newly built Mission Control Center at the Houston Manned Spacecraft Center for flight control.

The astronaut corps that supported Project Gemini included the "Mercury Seven", "The New Nine", and the 1963 astronaut class. During the program, three astronauts died in air crashes during training, including both members of the prime crew for Gemini 9. This mission was flown by the backup crew, the only time a backup crew has completely replaced a prime crew on a mission in NASA's history to date.

Gemini was robust enough that the United States Air Force planned to use it for the Manned Orbital Laboratory (MOL) program, which was later canceled. Gemini's chief designer, Jim Chamberlin, also made detailed plans for cislunar and lunar landing missions in late 1961. He believed that Gemini spacecraft could fly in lunar operations before Project Apollo, and cost less. NASA's administration did not approve those plans. In 1969, McDonnell-Douglas proposed a "Big Gemini" that could have been used to shuttle up to 12 astronauts to the planned space stations in the Apollo Applications Project (AAP). The only AAP project funded was Skylab – which used existing spacecraft and hardware – thereby eliminating the need for Big Gemini.


RISCOS Ltd. (also referred to as ROL) was a limited company engaged in computer software and IT consulting. It licensed the rights to continue the development of RISC OS 4 and to distribute it for desktop machines (as an upgrade or for new machines) from Element 14 and subsequently Pace Micro Technology. Company founders include developers who formerly worked within Acorn's dealership network. It was established as a nonprofit company. On or before 4 March 2013 3QD Developments acquired RISCOS Ltd's flavour of RISC OS. RISCOS Ltd was dissolved on 14 May 2013.


RISC OS is a computer operating system originally designed by Acorn Computers Ltd in Cambridge, England. First released in 1987, it was specifically designed to run on the ARM chipset, which Acorn had designed concurrently for use in its new line of Archimedes personal computers. RISC OS takes its name from the RISC (reduced instruction set computing) architecture supported.

Between 1987 and 1998, RISC OS was contained within every ARM-based Acorn computer model. These included the Acorn Archimedes range, Acorn's R line of computers (with RISC iX as a dual boot option), RiscPC, A7000 and also prototype models such as the Acorn NewsPad and Phoebe computer. A version of the OS (called NCOS) was also used in Oracle's Network Computer and compatible systems.

After the break-up of Acorn in 1998, development of the OS was forked and separately continued by several companies, including RISCOS Ltd, Pace Micro Technology and Castle Technology. Since then, it has been bundled with a number of ARM-based desktop computers such as the Iyonix and A9home. As of March 2017, the OS remains forked and is independently developed by RISCOS Ltd and the RISC OS Open community.

Most recent stable versions run on the ARMv3/ARMv4 RiscPC, the ARMv5 Iyonix, and ARMv7 Cortex-A8 processors (such as that used in the BeagleBoard and Touch Book) and Cortex-A9 processors (such as that used in the PandaBoard). There is a development version for the Raspberry Pi. SD card images have been made available for downloading free of charge to Raspberry Pi 1, 2 and 3 users, with a full graphical user interface (GUI) version and a command-line-only version (RISC OS Pico, at 3.8 MB).

In October 2018, RISC OS 5 was re-licensed under the Apache 2.0 license.

Saturn Launch Vehicle Digital Computer

The Saturn Launch Vehicle Digital Computer (LVDC) was a computer that provided the autopilot for the Saturn V rocket from launch to Earth orbit insertion. Designed and manufactured by IBM's Electronics Systems Center in Owego, N.Y., it was one of the major components of the Instrument Unit, fitted to the S-IVB stage of the Saturn V and Saturn IB rockets. The LVDC also supported pre- and post-launch checkout of the Saturn hardware. It was used in conjunction with the Launch Vehicle Data Adaptor (LVDA) which performed signal conditioning to the sensor inputs to the computer from the launch vehicle.


WWVB is a time signal radio station near Fort Collins, Colorado and is operated by the National Institute of Standards and Technology (NIST). Most radio-controlled clocks in North America use WWVB's transmissions to set the correct time. The 70 kW ERP signal transmitted from WWVB is a continuous 60 kHz carrier wave, the frequency of which is derived from a set of atomic clocks located at the transmitter site, yielding a frequency uncertainty of less than 1 part in 10^12. A one-bit-per-second time code, which is based on the IRIG "H" time code format and derived from the same set of atomic clocks, is then modulated onto the carrier wave using pulse-width modulation and amplitude-shift keying. A single complete frame of time code begins at the start of each minute, lasts one minute, and conveys the year, day of year, hour, minute, and other information such as the beginning of the minute.

WWVB is co-located with WWV, a time signal station that broadcasts in both voice and time code on multiple shortwave radio frequencies.

While most time signals encode the local time of the broadcasting nation, the United States spans multiple time zones, so WWVB broadcasts the time in Coordinated Universal Time (UTC). Radio-controlled clocks can then apply time zone and daylight saving time offsets as needed to display local time. The time used in the broadcast is set by the NIST Time Scale, known as UTC(NIST). This time scale is the calculated average time of an ensemble of master clocks, themselves calibrated by the NIST-F1 and NIST-F2 cesium fountain atomic clocks.

In 2011, NIST estimated the number of radio clocks and wristwatches equipped with a WWVB receiver at over 50 million.

WWVB, along with NIST's shortwave time code-and-announcement stations WWV and WWVH, was proposed for defunding and elimination in the 2019 NIST budget. However, the final 2019 NIST budget preserved funding for all three stations.

Word (computer architecture)

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor. The number of bits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.

The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word sized and the largest piece of data that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).

Modern processors, including those in embedded systems, usually have a word size of 8, 16, 24, 32, or 64 bits; those in modern general-purpose computers in particular usually use 32 or 64 bits. Special-purpose digital processors, such as DSPs, may use other sizes, and many other sizes have been used historically, including 9, 12, 18, 24, 26, 36, 39, 40, 48, and 60 bits. Several of the earliest computers (and a few modern ones) used binary-coded decimal rather than plain binary, typically with a word size of 10 or 12 decimal digits, and some early decimal computers had no fixed word length at all.

The size of a word can sometimes differ from that expected because of backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).
