48-bit

In computer architecture, 48-bit integers can represent 281,474,976,710,656 (2⁴⁸, or about 2.814749767×10¹⁴) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (2⁴⁸ − 1) or a signed two's complement range of −140,737,488,355,328 (−2⁴⁷) through 140,737,488,355,327 (2⁴⁷ − 1). A 48-bit memory address can directly address every byte of 256 tebibytes of storage. 48-bit can also refer to any other data unit that is 48 bits (6 octets) wide. Examples include 48-bit CPU and ALU architectures, which are based on registers, address buses, or data buses of that size.
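These ranges follow directly from two's complement arithmetic; the short Python sketch below reproduces them.

    # The 48-bit ranges quoted above, computed from first principles.
    BITS = 48
    unsigned_max = 2**BITS - 1        # 281,474,976,710,655
    signed_min = -(2**(BITS - 1))     # -140,737,488,355,328
    signed_max = 2**(BITS - 1) - 1    #  140,737,488,355,327
    addressable = 2**BITS             # bytes reachable by a 48-bit address
    assert addressable == 256 * 2**40 # 256 tebibytes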

Word size

Computers with 48-bit words include the AN/FSQ-32, CDC 1604/upper-3000 series, BESM-6, Ferranti Atlas, and Burroughs large systems[a][b].

Addressing

The IBM System/38 and the AS/400, in its CISC variants, are 48-bit addressing systems. The address size used in logical block addressing was increased to 48 bits with the introduction of ATA-6. The Ext4 file system physically limits the file block count to 48 bits.

The minimal implementation of the x86-64 architecture provides 48-bit addressing encoded into 64 bits; future versions of the architecture can expand this without breaking properly written applications.
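Under this scheme, a 64-bit virtual address is valid ("canonical") only when bits 48 through 63 replicate bit 47. A minimal sketch of that check:

    # Canonical-form check for 48-bit x86-64 addressing: the top 17 bits
    # (bits 47-63) must be all zeros or all ones.
    def is_canonical(addr: int) -> bool:
        top = addr >> 47
        return top == 0 or top == 0x1FFFF

    assert is_canonical(0x00007FFFFFFFFFFF)      # top of the lower half
    assert is_canonical(0xFFFF800000000000)      # bottom of the upper half
    assert not is_canonical(0x0000800000000000)  # falls in the non-canonical hole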

The media access control address (MAC address) of a computer uses a 48-bit address space; the IEEE also defines a 64-bit successor format, EUI-64.

Images

In digital images, 48 bits per pixel, or 16 bits for each color channel (red, green, and blue), is used for accurate processing. To the human eye, such an image is almost indistinguishable from a 24-bit image, but the greater number of shades of each of the three primary colors (65,536 as opposed to 256) means that more operations can be performed on the image without risk of noticeable banding or posterization.
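A small sketch of why the extra precision helps: widening 8-bit channel values to 16 bits before processing leaves headroom, so repeated adjustments round less destructively. The helper names below are illustrative, not from any particular library.

    # Widening 0-255 channel values onto 0-65535 exactly: multiplying by
    # 257 replicates the byte, preserving pure black and pure white.
    def widen_8_to_16(v8: int) -> int:
        return v8 * 257

    def narrow_16_to_8(v16: int) -> int:
        return (v16 + 128) // 257   # round to nearest on the way back

    assert widen_8_to_16(255) == 65535
    assert narrow_16_to_8(widen_8_to_16(200)) == 200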

Notes

  a. The B5000, B5500 and B5700 took 3 bits in control words and numeric data for use as a tag; alphanumeric data and instruction syllables were stored in the full 48 bits and had no tags.
  b. The B5900-B8xxx additionally had a 3- or 4-bit type tag.

AVR32

The AVR32 is a 32-bit RISC microcontroller architecture produced by Atmel. The microcontroller architecture was designed by a handful of people educated at the Norwegian University of Science and Technology, including lead designer Øyvind Strøm and CPU architect Erik Renno in Atmel's Norwegian design center.

Most instructions are executed in a single cycle. The multiply–accumulate unit can perform a 32-bit × 16-bit + 48-bit arithmetic operation in two cycles (result latency), issued once per cycle.
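A rough functional sketch of one such multiply–accumulate step, assuming an unsigned 48-bit accumulator that wraps on overflow (the actual AVR32 semantics may differ):

    # One MAC step: a 32-bit x 16-bit product added into a 48-bit accumulator.
    MASK48 = (1 << 48) - 1

    def mac_32x16_48(acc48: int, a32: int, b16: int) -> int:
        return (acc48 + a32 * b16) & MASK48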

It does not resemble the 8-bit AVR microcontroller family, even though they were both designed at Atmel Norway, in Trondheim. Some of the debug tools are similar.

Support for AVR32 has been dropped from Linux as of kernel 4.12; Atmel has switched mostly to Cortex-M variants of the ARM architecture.

CDC 1604

The CDC 1604 was a 48-bit computer designed and manufactured by Seymour Cray and his team at the Control Data Corporation (CDC). The 1604 is known as one of the first commercially successful transistorized computers. (The IBM 7090 was delivered earlier, in November 1959.) Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-UNIVAC 1103.

A cut-down 24-bit version, designated the CDC 924, was produced shortly thereafter and delivered to NASA.

The first 1604 was delivered to the US Navy in 1960 for applications supporting major Fleet Operations Control Centers, primarily for weather prediction, in Hawaii, London, and Norfolk, Virginia. By 1964, over 50 systems were built. The CDC 3000, which added five op codes, succeeded the 1604 and "was largely compatible" with it.

One of the 1604s was shipped to the Pentagon to DASA (Defense Atomic Support Agency) and used during the Cuban missile crisis to predict possible strikes by the Soviet Union against the United States.

A 12-bit minicomputer, called the CDC 160, was often used as an I/O processor in 1604 systems. A stand-alone version of the 160 called the CDC-160A was arguably the first minicomputer.

CDC 3000 series

The CDC 3000 series computers from Control Data Corporation were mid-1960s follow-ons to the CDC 1604 and CDC 924 systems. Over time, a range of machines was produced, divided into

the 48-bit upper 3000 series and

the 24-bit lower 3000 series.

Early in the 1970s, CDC phased out production of the 3000 series, which had been the cash cows of Control Data during the 1960s; sales of these machines funded the company while the 6000 series was designed.

Color depth

Color depth or colour depth (see spelling differences), also known as bit depth, is either the number of bits used to indicate the color of a single pixel, in a bitmapped image or video framebuffer, or the number of bits used for each color component of a single pixel. For consumer video standards, such as High Efficiency Video Coding (H.265), the bit depth specifies the number of bits used for each color component. When referring to a pixel, the concept can be defined as bits per pixel (bpp). When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps).

Color depth is only one aspect of color representation, defining the precision with which colors can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). Both color precision and gamut are defined by a color encoding specification, which assigns a digital code value to a location in a color space.
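For example, the arithmetic connecting the two notions for a plain RGB pixel format (a sketch; no alpha channel assumed):

    # Bits per pixel vs. bits per component for an RGB pixel format.
    bits_per_component = 16
    components = 3                                    # red, green, blue
    bits_per_pixel = bits_per_component * components  # 48 bpp
    shades_per_component = 2**bits_per_component      # 65,536 at 16 bpc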

Datagram Congestion Control Protocol

In computer networking, the Datagram Congestion Control Protocol (DCCP) is a message-oriented transport layer protocol. DCCP implements reliable connection setup, teardown, Explicit Congestion Notification (ECN), congestion control, and feature negotiation. The IETF published DCCP as RFC 4340, a proposed standard, in March 2006. RFC 4336 provides an introduction.

DCCP provides a way to gain access to congestion-control mechanisms without having to implement them at the application layer. It allows for flow-based semantics like those in the Transmission Control Protocol (TCP), but does not provide reliable in-order delivery. Sequenced delivery within multiple streams, as in the Stream Control Transmission Protocol (SCTP), is not available in DCCP. A DCCP connection contains acknowledgment traffic as well as data traffic. Acknowledgments inform a sender whether its packets have arrived, and whether they were marked by Explicit Congestion Notification (ECN). Acknowledgments are transmitted as reliably as the congestion-control mechanism in use requires, possibly completely reliably.

DCCP is useful for applications with timing constraints on the delivery of data. Such applications include streaming media, multiplayer online games and Internet telephony. In such applications, old messages quickly become useless, so getting new messages is preferred to resending lost ones. As of 2017, such applications have often either settled for TCP, or used the User Datagram Protocol (UDP) and implemented their own congestion-control mechanisms (or none at all). While being useful for these applications, DCCP can also serve as a general congestion-control mechanism for UDP-based applications, by adding, as needed, mechanisms for reliable or in-order delivery on top of UDP/DCCP. In this context, DCCP allows the use of different, but generally TCP-friendly, congestion-control mechanisms.

DCCP has the option for very long (48-bit) sequence numbers corresponding to a packet ID, rather than a byte ID as in TCP. The long length of the sequence numbers aims to guard against "some blind attacks, such as the injection of DCCP-Resets into the connection".
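Because sequence numbers live in a finite 48-bit space, comparisons must wrap around. The sketch below uses generic serial-number arithmetic to illustrate the idea; RFC 4340's exact validity-window rules are more involved.

    # Circular "is a after b?" comparison in a 48-bit sequence space.
    MOD48 = 1 << 48

    def seq_after(a: int, b: int) -> bool:
        return 0 < (a - b) % MOD48 < MOD48 // 2

    assert seq_after(5, MOD48 - 3)   # 5 follows 2**48 - 3 after wraparound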

Dd (Unix)

dd is a command-line utility for Unix and Unix-like operating systems, the primary purpose of which is to convert and copy files.

On Unix, device drivers for hardware (such as hard disk drives) and special device files (such as /dev/zero and /dev/random) appear in the file system just like normal files; dd can also read and/or write from/to these files, provided that function is implemented in their respective driver. As a result, dd can be used for tasks such as backing up the boot sector of a hard drive, and obtaining a fixed amount of random data. The dd program can also perform conversions on the data as it is copied, including byte order swapping and conversion to and from the ASCII and EBCDIC text encodings.
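Two illustrative invocations of the tasks just mentioned (GNU dd syntax; device names such as /dev/sda vary by system):

    # Back up the first 512 bytes of a disk, the classic boot-sector size:
    dd if=/dev/sda of=bootsector.bak bs=512 count=1

    # Obtain exactly 1 MiB of random data:
    dd if=/dev/urandom of=random.bin bs=1M count=1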

Ethernet

Ethernet is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility and been refined to support higher bit rates and longer link distances. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.

The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second (Mbit/s) to the latest 400 gigabits per second (Gbit/s). The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet.

Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. As per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 Wi-Fi, as well as by FDDI, and EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.
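A minimal sketch of reading those header fields from an untagged Ethernet II frame (a simplification: real frames may carry an 802.1Q tag that shifts the EtherType):

    # Destination MAC, source MAC, and EtherType from a raw frame.
    import struct

    def parse_ethernet_header(frame: bytes):
        dst, src = frame[0:6], frame[6:12]
        (ethertype,) = struct.unpack("!H", frame[12:14])
        return dst.hex(":"), src.hex(":"), ethertype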

Ethernet is widely used in home and industry. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.

Honeywell 800

The Datamatic Division of Honeywell announced the H-800 electronic computer in 1958. The first installation occurred in 1960. A total of 89 were delivered. The H-800 design was part of a family of 48-bit word, three-address instruction format computers that descended from the Datamatic 1000, which was a joint Honeywell and Raytheon project started in 1955. The 1800 and 1800-II were follow-on designs to the H-800.

IEEE 802.1ah-2008

Provider Backbone Bridges (PBB; known as "MAC-in-MAC") is an architecture and set of protocols for routing over a provider's network, allowing interconnection of multiple Provider Bridge Networks without losing each customer's individually defined VLANs. It was initially created by Nortel before being submitted to the IEEE 802.1 committee for standardization. The final standard was approved by the IEEE in June 2008 as IEEE 802.1ah-2008 and has been integrated into IEEE 802.1Q-2011.

Logical block addressing

Logical block addressing (LBA) is a common scheme used for specifying the location of blocks of data stored on computer storage devices, generally secondary storage systems such as hard disk drives. LBA is a particularly simple linear addressing scheme; blocks are located by an integer index, with the first block being LBA 0, the second LBA 1, and so on.

The IDE standard included 22-bit LBA as an option, which was further extended to 28-bit with the release of ATA-1 (1994) and to 48-bit with the release of ATA-6 (2003), whereas the size of entries in on-disk and in-memory data structures holding the address is typically 32 or 64 bits. Most hard disk drives released after 1996 implement logical block addressing.
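The practical effect of the wider field, sketched in Python (assuming the common 512-byte block size):

    # Capacity reachable through a 48-bit LBA with 512-byte blocks.
    BLOCK = 512
    max_blocks = 2**48              # 281,474,976,710,656 addressable blocks
    max_bytes = max_blocks * BLOCK  # 2**57 bytes
    assert max_bytes == 128 * 2**50 # 128 pebibytes

    def lba_to_byte_offset(lba: int) -> int:
        return lba * BLOCK          # LBA 0 starts at offset 0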

Lucifer (cipher)

In cryptography, Lucifer was the name given to several of the earliest civilian block ciphers, developed by Horst Feistel and his colleagues at IBM. Lucifer was a direct precursor to the Data Encryption Standard. One version, alternatively named DTD-1, saw commercial use in the 1970s for electronic banking.

MAC address

A media access control address (MAC address) of a device is a unique identifier assigned to a network interface controller (NIC). For communications within a network segment, it is used as a network address for most IEEE 802 network technologies, including Ethernet, Wi-Fi, and Bluetooth. Within the Open Systems Interconnection (OSI) model, MAC addresses are used in the medium access control protocol sublayer of the data link layer. As typically represented, MAC addresses are recognizable as six groups of two hexadecimal digits, separated by hyphens or colons, or written without a separator.

A MAC address may be referred to as the burned-in address, and is also known as an Ethernet hardware address, hardware address, and physical address.

A network node with multiple NICs must have a unique MAC address for each. Sophisticated network equipment such as a multilayer switch or router may require one or more permanently assigned MAC addresses.

MAC addresses are most often assigned by the manufacturer of network interface cards. Each is stored in hardware, such as the card's read-only memory, or by a firmware mechanism. A MAC address typically includes the manufacturer's organizationally unique identifier (OUI). MAC addresses are formed according to the principles of two numbering spaces based on Extended Unique Identifiers (EUI) managed by the Institute of Electrical and Electronics Engineers (IEEE): EUI-48, which replaces the obsolete term MAC-48, and EUI-64.
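A short sketch of splitting an EUI-48 address into the OUI and the NIC-specific part:

    # First three octets: the OUI; last three: assigned by the OUI's owner.
    def split_mac(mac: str):
        octets = bytes(int(part, 16) for part in mac.replace("-", ":").split(":"))
        assert len(octets) == 6
        return octets[:3].hex(":"), octets[3:].hex(":")

    print(split_mac("00:1B:44:11:3A:B7"))   # ('00:1b:44', '11:3a:b7')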

Organizationally unique identifier

An organizationally unique identifier (OUI) is a 24-bit number that uniquely identifies a vendor, manufacturer, or other organization.

OUIs are purchased from the Institute of Electrical and Electronics Engineers (IEEE) Registration Authority by the assignee (IEEE term for the vendor, manufacturer, or other organization). They are used to uniquely identify a particular piece of equipment through derived identifiers such as MAC addresses, Subnetwork Access Protocol protocol identifiers, and World Wide Names for Fibre Channel devices.

In MAC addresses, the OUI is combined with a 24-bit number (assigned by the assignee of the OUI) to form the address. The first three octets of the address are the OUI.

SGI VPro

VPro, also known as Odyssey, is a computer graphics architecture for Silicon Graphics workstations. First released on the Octane2, it was subsequently used on the Fuel, Tezro workstations and the Onyx visualization systems, where it was branded InfinitePerformance.

VPro provides some very advanced capabilities such as per-pixel lighting, also known as "Phong shading" (through the SGIX_fragment_lighting extension), and 48-bit RGBA color. On the other hand, later designs suffered from constrained bandwidth and poorer texture mapping performance compared to competing GPU solutions, which rapidly caught up to SGI in the market.

Four different Odyssey-based VPro graphics board revisions existed, designated V6, V8, V10 and V12. The first series were the V6 and V8, with 32 MB and 128 MB of RAM respectively; the V10 and V12 had double the geometry performance of the older V6/V8, but were otherwise similar. The V6 and V10 can have up to 8 MB of RAM allocated to textures, while the V8 and V12 can have up to 108 MB of RAM used for textures. The V10 and V12 boards used in Fuel, Tezro and Onyx 3000 computers use a different XIO connector than the cards used in Octane2 workstations.

The VPro graphics subsystem consists of an SGI proprietary chip set and associated software. The chip set consists of the buzz ASIC, the pixel blaster and jammer (PB&J) ASIC, and associated SDRAM. The buzz ASIC is a single-chip graphics pipeline. It operates at 251 MHz and contains on-chip SRAM. The buzz ASIC has three interfaces:

Host (16-bit, 400-MHz peer-to-peer XIO link)

SDRAM (The SDRAM is 32 MB (V6 or V10) or 128 MB (V8 or V12); the memory bus operates at half the speed of the buzz ASIC.)

PB&J ASIC

As a result of a patent infringement settlement, SGI acquired rights to some of the Nvidia Quadro GPUs and released VPro-branded products (the V3, VR3, V7 and VR7) based on these (the GeForce 256, Quadro, Quadro 2 MXR, and Quadro 2 Pro, respectively). These cards share nothing with the original Odyssey line and could not be used in SGI MIPS workstations.

All VPro boards support the OpenGL ARB imaging extensions, allowing for hardware acceleration of numerous imaging operations at real-time rates.

ScRGB

scRGB is a wide color gamut RGB (Red Green Blue) color space created by Microsoft and HP that uses the same color primaries and white/black points as the sRGB color space but allows coordinates below zero and greater than one. The full range is -0.5 through just less than +7.5.
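As a sketch of how such a range fits into an integer pixel format: the 16-bit encoding commonly described for scRGB maps a linear value v to the code 8192·v + 4096, so −0.5 lands on code 0 and values just below +7.5 approach code 65535. This is an assumption based on the IEC 61966-2-2 definition; consult the specification for authoritative details.

    # Hypothetical helper illustrating the 16-bit scRGB integer mapping.
    def encode_scrgb16(v: float) -> int:
        code = round(8192 * v + 4096)
        return max(0, min(65535, code))   # clamp to the representable range

    assert encode_scrgb16(-0.5) == 0
    assert encode_scrgb16(0.0) == 4096
    assert encode_scrgb16(1.0) == 12288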

Negative numbers enable scRGB to encompass most of the CIE 1931 color space while maintaining simplicity and backward compatibility with sRGB without the complexity of color management. The cost of maintaining compatibility with sRGB is that approximately 80% of the scRGB color space consists of imaginary colors.

Large positive numbers allow high dynamic range images to be represented, though the range is inferior to that of some other high dynamic range formats such as OpenEXR.

Service set (802.11 network)

In IEEE 802.11 wireless local area networking standards (including Wi-Fi), a service set is a group of wireless network devices that are operating with the same networking parameters.

Service sets are arranged hierarchically: basic service sets (BSS) are units of devices operating with the same medium access characteristics (i.e. radio frequency, modulation scheme etc.), while extended service sets (ESS) are logical units of one or more basic service sets on the same logical network segment (i.e. IP subnet, VLAN etc.). There are two classes of basic service sets: those that are formed by infrastructure mode redistribution points (access points or mesh nodes), and those that are formed by independent stations in a peer-to-peer ad hoc topology. Basic service sets are identified by BSSIDs (basic service set identifiers), which are 48-bit labels that conform to MAC-48 conventions. Logical networks (including extended service sets) are identified by SSIDs (service set identifiers), which serve as "network names" and are typically natural language labels.

SilverFast

SilverFast is the name of a family of software for image scanning and processing, including photos, documents and slides, developed by LaserSoft Imaging.

Universally unique identifier

A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems. The term globally unique identifier (GUID) is also used, typically in software created by Microsoft.

When generated according to the standard methods, UUIDs are for practical purposes unique, without depending for their uniqueness on a central registration authority or coordination between the parties generating them, unlike most other numbering schemes. While the probability that a UUID will be duplicated is not zero, it is close enough to zero to be negligible.

Thus, anyone can create a UUID and use it to identify something with near certainty that the identifier does not duplicate one that has already been, or will be, created to identify something else. Information labeled with UUIDs by independent parties can therefore be later combined into a single database or transmitted on the same channel, with a negligible probability of duplication.

Adoption of UUIDs and GUIDs is widespread, with many computing platforms providing support for generating them and for parsing their textual representation.
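For instance, Python's standard library exposes both generation and parsing:

    # Generating a random (version 4) UUID and round-tripping its text form.
    import uuid

    u = uuid.uuid4()
    parsed = uuid.UUID(str(u))   # parse the 36-character representation
    assert parsed == u and parsed.version == 4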

Word (computer architecture)

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor. The number of bits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.

The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word sized and the largest piece of data that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
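As a rough illustration, a running program can often infer the native pointer width of its build, which usually tracks the hardware word size (an approximation, not a definitive probe):

    # Python's sys.maxsize reflects the interpreter build's native pointer
    # size: 2**63 - 1 on a 64-bit build, 2**31 - 1 on a 32-bit build.
    import sys

    word_bits = sys.maxsize.bit_length() + 1
    print(word_bits)   # typically 64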

Modern processors, including those in embedded systems, usually have a word size of 8, 16, 24, 32, or 64 bits; those in modern general-purpose computers in particular usually use 32 or 64 bits. Special-purpose digital processors, such as DSPs, may use other sizes, and many other sizes have been used historically, including 9, 12, 18, 24, 26, 36, 39, 40, 48, and 60 bits. Several of the earliest computers (and a few modern ones as well) used binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers had no fixed word length at all.

The size of a word can sometimes differ from the expected one due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).


This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.