Mainframe computer

Mainframe computers or mainframes (colloquially referred to as "big iron")[1] are computers used primarily by large organizations for critical applications: bulk data processing such as censuses, industry and consumer statistics, and enterprise resource planning; and transaction processing. They are larger and have more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers.

The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers.[2][3] Later, the term was used to distinguish high-end commercial machines from less powerful units.[4] Most large-scale computer system architectures were established in the 1960s, but continue to evolve. Mainframe computers are often used as servers.

IBM z13 and LinuxONE Rockhopper
A pair of IBM mainframes. On the left is the IBM z Systems z13. On the right is the IBM LinuxONE Rockhopper.
Front Z9 2094
An IBM System z9 mainframe


Modern mainframe design is characterized less by raw computational speed and more by:

  • Redundant internal engineering, resulting in high reliability and security
  • Extensive input/output ("I/O") facilities, with the ability to offload processing to separate engines
  • Strict backward compatibility with older software
  • High hardware and computational utilization rates, achieved through virtualization, to support massive throughput
  • Hot-swapping of hardware, such as processors and memory

Their high stability and reliability enable these machines to run uninterrupted for very long periods of time, with mean time between failures (MTBF) measured in decades.

Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers, although proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z (previously called z Systems, System z and zSeries), Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits compared with thousands for Windows, UNIX, and Linux.[5] Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.

In the late 1950s, mainframes had only a rudimentary interactive interface (the console), and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications (e.g. airline booking) rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices.

By the early 1970s, many mainframes acquired interactive user terminals[NB 1] operating as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphic display terminals, and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces.

The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms.[6]


Inside Z9 2094
Inside an IBM System z9 mainframe

Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace dozens or even hundreds of smaller physical servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.[7]

Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center, and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case), or with shared, geographically dispersed storage provided by EMC or Hitachi.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s,[NB 2] mainframe designs have included subsidiary hardware[NB 3] (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual.[8] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online,[9] and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing.

Mainframe return on investment (ROI), like that of any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors.

Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.

Current market

IBM, with z Systems, continues to be a major manufacturer in the mainframe market. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products and ClearPath Dorado mainframes based on Sperry Univac OS 1100 product lines. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP10000-VOS3[10] still maintain mainframe hardware businesses in the Japanese market.

The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM continues to pursue a different business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2012's 5.5 GHz six-core zEC12 mainframe microprocessor. Unisys produces code compatible mainframe systems that range from laptops to cabinet-sized mainframes that utilize homegrown CPUs as well as Xeon processors. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[11]

Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant players in this market include BMC,[12] Compuware,[13][14] and CA Technologies.[15]


IBM 704 mainframe
An IBM 704 mainframe (1964)

Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The US group of manufacturers was first known as "IBM and the Seven Dwarfs":[16]:p.83 usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela, by contrast, are examples of independently designed Soviet computers.

Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; in the 1980s Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986.

During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower end of the mainframe market. These computers, sometimes called departmental computers, were typified by the DEC VAX.

In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop infamously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst, as saying that the last mainframe "will stop working on December 31, 1999",[17] a reference to the anticipated Year 2000 problem (Y2K).

That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and can run in scores, or even up to about 8,000, virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000, IBM introduced 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the overall downturn in the server hardware market or to model cycle effects. For example, in the fourth quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, but MIPS (millions of instructions per second) shipments increased 4% per year over the preceding two years.[18] Alsop had himself photographed in 2000, symbolically eating his own words ("death of the mainframe").[19]

In 2012, NASA powered down its last mainframe, an IBM System z9.[20] However, IBM's successor to the z9, the z10, led a New York Times reporter to state four years earlier that "mainframe technology — hardware, software and services — remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world’s financial markets and much of global commerce".[21] As of 2010, while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".[22]

In 2015, IBM launched the IBM z13,[23] and in June 2017 the IBM z14.[24][25]

Differences from supercomputers

A supercomputer is a computer at the leading edge of data processing capability, with respect to calculation speed. Supercomputers are used for scientific and engineering problems (high-performance computing) which crunch numbers and data,[26] while mainframes focus on transaction processing. The differences are:

  • Mainframes are built to be reliable for transaction processing (measured by TPC metrics; not used or helpful for most supercomputing applications) as it is commonly understood in the business world: the commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[27] updates a database system for inventory control (goods), airline reservations (services), or banking (money) by adding a record. A transaction may refer to a set of operations including disk read/writes, operating system calls, or some form of data transfer from one subsystem to another which is not measured by the processing speed of the CPU. Transaction processing is not exclusive to mainframes but is also used by microprocessor-based servers and online networks.
  • Supercomputer performance is measured in floating point operations per second (FLOPS)[28] or in traversed edges per second (TEPS),[29] metrics that are not very meaningful for mainframe applications; mainframes are sometimes measured in millions of instructions per second (MIPS), although the definition depends on the instruction mix measured.[30] Examples of integer operations measured by MIPS include adding numbers together, checking values, or moving data around in memory; for mainframe workloads, moving data to and from storage (I/O) matters more than raw in-memory speed. Floating point operations are mostly addition, subtraction, and multiplication of binary floating point numbers, with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations; the more recently standardized decimal floating point, not used in supercomputers, is better suited to monetary values such as those used in mainframe applications. In terms of raw computational speed, supercomputers are more powerful.[31]
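The practical difference between binary and decimal floating point noted above can be sketched in Python; this is an illustration of the arithmetic only, not mainframe code:

```python
from decimal import Decimal

# Binary floating point, the kind counted by supercomputer FLOPS
# benchmarks, cannot represent most decimal fractions exactly:
binary_sum = 0.1 + 0.2
print(binary_sum)    # 0.30000000000000004, not 0.3

# Decimal floating point keeps decimal fractions exact, which is why
# it suits monetary values in mainframe-style commercial workloads:
decimal_sum = Decimal("0.10") + Decimal("0.20")
print(decimal_sum)   # 0.30
```

The rounding error in the binary case is harmless for physical simulation but unacceptable when summing account balances, which is why recent mainframes implement decimal floating point in hardware.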

In 2007,[32] an amalgamation of the different technologies and architectures for supercomputers and mainframes led to the so-called gameframe.

See also


  1. ^ Some had been introduced in the 1960s, but their deployment became more common in the 1970s
  2. ^ E.g., the IBM 709 had channels in 1958
  3. ^ sometimes computers, sometimes more limited


  1. ^ "IBM preps big iron fiesta". The Register. July 20, 2005.
  2. ^ "mainframe, n". Oxford English Dictionary (on-line ed.).
  3. ^ Ebbers, Mike; O’Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (PDF). IBM International Technical Support Organization. Retrieved 2007-06-01.
  4. ^ Beach, Thomas E. "Computer Concepts and Terminology: Types of Computers". Archived from the original on July 30, 2015. Retrieved November 17, 2012.
  5. ^ "National Vulnerability Database". Retrieved September 20, 2011.
  6. ^ "Get the facts on IBM vs the Competition- The facts about IBM System z "mainframe"". IBM. Retrieved December 28, 2009.
  7. ^ "Emulation or Virtualization?".
  8. ^ "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. Retrieved 2008-05-16.
  9. ^ "Improvements in Mainframe Computer Storage Management Practices and Reporting Are Needed to Promote Effective and Efficient Utilization of Disk Resources". Between October 2001 and September 2005, the IRS’ mainframe computer disk storage capacity increased from 79 terabytes to 168.5 terabytes.
  10. ^ Hitachi AP10000 - VOS3
  11. ^ "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007.
  12. ^ "Mainframe Automation Management". Retrieved 26 October 2012.
  13. ^ "Mainframe Modernization". Retrieved 26 October 2012.
  14. ^ "Automated Mainframe Testing & Auditing". Retrieved 26 October 2012.
  15. ^ "CA Technologies".
  16. ^ Bergin, Thomas J (ed.) (2000). 50 Years of Army Computing: From ENIAC to MSRC. DIANE Publishing. ISBN 978-0-9702316-1-1.
  17. ^ Alsop, Stewart (Mar 8, 1993). "IBM still has brains to be player in client/server platforms". InfoWorld. Retrieved Dec 26, 2013.
  18. ^ "IBM 4Q2009 Financial Report: CFO's Prepared Remarks" (PDF). IBM. January 19, 2010.
  19. ^ "Stewart Alsop eating his words". Computer History Museum. Retrieved Dec 26, 2013.
  20. ^ Cureton, Linda (11 February 2012). The End of the Mainframe Era at NASA. NASA. Retrieved 31 January 2014.
  21. ^ Lohr, Steve (March 23, 2008). "Why Old Technologies Are Still Kicking". The New York Times. Retrieved Dec 25, 2013.
  22. ^ Ante, Spencer E. (July 22, 2010). "IBM Calculates New Mainframes Into Its Future Sales Growth". The Wall Street Journal. Retrieved Dec 25, 2013.
  23. ^ Press, Gil. "From IBM Mainframe Users Group To Apple 'Welcome IBM. Seriously': This Week In Tech History". Forbes. Retrieved 2016-10-07.
  24. ^ "IBM Mainframe Ushers in New Era of Data Protection".
  25. ^ "IBM unveils new mainframe capable of running more than 12 billion encrypted transactions a day". CNBC.
  26. ^ High-Performance Graph Analysis Retrieved on February 15, 2012
  27. ^ Transaction Processing Performance Council Retrieved on December 25, 2009.
  28. ^ The "Top 500" list of High Performance Computing (HPC) systems Retrieved on July 19, 2016
  29. ^ The Graph 500 Archived 2011-12-27 at the Wayback Machine Retrieved on February 19, 2012
  30. ^ Resource consumption for billing and performance purposes is measured in units of a million service units (MSUs), but the definition of MSU varies from processor to processor so that MSUs are useless for comparing processor performance.
  31. ^ World's Top Supercomputer Retrieved on December 25, 2009
  32. ^ "Cell Broadband Engine Project Aims to Supercharge IBM Mainframe for Virtual Worlds". 26 April 2007.

External links

Computer hardware

Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, monitor, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes or modifications; whereas software is "soft" because it is easy to update or change. Intermediate between software and hardware is "firmware", which is software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing.

Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.


ESCON (Enterprise Systems Connection) is a data connection created by IBM, and is commonly used to connect their mainframe computers to peripheral devices such as disk storage and tape drives. ESCON is an optical fiber, half-duplex, serial interface. It originally operated at a rate of 10 Mbyte/s, which was later increased to 17 Mbyte/s. The current maximum distance is 43 kilometers.

ESCON was introduced by IBM in the early 1990s. It replaced the older, slower (4.5 Mbyte/s), copper-based, parallel IBM System/360 Bus and Tag channel technology of 1960-1990 era mainframes. Optical fiber is smaller in diameter and weight, and hence could save installation costs. Space and labor could also be reduced when fewer physical links were required, thanks to ESCON's switching features. ESCON is being supplanted by the substantially faster FICON, which runs over Fibre Channel.

ESCON allows the establishment and reconfiguration of channel connections dynamically, without having to take equipment off-line and manually move the cables. ESCON supports channel connections using serial transmission over a pair of fibers. The ESCON Director supports dynamic switching (which could be achieved prior to ESCON, but not with IBM-only products). It also allows the distance between units to be extended up to 60 km over a dedicated fiber. “Permanent virtual circuits” are supported through the switch.

ESCON switching has advantages over a collection of point-to-point links. A peripheral previously capable of accessing a single mainframe can now be connected simultaneously to up to eight mainframes, providing peripheral sharing.

The ESCON interface specifications were adopted in 1996 by ANSI X3T1 committee as the SBCON standard, which is now managed by X3T11.

End-of-Transmission character

In telecommunication, an End-of-Transmission character (EOT) is a transmission control character. Its intended use is to indicate the conclusion of a transmission that may have included one or more texts and any associated message headings. An EOT is often used to initiate other functions, such as releasing circuits, disconnecting terminals, or placing receive terminals in a standby condition. Its most common use today is to cause a Unix terminal driver to signal end of file and thus exit programs that are awaiting input.

In ASCII and Unicode, the character is encoded at U+0004. It can be referred to as Ctrl+D, or ^D in caret notation. Unicode provides the character U+2404 ␄ SYMBOL FOR END OF TRANSMISSION for when EOT needs to be displayed graphically. In addition, U+2301 ⌁ ELECTRIC ARROW can also be used as a graphic representation of EOT; it is defined in Unicode as "symbol for End of Transmission".
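The encodings above can be checked in a few lines of Python (a simple illustration of the code points, not of terminal driver behavior):

```python
# EOT is code point 4 in both ASCII and Unicode; pressing Ctrl+D on a
# Unix terminal in canonical mode sends this character, which the
# terminal driver turns into an end-of-file indication.
EOT = "\x04"
print(ord(EOT))   # 4

# Printable stand-ins defined by Unicode:
print("\u2404")   # SYMBOL FOR END OF TRANSMISSION
print("\u2301")   # ELECTRIC ARROW
```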

IBM 7040

The IBM 7040 was a historic but short-lived model of transistor computer built in the 1960s.

It was announced by IBM in December 1961, but did not ship until April 1963. A later member of the IBM 700/7000 series of scientific computers, it was a scaled-down version of the IBM 7090, though not fully compatible with it. Some 7090 features, including index registers, character instructions and floating point, were extra-cost options. It also featured a different input/output architecture, based on the IBM 1414 data synchronizer, allowing more modern IBM peripherals to be used. A compatible, higher-performance model, the 7044, was announced at the same time.

Peter Fagg headed the development of the 7040 under executive Bob O. Evans.

A number of IBM 7040 and 7044 computers were shipped, but the line was eventually made obsolete by the IBM System/360 family, announced in 1964. The schedule delays caused by IBM's multiple incompatible architectures provided motivation for the unified System/360 family. The 7040 proved popular for use at universities, due to its comparatively low price. For example, one was installed in May 1965 at Columbia University.

One of the first in Canada was at the University of Waterloo, bought by professor J. Wesley Graham. A team of students was frustrated with the slow performance of the Fortran compiler. In the summer of 1965 they wrote the WATFOR compiler for their 7040, which became popular with many newly formed computer science departments.

IBM also offered the 7040 (or 7044) as an input-output processor attached to a 7090, in a configuration known as the 7090/7040 Direct Coupled System (DCS). Each computer was slightly modified to be able to interrupt the other.

IBM used similar numbers much later for a model of its eServer pSeries 690 RS6000 architecture. The 7040-681, for example, was withdrawn in 2005.

IBM 7080

The IBM 7080 was a variable word length BCD transistor computer in the IBM 700/7000 series commercial architecture line, introduced in August 1961, that provided an upgrade path from the vacuum tube IBM 705 computer.

The 7080 weighed about 19,700 pounds (9.9 short tons; 8.9 t).

After the introduction of the IBM 7070, in June 1960, as an upgrade path for both the IBM 650 and IBM 705 computers, IBM realized that it was so incompatible with the 705 that few users of that system wanted to upgrade to the 7070. That prompted the development of the 7080, which was fully compatible with all models of the 705 and added many improvements.

IBM 7090/94 IBSYS

IBSYS is the discontinued tape-based operating system that IBM supplied with its IBM 7090 and IBM 7094 computers. A similar operating system (but with several significant differences), also called IBSYS, was provided with IBM 7040 and IBM 7044 computers. IBSYS was based on FORTRAN Monitor System (FMS) and SHARE Operating System.

IBSYS itself was really a basic monitor program that read control card images placed between the decks of program and data cards of individual jobs. An IBSYS control card began with a "$" in column 1, immediately followed by a Control Name that selected the various IBSYS utility programs needed to set up and run the job. These card deck images were usually read from magnetic tapes prepared offline, not directly from the punched card reader.
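The control-card convention described above is simple enough to sketch; the following is a hypothetical illustration in Python, not actual IBSYS code, and the card names used are only examples:

```python
# A minimal sketch of the IBSYS monitor's control-card convention:
# a "$" in column 1, immediately followed by a Control Name that
# selects a utility program.
def control_name(card: str):
    """Return the Control Name of a card image, or None if the card
    is an ordinary program or data card."""
    if not card.startswith("$"):
        return None
    fields = card[1:].split()
    return fields[0] if fields else None

print(control_name("$IBJOB"))      # IBJOB
print(control_name("DATA CARD"))   # None
```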

IBM Airline Control Program

IBM Airline Control Program, or ACP, is a discontinued operating system developed by IBM beginning about 1965. In contrast to previous airline transaction processing systems, the most notable aspect of ACP is that it was designed to run on most models of the IBM System/360 mainframe computer family. This departed from the earlier model in which each airline had a different, machine-specific transaction system.

Development began with SABRE (Semi-Automatic Business Research Environment), Deltamatic, and PANAMAC. From these, the Programmed Airline Reservations System (PARS) was developed. In 1969 the control program, ACP, was separated from PARS, with PARS keeping the functions for processing airline reservations and related data.

In December 1979, ACP became known as ACP/TPF and then just TPF (Transaction Processing Facility) as the transaction operating system became more widely implemented by businesses other than the major airlines, such as online credit card processing, hotel and rental car reservations, police emergency response systems, and package delivery systems.

The last "free" release of ACP, 9.2.1, was intended for use in bank card and similar applications. It was shipped on a "mini-reel" which contained a complete ACP system, and its libraries for restoration to IBM 3340 DASD packs. From that complete system one could easily create derivative works. A hypervisor was included, which allowed OS/370 VS1 or VS2 (SVS or MVS) to be run as a "guest" OS under ACP itself. The end-user documentation, which was shipped with the tape, took almost 60 linear inches of shelf space.

See also IBM Airline Control System (ALCS), a variant of TPF specially designed to provide the benefits of TPF (very high speed, high volume, high availability transaction processing) along with advantages, such as easier integration into the data center, that come from running on a standard IBM operating system platform.


The ILLIAC I (Illinois Automatic Computer), a pioneering computer built in 1952 by the University of Illinois, was the first computer built and owned entirely by a US educational institution.

The project was the brainchild of Ralph Meagher and Abraham H. Taub, who both were associated with Princeton's Institute for Advanced Study before coming to the University of Illinois. The ILLIAC I became operational on September 1, 1952. It was the second of two identical computers, the first of which was ORDVAC, also built at the University of Illinois. These two machines were the first pair of machines to run the same instruction set.

ILLIAC I was based on the IAS von Neumann architecture as described by mathematician John von Neumann in his influential First Draft of a Report on the EDVAC. Unlike most computers of its era, the ILLIAC I and ORDVAC computers were twin copies of the same design, with software compatibility. The computer had 2,800 vacuum tubes, measured 10 ft (3 m) by 2 ft (0.6 m) by 8½ ft (2.6 m) (L×B×H), and weighed 4,000 pounds (2.0 short tons; 1.8 t). ILLIAC I was very powerful for its time; in 1956 it had more computing power than all of Bell Labs.

Because the lifetime of the tubes within ILLIAC was about a year, the machine was shut down every day for "preventive maintenance" when older vacuum tubes would be replaced in order to increase reliability. Visiting scholars from Japan assisted in the design of the ILLIAC series of computers, and later developed the MUSASINO-1 computer in Japan. ILLIAC I was retired in 1962, when the ILLIAC II became operational.

M series (computer)

M-20, M-220 and M222 were a range of general-purpose computers designed and manufactured in the USSR.

These computers were developed by the Scientific Research Institute of Electronic Machines (NIIEM) and built at the Moscow Plant of Calculating and Analyzing Machines (SAM) and at the Kazan Plant of Computing Machines (under the Ministry of Radio Industry of the USSR).

Minsk family of computers

The Minsk family of mainframe computers was developed and produced in the Byelorussian SSR from 1959 to 1975.

The MINSK-1 was a vacuum-tube digital computer that went into production in 1960. The MINSK-2 was a solid-state digital computer that went into production in 1962. The MINSK-22 was a modified version of the MINSK-2 that went into production in 1965.

The MINSK-23 went into production in 1966.

The most advanced model was Minsk-32, developed in 1968. It supported COBOL, FORTRAN and ALGAMS (a version of ALGOL). This and earlier versions also used a machine-oriented language called AKI (AvtoKod "Inzhener", i.e., "Engineer's Autocode"). It stood somewhere between the native assembly language SSK (Sistema Simvolicheskogo Kodirovaniya, or "System of symbolic coding") and higher-level languages, like FORTRAN.

The word size was 31 bits for Minsk-1 and 37 bits for the other models.

At one point the Minsk-222 (an upgraded prototype based on the most popular model, the Minsk-22) and the Minsk-32 were considered as a potential base for a future unified line of mutually compatible mainframes, the line that later became the ES EVM. Despite their popularity with users, their good match with the Soviet technological base, and their familiarity to both programmers and technicians, they lost out to the proposal to copy the IBM System/360 line: the ability to reuse all of the software already written for the System/360 was deemed more important.


OS/390 is an IBM operating system for the System/390 IBM mainframe computers.

OS/390 was introduced in late 1995 in an effort to simplify the packaging and ordering of the key entitled elements needed for a fully functional MVS operating system package. These elements included, but were not limited to:

Data Facility Storage Management Subsystem Data Facility Product (DFP) (Provides access methods to enable I/O to, e.g., DASD subsystems, printers, Tape; provides utilities and program management)

Job Entry Subsystem (JES) (Provides ability to submit batch work and manage print)

IBM Communications Server (Provides the VTAM and TCP/IP communications protocols)

An additional benefit of the OS/390 packaging concept was improved reliability, availability and serviceability (RAS) for the operating system, as the number of different combinations of elements that a customer could order and run was drastically reduced. This reduced the overall time required for customers to test and deploy the operating system in their environments, as well as the number of customer-reported problems (PMRs), errors (APARs) and fixes (PTFs) arising from variances in element levels.

In December 2001 IBM extended OS/390 to support 64-bit zSeries processors, added various other improvements, and renamed the result z/OS. IBM ended support for the older OS/390-branded versions in late 2004.

Object access method

Object access method (OAM) is an access method under z/OS which is designed for the storage of large numbers of large files, such as images. It has a number of distinguishing features compared to other access methods such as VSAM:

OAM datasets do not have an internal record structure; they are accessed as binary data streams.

OAM datasets are not directly cataloged. Rather, they are stored in OAM collections, and only the OAM collection itself is cataloged. This prevents the catalog from being overloaded with large numbers of (e.g., image) files.

OAM is used in conjunction with DB2. An example use case for OAM would be storing medical images in a DB2 database running under z/OS.
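The catalog-per-collection idea can be sketched with a simple in-memory model. This is a conceptual illustration only, not the actual OAM API: the names `catalog`, `collections` and `store_object` are invented for the sketch.

```python
# Conceptual sketch (not the real OAM interface): objects are raw byte
# streams grouped into collections, and only the collection name is
# registered in the catalog -- the individual objects are not.

catalog: set[str] = set()                      # cataloged entries
collections: dict[str, dict[str, bytes]] = {}  # collection -> {object name: bytes}

def store_object(collection: str, name: str, data: bytes) -> None:
    if collection not in collections:
        collections[collection] = {}
        catalog.add(collection)              # only the collection is cataloged
    collections[collection][name] = data     # the object itself never is

store_object("XRAY.IMAGES", "patient42.scan1", b"\x89PNG...")
store_object("XRAY.IMAGES", "patient42.scan2", b"\x89PNG...")
assert catalog == {"XRAY.IMAGES"}            # one catalog entry, many objects
```

However many objects are stored, the catalog holds a single entry per collection, which is exactly the overload-avoidance property described above.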

Portable data terminal

A portable data terminal, or PDT for short, is an electronic device that is used to enter or retrieve data via wireless transmission (WLAN or WWAN). They have also more recently been called enterprise digital assistants (EDAs), data capture mobile devices, batch terminals, or simply portables.

They can also serve as barcode readers, and they are used in large stores, warehouses, hospitals, or in the field to access a database from a remote location. Some also have a touch screen, IrDA, Bluetooth, a memory card slot, or one or more data capture devices.

PDTs frequently run wireless device management software that allows them to interact with a database or software application hosted on a server or mainframe computer.

The boundaries among PDAs, smartphones and EDAs can be blurred, given the wide array of features and functions they share. EDAs attempt to distinguish themselves with a pre-defined requirement for constant long-term daily operation (normally at least 8 hours), a higher-than-normal impact/drop-test rating, and an ingress protection rating of no less than IP54. Most have at least one data collection function, e.g., a barcode or RFID reader.

Proprietary hardware

Proprietary hardware is computer hardware whose interface is controlled by the proprietor, often under patent or trade-secret protection.

Historically, most early computer hardware was designed as proprietary until the 1980s, when the IBM PC changed this paradigm. Earlier, in the 1970s, many vendors tried to challenge IBM's monopoly in the mainframe computer market by reverse engineering and producing hardware components electrically compatible with IBM's expensive equipment and (usually) able to run the same software. Those vendors were nicknamed plug-compatible manufacturers (PCMs).

Queued Telecommunications Access Method

Queued Telecommunications Access Method (QTAM) is an IBM System/360 communications access method incorporating built-in queuing. QTAM was an alternative to the lower-level Basic Telecommunications Access Method (BTAM).

Strela computer

The Strela computer (Russian: ЭВМ Стрела, "arrow") was the first mainframe computer manufactured serially in the Soviet Union, beginning in 1953. This first-generation computer had 6,200 vacuum tubes and 60,000 semiconductor diodes.

Strela's speed was 2,000 operations per second. Its floating-point arithmetic used 43-bit words, with a signed 35-bit mantissa and a signed 6-bit exponent. Its Williams-tube working memory (RAM) held 2,048 words. It also had read-only semiconductor-diode memory for programs. Data input was from punched cards or magnetic tape; data output was to magnetic tape, punched cards or a wide printer. The last version of Strela used a 4,096-word magnetic drum rotating at 6,000 rpm.
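As a rough illustration of the word format, the following sketch packs a signed 35-bit mantissa and a signed 6-bit exponent into a single 43-bit integer (1 + 35 + 1 + 6 = 43 bits). The sign-and-magnitude encoding and the bit ordering are assumptions for illustration; the actual Strela bit layout is not specified here.

```python
# Illustrative 43-bit floating-point word in the Strela style:
# [mantissa sign | 35-bit mantissa magnitude | exponent sign | 6-bit exponent magnitude]
# Sign-and-magnitude layout is an assumption, not the documented hardware format.

def pack(mantissa: int, exponent: int) -> int:
    """Pack a signed mantissa and exponent into one 43-bit word."""
    assert -(1 << 35) < mantissa < (1 << 35)
    assert -(1 << 6) < exponent < (1 << 6)
    m_sign, m_mag = (1 if mantissa < 0 else 0), abs(mantissa)
    e_sign, e_mag = (1 if exponent < 0 else 0), abs(exponent)
    return (m_sign << 42) | (m_mag << 7) | (e_sign << 6) | e_mag

def unpack(word: int) -> tuple[int, int]:
    """Recover the signed mantissa and exponent from a 43-bit word."""
    m_sign = (word >> 42) & 1
    m_mag = (word >> 7) & ((1 << 35) - 1)
    e_sign = (word >> 6) & 1
    e_mag = word & ((1 << 6) - 1)
    return (-m_mag if m_sign else m_mag, -e_mag if e_sign else e_mag)

assert unpack(pack(12345, -7)) == (12345, -7)
```

The round-trip assertion checks only that the packing is self-consistent; the represented value would be mantissa × 2^exponent under the usual binary floating-point interpretation.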

While Yuri Bazilevsky was officially Strela's chief designer, Bashir Rameyev, who developed the project prior to Bazilevsky's appointment, could be considered its main inventor. Strela was constructed at the Special Design Bureau 245 (Argon R&D Institute since 1986) in Moscow.

Strelas were manufactured by the Moscow Plant of Computing-Analytical Machines during 1953–1957; 7 copies were manufactured. They were installed in the Computing Centre of the USSR Academy of Sciences, Keldysh Institute of Applied Mathematics, Moscow State University, and in computing centres of some ministries related to defense and economic planning.

In 1954, the designers of Strela were awarded the Stalin Prize of 1st degree (Bashir Rameyev, Yu. Bazilevsky, V. Alexandrov, D. Zhuchkov, I. Lygin, G. Markov, B. Melnikov, G. Prokudayev, N. Trubnikov, A. Tsygankin, Yu. Shcherbakov, L. Larionova).


A sysop (an abbreviation of system operator) is an administrator of a multi-user computer system, such as a bulletin board system (BBS) or an online service virtual community. The phrase may also be used to refer to administrators of other Internet-based network services.

Co-sysops are users who may be granted certain admin privileges on a BBS. Generally, they help validate users and monitor discussion forums. Some co-sysops serve as file clerks, reviewing, describing, and publishing newly uploaded files into appropriate download directories.

Historically, the term system operator applied to operators of any computer system, especially a mainframe computer. In general, a sysop is a person who oversees the operation of a server, typically in a large computer system. Usage of the term became popular in the late 1980s and 1990s, originally in reference to BBS operators. A person with equivalent functions on a network host or server is typically called a sysadmin, short for system administrator.

Because such duties were often shared with those of the sysadmin prior to the advent of the World Wide Web, the term sysop is often used more generally to refer to an administrator or moderator, such as a forum administrator. Hence, the term sysadmin is used to distinguish the professional position of a network operator.


TOPS-10 System (Timesharing / Total Operating System-10) is a discontinued operating system from Digital Equipment Corporation (DEC) for the PDP-10 (or DECsystem-10) mainframe computer family. Launched in 1967, TOPS-10 evolved from the earlier "Monitor" software for the PDP-6 and PDP-10 computers; this was renamed to TOPS-10 in 1970.

Telecommunications Access Method

Telecommunications Access Method (TCAM) is an access method, in IBM's OS/360 and successor operating systems on IBM System/360 and later, that provides access to terminal units within a teleprocessing network.


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.