Massively parallel

In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).

In one approach, e.g., grid computing, the processing power of many computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system in which the grid provides power only on a best-effort basis.[2]
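The best-effort character of such a grid can be illustrated with a minimal scheduling sketch. All names here (`dispatch_opportunistically`, the host records) are hypothetical, for illustration only; this is not BOINC's actual scheduler.

```python
import random

def dispatch_opportunistically(work_units, volunteers, rng):
    """Toy best-effort scheduler: each work unit is offered to a randomly
    chosen volunteer, which may be offline; undelivered units are retried
    on a later pass, mirroring how a volunteer grid provides computing
    power only when hosts happen to be available."""
    completed, pending = [], list(work_units)
    while pending:
        unit = pending.pop(0)
        host = rng.choice(volunteers)
        if host["online"]:
            completed.append((unit, host["name"]))
        else:
            pending.append(unit)  # no guarantee of when; retry later
    return completed

rng = random.Random(0)
hosts = [{"name": "host-a", "online": True},
         {"name": "host-b", "online": False}]
results = dispatch_opportunistically(range(4), hosts, rng)
print(len(results))  # all 4 units eventually complete on the online host
```

Note that completion time is unbounded in principle: the scheduler only promises that work is retried, not when it finishes, which is exactly the best-effort contract described above.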

In another approach, a large number of processors are used in close proximity to each other, e.g., in a computer cluster. In such a centralized system, the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[3]
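The appeal of a torus interconnect is that every axis wraps around, so even edge nodes keep a full set of neighbors and worst-case hop counts stay low. A small illustrative sketch (the function name and dimensions are invented for this example):

```python
def torus_neighbors(node, dims):
    """Return the six nearest neighbors of a node in a 3D torus.

    Each axis wraps around modulo its extent, so nodes on a face link
    back to the opposite face -- the wraparound that distinguishes a
    torus from a plain 3D mesh."""
    x, y, z = node
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

# A corner node in a 4x4x4 torus still has six neighbors, thanks to wraparound.
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
```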

The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
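The channel-based programming model can be sketched with ordinary threads and queues standing in for processors and interconnect channels. This is a conceptual toy, not any vendor's MPPA API; the names are invented for the example.

```python
import queue
import threading

def kernel(fn, inbox, outbox):
    """One 'processor' in the array: read from its input channel, apply
    its operation, write to its output channel (None signals shutdown)."""
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)  # propagate shutdown downstream

# A two-stage pipeline standing in for two CPUs linked by a channel.
a_to_b, b_out, src = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=kernel, args=(lambda x: x * x, src, a_to_b)).start()
threading.Thread(target=kernel, args=(lambda x: x + 1, a_to_b, b_out)).start()

for v in [1, 2, 3]:
    src.put(v)
src.put(None)

results = []
while (r := b_out.get()) is not None:
    results.append(r)
print(results)  # [2, 5, 10]
```

Each stage runs concurrently and communicates only through its channels, which is the essential property the MPPA model scales up to hundreds or thousands of processors.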

The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures were the second most common supercomputer implementation after clusters.[4]

Data warehouse appliances such as Teradata, Netezza, or Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.

  1. ^ Grid computing: experiment management, tool integration, and scientific workflows by Radu Prodan, Thomas Fahringer 2007 ISBN 3-540-69261-4 pages 1–4
  2. ^ Parallel and Distributed Computational Intelligence by Francisco Fernández de Vega 2010 ISBN 3-642-10674-9 pages 65–68
  3. ^ Knight, Will: "IBM creates world's most powerful computer", news service, June 2007
  4. ^

Ambric, Inc. was a designer of computer processors that developed the Ambric architecture. Its Am2045 Massively Parallel Processor Array (MPPA) chips were primarily used in high-performance embedded systems such as medical imaging, video, and signal processing.

Ambric was founded in 2003 in Beaverton, Oregon. The company developed and introduced the Am2045 and its software tools in 2007, but fell victim to the crash of 2008. Ambric's Am2045 and tools remained available through Nethra Imaging, Inc., which closed in 2012.

Apache Impala

Apache Impala is an open source massively parallel processing (MPP) SQL query engine for data stored in a computer cluster running Apache Hadoop. Impala has been described as the open-source equivalent of Google F1, which inspired its development in 2012.

Apache Phoenix

Apache Phoenix is an open source, massively parallel, relational database engine supporting OLTP for Hadoop, using Apache HBase as its backing store. Phoenix provides a JDBC driver that hides the intricacies of the NoSQL store, enabling users to create, delete, and alter SQL tables, views, indexes, and sequences; insert and delete rows singly and in bulk; and query data through SQL. Phoenix compiles queries and other statements into native NoSQL store APIs rather than using MapReduce, enabling the building of low-latency applications on top of NoSQL stores.

Cellular architecture

A cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications.

Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM.

Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software.
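The thread-level parallelism such designs exploit can be sketched in miniature: split the data among many concurrent thread units and combine their partial results. The function and chunking scheme below are invented for illustration, not a cellular-architecture API.

```python
from concurrent.futures import ThreadPoolExecutor

def process_cell(cell_id, data):
    """One thread unit's share of the work: a partial sum of squares."""
    return cell_id, sum(x * x for x in data)

# Emulate a cell running several concurrent thread units over disjoint data.
chunks = [list(range(i, i + 4)) for i in range(0, 16, 4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(lambda args: process_cell(*args),
                             enumerate(chunks)))
total = sum(p for _, p in partials)
print(total)  # sum of squares of 0..15
```

The point of the sketch is structural: the programmer explicitly decomposes the problem into many threads, which is precisely the burden (and the opportunity) the low-level paradigm places on software.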

Connection Machine

A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at the Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence and symbolic processing, but later versions found greater success in the field of computational science.


The Cray-3/SSS (Super Scalable System) was a pioneering massively parallel supercomputer project that bonded a two-processor Cray-3 to a new SIMD processing unit based entirely in the computer's main memory. It was later considered as an add-on for the Cray T90 series in the form of the T94/SSS, but there is no evidence this was ever built.


FROSTBURG was a Connection Machine 5 (CM-5) massively parallel supercomputer used by the US National Security Agency (NSA) to perform higher-level mathematical calculations. The CM-5 was built by the Thinking Machines Corporation, based in Cambridge, Massachusetts, at a cost of US$25 million. The system was installed at NSA in 1991, and operated until 1997. It was the first massively parallel processing computer bought by NSA, originally containing 256 processing nodes. The system was upgraded in 1993 with another 256 nodes, for a total of 512 nodes. The system had a total of 500 billion 32-bit words (≈2 terabytes) of storage, 2.5 billion words (≈10 gigabytes) of memory, and could perform at a theoretical maximum 65.5 gigaFLOPS. The operating system CMost was based on Unix, but optimized for parallel processing.

FROSTBURG is now on display at the National Cryptologic Museum.

Goodyear MPP

The Goodyear Massively Parallel Processor (MPP) was a massively parallel processing supercomputer built by Goodyear Aerospace for the NASA Goddard Space Flight Center. It was designed to deliver enormous computational power at lower cost than other existing supercomputer architectures, by using thousands of simple processing elements rather than one or a few highly complex CPUs. Development of the MPP began circa 1979; it was delivered in May 1983, and was in general use from 1985 until 1991.

It was based on Goodyear's earlier STARAN array processor, a 4x256 1-bit processing element (PE) computer.

The MPP was a 128x128 two-dimensional array of 1-bit-wide PEs. In actuality, 132x128 PEs were configured: a 4x128 block was added for fault tolerance, able to substitute for up to 4 rows (or columns) of processors in the presence of faults.

The PEs operated in a single instruction, multiple data (SIMD) fashion—each PE performed the same operation simultaneously, on different data elements, under the control of a microprogrammed control unit.
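Both ideas, lockstep SIMD execution and spare-row substitution, can be emulated on a scaled-down grid. The sizes and names below are illustrative stand-ins for the MPP's 128x128 array with 4 spare rows, not a simulation of the real machine.

```python
# Emulate a tiny SIMD PE grid: one instruction, broadcast to every element.
ROWS, COLS, SPARES = 8, 8, 2   # stand-ins for the MPP's 128x128 + 4 spares

grid = [[r * COLS + c for c in range(COLS)] for r in range(ROWS + SPARES)]

def simd_step(grid, instruction, active_rows):
    """Apply the same operation to every active PE 'simultaneously';
    rows not in active_rows (faulty, or unused spares) are skipped."""
    for r in active_rows:
        grid[r] = [instruction(x) for x in grid[r]]

# Fault tolerance: substitute spare row 8 for a faulty row 3.
active = [r for r in range(ROWS) if r != 3] + [8]
simd_step(grid, lambda x: x + 1, active)
print(grid[3][0], grid[8][0])  # faulty row untouched, spare row updated
```

The single `instruction` argument is the SIMD essence: one operation, issued once by the control unit, applied across every data element in the array.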

After the MPP was retired in 1991, it was donated to the Smithsonian Institution, and is now in the collection of the National Air and Space Museum's Steven F. Udvar-Hazy Center. It was succeeded at Goddard by the MasPar MP-1 and Cray T3D massively parallel computers.

ICL Distributed Array Processor

The Distributed Array Processor (DAP), produced by International Computers Limited (ICL), was the world's first commercial massively parallel computer. The original paper study was completed in 1972 and building of the prototype began in 1974. The first machine was delivered to Queen Mary College in 1979.


Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics program from Sandia National Laboratories. LAMMPS makes use of the Message Passing Interface (MPI) for parallel communication and is free and open-source software, distributed under the terms of the GNU General Public License. LAMMPS was originally developed under a Cooperative Research and Development Agreement (CRADA) between two laboratories of the United States Department of Energy and three laboratories from private-sector firms. As of 2016, it is maintained and distributed by researchers at Sandia National Laboratories and Temple University.
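MPI-parallel molecular dynamics codes of this kind typically parallelize by spatial domain decomposition: each rank owns a slab of the simulation box and the atoms inside it. A one-dimensional sketch of that partitioning (pure Python standing in for MPI; the names are invented, not LAMMPS APIs):

```python
def decompose(atom_positions, n_ranks, box_length):
    """Spatial domain decomposition in 1D: each 'rank' owns an equal slab
    of the simulation box and the atoms whose coordinate falls inside it,
    as MPI-parallel MD codes commonly do."""
    slab = box_length / n_ranks
    domains = [[] for _ in range(n_ranks)]
    for x in atom_positions:
        domains[min(int(x // slab), n_ranks - 1)].append(x)
    return domains

atoms = [0.5, 2.5, 4.9, 5.1, 7.2, 9.8]
domains = decompose(atoms, 2, 10.0)
print(domains)  # [[0.5, 2.5, 4.9], [5.1, 7.2, 9.8]]
```

Because short-range forces depend mostly on nearby atoms, each rank can compute its domain largely independently, exchanging only boundary atoms with neighboring ranks.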


MPQC (Massively Parallel Quantum Chemistry) is an ab initio computational chemistry software program. Three features distinguish it from other quantum chemistry programs such as Gaussian and GAMESS: it is open-source, has an object-oriented design, and was created from the beginning as a parallel processing program. It is available in Ubuntu and Debian. MPQC provides implementations of a number of important methods for calculating electronic structure, including Hartree–Fock, Møller–Plesset perturbation theory (including its explicitly correlated linear R12 versions), and density functional theory.
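As a concrete example of the methods listed, the second-order Møller–Plesset (MP2) correlation energy takes the standard textbook spin-orbital form (general notation, not MPQC-specific):

```latex
E^{(2)} = \frac{1}{4} \sum_{i,j}^{\text{occ}} \sum_{a,b}^{\text{virt}}
\frac{\left| \langle ij \| ab \rangle \right|^{2}}
     {\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}
```

where the sums run over occupied (i, j) and virtual (a, b) Hartree–Fock spin orbitals, ⟨ij‖ab⟩ are antisymmetrized two-electron integrals, and ε are the corresponding orbital energies. The sum over all occupied-virtual index quadruples is what makes such methods natural targets for massive parallelism.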

Massive parallel sequencing

Massive parallel sequencing or massively parallel sequencing is any of several high-throughput approaches to DNA sequencing using the concept of massively parallel processing; it is also called next-generation sequencing (NGS) or second-generation sequencing. Some of these technologies emerged between 1994 and 1998 and have been commercially available since 2005. These technologies use miniaturized and parallelized platforms for sequencing of 1 million to 43 billion short reads (50–400 bases each) per instrument run.

Many NGS platforms differ in engineering configurations and sequencing chemistry. They share the technical paradigm of massive parallel sequencing via spatially separated, clonally amplified DNA templates or single DNA molecules in a flow cell. This design is very different from that of Sanger sequencing—also known as capillary sequencing or first-generation sequencing—that is based on electrophoretic separation of chain-termination products produced in individual sequencing reactions.

Massively parallel (disambiguation)

Massively parallel in computing is the use of a large number of processors to perform a set of computations in parallel (simultaneously).

Massively parallel may also refer to:

Massive parallel sequencing, or massively parallel sequencing, DNA sequencing using the concept of massively parallel processing

Massively parallel signature sequencing, a procedure used to identify and quantify mRNA transcripts

Massively parallel processor array

A massively parallel processor array, also known as a multi-purpose processor array (MPPA), is a type of integrated circuit which has a massively parallel array of hundreds or thousands of CPUs and RAM memories. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.

Massively parallel signature sequencing

Massive parallel signature sequencing (MPSS) is a procedure that is used to identify and quantify mRNA transcripts, resulting in data similar to serial analysis of gene expression (SAGE), although it employs a series of biochemical and sequencing steps that are substantially different.

Meiko Scientific

Meiko Scientific Ltd. was a British supercomputer company based in Bristol, founded by members of the design team working on the INMOS transputer microprocessor.


nCUBE was a series of parallel computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for its machines, but used server-class chips manufactured by a third party in massively parallel hardware deployments, primarily for on-demand video.


RaftLib is a portable parallel processing system that aims to provide extreme performance while increasing programmer productivity. It enables a programmer to assemble a massively parallel program (both local and distributed) using simple iostream-like operators. RaftLib handles threading, memory allocation, memory placement, and auto-parallelization of compute kernels. It enables applications to be constructed from chains of compute kernels forming a task and pipeline parallel compute graph. Programs are authored in C++ (although other language bindings are planned).
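The flavor of composing kernels with stream-like operators can be sketched in Python by overloading `>>` (RaftLib itself is C++; this toy class is an invented analogy, not the RaftLib API).

```python
class Kernel:
    """Toy compute kernel; '>>' chains kernels into a pipeline, loosely
    echoing RaftLib's iostream-like composition operators."""

    def __init__(self, fn):
        self.fns = [fn]

    def __rshift__(self, other):
        # 'a >> b' yields a new pipeline applying a's stages, then b's.
        chained = Kernel(lambda x: x)
        chained.fns = self.fns + other.fns
        return chained

    def run(self, items):
        # Push each item through every stage in order.
        for fn in self.fns:
            items = [fn(x) for x in items]
        return items

pipeline = Kernel(lambda x: x * 2) >> Kernel(lambda x: x + 1)
print(pipeline.run([1, 2, 3]))  # [3, 5, 7]
```

In a real task/pipeline-parallel runtime the stages would execute concurrently on separate threads or hosts; the operator syntax only declares the dataflow graph.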

Thinking Machines Corporation

Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of the time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were acquired in time by Sun Microsystems.


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.