Connection Machine

A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of Danny Hillis's doctoral research at the Massachusetts Institute of Technology (MIT) in the early 1980s on alternatives to the traditional von Neumann computer architecture. Starting with CM-1, the machines were originally intended for applications in artificial intelligence and symbolic processing, but later versions found greater success in computational science.

Thinking Machines CM-2 at the Computer History Museum in Mountain View, California. One of the face plates has been partly removed to show the circuit boards inside.

Origin of idea

Danny Hillis and Sheryl Handler founded Thinking Machines Corporation (TMC) in Waltham, Massachusetts, in 1983, moving in 1984 to Cambridge, MA. At TMC, Hillis assembled a team to develop what would become the CM-1 Connection Machine, a design for a massively parallel, hypercube-based arrangement of thousands of microprocessors that sprang from his MIT PhD thesis work in Electrical Engineering and Computer Science (1985).[1] The dissertation won the ACM Distinguished Dissertation prize in 1985,[2] and was presented as a monograph giving an overview of the philosophy, architecture, and software of the first Connection Machine, including information on its data routing between central processing unit (CPU) nodes, its memory handling, and the programming language Lisp as applied in the parallel machine.[1][3]

CM designs

Each CM-1 microprocessor has its own 4 kilobits of random-access memory (RAM), and the hypercube-based array of them was designed to perform the same operation on multiple data points simultaneously, i.e., to execute tasks in single instruction, multiple data (SIMD) fashion. The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes. Each subcube contains 16 printed circuit boards and a main processor called a sequencer. Each circuit board contains 32 chips. Each chip contains a router, 16 processors, and 16 RAMs. The CM-1 as a whole has a 12-dimensional hypercube-based routing network (connecting the 4,096 chips), a main RAM, and an input-output processor (a channel controller).

Each router contains 5 buffers to store the data being transmitted when a clear channel is not available. The engineers had originally calculated that 7 buffers per chip would be needed, but this made the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman had previously calculated that 5 buffers would be enough, using a differential equation involving the average number of 1 bits in an address. They resubmitted the design of the chip with only 5 buffers, and when they put the machine together, it worked fine. Each chip is connected to a switching device called a nexus.

The CM-1 uses Feynman's algorithm for computing logarithms, which he had developed at Los Alamos National Laboratory for the Manhattan Project. It is well suited to the CM-1 because it uses only shifting and adding, with a small table shared by all the processors. Feynman also discovered that the CM-1 would compute the Feynman diagrams for quantum chromodynamics (QCD) calculations faster than an expensive special-purpose machine developed at Caltech.[4][5]
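
As a rough illustration of the general shift-and-add technique for logarithms, the following Python sketch uses only repeated multiplication by (1 + 2^-k) factors, which in fixed-point hardware reduces to a shift and an add, plus a small shared table. This is a sketch of the technique only; the table size and the final correction step are assumptions, not Feynman's exact routine.

    import math

    def shift_add_log(x, bits=24):
        # Approximate ln(x) for 1 <= x < 2 with a small shared table of
        # ln(1 + 2**-k) constants. In fixed-point hardware, multiplying
        # by (1 + 2**-k) is just a shift and an add: p + (p >> k).
        table = [math.log(1.0 + 2.0 ** -k) for k in range(1, bits + 1)]
        p, r = 1.0, 0.0
        for k in range(1, bits + 1):
            f = 1.0 + 2.0 ** -k
            while p * f <= x:      # greedily apply each factor while staying <= x
                p *= f
                r += table[k - 1]
        return r + (x / p - 1.0)   # first-order correction for the tiny residual

    print(shift_add_log(1.5), math.log(1.5))   # both print ~0.405465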

To improve its commercial viability, TMC launched the CM-2 in 1987, adding Weitek 3132 floating point numeric coprocessors and more RAM to the system. Thirty-two of the original one-bit processors shared each numeric processor. The CM-2 can be configured with up to 512 MB of RAM, and a redundant array of independent disks (RAID) hard disk system, called a DataVault, of up to 25 GB. Two later variants of the CM-2 were also produced, the smaller CM-2a with either 4096 or 8192 single-bit processors, and the faster CM-200.

The light panels of FROSTBURG, a CM-5, on display at the National Cryptologic Museum. The panels were used to check the usage of the processing nodes, and to run diagnostics.

Due to its origins in AI research, the software for the CM-1/2/200 single-bit processor was influenced by the Lisp programming language and a version of Common Lisp, *Lisp (spoken: Star-Lisp), was implemented on the CM-1. Other early languages included Karl Sims' IK and Cliff Lasser's URDU. Much system utility software for the CM-1/2 was written in *Lisp. Many applications for the CM-2, however, were written in C*, a data-parallel superset of ANSI C.
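
For readers unfamiliar with the data-parallel style that *Lisp and C* exposed, the following Python/NumPy fragment is a loose modern analogue (an analogy only, shown in neither language's syntax): one array element stands in for each processor, a scalar-looking expression executes on all of them at once, and a boolean mask plays the role of the per-processor activity context.

    import numpy as np

    a = np.arange(65536, dtype=np.int32)   # one element per (virtual) processor
    b = a * 2 + 1                          # one "instruction", applied everywhere at once
    active = b % 3 == 0                    # per-processor activity flag
    b[active] = 0                          # only "active" processors store a result
    print(b[:6])                           # -> [ 1  0  5  7  0 11]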

With the CM-5, announced in 1991, TMC switched from the CM-2's hypercubic architecture of simple processors to a new and different multiple instruction, multiple data (MIMD) architecture based on a fat tree network of reduced instruction set computing (RISC) SPARC processors. To make programming easier, it was made to simulate a SIMD design. The later CM-5E replaces the SPARC processors with faster SuperSPARCs. A CM-5 was the fastest computer in the world in 1993 according to the TOP500 list, with 1,024 processors and an Rpeak of 131.0 GFLOPS, and for several years many of the top ten fastest computers were CM-5s.[6]

Visual design

Connection Machines were noted for their (intentionally) striking visual design. The CM-1 and CM-2 design teams were led by Tamiko Thiel.[7][8] The physical form of the CM-1, CM-2, and CM-200 chassis was a cube-of-cubes, referencing the machine's internal 12-dimensional hypercube network, with the red light-emitting diodes (LEDs), by default indicating the processor status, visible through the doors of each cube.

By default, when a processor is executing an instruction, its LED is on. In a SIMD program, the goal is to have as many processors as possible working on the program at the same time – indicated by all LEDs being steadily on. Those unfamiliar with the use of the LEDs wanted to see them blink – or even spell out messages to visitors. The result is that finished programs often had superfluous operations to blink the LEDs.

The CM-5, in plan view, had a staircase-like shape, and also had large panels of red blinking LEDs. Prominent sculptor-architect Maya Lin contributed to the CM-5 design.[9]

References in popular culture

A CM-5 was featured in the film Jurassic Park in the control room for the island (instead of a Cray X-MP supercomputer as in the novel).[10]

References

  1. ^ a b Hillis, W. Danny (1986). The Connection Machine. MIT Press. ISBN 0262081571.
  2. ^ "William Daniel Hillis - Award Winner". ACM Awards. Retrieved 30 April 2015.
  3. ^ Kahle, Brewster; Hillis, W. Daniel (1989). The Connection Machine Model CM-1 Architecture (Technical report). Cambridge, MA: Thinking Machines Corp. 7 pp. See [1]. Retrieved 25 April 2015.
  4. ^ Hillis, W. Daniel (1989). "Richard Feynman and The Connection Machine". Physics Today. American Institute of Physics. 42 (2): 78. Bibcode:1989PhT....42b..78H. doi:10.1063/1.881196. Archived from the original on 28 July 2009.
  5. ^ [2] – Text of Danny Hillis's Physics Today article on Feynman and the Connection Machine; also videos of Hillis: "How I met Feynman" and "Feynman's last days".
  6. ^ "November 1993". www.top500.org. Retrieved 16 January 2015.
  7. ^ Design Issues, Vol. 10, No. 1 (Spring 1994). Cambridge, MA: MIT Press. ISSN 0747-9360.
  8. ^ Thiel, Tamiko (Spring 1994). "The Design of the Connection Machine". Design Issues. 10 (1). Retrieved 16 January 2015.
  9. ^ "Bloodless Beige Boxes: The Story of an Artist and a Thinking Machine". IT History Society. 2 September 2014. Retrieved 16 January 2015.
  10. ^ Movie Quotes Database

Further reading

  • Hillis, D. (1982). "New Computer Architectures and Their Relationship to Physics or Why CS is No Good", Int. J. Theoretical Physics 21 (3/4), 255–262.
  • Lewis W. Tucker, George G. Robertson, "Architecture and Applications of the Connection Machine," Computer, vol. 21, no. 8, pp. 26–38, August, 1988.
  • Arthur Trew and Greg Wilson (eds.) (1991). Past, Present, Parallel: A Survey of Available Parallel Computing Systems. New York: Springer-Verlag. ISBN 0-387-19664-1
  • Charles E. Leiserson, Zahi S. Abuhamdeh, David C. Douglas, Carl R. Feynman, Mahesh N. Ganmukhi, Jeffrey V. Hill, W. Daniel Hillis, Bradley C. Kuszmaul, Margaret A. St. Pierre, David S. Wells, Monica C. Wong, Shaw-Wen Yang, and Robert Zak. "The Network Architecture of the Connection Machine CM-5". Proceedings of the fourth annual ACM Symposium on Parallel Algorithms and Architectures. 1992.
  • W. Daniel Hillis and Lewis W. Tucker. The CM-5 Connection Machine: A Scalable Supercomputer. In Communications of the ACM, Vol. 36, No. 11 (November 1993).

Records

World's most powerful supercomputer: Thinking Machines CM-5/1024 (June 1993)
Preceded by: NEC SX-3/44 (20.0 gigaflops)
Succeeded by: Numerical Wind Tunnel (124.0 gigaflops)

*Lisp

*Lisp (or StarLisp) is a programming language, a dialect of the language Lisp. It was conceived of in 1985 by two employees of the Thinking Machines Corporation, Cliff Lasser and Steve Omohundro, as a way to provide an efficient yet high-level language for programming the nascent Connection Machine (CM).

1-bit architecture

A 1-bit computer architecture is an instruction set architecture for a processor whose datapath and data registers are 1 bit (1/8 octet) wide.

Examples of 1-bit computers built from discrete-logic SSI chips are the Wang 700 (1968/1970) and Wang 500 (1970/1971) calculators, as well as the Wang 1200 (1971/1972) word processor series, all from Wang Laboratories.

An example of a 1-bit architecture that was marketed as a CPU is the Motorola MC14500B Industrial Control Unit (ICU), introduced in 1977 and manufactured at least into the mid-1990s. One of the computers known to be based on this CPU was the WDR 1-bit computer. A typical sequence of instructions from a program for a 1-bit architecture might be:

  • load digital input 1 into a 1-bit register;
  • OR the value in the 1-bit register with input 2, leaving the result in the register;
  • write the value in the 1-bit register to output 1.

This architecture was considered superior for programs making decisions rather than performing arithmetic computations, for ladder logic, as well as for serial data processing. There are also several design studies for 1-bit architectures in academia, and corresponding 1-bit logic can also be found in programming.
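
As a concrete illustration of the three-step sequence above, here is a minimal Python sketch of a 1-bit machine, loosely modeled on the MC14500B's LD/OR/STO instructions; the dictionary-based I/O is an invention for the example, not the real chip's interface.

    def run(program, inputs):
        rr = 0                      # the single 1-bit result register
        outputs = {}
        for op, arg in program:
            if op == "LD":          # load an input bit into the register
                rr = inputs[arg] & 1
            elif op == "OR":        # OR an input bit into the register
                rr |= inputs[arg] & 1
            elif op == "STO":       # store the register to an output bit
                outputs[arg] = rr
        return outputs

    # The sequence from the text: OUT1 = IN1 OR IN2.
    print(run([("LD", "IN1"), ("OR", "IN2"), ("STO", "OUT1")],
              {"IN1": 0, "IN2": 1}))   # -> {'OUT1': 1}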

Other examples of 1-bit architectures are programmable logic controllers (PLCs), programmed in instruction list (IL).

Several early massively parallel computers used 1-bit architectures for their processors as well. Examples include the Goodyear MPP and the Connection Machine. By using a 1-bit architecture for the individual processors, a very large array (e.g., the Connection Machine had 65,536 processors) could be constructed with the chip technology available at the time. In this case the slow computation of a 1-bit processor was traded off against the large number of processors.

1-bit CPUs are now obsolete; few kinds were ever produced, and none are known to be available from major computer component distributors (as of 2019, a few MC14500B chips were still available from brokers for obsolete parts).

C*

C* is an object-oriented, data-parallel superset of ANSI C with synchronous semantics.

Charles E. Leiserson

Charles Eric Leiserson is a computer scientist, specializing in the theory of parallel computing and distributed computing, and particularly practical applications thereof. As part of this effort, he developed the Cilk language for multithreaded programming, which uses a provably good work-stealing algorithm for scheduling. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM-5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, but nevertheless use cache near-optimally. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest, and Clifford Stein.

Leiserson received a B.S. degree in computer science and mathematics from Yale University in 1975 and a Ph.D. degree in computer science from Carnegie Mellon University in 1981, where his advisors were Jon Bentley and H. T. Kung.

He then joined the faculty of the Massachusetts Institute of Technology, where he is now a Professor. In addition, he is a principal in the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory, and he was formerly Director of Research and Director of System Architecture for Akamai Technologies. He was Founder and Chief Technology Officer of Cilk Arts, Inc., a start-up that developed Cilk technology for multicore computing applications. (Cilk Arts, Inc. was acquired by Intel in 2009.)

Leiserson's dissertation, Area-Efficient VLSI Computation, won the first ACM Doctoral Dissertation Award. In 1985, the National Science Foundation awarded him a Presidential Young Investigator Award. He is a Fellow of the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), and the Society for Industrial and Applied Mathematics (SIAM). He received the 2014 Taylor L. Booth Education Award from the IEEE Computer Society "for worldwide computer science education impact through writing a best-selling algorithms textbook, and developing courses on algorithms and parallel programming." He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development." He was also cited for "distinguished mentoring of computer science leaders and students." He received the 2013 ACM Paris Kanellakis Theory and Practice Award for "contributions to robust parallel and distributed computing."

Danny Hillis

William Daniel "Danny" Hillis (born September 25, 1956) is an American inventor, entrepreneur, scientist, and writer who is particularly known for his work in computer science. He is best known as the founder of Thinking Machines Corporation, a parallel supercomputer manufacturer, and subsequently was a fellow at Walt Disney Imagineering. More recently, Hillis co-founded Applied Minds, the technology R&D think tank. Currently, he is co-founder of Applied Invention, an interdisciplinary group of engineers, scientists, and artists that develops technology solutions in partnership with leading companies and entrepreneurs.

Hillis is a visiting professor at the MIT Media Lab, Judge Widney Professor of Engineering and Medicine at the University of Southern California, Professor of Research Medicine at the Keck School of Medicine of USC, and Research Professor of Engineering at the USC Viterbi School of Engineering. He is the principal investigator of the National Cancer Institute's Physical Sciences in Oncology Laboratory at USC.

FROSTBURG

FROSTBURG was a Connection Machine 5 (CM-5) massively parallel supercomputer used by the US National Security Agency (NSA) to perform higher-level mathematical calculations. The CM-5 was built by the Thinking Machines Corporation, based in Cambridge, Massachusetts, at a cost of US$25 million. The system was installed at the NSA in 1991 and operated until 1997. It was the first massively parallel processing computer bought by the NSA, originally containing 256 processing nodes. The system was upgraded in 1993 with another 256 nodes, for a total of 512. The system had a total of 500 billion 32-bit words (≈2 terabytes) of storage and 2.5 billion words (≈10 gigabytes) of memory, and could perform at a theoretical maximum of 65.5 gigaFLOPS. Its operating system, CMost, was based on Unix but optimized for parallel processing.
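
The quoted capacities are consistent with 4-byte (32-bit) words, as this quick Python check shows; the even division of peak performance across nodes is an assumption for illustration.

    print(500e9 * 4 / 1e12)    # storage: 2.0 terabytes
    print(2.5e9 * 4 / 1e9)     # memory: 10.0 gigabytes
    print(65.5e9 / 512 / 1e6)  # peak per node: ~128 MFLOPS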

FROSTBURG is now on display at the National Cryptologic Museum.

George G. Robertson

George G. Robertson is an American information visualization expert and a Senior Researcher in the Visualization and Interaction (VIBE) Research Group at Microsoft Research. With Stuart K. Card, Jock D. Mackinlay, and others, he invented a number of information visualization techniques.

Hypertree network

A hypertree network is a network topology that shares some traits with the binary tree network. It is a variation of the fat tree architecture.

A hypertree of degree k and depth d may be visualized as a 3-dimensional object whose front view is the top-down complete k-ary tree of depth d and whose side view is the bottom-up complete binary tree of depth d. Hypertrees were proposed in 1981 by James R. Goodman and Carlo Séquin.

Hypertrees are a choice for parallel computer architectures, used, for example, in the Connection Machine CM-5.

Karl Sims

Karl Sims is a computer graphics artist and researcher, who is best known for using particle systems and artificial life in computer animation.

Sims received a B.S. from MIT in 1984, and an M.S. from the MIT Media Lab in 1987. He worked for Thinking Machines as an artist-in-residence, for Whitney-Demos Production as a researcher, and co-founded Optomystic. Sims was the CEO of GenArts, a Cambridge, Massachusetts company that develops special effects plugins used in film and advertising. In June 2008 he moved to a role on the board of directors and Katherine Hayes became CEO of GenArts.

At Optomystic, Sims developed software for the Connection Machine 2 (CM-2) that animated the water from drawings of a deluge by Leonardo da Vinci, used in Mark Whitney's film Excerpts from Leonardo's Deluge.

Sims' animations Particle Dreams and Panspermia used the CM-2 to animate and render various complex phenomena via particle systems. Panspermia was also used as the video for Pantera's cover of Black Sabbath's Planet Caravan.

Sims wrote landmark papers on virtual creatures and artificial evolution for computer art. His virtual creatures used an artificial neural network to process input from virtual sensors and act on virtual muscles between cuboid 'limbs'. The creatures were evolved to display multiple modes of water- and land-based movement, such as swimming like a sea snake or fish, jumping, and tumbling (walking was not achieved). The creatures were also co-evolved in different species to compete for possession of a virtual cube, displaying the red queen effect. The cover of Artificial Life: An Overview by Chris Langton notably used an image of the creatures generated by Sims. In 1997, Sims created the interactive installation Galápagos for the NTT InterCommunication Center in Tokyo; in this installation, viewers help evolve 3D animated creatures by selecting which ones will be allowed to live and produce new, mutated offspring.

His paper "Artificial Evolution for Computer Graphics" described the application of genetic algorithms to generate abstract 2D images from complex mathematical formulae, evolved under the guidance of a human. He used this method to create the video Primordial Dance, as well as parts of Liquid Selves. Genetic Images was an interactive installation also based on this method; it was exhibited at the Centre Georges Pompidou in Paris, 1993, as well as Ars Electronica and the Los Angeles Interactive Media Festival.

In 1998, Sims was awarded a MacArthur Fellowship. He has won two Golden Nicas at the Ars Electronica Festival, in 1991 and in 1992. He has also received honors from Imagina, the National Computer Graphics Association, the Berlin Video Festival, NICOGRAPH, Images du Futur, and other festivals.

Massively parallel

In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).

In one approach, e.g., grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. An example is BOINC, a volunteer-based, opportunistic grid system, whereby the grid provides power only on a best-effort basis.

In another approach, a large number of processors are used in close proximity to each other, e.g., in a computer cluster. In such a centralized system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.

The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
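
A toy Python sketch of the coordinated-computation idea, scaled down to a handful of workers rather than thousands of processors:

    from multiprocessing import Pool

    def work(chunk):
        # Every worker applies the same function to its own piece of the data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i, i + 1000) for i in range(0, 100000, 1000)]
        with Pool() as pool:                 # one worker per CPU core
            partials = pool.map(work, chunks)
        print(sum(partials))                 # same answer as a serial loop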

Goodyear MPP was an early implementation of a massively parallel computer architecture. MPP architectures are the second most common supercomputer implementations after clusters, as of November 2013.

Data warehouse appliances such as Teradata, Netezza, and Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.

NEARnet

NEARnet (New England Academic and Research Network) was a high-speed network of academic, industrial, government, and non-profit organizations centered in Cambridge and Boston, Massachusetts. NEARnet was the precursor to New England's regional Internet, established by Boston University, Harvard University, and MIT late in 1988, after DARPA announced plans to dismantle the ARPANET, then accounting for 71 of its 258 host connections. By June 1990, NEARnet included 40 members, including universities, high-tech companies and non-profits.

NEARnet was operated by BBN Systems and Technologies under contract to MIT. NEARnet used the TCP/IP protocol suite, with a backbone consisting of 10 Mbit/s FCC-licensed microwave links, and leased-line connections to smaller, more remote members. The microwave links were a core technology that helped fund the network by eliminating recurring transmission charges; they were designed, installed, and supported by Microwave Bypass Systems, Inc.

NEARnet had the goal of creating a regional information infrastructure in New England to support education, research, and development. Special services and facilities, such as the Connection Machine, the Massachusetts Microelectronics Center, and library catalogs, were made available over NEARnet. NEARnet was linked to the NSFNET backbone via connections to the John von Neumann Center network and NYSERNet, and also had a link to the Defense Research Internet.

Richard Feynman

Richard Phillips Feynman (May 11, 1918 – February 15, 1988) was an American theoretical physicist, known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics, for which he proposed the parton model. For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Shin'ichirō Tomonaga, received the Nobel Prize in Physics in 1965.

Feynman developed a widely used pictorial representation scheme for the mathematical expressions describing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world. In a 1999 poll of 130 leading physicists worldwide by the British journal Physics World, he was ranked as one of the ten greatest physicists of all time.

He assisted in the development of the atomic bomb during World War II and became known to a wide public in the 1980s as a member of the Rogers Commission, the panel that investigated the Space Shuttle Challenger disaster. Along with his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing and introducing the concept of nanotechnology. He held the Richard C. Tolman professorship in theoretical physics at the California Institute of Technology.

Feynman was a keen popularizer of physics through both books and lectures including a 1959 talk on top-down nanotechnology called There's Plenty of Room at the Bottom and the three-volume publication of his undergraduate lectures, The Feynman Lectures on Physics. Feynman also became known through his semi-autobiographical books Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think? and books written about him such as Tuva or Bust! by Ralph Leighton and the biography Genius: The Life and Science of Richard Feynman by James Gleick.

Serial computer

A serial computer is a computer typified by bit-serial architecture — i.e., internally operating on one bit or digit for each clock cycle. Machines with serial main storage devices such as acoustic or magnetostrictive delay lines and rotating magnetic devices were usually serial computers.

Serial computers required much less hardware than their parallel counterparts, but were much slower.
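
A minimal Python sketch of the bit-serial idea: a single full adder is reused once per clock cycle, consuming one bit of each operand (least-significant bit first) instead of adding whole words in parallel.

    def serial_add(a_bits, b_bits):
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):        # one full-adder step per cycle
            out.append(a ^ b ^ carry)
            carry = (a & b) | (carry & (a ^ b))
        out.append(carry)
        return out

    # 6 (binary 110) + 3 (binary 011), bits given LSB first:
    print(serial_add([0, 1, 1], [1, 1, 0]))     # -> [1, 0, 0, 1], i.e. 9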

Slate gray

Slate grey is a grey color with a slight azure tinge that is a representation of the average color of the material slate. As a tertiary color, slate is an equal mix of purple and green pigments. Slaty, referring to this color, is often used to describe birds.

The first recorded use of slate grey as a color name in English was in 1705.

StarLogo

StarLogo is an agent-based simulation language developed by Mitchel Resnick, Eric Klopfer, and others at MIT Media Lab and MIT Scheller Teacher Education Program in Massachusetts. It is an extension of the Logo programming language, a dialect of Lisp. Designed for education, StarLogo can be used by students to model the behavior of decentralized systems.

The first StarLogo ran on a Connection Machine 2 parallel computer. A subsequent version ran on Macintosh computers; this version became known later as MacStarLogo (and now is called MacStarLogo Classic). The current StarLogo is written in Java and works on most computers.

StarLogo is also available in a version called OpenStarLogo. The source code for OpenStarLogo is available online, although the license under which it is released is not an open source license according to the Open Source Definition, because of restrictions on the commercial use of the code.

StarLogo TNG (The Next Generation) version 1.0 was released in July 2008. It provides a 3D world using OpenGL graphics and a block-based graphical language to increase ease of use and learnability. It is written in C and Java. Programs are assembled by fitting together puzzle-like blocks; StarLogo TNG reads the blocks in the order they are fitted together and runs the resulting program in its Spaceland view.

StarLogo is a primary influence for the Kedama particle system, programmed by Yoshiki Oshima, found in the Etoys educational programming environment and language, which can be viewed as a Logo done originally in Squeak Smalltalk.

Symbolics

Symbolics refers to two companies: now-defunct computer manufacturer Symbolics, Inc., and a privately held company that acquired the assets of the former company and continues to sell and maintain the Open Genera Lisp system and the Macsyma computer algebra system.

The symbolics.com domain was originally registered on March 15, 1985, making it the first .com domain in the world. In August 2009, it was sold to napkin.com (formerly XF.com) Investments.

Tamiko Thiel

Tamiko Thiel (born June 15, 1957, daughter of Midori Kono Thiel) is an internationally active American media artist who specializes in "exploring the interplay of place, space, the body and cultural identity".

Thinking Machines Corporation

Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of its time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were eventually acquired by Sun Microsystems.
