Von Neumann architecture

The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by the mathematician and physicist John von Neumann and others in the First Draft of a Report on the EDVAC.[1] That document describes a design architecture for an electronic digital computer with these components:

  • a processing unit that contains an arithmetic logic unit and processor registers;
  • a control unit that contains an instruction register and a program counter;
  • memory that stores data and instructions;
  • external mass storage; and
  • input and output mechanisms.[2]

The term "von Neumann architecture" has evolved to mean any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the von Neumann bottleneck and often limits the performance of the system.[3]

The design of a von Neumann architecture machine is simpler than a Harvard architecture machine—which is also a stored-program system but has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions.

A stored-program digital computer keeps both program instructions and data in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC. Those were programmed by setting switches and inserting patch cables to route data and control signals between various functional units. The vast majority of modern computers use the same memory for both data and program instructions; in such machines the von Neumann versus Harvard distinction applies to the cache architecture rather than to main memory (a split-cache architecture).

Von Neumann architecture scheme

History

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot run a word processor or games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program on ENIAC.[4]

With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set, and can store in memory a set of instructions (a program) that details the computation.

A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which operators had to do manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes.
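
The following sketch (in Python, with a made-up instruction set rather than any historical machine) illustrates the idea: because the program lives in the same memory as its data, one of its instructions can increment the address field of another, stepping through an array the way early operators once did by hand.

```python
# A minimal sketch of a toy stored-program machine (hypothetical instruction
# set, not any real ISA).  Instructions and data share one memory array, so a
# running program can rewrite the address field of one of its own instructions,
# the kind of manual address modification that index registers later replaced.

# Memory layout: cells 0-4 hold instructions, cells 10-13 hold data, cell 20
# receives the result, cell 21 is a loop counter.  Each instruction is a list:
# [opcode, operand_address, ...].
memory = [None] * 24
memory[0] = ["ADD", 10]        # acc += memory[10]   (address field gets patched)
memory[1] = ["INCADDR", 0]     # memory[0][1] += 1   (self-modification)
memory[2] = ["DECJNZ", 21, 0]  # memory[21] -= 1; if nonzero, jump to 0
memory[3] = ["STORE", 20]      # memory[20] = acc
memory[4] = ["HALT"]
memory[10:14] = [3, 5, 7, 9]   # the data to be summed
memory[21] = 4                 # number of loop iterations

acc = 0   # accumulator
pc = 0    # program counter

while True:
    inst = memory[pc]
    op = inst[0]
    if op == "ADD":
        acc += memory[inst[1]]
        pc += 1
    elif op == "INCADDR":
        # Treat the instruction at inst[1] as data and bump its address field.
        memory[inst[1]][1] += 1
        pc += 1
    elif op == "DECJNZ":
        memory[inst[1]] -= 1
        pc = inst[2] if memory[inst[1]] != 0 else pc + 1
    elif op == "STORE":
        memory[inst[1]] = acc
        pc += 1
    elif op == "HALT":
        break

print(memory[20])  # 3 + 5 + 7 + 9 = 24
```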

Capabilities

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. It makes "programs that write programs" possible.[5] This has made a sophisticated self-hosting computing ecosystem flourish around von Neumann architecture machines.

Some high level languages leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime (e.g., LISP), or by using runtime information to tune just-in-time compilation (e.g. languages hosted on the Java virtual machine, or languages embedded in web browsers).

On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders can be accelerated on general purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular.
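
As an illustration of treating code as data, the following Python sketch (a toy example, not an actual JIT compiler) generates a specialised function at run time, baking a parameter into the generated code before executing it.

```python
# A sketch of run-time code generation: the program writes a new function as a
# string, compiles it, and then calls it, treating code as data.  The
# specialised constant (here, a blend factor) is baked directly into the
# generated source, roughly as a JIT compiler might do for a hot pixel loop.

def make_blend(alpha):
    # Generate source text specialised for this particular alpha.
    src = (
        "def blend(src, dst):\n"
        f"    return [int({alpha} * s + {1.0 - alpha} * d) for s, d in zip(src, dst)]\n"
    )
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["blend"]

blend_75 = make_blend(0.75)                     # specialise once...
print(blend_75([200, 100, 0], [0, 100, 200]))   # ...then reuse: [150, 100, 50]
```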

Development of the stored-program concept

The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society.[6] In it he described a hypothetical machine he called a universal computing machine, now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936 – 1937. Whether he knew of Turing's paper of 1936 at that time is not clear.

In 1936, Konrad Zuse also anticipated in two patent applications that machine instructions could be stored in the same storage used for data.[7]

Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering at the University of Pennsylvania, wrote about the stored-program concept in December 1943.[8][9] In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work.

Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory, which required huge amounts of calculation. This drew him to the ENIAC project, during the summer of 1944. There he joined the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up a description titled First Draft of a Report on the EDVAC[1] based on the work of Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it with only von Neumann's name on it, to the consternation of Eckert and Mauchly.[10] The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs.

Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'".[11] His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas:

I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936… Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing— in so far as not anticipated by Babbage… Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities.[12]

At the time that the "First Draft" report was circulated, Turing was producing a report entitled Proposed Electronic Calculator. It described, in engineering and programming detail, his idea of a machine he called the Automatic Computing Engine (ACE).[13] He presented this to the Executive Committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, which was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced.

Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B. V. Bowden), a section in the chapter on Computers in America reads as follows:[14]

The Machine of the Institute For Advanced Studies, Princeton

In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers, a report on the logical design of digital computers. The report contained a detailed proposal for the design of the machine that has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see page 130).

In 1947, Burks, Goldstine and von Neumann published another report that outlined the design of another type of machine (a parallel machine this time) that would be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such a machine was the development of suitable memory with instantaneously accessible contents. At first they suggested using a special vacuum tube—called the "Selectron"—which the Princeton Laboratories of RCA had invented. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. This machine—completed in June, 1952 in Princeton—has become popularly known as the Maniac. The design of this machine inspired at least half a dozen machines now being built in America, all known affectionately as "Johniacs."

In the same book, the first two paragraphs of a chapter on ACE read as follows:[15]

Automatic Computation at the National Physical Laboratory

One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine.

The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook.

Early von Neumann-architecture computers

The First Draft described a design that was used by many universities and corporations to construct their computers.[16] Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.

Early stored-program computers

The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation.

  • The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent.[19][20] However, it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.[21]
  • The ARC2, developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London, officially came online on May 12, 1948.[17] It featured the first rotating drum storage device.[22][23]
  • The Manchester Baby was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
  • The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
  • The BINAC ran some test programs in February, March, and April 1949, although it was not completed until September 1949.
  • The Manchester Mark 1 developed from the Baby project. An intermediate version of the Mark 1 was available to run programs in April 1949, but was not completed until October 1949.
  • The EDSAC ran its first program on May 6, 1949.
  • The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
  • The CSIR Mk I ran its first program in November 1949.
  • The SEAC was demonstrated in April 1950.
  • The Pilot ACE ran its first program on May 10, 1950 and was demonstrated in December 1950.
  • The SWAC was completed in July 1950.
  • The Whirlwind was completed in December 1950 and was in actual use in April 1951.
  • The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.

Evolution

Single system bus evolution of the architecture

Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to evolutions in their architecture. For example, memory-mapped I/O lets input and output devices be treated the same as memory.[24] A single system bus could be used to provide a modular system with lower cost. This is sometimes called a "streamlining" of the architecture.[25] In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance.

Design limitations

Von Neumann bottleneck

The shared bus between the program memory and data memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the central processing unit (CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to move to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of CPU.
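
A rough model makes the imbalance concrete; the figures below are illustrative assumptions rather than measurements of any particular machine.

```python
# Back-of-the-envelope model of the von Neumann bottleneck.  All figures are
# illustrative assumptions, not measurements of a real machine.

cpu_rate = 4e9          # operations per second the CPU could execute
bus_bandwidth = 16e9    # bytes per second the shared bus can move
bytes_per_op = 8        # each operation needs one 8-byte word from memory

# If every operation must fetch its operand over the shared bus, memory
# traffic, not the CPU, limits throughput.
memory_limited_rate = bus_bandwidth / bytes_per_op      # 2e9 ops/s
effective_rate = min(cpu_rate, memory_limited_rate)

print(f"CPU-limited rate:    {cpu_rate:.1e} ops/s")
print(f"Memory-limited rate: {memory_limited_rate:.1e} ops/s")
print(f"Effective rate:      {effective_rate:.1e} ops/s "
      f"(CPU idle {1 - effective_rate / cpu_rate:.0%} of the time)")
```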

The von Neumann bottleneck was described by John Backus in his 1977 ACM Turing Award lecture. According to Backus:

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.[26][27]

Mitigations

There are several known methods for mitigating the von Neumann performance bottleneck. For example, the following all can improve performance:

  • providing a cache between the CPU and the main memory;
  • providing separate caches or separate access paths for data and instructions (the so-called modified Harvard architecture);
  • using branch predictor algorithms and logic;
  • providing a limited CPU stack or other on-chip scratchpad memory to reduce memory accesses;
  • implementing the CPU and the memory hierarchy as a system on a chip, providing greater locality of reference.
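
The benefit of the first of these mitigations, a cache, is often summarised with the average memory access time model; the latencies and hit rates in this sketch are illustrative assumptions only.

```python
# Average memory access time (AMAT) with a cache between the CPU and memory:
#     AMAT = hit_time + miss_rate * miss_penalty
# The latencies and hit rates below are illustrative assumptions only.

hit_time = 1.0        # ns, access time of the cache
miss_penalty = 100.0  # ns, extra time to go out over the bus to main memory

for hit_rate in (0.0, 0.90, 0.99):
    amat = hit_time + (1.0 - hit_rate) * miss_penalty
    print(f"hit rate {hit_rate:.0%}: average access time {amat:.1f} ns")
```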

The problem can also be sidestepped somewhat by using parallel computing, using for example the non-uniform memory access (NUMA) architecture—this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like FORTRAN were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers.

As of 1996, a database benchmark study found that three out of four CPU cycles were spent waiting for memory. Researchers expect that increasing the number of simultaneous instruction streams with multithreading or single-chip multiprocessing will make this bottleneck even worse.[28] In the context of multi-core processors, additional overhead is required to maintain cache coherence between processors and threads.

Self-modifying code

Aside from the von Neumann bottleneck, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. Memory protection and other forms of access control can usually protect against both accidental and malicious program modification.

References

  1. ^ von Neumann, John (1945), First Draft of a Report on the EDVAC (PDF), archived from the original (PDF) on 2013-03-14, retrieved 2011-08-24
  2. ^ Ganesan 2009
  3. ^ Markgraf, Joey D. (2007), The Von Neumann Bottleneck, archived from the original on December 12, 2013
  4. ^ Copeland 2006, p. 104
  5. ^ MFTL (My Favorite Toy Language) entry Jargon File 4.4.7, retrieved 2008-07-11
  6. ^ Turing, Alan M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 (published 1937), 42, pp. 230–265, doi:10.1112/plms/s2-42.1.230 (and Turing, Alan M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem. A correction", Proceedings of the London Mathematical Society, 2 (published 1937), 43 (6), pp. 544–546, doi:10.1112/plms/s2-43.6.544)
  7. ^ "Electronic Digital Computers", Nature, 162: 487, September 25, 1948, doi:10.1038/162487a0, archived from the original on April 6, 2009, retrieved April 10, 2009
  8. ^ Lukoff, Herman (1979). From Dits to Bits: A personal history of the electronic computer. Portland, Oregon, USA: Robotics Press. ISBN 0-89661-002-0. LCCN 79-90567.
  9. ^ ENIAC project administrator Grist Brainerd's December 1943 progress report for the first period of the ENIAC's development implicitly proposed the stored-program concept (while simultaneously rejecting its implementation in the ENIAC) by stating that "in order to have the simplest project and not to complicate matters," the ENIAC would be constructed without any "automatic regulation".
  10. ^ Copeland 2006, p. 113
  11. ^ Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved 2010-01-27
  12. ^ Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved 2010-01-27 which cites Randell, Brian (1972), Meltzer, B.; Michie, D., eds., "On Alan Turing and the Origins of Digital Computers", Machine Intelligence, Edinburgh: Edinburgh University Press, 7: 10, ISBN 0-902383-26-4
  13. ^ Copeland 2006, pp. 108–111
  14. ^ Bowden 1953, pp. 176–177
  15. ^ Bowden 1953, p. 135
  16. ^ "Electronic Computer Project". Institute for Advanced Study. Retrieved 2011-05-26.
  17. ^ Campbell-Kelly, Martin (April 1982). "The Development of Computer Programming in Britain (1945 to 1955)". IEEE Annals of the History of Computing. 4 (2): 121–139. doi:10.1109/MAHC.1982.10016.
  18. ^ Robertson, James E. (1955), Illiac Design Techniques, report number UIUCDCS-R-1955-146, Digital Computer Laboratory, University of Illinois at Urbana-Champaign
  19. ^ Selective Sequence Electronic Calculator (USPTO Web site)
  20. ^ Selective Sequence Electronic Calculator (Google Patents)
  21. ^ Grosch, Herbert R. J. (1991), Computer: Bit Slices From a Life, Third Millennium Books, ISBN 0-88733-085-1
  22. ^ Lavington, Simon, ed. (2012). Alan Turing and his Contemporaries: Building the World's First Computers. London: British Computer Society. p. 61. ISBN 9781906124908.
  23. ^ Johnson, Roger (April 2008). "School of Computer Science & Information Systems: A Short History" (PDF). Birkbeck College. University of London. Retrieved 2017-07-23.
  24. ^ Bell, C. Gordon; Cady, R.; McFarland, H.; O'Laughlin, J.; Noonan, R.; Wulf, W. (1970), "A New Architecture for Mini-Computers—The DEC PDP-11" (PDF), Spring Joint Computer Conference, pp. 657–675
  25. ^ Null, Linda; Lobur, Julia (2010), The essentials of computer organization and architecture (3rd ed.), Jones & Bartlett Learning, pp. 36, 199–203, ISBN 978-1-4496-0006-8
  26. ^ Backus, John W. "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". doi:10.1145/359576.359579.
  27. ^ Dijkstra, Edsger W. "E. W. Dijkstra Archive: A review of the 1977 Turing Award Lecture". Retrieved 2008-07-11.
  28. ^ Sites, Richard L.; Patt, Yale. "Architects Look to Processors of Future". Microprocessor Report, 1996.

Further reading

  • Bowden, B. V., ed. (1953), Faster Than Thought: A Symposium on Digital Computing Machines, London: Sir Isaac Pitman and Sons Ltd.
  • Rojas, Raúl; Hashagen, Ulf, eds. (2000), The First Computers: History and Architectures, MIT Press, ISBN 0-262-18197-5
  • Davis, Martin (2000), The universal computer: the road from Leibniz to Turing, New York: W. W. Norton & Company Inc., ISBN 0-393-04785-7 republished as: Davis, Martin (2001), Engines of Logic: Mathematicians and the Origin of the Computer, New York: W. W. Norton & Company, ISBN 978-0-393-32229-3
  • Can Programming be Liberated from the von Neumann Style?. Backus, John. 1977 ACM Turing Award Lecture. Communications of the ACM, August 1978, Volume 21, Number 8. Online PDF; see details at http://www.cs.tufts.edu/~nr/backus-lecture.html
  • Bell, C. Gordon; Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. Massive (668 pages)
  • Copeland, Jack (2006), "Colossus and the Rise of the Modern Computer", in Copeland, B. Jack, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4
  • Ganesan, Deepak (2009), The von Neumann Model (PDF), retrieved 2011-10-22
  • McCartney, Scott (1999). ENIAC: The Triumphs and Tragedies of the World's First Computer. Walker & Co. ISBN 0-8027-1348-3.
  • Goldstine, Herman H. (1972). The Computer from Pascal to von Neumann. Princeton University Press. ISBN 0-691-08104-2.
  • Shurkin, Joel (1984). Engines of the Mind: A history of the Computer. New York, London: W. W. Norton & Company. ISBN 0-393-01804-0.

Cognitive computer

A cognitive computer combines artificial intelligence and machine-learning algorithms in an approach that attempts to reproduce the behaviour of the human brain. It generally adopts a neuromorphic engineering approach.

An example of a cognitive computer implemented by using neural networks and deep learning is provided by the IBM company's Watson machine. A subsequent development by IBM is the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. In 2017 Intel announced its own version of a cognitive chip in "Loihi", which will be available to university and research labs in 2018.

Computer hardware

Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, monitor, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes or modifications, whereas software is "soft" because it is easy to update or change. Intermediate between software and hardware is "firmware": software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change, but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing.

Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.

Connection Machine

A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at the Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence and symbolic processing, but later versions found greater success in the field of computational science.

Content Addressable Parallel Processor

A Content Addressable Parallel Processor (CAPP) is a type of parallel processor which uses content-addressing memory (CAM) principles. CAPPs are intended for bulk computation. The syntactic structure of their computing algorithms is simple, whereas the number of concurrent processes may be very large, limited only by the number of locations in the CAM. The best-known CAPP may be STARAN, completed in 1972; several similar systems were later built in other countries.

A CAPP is distinctly different from a Von Neumann architecture or classical computer that stores data in cells addressed individually by numeric address. The CAPP executes a stream of instructions that address memory based on the content (stored values) of the memory cells. As a parallel processor, it acts on all of the cells containing that content at once. The content of all matching cells can be changed simultaneously.

A typical CAPP might consist of an array of content-addressable memory of fixed word length, a sequential instruction store, and a general purpose computer of the Von Neumann architecture that is used to interface peripherals.
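
A small software sketch (plain Python standing in for the parallel CAM hardware) shows the addressing idea: cells are selected by their contents rather than by numeric address, and every matching cell is updated in one logical step.

```python
# A software sketch of content-addressed update.  Cells are selected by what
# they hold, not by numeric address, and all matching cells are changed in a
# single logical step (the hardware would do this in parallel).

memory = [7, 3, 7, 12, 7, 5]          # the CAM's word array

def capp_update(cells, match_value, new_value):
    """Replace the contents of every cell that currently holds match_value."""
    return [new_value if word == match_value else word for word in cells]

memory = capp_update(memory, match_value=7, new_value=9)
print(memory)   # [9, 3, 9, 12, 9, 5] -- all matching cells changed at once
```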

Control unit

The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit, and input and output devices how to respond to the instructions that have been sent to the processor. It directs the operation of the other units by providing timing and control signals.

Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.

Dataflow architecture

Dataflow architecture is a computer architecture that directly contrasts with the traditional von Neumann architecture or control flow architecture. Dataflow architectures do not have a program counter (in concept): the executability and execution of instructions are determined solely by the availability of input arguments to the instructions, so the order of instruction execution is unpredictable, i.e. behavior is nondeterministic.
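
The firing rule can be sketched in a few lines of Python (a toy scheduler, not a model of any real dataflow machine): an instruction becomes ready as soon as all of its input operands are available, so execution order follows data availability rather than a program counter.

```python
# A toy dataflow scheduler: each instruction fires as soon as all of its input
# operands are available; there is no program counter.  This is a conceptual
# sketch, not a model of any real dataflow machine.

import operator

# Compute r = (a + b) * (c - d) as a dataflow graph.
instructions = {
    "t1": (operator.add, ["a", "b"]),
    "t2": (operator.sub, ["c", "d"]),
    "r":  (operator.mul, ["t1", "t2"]),
}
values = {"a": 2, "b": 3, "c": 10, "d": 4}   # initially available operands

pending = dict(instructions)
while pending:
    # Fire every instruction whose inputs are all available.
    ready = [name for name, (_, deps) in pending.items()
             if all(d in values for d in deps)]
    for name in ready:
        fn, deps = pending.pop(name)
        values[name] = fn(*(values[d] for d in deps))

print(values["r"])   # (2 + 3) * (10 - 4) = 30
```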

Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as digital signal processing, network routing, graphics processing, telemetry, and more recently in data warehousing. It is also very relevant in many software architectures today, including database engine designs and parallel computing frameworks. Synchronous dataflow architectures are tuned to match the workload presented by real-time data path applications such as wire-speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor load balancing, synchronization, and access to common resources. Meanwhile, there is a clash of terminology, since the term dataflow is also used for a subarea of parallel programming: dataflow programming.

First Draft of a Report on the EDVAC

The First Draft of a Report on the EDVAC (commonly shortened to First Draft) is an incomplete 101-page document written by John von Neumann and distributed on June 30, 1945 by Herman Goldstine, security officer on the classified ENIAC project. It contains the first published description of the logical design of a computer using the stored-program concept, which has controversially come to be known as the von Neumann architecture.

Freescale 68HC08

The 68HC08 (HC08 in short) is a broad family of 8-bit microcontrollers from Freescale Semiconductor (formerly Motorola Semiconductor).

HC08s are fully code-compatible with their predecessors, the Motorola 68HC05. Like all Motorola processors that share lineage from the 6800, they use the von Neumann architecture as well as memory-mapped I/O. This family has five CPU registers that are not part of the memory: an 8-bit accumulator A, a 16-bit index register H:X, a 16-bit stack pointer SP, a 16-bit program counter PC, and an 8-bit condition code register CCR. Some instructions refer to the different bytes in the H:X index register independently.

Among the HC08s there are dozens of processor families, each targeted to different embedded applications. Features and capabilities vary widely, from 8-pin to 64-pin packages and from LIN connectivity to USB 1.1. A typical, general-purpose device from the HC08 family is the microcontroller M68HC908GP32.

The Freescale RS08 core is a simplified, "reduced-resource" version of the HC08.

The Freescale HCS08 core is the next generation of the same processors.

Freescale RS08

The RS08 [1] core is a reduced-resource version of the Freescale MC68HCS08 central processing unit (CPU), a member of the 6800 microprocessor family. It has been implemented in several microcontroller devices for embedded systems.

Compared to its sibling HC08 and Freescale S08 parts, it has a much-simplified design. The 'R' in its part numbers suggests "Reduced"; Freescale itself describes the core as "ultra-low-end". Typical implementations include fewer on-board peripherals and memory resources, have smaller packages (the smallest is the QFN6 package, at 3 mm × 3 mm × 1 mm), and are priced under US $1. Aims of the simplified design include greater efficiency, greater cost-effectiveness for small-memory-size parts, and smaller die size.

The RS08 employs a von Neumann architecture with shared program and data bus; executing instructions from within data memory is possible. The device is not binary compatible with the S08 core, though the instruction opcodes and addressing modes are a subset of the S08. This allows an easy transition from the S08 core to the RS08 core for designers and engineers.

Short and Tiny addressing modes allow for more efficient access and manipulation of the most-commonly-used variables and registers. These instructions have single-byte instruction opcodes, reducing the amount of program memory required by their frequent use.

Die size is 30% smaller than the S08 core. The RS08 core uses the same bus structure as S08, making memory and peripheral module reuse possible. It offers a Background Debug Mode interface, a single-wire debugging interface that allows interactive control over the processor when installed in a target system.

Harvard architecture

The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself.

Today, most processors implement such separate signal pathways for performance reasons, but actually implement a modified Harvard architecture, so they can support tasks like loading a program from disk storage as data and then executing it.

History of computer science

The history of computer science began long before the modern discipline of computer science existed, usually appearing in forms like mathematics or physics. Developments in previous centuries alluded to the discipline that we now know as computer science. This progression, from mechanical inventions and mathematical theories towards modern computer concepts and machines, led to the development of a major academic field, massive technological advancement across the Western world, and the basis of a massive worldwide trade and culture.

IAS machine

The IAS machine was the first electronic computer to be built at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It is sometimes called the von Neumann machine, since the paper describing its design was edited by John von Neumann, a mathematics professor at both Princeton University and IAS. The computer was built from late 1945 until 1951 under his direction.

The general organization is called Von Neumann architecture, even though it was both conceived and implemented by others. The computer is in the collection of the Smithsonian National Museum of American History but is not currently on display.

ILLIAC I

The ILLIAC I (Illinois Automatic Computer), a pioneering computer built in 1952 by the University of Illinois, was the first computer built and owned entirely by a US educational institution.

The project was the brainchild of Ralph Meagher and Abraham H. Taub, who both were associated with Princeton's Institute for Advanced Study before coming to the University of Illinois. The ILLIAC I became operational on September 1, 1952. It was the second of two identical computers, the first of which was ORDVAC, also built at the University of Illinois. These two machines were the first pair of machines to run the same instruction set.

ILLIAC I was based on the IAS Von Neumann architecture as described by mathematician John von Neumann in his influential First Draft of a Report on the EDVAC. Unlike most computers of its era, the ILLIAC I and ORDVAC computers were twin copies of the same design, with software compatibility. The computer had 2,800 vacuum tubes, measured 10 ft (3 m) by 2 ft (0.6 m) by 8½ ft (2.6 m) (L×B×H), and weighed 4,000 pounds (2.0 short tons; 1.8 t). ILLIAC I was very powerful for its time; in 1956 it had more computing power than all of Bell Labs.

Because the lifetime of the tubes within ILLIAC was about a year, the machine was shut down every day for "preventive maintenance" when older vacuum tubes would be replaced in order to increase reliability. Visiting scholars from Japan assisted in the design of the ILLIAC series of computers, and later developed the MUSASINO-1 computer in Japan. ILLIAC I was retired in 1962, when the ILLIAC II became operational.

MANIAC I

The MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer or Mathematical Analyzer, Numerator, Integrator, and Computer) was an early computer built under the direction of Nicholas Metropolis at the Los Alamos Scientific Laboratory. It was based on the von Neumann architecture of the IAS, developed by John von Neumann. As with all computers of its era, it was a one-of-a-kind machine that could not exchange programs with other computers (even other IAS machines). Metropolis chose the name MANIAC in the hope of stopping the rash of silly acronyms for machine names, although von Neumann may have suggested the name to him.

The MANIAC weighed about 1,000 pounds (0.50 short tons; 0.45 t). The first task assigned to the Los Alamos MANIAC was to perform more exact and extensive calculations of the thermonuclear process. The MANIAC ran successfully in March 1952 and was shut down on July 15, 1958. However, it was apparently transferred to the University of New Mexico until it was retired in 1965. It was succeeded by MANIAC II in 1957.

A third version, MANIAC III, was built at the Institute for Computer Research at the University of Chicago in 1964.

A computer named MANIAC I was featured in the science fiction film The Magnetic Monster, although this was not the actual chassis of MANIAC I.

Modified Harvard architecture

The modified Harvard architecture is a variation of the Harvard computer architecture that allows the contents of the instruction memory to be accessed as if it were data. Most modern computers that are documented as Harvard architecture are, in fact, modified Harvard architecture.

Motorola 68HC05

The 68HC05 (HC05 in short) is a broad family of 8-bit microcontrollers from Freescale Semiconductor (formerly Motorola Semiconductor).

Like all Motorola processors that share lineage from the 6800, they use the von Neumann architecture as well as memory-mapped I/O. This family has five CPU registers that are not part of the memory: an 8-bit accumulator A, an 8-bit index register X, an 8-bit stack pointer SP with two most significant bits hardwired to 1, a 13-bit program counter PC, and an 8-bit condition code register CCR.

Among the HC05's there are several processor families, each targeted to different embedded applications.

The 68HC05 family broke ground with the introduction of the EEPROM-based MC68HC805C4 and MC68HC805B6 variants in the late 1980s. Using a serial bootloader, they could be programmed in-circuit with simple software running on a PC and a low current 19V supply (no programmer required).

The HC05 series is now considered legacy and is replaced by the HC(S)08 MCU series.

ORDVAC

The ORDVAC or Ordnance Discrete Variable Automatic Computer, an early computer built by the University of Illinois for the Ballistic Research Laboratory at Aberdeen Proving Ground, was based on the IAS architecture developed by John von Neumann, which came to be known as the von Neumann architecture. The ORDVAC was the first computer to have a compiler. ORDVAC passed its acceptance tests on March 6, 1952 at Aberdeen Proving Ground in Maryland. Its purpose was to perform ballistic trajectory calculations for the US Military. In 1992, the Ballistic Research Laboratory became a part of the US Army Research Laboratory.

Unlike the other computers of its era, the ORDVAC and ILLIAC I were twins and could exchange programs with each other. The later SILLIAC computer was a copy of the ORDVAC/ILLIAC series. J. P. Nash of the University of Illinois was a developer of both the ORDVAC and of the university's own identical copy, the ILLIAC, which was later renamed the ILLIAC I. Abe Taub, Sylvian Ray, and Donald B. Gillies assisted in the checkout of ORDVAC at Aberdeen Proving Ground. After ORDVAC was moved to Aberdeen, it was used remotely by telephone by the University of Illinois for up to eight hours per night. It was one of the first computers to be used remotely and probably the first to routinely be used remotely.

The ORDVAC used 2178 vacuum tubes. Its addition time was 72 microseconds and the multiplication time was 732 microseconds. Its main memory consisted of 1024 words of 40 bits each, stored using Williams tubes. It was a rare asynchronous machine, meaning that there was no central clock regulating the timing of the instructions. One instruction started executing when the previous one finished.

One of the ORDVAC programmers was Elsie Shutt.

ORDVAC and its successor at Aberdeen Proving Ground, BRLESC, used their own unique notation for hexadecimal numbers. Instead of the sequence A B C D E F universally used today, the digits ten to fifteen were represented by the letters K S N J F L (King Sized Numbers Just for Laughs), corresponding to the teleprinter characters on five-track paper tape.
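
A short Python snippet (provided here only to make the notation concrete) converts an ordinary integer into this notation.

```python
# ORDVAC/BRLESC hexadecimal notation: the digits ten to fifteen are written
# K S N J F L instead of the now-universal A B C D E F.

ORDVAC_DIGITS = "0123456789KSNJFL"

def to_ordvac_hex(n):
    """Render a non-negative integer in ORDVAC-style hexadecimal."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(ORDVAC_DIGITS[n % 16])
        n //= 16
    return "".join(reversed(digits))

print(to_ordvac_hex(0xCAFE))   # 'NKLF' -- C->N, A->K, F->L, E->F
```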

SISD

In computing, SISD (single instruction stream, single data stream) is a computer architecture in which a single uni-core processor executes a single instruction stream to operate on data stored in a single memory. This corresponds to the von Neumann architecture.

SISD is one of the four main classifications as defined in Flynn's taxonomy. In this system, classifications are based upon the number of concurrent instruction and data streams present in the computer architecture. According to Michael J. Flynn, SISD can have concurrent processing characteristics. Pipelined processors and superscalar processors are common examples found in most modern SISD computers. Instructions are sent from the memory module to the control unit, where they are decoded and sent to the processing unit, which operates on the data retrieved from the memory module and writes the results back to it.

TrueNorth

TrueNorth is a neuromorphic CMOS integrated circuit produced by IBM in 2014. It is a manycore processor network on a chip design, with 4096 cores, each one having 256 programmable simulated neurons for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion. Since memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von-Neumann-architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it only draws power necessary for computation.
