AI accelerator

An AI accelerator is a class of microprocessor[1] or computer system[2] designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning. Typical applications include algorithms for robotics, internet of things and other data-intensive or sensor-driven tasks.[3] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability.[4] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design. AI accelerators are found in many consumer devices, including smartphones, tablets and personal computers; specific products are listed under the heading "Examples" below.

History of AI acceleration

Computer systems have frequently complemented the CPU with special purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

As early as 1993, digital signal processors were used as neural network accelerators e.g. to accelerate optical character recognition software.[5] In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[6][7][8] FPGA-based accelerators were also first explored in the 1990s for both inference[9] and training.[10] ANNA was a neural net CMOS accelerator developed by Yann LeCun.[11]

Heterogeneous computing

Heterogeneous computing refers to incorporating a number of specialized processors in a single system, or even a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[12] have features significantly overlapping with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing 'throughput' over latency. The Cell microprocessor was subsequently applied to a number of tasks[13][14][15] including AI.[16][17][18]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[19]

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical bases of neural networks and image manipulation are similar: both are embarrassingly parallel tasks involving matrices, which has led GPUs to become increasingly used for machine learning tasks.[20][21][22] As of 2016, GPUs are popular for AI work, and they continue to evolve in directions that facilitate deep learning, both for training[23] and for inference in devices such as self-driving cars.[24] GPU developers such as Nvidia are adding connective capability, for example NVLink, for the kind of dataflow workloads from which AI benefits.[25] As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[26][27] Tensor cores are intended to speed up the training of neural networks.[27]
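
The connection between image manipulation and neural networks can be made concrete with a short sketch: the forward pass of a fully connected layer is essentially one large matrix multiplication, exactly the kind of operation GPUs are built to parallelize. The NumPy code below is illustrative only; the layer sizes and names are arbitrary.

```python
# Illustrative sketch (arbitrary sizes, not any vendor's API): a dense
# neural-network layer reduces to a single large matrix multiplication,
# the same embarrassingly parallel pattern that graphics workloads use.
import numpy as np

rng = np.random.default_rng(0)
batch, in_features, out_features = 64, 512, 256

x = rng.standard_normal((batch, in_features), dtype=np.float32)        # activations
w = rng.standard_normal((in_features, out_features), dtype=np.float32) # weights
b = np.zeros(out_features, dtype=np.float32)                           # bias

# Forward pass: one matrix multiply plus an element-wise non-linearity (ReLU).
y = np.maximum(x @ w + b, 0.0)
print(y.shape)  # (64, 256)
```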

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks and software alongside each other.[9][10][28]

Microsoft has used FPGA chips to accelerate inference.[29][30] The application of FPGAs to AI acceleration motivated Intel to acquire Altera with the aim of integrating FPGAs in server CPUs, which would be capable of accelerating AI as well as general purpose tasks.[31]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[32][33] may be gained with a more specific design, via an application-specific integrated circuit (ASIC). These accelerators employ strategies such as optimized memory use and the use of lower-precision arithmetic to accelerate calculation and increase throughput of computation.[34][35] Some low-precision floating-point formats adopted for AI acceleration are half-precision and the bfloat16 floating-point format.[36][37][38][39][40][41][42]
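
To illustrate why bfloat16 is convenient in hardware: it keeps the sign bit and the full 8-bit exponent of an IEEE float32 but only 7 mantissa bits, so a float32 can be converted by dropping its low 16 bits. The sketch below is not any library's API and uses simple round-to-zero truncation (real hardware typically rounds to nearest).

```python
# Illustrative sketch of bfloat16 conversion by bit truncation (round-to-zero).
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Keep only the 16 most significant bits of each float32 value."""
    as_u32 = x.astype(np.float32).view(np.uint32)
    return (as_u32 >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(bits: np.ndarray) -> np.ndarray:
    """Re-expand the 16-bit pattern into a float32 by zero-filling the low bits."""
    return (bits.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, 1e-3, 65504.0], dtype=np.float32)
roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(roundtrip)  # close to x, but with only ~2-3 significant decimal digits
```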

In-memory computing architectures

In June 2017, IBM researchers announced an architecture in contrast to the Von Neumann architecture based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, intending to generalize the approach to heterogeneous computing and massively parallel systems.[43] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[44] The system is based on phase-change memory arrays.[45]
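
The principle behind such analog in-memory designs can be sketched numerically: weights are stored as device conductances, input voltages are applied to the rows of a crossbar, and the column currents that accumulate (by Ohm's and Kirchhoff's laws) form a matrix-vector product. The toy simulation below is illustrative only and does not describe IBM's design; the signed-weight mapping to a pair of conductances and the noise level are assumptions.

```python
# Toy numerical sketch of an analog crossbar matrix-vector multiply.
import numpy as np

rng = np.random.default_rng(1)

weights = rng.uniform(-1.0, 1.0, size=(4, 3))   # logical weight matrix
v_in = rng.uniform(0.0, 0.5, size=4)            # row voltages (inputs)

# Signed weights mapped to a pair of non-negative conductances (G+ and G-),
# with small Gaussian error standing in for device programming noise.
g_plus = np.clip(weights, 0.0, None) + rng.normal(0.0, 0.01, weights.shape)
g_minus = np.clip(-weights, 0.0, None) + rng.normal(0.0, 0.01, weights.shape)

i_out = v_in @ g_plus - v_in @ g_minus           # column currents = analog MVM
print("analog result:", i_out)
print("ideal result: ", v_in @ weights)
```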

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing term for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor the exact form they will take; however several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[46] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

Examples

Stand alone products

  • Google's Tensor processing unit (TPU) is an accelerator specifically designed by Google for its TensorFlow framework, which is extensively used for convolutional neural networks. It focuses on a high volume of 8-bit precision arithmetic (a sketch of this idea appears after this list). The first generation, from 2015, focused on inference, while the second generation, announced in May 2017, added capability for neural network training as well. The third-generation TPU was announced on 8 May 2018. In July 2018 the Edge TPU was announced, Google's purpose-built ASIC designed to run its TensorFlow Lite machine learning (ML) models at the edge.[47]
  • Adapteva Epiphany is a many-core coprocessor featuring a network-on-chip scratchpad memory model, suitable for a dataflow programming model, which should suit many machine learning tasks.
  • Intel Nervana NNP (Neural Network Processor), code-named "Lake Crest", which Intel claims is the first commercially available chip with a purpose-built architecture for deep learning. Facebook was a partner in the design process.[48][49]
  • Movidius Myriad 2 is a many-core VLIW AI accelerator complemented with video fixed function units.
  • Mobileye's EyeQ is a processor specialized for vision processing in self-driving cars.[50]
  • NM500 is, as of 2016, the latest in a series of accelerator chips for radial basis function neural nets from General Vision.[51]
  • Kendryte K210 contains a 64-bit RISC-V CPU and the KPU, a general-purpose neural network processor with built-in convolution, batch normalization, activation, and pooling operations.
  • Qualcomm announced the Cloud AI 100, an accelerator for inference.[1]
  • Habana Labs' Habana Goya (HL-1000) is for inference and currently in production; Habana Gaudi (HL-2000) is for training and will be sampling in Q2 2019.
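
As noted for the first-generation TPU above, many inference accelerators lean on 8-bit integer arithmetic. The sketch below shows the general quantize-multiply-dequantize pattern with a wide (int32) accumulator; it is not Google's implementation, and the symmetric per-tensor scaling is an assumption chosen for brevity.

```python
# Illustrative 8-bit quantized matrix multiply (not any specific accelerator).
import numpy as np

def quantize_symmetric(x):
    """Map a float tensor onto int8 with a single per-tensor scale factor."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 32)).astype(np.float32)  # weights

qx, sx = quantize_symmetric(x)
qw, sw = quantize_symmetric(w)

# Integer matmul with int32 accumulation, then dequantize with the combined scale.
acc = qx.astype(np.int32) @ qw.astype(np.int32)
y_quantized = acc.astype(np.float32) * (sx * sw)

y_reference = x @ w
print("max abs error:", float(np.max(np.abs(y_quantized - y_reference))))
```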

GPU based products

AI accelerating co-processors

  • Qualcomm's Hexagon DSPs have offered AI acceleration since the Snapdragon 820, released in March 2015, using the Qualcomm Snapdragon Neural Processing Engine SDK.[57]
    • Qualcomm's Snapdragon 855 contains their 4th generation on-device AI engine, including a dedicated Tensor Accelerator.
  • Cadence's Tensilica IP is a family of neural network processor and neural-network-optimized digital signal processor IP cores, such as the Tensilica Vision C5 DSP released in May 2017 and the Tensilica Vision Q6 DSP released in April 2018.[60][61] The Tensilica DNA 100 Processor was announced in September 2018.[62]
  • Imagination Technologies' PowerVR 2NX NNA (Neural Net Accelerator) is an IP core licensed for integration into chips, first announced in September 2017.[63] In December 2018 the PowerVR 3NX NNA was announced.[64]
  • Apple's Neural Engine is an AI accelerator core within Apple-designed processors. The Apple A11 Bionic SoC,[65] released in September 2017, featured a dual-core Neural Engine. The Apple A12 Bionic SoC, released in September 2018, features an octa-core Neural Engine.
  • Samsung's Exynos 9820 has an integrated Neural Processing Unit (NPU) that allows the processor to perform AI-related functions seven times faster than its predecessor, expanding the on-device AI capabilities of mobile devices from photo enhancement to advanced AR features.[66]
  • Cambricon Technologies' Machine Learning Unit (MLU) family of neural processors, such as the MLU-100 and MLU-200.[67]
  • HiSilicon's Neural Processing Unit is a neural network accelerator within HiSilicon's Kirin SoCs. The Kirin 970,[68] with an NPU from Cambricon Technologies, was released in October 2017. The Kirin 980, with a dual-core NPU from Cambricon Technologies, was released in October 2018.
  • Google's Pixel Visual Core (PVC) is a fully programmable image, vision and AI processor for mobile devices, first featured in the Google Pixel 2 released in October 2017.
  • Arm's ML Processor is dedicated IP for neural network model inferencing acceleration. First announced as Project Trillium in January 2018.[69]
  • CEVA's NeuPro family of AI processors. The NP500, NP1000, NP2000 and NP4000 were first announced in January 2018, each containing one programmable vector DSP and one hardwired implementation of 8-bit or 16-bit neural network layers, with performance ranging from 2 TOPS to 12.5 TOPS.[70]
  • Universal Multifunction Accelerator (UMA) by Manjeera Digital Systems in Hyderabad is an accelerator in a proprietary architecture based on Middle Stratum Operations.[71][72][73]

Research and unreleased products

  • In December 2017 Tesla Motors confirmed a rumour that it was developing an AI chip for autonomous driving. Jim Keller worked on this project between at least early 2016 and early 2018.[74]
  • MIT Eyeriss is an accelerator design aimed explicitly at convolutional neural networks, using a scratchpad memory and network-on-chip architecture.[75]
  • Georgia Tech has designed a neuro-inspired processor for performing online reinforcement learning for ultra-low power robotics. It employs mixed-signal design techniques to reduce the operating power.[76]
  • Nullhop is an accelerator designed at the Institute of Neuroinformatics of ETH Zürich and University of Zürich based on sparse representation of feature maps. The second generation of the architecture is commercialized by the university spin-off Synthara Technologies.[77][78]
  • Kalray is an accelerator for convolutional neural nets.[79]
  • SpiNNaker is a many-core design specialized for simulating a large neural network.
  • Graphcore IPU is a graph-based AI accelerator.[80]
  • DPU, by Wave Computing, a dataflow architecture[81]
  • STMicroelectronics at the start of 2017 presented a demonstrator SoC manufactured in a 28 nm process containing a deep CNN accelerator.[82]
  • TrueNorth is a manycore design based on spiking neurons rather than traditional arithmetic.[83][84]
  • Intel Loihi is an experimental neuromorphic chip.[85]
  • BrainChip in September 2017 introduced a commercial PCI Express card with a Xilinx Kintex Ultrascale FPGA running neuromorphic neural cores applying pattern recognition on 600 video images per second using 16 watts of power.[86]
  • IIT Madras is designing a spiking neuron accelerator for big-data analytics.[87]
  • Several memristor-based AI accelerators have been proposed which leverage the in-memory computing capability of memristors.[4]
  • AlphaICs is designing an agent-based coprocessor called Real AI Processor (RAP) to enable perception and decision making in a chip.[88]

Potential applications

See also

References

  1. ^ "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017.
  2. ^ "Inspurs unveils GX4 AI Accelerator". June 21, 2017.
  3. ^ "Google Developing AI Processors".Google using its own AI accelerators.
  4. ^ a b "A Survey of ReRAM-based Architectures for Processing-in-memory and Neural Networks", S. Mittal, Machine Learning and Knowledge Extraction, 2018
  5. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator".
  6. ^ "design of a connectionist network supercomputer".
  7. ^ "The end of general purpose computers (not)".This presentation covers a past attempt at neural net accelerators, notes the similarity to the modern SLI GPGPU processor setup, and argues that general purpose vector accelerators are the way forward (in relation to RISC-V hwacha project. Argues that NN's are just dense and sparse matrices, one of several recurring algorithms)
  8. ^ Ramacher, U.; Raab, W.; Hachmann, J.A.U.; Beichter, J.; Bruls, N.; Wesseling, M.; Sicheneder, E.; Glass, J.; Wurz, A.; Manner, R. (1995). Proceedings of 9th International Parallel Processing Symposium. pp. 774–781. CiteSeerX 10.1.1.27.6410. doi:10.1109/IPPS.1995.395862. ISBN 978-0-8186-7074-9.
  9. ^ a b "Space Efficient Neural Net Implementation".
  10. ^ a b "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning" (PDF). 1996.
  11. ^ Application of the ANNA Neural Network Chip to High-Speed Character Recognition
  12. ^ "Synergistic Processing in Cell's Multicore Architecture". 2006.
  13. ^ De Fabritiis, G. (2007). "Performance of Cell processor for biomolecular simulations". Computer Physics Communications. 176 (11–12): 660–664. arXiv:physics/0611201. doi:10.1016/j.cpc.2007.02.107.
  14. ^ "Video Processing and Retrieval on Cell architecture". CiteSeerX 10.1.1.138.5133.
  15. ^ Benthin, Carsten; Wald, Ingo; Scherbaum, Michael; Friedrich, Heiko (2006). 2006 IEEE Symposium on Interactive Ray Tracing. pp. 15–23. CiteSeerX 10.1.1.67.8982. doi:10.1109/RT.2006.280210. ISBN 978-1-4244-0693-7.
  16. ^ "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF).
  17. ^ Kwon, Bomjun; Choi, Taiho; Chung, Heejin; Kim, Geonho (2008). 2008 5th IEEE Consumer Communications and Networking Conference. pp. 1030–1034. doi:10.1109/ccnc08.2007.235. ISBN 978-1-4244-1457-4.
  18. ^ Duan, Rubing; Strey, Alfred (2008). Euro-Par 2008 – Parallel Processing. Lecture Notes in Computer Science. 5168. pp. 665–675. doi:10.1007/978-3-540-85451-7_71. ISBN 978-3-540-85450-0.
  19. ^ "Improving the performance of video with AVX". February 8, 2012.
  20. ^ "microsoft research/pixel shaders/MNIST".
  21. ^ "how the gpu came to be used for general computation".
  22. ^ "imagenet classification with deep convolutional neural networks" (PDF).
  23. ^ "nvidia driving the development of deep learning". May 17, 2016.
  24. ^ "nvidia introduces supercomputer for self driving cars". January 6, 2016.
  25. ^ "how nvlink will enable faster easier multi GPU computing". November 14, 2014.
  26. ^ "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  27. ^ a b Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  28. ^ "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016.
  29. ^ "microsoft extends fpga reach from bing to deep learning". August 27, 2015.
  30. ^ Chung, Eric; Strauss, Karin; Fowers, Jeremy; Kim, Joo-Young; Ruwase, Olatunji; Ovtcharov, Kalin (February 23, 2015). "Accelerating Deep Convolutional Neural Networks Using Specialized Hardware" (PDF). Microsoft Research.
  31. ^ "A Survey of FPGA-based Accelerators for Convolutional Neural Networks", Mittal et al., NCAA, 2018
  32. ^ "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016.
  33. ^ "Chip could bring deep learning to mobile devices". www.sciencedaily.com. February 3, 2016. Retrieved September 13, 2016.
  34. ^ "Deep Learning with Limited Numerical Precision" (PDF).
  35. ^ Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].
  36. ^ Khari Johnson (May 23, 2018). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved May 23, 2018. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  37. ^ Michael Feldman (May 23, 2018). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved May 23, 2018. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  38. ^ Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that’s being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  39. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved May 23, 2018. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  40. ^ Elmar Haußmann (April 26, 2018). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Retrieved May 23, 2018. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  41. ^ Tensorflow Authors (February 28, 2018). "ResNet-50 using BFloat16 on TPU". Google. Retrieved May 23, 2018.
  42. ^ Joshua V. Dillon; Ian Langmore; Dustin Tran; Eugene Brevdo; Srinivas Vasudevan; Dave Moore; Brian Patton; Alex Alemi; Matt Hoffman; Rif A. Saurous (November 28, 2017). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Accessed 2018-05-23. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts
  43. ^ Abu Sebastian; Tomas Tuma; Nikolaos Papandreou; Manuel Le Gallo; Lukas Kull; Thomas Parnell; Evangelos Eleftheriou (2017). "Temporal correlation detection using computational phase-change memory". Nature Communications. 8. arXiv:1706.00511. doi:10.1038/s41467-017-01481-9.
  44. ^ "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018.
  45. ^ Carlos Ríos; Nathan Youngblood; Zengguang Cheng; Manuel Le Gallo; Wolfram H.P. Pernice; C David Wright; Abu Sebastian; Harish Bhaskaran (2018). "In-memory computing on a photonic platform". arXiv:1801.06228 [cs.ET].
  46. ^ "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256".
  47. ^ Kundu, Kishalaya (July 26, 2018). "Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018". Beebom. Retrieved February 2, 2019.
  48. ^ Kampman, Jeff (October 17, 2017). "Intel unveils purpose-built Neural Network Processor for deep learning". Tech Report. Retrieved October 18, 2017.
  49. ^ "Intel Nervana Neural Network Processors (NNP) Redefine AI Silicon". Retrieved October 20, 2017.
  50. ^ "The Evolution of EyeQ".
  51. ^ "NM500, Neuromorphic chip with 576 neurons". Archived from the original on October 3, 2017. Retrieved October 3, 2017.
  52. ^ "Nvidia goes beyond the GPU for AI with Volta".
  53. ^ "The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX". AnandTech.
  54. ^ "nvidia dgx-1" (PDF).
  55. ^ Frumusanu, Andrei. "Investigating NVIDIA's Jetson AGX: A Look at Xavier and Its Carmel Cores". www.anandtech.com. Retrieved February 2, 2019.
  56. ^ Smith, Ryan (December 12, 2016). "AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming in 2017". Anandtech. Retrieved December 12, 2016.
  57. ^ a b "On-Device AI with Qualcomm Snapdragon Neural Processing Engine SDK". Qualcomm Developer Network. Retrieved February 2, 2019.
  58. ^ "NEC SX-Aurora TSUBASA".
  59. ^ "AI Acceleration-with-NEC's New Vector Computer".
  60. ^ "Cadence Unveils Industry's First Neural Network DSP IP for Automotive, Surveillance, Drone and Mobile Markets".
  61. ^ Frumusanu, Andrei. "Cadence Announces Tensilica Vision Q6 DSP". www.anandtech.com. Retrieved February 2, 2019.
  62. ^ Frumusanu, Andrei. "Cadence Announces The Tensilica DNA 100 IP: Bigger Artificial Intelligence". www.anandtech.com. Retrieved February 2, 2019.
  63. ^ "The highest performance neural network inference accelerator".
  64. ^ Oh, Nate. "Imagination Announces PowerVR Series9XTP, Series9XMP, and Series9XEP GPU Cores". www.anandtech.com. Retrieved February 2, 2019.
  65. ^ "The iPhone X's new neural engine exemplifies Apple's approach to AI". The Verge. Retrieved September 23, 2017.
  66. ^ "Exynos 9 Series (9820) - The Next-level Processor for the Mobile Future". Retrieved March 31, 2019.
  67. ^ Cutress, Ian. "Cambricon, Makers of Huawei's Kirin NPU IP, Build A Big AI Chip and PCIe Card". www.anandtech.com. Retrieved February 2, 2019.
  68. ^ "HUAWEI Reveals the Future of Mobile AI at IFA 2017".
  69. ^ Cutress, Ian. "Hot Chips 2018: Arm's Machine Learning Core Live Blog". www.anandtech.com. Retrieved February 2, 2019.
  70. ^ "A Family of AI Processors for Deep Learning at the Edge".
  71. ^ Manjeera Digital System, UMA. "Universal Multifunction Accelerator". Manjeera Digital Systems. Retrieved June 28, 2018.
  72. ^ Manjeera Digital Systems, Universal Multifunction Accelerator. "Revolutionise Processing". Indian Express. Retrieved June 28, 2018.
  73. ^ AI Chip, UMA (May 10, 2018). "AI Chip from Hyderabad" (News Paper). Telangana Today. Retrieved June 28, 2018.
  74. ^ Lambert, Fred (December 8, 2017). "Elon Musk confirms that Tesla is working on its own new AI chip led by Jim Keller".
  75. ^ Chen, Yu-Hsin; Krishna, Tushar; Emer, Joel; Sze, Vivienne (2016). "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks". IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers. pp. 262–263.
  76. ^ "Mixed-signal Processing Powers Bio-mimetic CMOS Chip to Enable Neural Learning in Autonomous Micro-Robots | IEN".
  77. ^ Aimar, Alessandro; et al. (2017). "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps". arXiv:1706.01406 [cs.CV].
  78. ^ "Synthara Technologies".
  79. ^ "kalray MPPA" (PDF).
  80. ^ "Graphcore Technology".
  81. ^ "Wave Computing's DPU architecture". August 23, 2017.
  82. ^ "A 2.9 TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems" (PDF).
  83. ^ "yann lecun on IBM truenorth".argues that spiking neurons have never produced leading quality results, and that 8-16 bit precision is optimal, pushes the competing 'neuflow' design
  84. ^ "IBM cracks open new era of neuromorphic computing". TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches
  85. ^ "Intel's New Self-Learning Chip Promises to Accelerate Artificial Intelligence".
  86. ^ "BrainChip Accelerator". Archived from the original on October 3, 2017. Retrieved October 3, 2017.
  87. ^ "India preps RISC-V Processors - Shakti targets servers, IoT, analytics". The Shakti project now includes plans for at least six microprocessor designs as well as associated fabrics and an accelerator chip
  88. ^ "AlphaICs".
  89. ^ "drive px".
  90. ^ "design of a machine vision system for weed control" (PDF). Archived from the original (PDF) on June 23, 2010. Retrieved June 17, 2016.
  91. ^ "qualcomm research brings server class machine learning to every data devices". October 2015.
  92. ^ "movidius powers worlds most intelligent drone". March 16, 2016.

External links


Arithmetic logic unit

An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU or GPU may contain multiple ALUs.

The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers.
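
A minimal behavioural model can make these inputs and outputs concrete. The Python sketch below is an illustration rather than a hardware description; the 8-bit width, the opcode set and the two status flags are assumptions.

```python
# Toy behavioural model of an ALU: operands and an opcode in,
# result and status flags (zero, carry) out.  8-bit width assumed.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu(a, b, opcode):
    ops = {
        "ADD": a + b,
        "SUB": a - b,
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
    }
    raw = ops[opcode]
    result = raw & MASK                      # keep only WIDTH bits
    flags = {
        "zero": result == 0,                 # result is all zeros
        "carry": raw > MASK or raw < 0,      # carry/borrow out of WIDTH bits
    }
    return result, flags

print(alu(0xF0, 0x20, "ADD"))  # (16, {'zero': False, 'carry': True})
```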

Cognitive computer

A cognitive computer combines artificial intelligence and machine-learning algorithms, in an approach which attempts to reproduce the behaviour of the human brain. It generally adopts a Neuromorphic engineering approach.

An example of a cognitive computer implemented by using neural networks and deep learning is provided by the IBM company's Watson machine. A subsequent development by IBM is the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. In 2017 Intel announced its own version of a cognitive chip in "Loihi", which will be available to university and research labs in 2018.

Cognitive computing

Cognitive computing (CC) describes technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.

Compute kernel

In computing, a compute kernel is a routine compiled for high throughput accelerators (such as graphics processing units (GPUs), digital signal processors (DSPs) or field-programmable gate arrays (FPGAs)), separate from but used by a main program (typically running on a central processing unit). They are sometimes called compute shaders, sharing execution units with vertex shaders and pixel shaders on GPUs, but are not limited to execution on one class of device, or graphics APIs.
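
The host/kernel split can be illustrated with a deliberately simplified sketch: the kernel is written for one index of the problem, and a launch routine maps it over the whole index space. No specific GPU API is assumed; the sequential Python loop merely stands in for the parallel launch an accelerator would perform.

```python
# Conceptual sketch of a compute kernel and its launch (no real device API).
import numpy as np

def saxpy_kernel(i, a, x, y, out):
    """One kernel instance: compute a single element of out = a*x + y."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a device launch: run one kernel instance per index."""
    for i in range(n):
        kernel(i, *args)

n = 1_000
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.empty(n, dtype=np.float32)

launch(saxpy_kernel, n, 2.0, x, y, out)
print(out[:5])  # [1. 3. 5. 7. 9.]
```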

Coprocessor

A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.

Glossary of artificial intelligence

Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones.

This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.

Hardware acceleration

In computing, hardware acceleration is the use of computer hardware specially made to perform some functions more efficiently than is possible in software running on a general-purpose CPU. Any transformation of data or routine that can be computed, can be calculated purely in software running on a generic CPU, purely in custom-made hardware, or in some mix of both. An operation can be computed faster in application-specific hardware designed or programmed to compute the operation than specified in software and performed on a general-purpose computer processor. Each approach has advantages and disadvantages. The implementation of computing tasks in hardware to decrease latency and increase throughput is known as hardware acceleration.

Typical advantages of software include more rapid development (leading to faster times to market), lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of hardware include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification and times to market. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as GPUs, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuit (ASICs).

Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as needed as in software solutions. With the advent of reprogrammable logic devices such as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processing control flow.

Hazard (computer architecture)

In the domain of central processing unit (CPU) design, hazards are problems with the instruction pipeline in CPU microarchitectures when the next instruction cannot execute in the following clock cycle, and can potentially lead to incorrect computation results. Three common types of hazards are data hazards, structural hazards, and control hazards (branching hazards). There are several methods used to deal with hazards, including pipeline stalls/pipeline bubbling, operand forwarding, and in the case of out-of-order execution, the scoreboarding method and the Tomasulo algorithm.
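
A toy example of a data hazard may help: the second instruction below reads a register that the first has not yet written back, so without operand forwarding the pipeline must insert stall cycles (bubbles). The two-cycle penalty and the instruction encoding are assumptions of this sketch, not a description of any particular CPU.

```python
# Toy read-after-write (RAW) hazard check over a two-instruction program.
instructions = [
    ("ADD", "r1", "r2", "r3"),  # r1 <- r2 + r3
    ("SUB", "r4", "r1", "r5"),  # r4 <- r1 - r5   (reads r1: RAW hazard)
]

def count_stalls(program, forwarding):
    stalls = 0
    for prev, curr in zip(program, program[1:]):
        dest = prev[1]
        sources = curr[2:]
        if dest in sources:                   # read-after-write dependency
            stalls += 0 if forwarding else 2  # assume 2 bubbles without forwarding
    return stalls

print("stalls without forwarding:", count_stalls(instructions, forwarding=False))  # 2
print("stalls with forwarding:   ", count_stalls(instructions, forwarding=True))   # 0
```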

Massively parallel processor array

A massively parallel processor array, also known as a multi purpose processor array (MPPA) is a type of integrated circuit which has a massively parallel array of hundreds or thousands of CPUs and RAM memories. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.

Microarchitecture

In computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology. Computer architecture is the combination of microarchitecture and instruction set architecture.

Neural network software

Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and in some cases, a wider array of adaptive systems such as artificial intelligence and machine learning.

Neuromorphic engineering

Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead, in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors.

A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.

Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.

Physical neural network

A physical neural network is a type of artificial neural network in which an electrically adjustable resistance material is used to emulate the function of a neural synapse. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches which simulate neural networks. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.

Software Guard Extensions

Intel Software Guard Extensions (SGX) is a set of security-related instruction codes that are built into some modern Intel central processing units (CPUs). They allow user-level as well as operating system code to define private regions of memory, called enclaves, whose contents are protected and unable to be either read or saved by any process outside the enclave itself, including processes running at higher privilege levels. SGX is disabled by default and must be opted in to by the user through their BIOS settings on a supported system.

SGX involves encryption by the CPU of a portion of memory. The enclave is decrypted on the fly only within the CPU itself, and even then, only for code and data running from within the enclave itself. The processor thus protects the code from being "spied on" or examined by other code. The code and data in the enclave utilise a threat model in which the enclave is trusted but no process outside it (including the operating system itself and any hypervisor) can be trusted, and these are all treated as potentially hostile. The enclave contents are unable to be read by any code outside the enclave, other than in its encrypted form.

SGX is designed to be useful for implementing secure remote computation, secure web browsing, and digital rights management (DRM). Other applications include concealment of proprietary algorithms and of encryption keys.

TensorFlow

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015.
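
A small example of the dataflow and differentiable-programming style, using TensorFlow's public 2.x eager API (this assumes the tensorflow package is installed; the particular tensors are arbitrary): operations are recorded on a tape so that gradients of an output with respect to its inputs can be derived automatically.

```python
# Minimal TensorFlow 2.x example of automatic differentiation over a dataflow of ops.
import tensorflow as tf

w = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
x = tf.constant([[1.0], [0.5]])

with tf.GradientTape() as tape:
    y = tf.matmul(w, x)           # one matmul node in the dataflow
    loss = tf.reduce_sum(y * y)   # scalar objective

grad = tape.gradient(loss, w)     # d(loss)/d(w), derived from the recorded operations
print(grad.numpy())
```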

Tensor processing unit

A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning.

TrueNorth

TrueNorth is a neuromorphic CMOS integrated circuit produced by IBM in 2014. It is a manycore processor network on a chip design, with 4096 cores, each one having 256 programmable simulated neurons for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion. Since memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von-Neumann-architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it only draws power necessary for computation.
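
The neuron and synapse counts quoted above can be checked directly:

```python
# Worked check of TrueNorth's neuron and synapse counts.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576 -- just over a million
synapses = neurons * synapses_per_neuron  # 268,435,456 = 2**28
print(neurons, synapses, synapses == 2**28)  # 1048576 268435456 True
```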

Vision processing unit

A vision processing unit (VPU) is (as of 2018) an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.

Zeroth (software)

Zeroth is a platform for brain-inspired computing from Qualcomm. It is based around a neural processing unit (NPU) AI accelerator chip and a software API to interact with the platform. It makes a form of machine learning known as deep learning available to mobile devices. It is used for image and sound processing, including speech recognition. The software operates locally rather than as a cloud application.

Mobile chip maker Qualcomm announced in March 2015 that it would bundle the software with its next major mobile device chip, the Snapdragon 820 processor.

