TensorFlow

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.[4] It is used for both research and production at Google.[4][5] Experience with TensorFlow is widely expected for machine learning roles in industry.

TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015.[1][6]

TensorFlow
Developer(s): Google Brain Team[1]
Initial release: November 9, 2015
Stable release: 1.12.0[2] / November 5, 2018
Repository: github.com/tensorflow/tensorflow
Written in: Python, C++, CUDA
Platform: Linux, macOS, Windows, Android, JavaScript[3]
Type: Machine learning library
License: Apache License 2.0
Website: www.tensorflow.org

History

DistBelief

Starting in 2011, Google Brain built DistBelief as a proprietary machine learning system based on deep learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications.[5][7] Google assigned multiple computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow.[8] In 2009, the team, led by Geoffrey Hinton, had implemented generalized backpropagation and other improvements which allowed generation of neural networks with substantially higher accuracy, for instance a 25% reduction in errors in speech recognition.[9]

TensorFlow

TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017.[10] While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units).[11] TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS.

Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.

TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to as tensors. During the Google I/O Conference in June 2016, Jeff Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were from Google.[12]
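
A minimal sketch of such a dataflow graph, written against the TensorFlow 1.x graph-and-session Python API; the constant values and the device string below are illustrative.

    import tensorflow as tf

    with tf.device('/cpu:0'):                        # explicit device placement is optional
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # tensors: multidimensional arrays
        b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
        c = tf.matmul(a, b)                          # an operation (node) in the dataflow graph

    with tf.Session() as sess:                       # executing the graph produces the result
        print(sess.run(c))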

Tensor processing unit (TPU)

In May 2016, Google announced its Tensor Processing Unit (TPU), an application-specific integrated circuit (a hardware chip) built specifically for machine learning and tailored for TensorFlow. The TPU is a programmable AI accelerator designed to provide high throughput of low-precision arithmetic (e.g., 8-bit), and is oriented toward using or running models rather than training them. Google announced it had been running TPUs inside its data centers for more than a year, and had found them to deliver an order of magnitude better-optimized performance per watt for machine learning.[13]

In May 2017, Google announced the second-generation TPU, as well as the availability of TPUs in Google Compute Engine.[14] The second-generation TPUs deliver up to 180 teraflops of performance and, when organized into clusters of 64 TPUs, provide up to 11.5 petaflops.

In February 2018, Google announced that they were making TPUs available in beta on the Google Cloud Platform.[15]

Edge TPU

In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones,[16] an approach known as edge computing.

TensorFlow Lite

In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite.[17] In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices.
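
A minimal sketch of running a TensorFlow Lite model from Python, assuming the tf.lite interpreter API found in recent TensorFlow releases; the model file name and the all-zeros input are hypothetical placeholders.

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input matching the model's expected shape and dtype.
    dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
    interpreter.set_tensor(input_details[0]['index'], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(output_details[0]['index']))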

Pixel Visual Core (PVC)

In October 2017, Google released the Google Pixel 2 which featured their Pixel Visual Core (PVC), a fully programmable Image, Vision and AI processor for mobile devices. The PVC supports TensorFlow for machine learning (and Halide for image processing).

Applications

Google officially released RankBrain on October 26, 2015, backed by TensorFlow.

Google also released Colaboratory, which is a TensorFlow Jupyter notebook environment that requires no setup to use.[18]

Machine Learning Crash Course (MLCC)

On March 1, 2018, Google released its Machine Learning Crash Course (MLCC). Originally designed to help equip Google employees with practical artificial intelligence and machine learning fundamentals, Google rolled out its free TensorFlow workshops in several cities around the world before finally releasing the course to the public.[19]

Features

TensorFlow provides stable Python[20] and C APIs,[21] as well as APIs without backwards-compatibility guarantees for C++, Go, Java,[22] JavaScript[3] and Swift (early release).[23] Third-party packages are available for C#,[24] Haskell,[25] Julia,[26] R,[27] Scala,[28] Rust,[29] OCaml,[30] and Crystal.[31]

Applications

Original photo (left) and with TensorFlow neural style applied (right)


Among the applications for which TensorFlow is the foundation are automated image-captioning software such as DeepDream.[32] RankBrain now handles a substantial number of search queries, replacing and supplementing traditional static algorithm-based search results.[33]

References

  1. ^ a b "Credits". TensorFlow.org. Retrieved November 10, 2015.
  2. ^ "TensorFlow Release". Retrieved 14 November 2018.
  3. ^ a b "TensorFlow.js". Retrieved 28 June 2018. TensorFlow.js has an API similar to the TensorFlow Python API, however it does not support all of the functionality of the TensorFlow Python API.
  4. ^ a b c "TensorFlow: Open source machine learning" (YouTube video). "It is machine learning software being used for various kinds of perceptual and language understanding tasks" — Jeffrey Dean, minute 0:47/2:17.
  5. ^ a b Dean, Jeff; Monga, Rajat; et al. (November 9, 2015). "TensorFlow: Large-scale machine learning on heterogeneous systems" (PDF). TensorFlow.org. Google Research. Retrieved November 10, 2015.
  6. ^ Metz, Cade (November 9, 2015). "Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine". Wired. Retrieved November 10, 2015.
  7. ^ Perez, Sarah (November 9, 2015). "Google Open-Sources The Machine Learning Tech Behind Google Photos Search, Smart Reply And More". TechCrunch. Retrieved November 11, 2015.
  8. ^ Oremus, Will (November 11, 2015). "What Is TensorFlow, and Why Is Google So Excited About It?". Slate. Retrieved November 11, 2015.
  9. ^ Ward-Bailey, Jeff (November 25, 2015). "Google chairman: We're making 'real progress' on artificial intelligence". CSMonitor. Retrieved November 25, 2015.
  10. ^ "Tensorflow Release 1.0.0".
  11. ^ Metz, Cade (November 10, 2015). "TensorFlow, Google's Open Source AI, Points to a Fast-Changing Hardware World". Wired. Retrieved November 11, 2015.
  12. ^ "Machine Learning: Google I/O 2016", minute 07:30/44:44. Retrieved June 5, 2016.
  13. ^ Jouppi, Norm. "Google supercharges machine learning tasks with TPU custom chip". Google Cloud Platform Blog. Retrieved May 19, 2016.
  14. ^ "Build and train machine learning models on our new Google Cloud TPUs". Google. May 17, 2017. Retrieved May 18, 2017.
  15. ^ "Cloud TPU machine learning accelerators now available in beta". Google Cloud Platform Blog. Retrieved 2018-02-12.
  16. ^ Kundu, Kishalaya (2018-07-26). "Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018". Beebom. Retrieved 2019-02-02.
  17. ^ "Google's new machine learning framework is going to put more AI on your phone".
  18. ^ "Colaboratory – Google". research.google.com. Retrieved 2018-11-10.
  19. ^ "Machine Learning Crash Course with TensorFlow APIs". Google.
  20. ^ "All symbols in TensorFlow | TensorFlow". TensorFlow. Retrieved 2018-02-18.
  21. ^ "TensorFlow Version Compatibility | TensorFlow". TensorFlow. Retrieved 2018-05-10. Some API functions are explicitly marked as "experimental" and can change in backward incompatible ways between minor releases. These include other languages
  22. ^ "API Documentation". Retrieved 2018-06-27.
  23. ^ "Swift for TensorFlow". Retrieved 28 June 2018. Swift for TensorFlow is an early stage research project. It has been released to enable open source development and is not yet ready for general use by machine learning developers. The API is subject to change at any time.
  24. ^ Icaza, Miguel de (2018-02-17), TensorFlowSharp: TensorFlow API for .NET languages, retrieved 2018-02-18
  25. ^ haskell: Haskell bindings for TensorFlow, tensorflow, 2018-02-17, retrieved 2018-02-18
  26. ^ "malmaud/TensorFlow.jl". GitHub. Retrieved 28 June 2018.
  27. ^ tensorflow: TensorFlow for R, RStudio, 2018-02-17, retrieved 2018-02-18
  28. ^ Platanios, Anthony (2018-02-17), tensorflow_scala: TensorFlow API for the Scala Programming Language, retrieved 2018-02-18
  29. ^ rust: Rust language bindings for TensorFlow, tensorflow, 2018-02-17, retrieved 2018-02-18
  30. ^ Mazare, Laurent (2018-02-16), tensorflow-ocaml: OCaml bindings for TensorFlow, retrieved 2018-02-18
  31. ^ "fazibear/tensorflow.cr". GitHub. Retrieved 2018-10-10.
  32. ^ Byrne, Michael (November 11, 2015). "Google Offers Up Its Entire Machine Learning Library as Open-Source Software". Vice. Retrieved November 11, 2015.
  33. ^ Woollaston, Victoria (November 25, 2015). "Google releases TensorFlow – Search giant makes its artificial intelligence software available to the public". DailyMail. Retrieved November 25, 2015.

AI accelerator

An AI accelerator is a class of microprocessor or computer system designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning. Typical applications include algorithms for robotics, internet of things and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design. AI accelerators are found in many consumer devices, including smartphones, tablets, and computers.

Bfloat16 floating-point format

The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but keeps only 8 bits of significand precision rather than the 24-bit significand of the binary32 format. Bfloat16 numbers are even less suitable than single-precision 32-bit floating-point numbers for integer calculations, but this is not their intended use.
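
A rough illustration of the format in Python with NumPy, assuming simple truncation of the binary32 bit pattern (hardware implementations may instead round to nearest).

    import numpy as np

    def float32_to_bfloat16_bits(x):
        bits32 = np.asarray(x, dtype=np.float32).view(np.uint32)  # raw binary32 bit pattern
        return (bits32 >> 16).astype(np.uint16)                   # keep sign, exponent, top 7 mantissa bits

    def bfloat16_bits_to_float32(b):
        return (b.astype(np.uint32) << 16).view(np.float32)       # zero-fill the dropped bits

    x = np.array([3.14159], dtype=np.float32)
    b = float32_to_bfloat16_bits(x)
    print(bfloat16_bits_to_float32(b))   # [3.140625] — only 2-3 decimal digits survive, but the exponent range is kept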

The bfloat16 format is utilized in upcoming Intel AI processors such as the Nervana NNP-L1000, in Xeon processors and Intel FPGAs, and in Google Cloud TPUs and TensorFlow.

Comparison of deep-learning software

The following table compares notable software frameworks, libraries and computer programs for deep learning.

Dataflow

Dataflow is a term used in computing which has various meanings depending on application and the context in which the term is used. In the context of software architecture, data flow relates to stream processing or reactive programming.

Deeplearning4j

Eclipse Deeplearning4j is a deep learning programming library written for Java and the Java virtual machine (JVM) and a computing framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark.

Deeplearning4j is open-source software released under Apache License 2.0, developed mainly by a machine learning group headquartered in San Francisco and Tokyo and led by Adam Gibson. It is supported commercially by the startup Skymind, which bundles DL4J, TensorFlow, Keras and other deep learning libraries in an enterprise distribution called the Skymind Intelligence Layer. Deeplearning4j was contributed to the Eclipse Foundation in October 2017.

Differentiable programming

Differentiable programming is a programming paradigm in which programs can be differentiated throughout, usually via automatic differentiation. This allows for gradient-based optimization of parameters in the program, often via gradient descent. Differentiable programming has found use in areas such as combining deep learning with physics engines in robotics, differentiable ray tracing, and image processing.
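
A minimal sketch of the idea using TensorFlow's automatic differentiation (TensorFlow 1.x graph API); the toy function below is purely illustrative.

    import tensorflow as tf

    x = tf.Variable(3.0)
    y = x * x + 2.0 * x            # a tiny "program" built from differentiable operations
    grad = tf.gradients(y, x)[0]   # dy/dx, obtained by automatic differentiation

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(grad))      # 2*x + 2 = 8.0 at x = 3.0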

Google.ai

Google.ai is a division of Google dedicated solely to artificial intelligence. It was announced at Google I/O 2017 by CEO Sundar Pichai.

Ilya Sutskever

Ilya Sutskever is a computer scientist working in machine learning who currently serves as Chief Scientist of OpenAI. He has made several major contributions to the field of deep learning. He is a co-inventor of AlexNet, a convolutional neural network. He co-invented sequence-to-sequence learning together with Oriol Vinyals and Quoc Le. Sutskever is also a co-inventor of AlphaGo and TensorFlow.

Jeff Dean (computer scientist)

Jeffrey Adgate "Jeff" Dean (born July 1968) is an American computer scientist and software engineer. He is currently the lead of Google.ai, Google's AI division.

Keras

Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System), and its primary author and maintainer is François Chollet, a Google engineer. Chollet is also the author of the Xception deep neural network model.

In 2017, Google's TensorFlow team decided to support Keras in TensorFlow's core library. Chollet explained that Keras was conceived to be an interface rather than a standalone machine-learning framework. It offers a higher-level, more intuitive set of abstractions that make it easy to develop deep learning models regardless of the computational backend used. Microsoft added a CNTK backend to Keras as well, available as of CNTK v2.0.
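
A minimal Keras sketch, assuming TensorFlow is installed as the backend; the layer sizes and the random training data are illustrative placeholders.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(32, activation='relu', input_shape=(10,)),  # hidden layer
        Dense(1, activation='sigmoid'),                    # binary output
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # Train briefly on random placeholder data.
    x = np.random.rand(100, 10)
    y = np.random.randint(0, 2, size=(100, 1))
    model.fit(x, y, epochs=2, batch_size=16, verbose=0)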

Neural Network Exchange Format

Neural Network Exchange Format (NNEF) is an artificial neural network data exchange format developed by the Khronos Group. It is intended to reduce machine learning deployment fragmentation by enabling a rich mix of neural network training tools and inference engines to be used by applications across a diverse range of devices and platforms.

OpenCV

OpenCV (Open source computer vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license.

OpenCV supports the deep learning frameworks TensorFlow, Torch/PyTorch and Caffe.
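
A minimal sketch of loading a frozen TensorFlow graph through OpenCV's dnn module; the file names below are hypothetical.

    import cv2

    net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")   # load a TensorFlow model
    image = cv2.imread("input.jpg")
    blob = cv2.dnn.blobFromImage(image, 1.0, (224, 224))     # preprocess into a 4-D blob
    net.setInput(blob)
    output = net.forward()                                    # run inference
    print(output.shape)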

Probabilistic programming language

A probabilistic programming language (PPL) is a programming language designed to describe probabilistic models and then perform inference in those models. PPLs are closely related to graphical models and Bayesian networks, but are more expressive and flexible. Probabilistic programming represents an attempt to "[unify] general purpose programming with probabilistic modeling."

Probabilistic reasoning is a foundational technology of machine learning. It is used by companies such as Google, Amazon.com and Microsoft. Probabilistic reasoning has been used for predicting stock prices, recommending movies, diagnosing computers, detecting cyber intrusions and image detection.

PPLs often extend from a basic language. The choice of underlying basic language depends on the similarity of the model to the basic language's ontology, as well as commercial considerations and personal preference. For instance, Dimple and Chimple are based on Java, Infer.NET is based on the .NET Framework, while PRISM extends from Prolog. However, some PPLs such as WinBUGS and Stan offer a self-contained language, with no obvious origin in another language.

Several PPLs are in active development, including some in beta test.

PyMC3

PyMC3 is a Python package for Bayesian statistical modeling and probabilistic machine learning which focuses on advanced Markov chain Monte Carlo and variational fitting algorithms. It is a rewrite from scratch of the previous version of the PyMC software. Unlike PyMC2, which had used Fortran extensions for performing computations, PyMC3 relies on Theano for automatic differentiation and also for computation optimization and dynamic C compilation. PyMC3 and Stan are among the most popular probabilistic programming tools. PyMC3 is an open-source project, developed by the community and fiscally sponsored by NumFOCUS.

PyMC3 has been used to solve inference problems in several scientific domains, including astronomy, molecular biology, crystallography, chemistry, ecology and psychology. Previous versions of PyMC were also used widely, for example in climate science, public health, neuroscience, and parasitology.

After Theano announced plans to discontinue development in 2017, the PyMC3 team decided in 2018 to develop a new version of PyMC named PyMC4 and to pivot to TensorFlow Probability as its computational backend. Until the new version is in beta, PyMC3 will continue to be the primary target of development efforts, and both it and Theano as its backend will be supported by the PyMC3 team for an extended period of time.
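
A minimal PyMC3 sketch that infers the mean of synthetic normally distributed data; the priors and the data are illustrative.

    import numpy as np
    import pymc3 as pm

    data = np.random.normal(loc=2.0, scale=1.0, size=100)

    with pm.Model():
        mu = pm.Normal('mu', mu=0.0, sd=10.0)              # prior on the unknown mean
        pm.Normal('obs', mu=mu, sd=1.0, observed=data)     # likelihood
        trace = pm.sample(1000)                            # MCMC sampling (NUTS by default)

    print(trace['mu'].mean())                              # posterior mean, close to 2.0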

Sanjay Ghemawat

Sanjay Ghemawat (born 1966 in West Lafayette, Indiana) is an Indian American computer scientist and software engineer. He is currently a Senior Fellow at Google in the Systems Infrastructure Group. Ghemawat's work at Google, much of it in close collaboration with Jeff Dean, has included the big data processing model MapReduce, the Google File System, and the databases Bigtable and Spanner. Wired has described him as one of the "most important software engineers of the internet age".

SpaCy

spaCy (spay-SEE) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. The library is published under the MIT license and currently offers statistical neural network models for English, German, Spanish, Portuguese, French, Italian, Dutch and multi-language NER, as well as tokenization for various other languages.

Unlike NLTK, which is widely used for teaching and research, spaCy focuses on providing software for production usage. As of version 1.0, spaCy also supports deep learning workflows that allow connecting statistical models trained by popular machine learning libraries like TensorFlow, Keras, Scikit-learn or PyTorch. spaCy's machine learning library, Thinc, is also available as a separate open-source Python library. On November 7, 2017, version 2.0 was released. It features convolutional neural network models for part-of-speech tagging, dependency parsing and named entity recognition, as well as API improvements around training and updating models, and constructing custom processing pipelines.
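
A minimal spaCy sketch for tokenization and named entity recognition, assuming the small English model has been downloaded (python -m spacy download en_core_web_sm).

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Google released TensorFlow in November 2015.")

    print([token.text for token in doc])                    # tokenization
    print([(ent.text, ent.label_) for ent in doc.ents])     # named entities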

Tensor processing unit

A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning.

U-Net

The U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg, Germany. The network is based on the fully convolutional network, and its architecture was modified and extended to work with fewer training images and to yield more precise segmentations. Segmentation of a 512×512 image takes less than a second on a recent GPU.
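
A minimal two-level U-Net-style sketch in Keras; the published U-Net is considerably deeper and uses unpadded convolutions, so the layer sizes here are illustrative only.

    from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate
    from keras.models import Model

    inputs = Input((128, 128, 1))
    # Contracting path
    c1 = Conv2D(16, 3, activation='relu', padding='same')(inputs)
    p1 = MaxPooling2D(2)(c1)
    c2 = Conv2D(32, 3, activation='relu', padding='same')(p1)
    # Expanding path with a skip connection back to the contracting path
    u1 = Conv2DTranspose(16, 2, strides=2, padding='same')(c2)
    u1 = concatenate([u1, c1])
    c3 = Conv2D(16, 3, activation='relu', padding='same')(u1)
    outputs = Conv2D(1, 1, activation='sigmoid')(c3)          # per-pixel segmentation mask

    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')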

XLA

XLA may refer to:

XLA (singer) (born 1981), Canadian indie singer

.xla, a file format for Microsoft Excel add-ins

XLA, the ICAO three letter callsign of former airline XL Airways UK

X-linked agammaglobulinemia, an immune deficiency

Xbox Live Avatar, a character representing a user of the Xbox video game consoles

Xin Los Angeles, a 2006 container ship registered in Hong Kong

Dow XLA elastic fiber, a marketing name for Lastol

XLA (Accelerated Linear Algebra), a domain-specific compiler for linear algebra that is used to optimize TensorFlow computations
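
For the last entry above, Accelerated Linear Algebra, a brief sketch of enabling XLA just-in-time compilation in a TensorFlow 1.x session; the workload is illustrative.

    import tensorflow as tf

    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

    a = tf.random_normal([1000, 1000])
    b = tf.matmul(a, a)

    with tf.Session(config=config) as sess:
        sess.run(b)   # eligible operations may be fused and compiled by XLA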


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.