Bigtable

Bigtable is a compressed, high-performance, proprietary data storage system built on Google File System, Chubby Lock Service, SSTable (log-structured storage, as in LevelDB) and a few other Google technologies. On May 6, 2015, a public version of Bigtable was made available as a service. Bigtable also underlies Google Cloud Datastore, which is available as a part of the Google Cloud Platform.[1][2]

Google Bigtable
Developer(s): Google Inc.
Initial release: February 2005
Written in: C++ (core), Java, Python, Go, Ruby
Platform: Google Cloud Platform
Type: Cloud Storage
License: Proprietary
Website: cloud.google.com/bigtable/

History

Bigtable development began in 2004[3] and is now used by a number of Google applications, such as web indexing,[4] MapReduce, which is often used for generating and modifying data stored in Bigtable,[5] Google Maps,[6] Google Book Search, "My Search History", Google Earth, Blogger.com, Google Code hosting, YouTube,[7] and Gmail.[8] Google's reasons for developing its own database include scalability and better control of performance characteristics.[9]

Google's Spanner RDBMS is layered on an implementation of Bigtable with a Paxos group for two-phase commits to each table. Google F1 was built using Spanner to replace an implementation based on MySQL.[10]

Design

Bigtable is one of the prototypical examples of a wide column store. It maps two arbitrary string values (row key and column key) and a timestamp (hence three-dimensional mapping) into an associated arbitrary byte array. It is not a relational database and can be better defined as a sparse, distributed multi-dimensional sorted map.[4]:1 Bigtable is designed to scale into the petabyte range across "hundreds or thousands of machines, and to make it easy to add more machines [to] the system and automatically start taking advantage of those resources without any reconfiguration".[11] For example, Google's copy of the web can be stored in a bigtable where the row key is a domain-reversed URL and the columns describe various properties of a web page, with one particular column holding the page itself. The page column can have several timestamped versions, one for each copy of the page as of the time it was fetched. Each cell of a bigtable can have zero or more timestamped versions of the data. Another function of the timestamp is to allow for both versioning and garbage collection of expired data.
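
The three-dimensional mapping can be illustrated with a small in-memory sketch (Python; purely illustrative, since real Bigtable shards this map across tablet servers and keeps rows in lexicographic order):

    import time

    # Toy stand-in for Bigtable's sparse map:
    # webtable[row_key][column_key] is a list of (timestamp, value)
    # pairs, kept newest-first.
    webtable = {}

    def put(row, column, value, ts=None):
        ts = ts if ts is not None else time.time_ns()
        versions = webtable.setdefault(row, {}).setdefault(column, [])
        versions.append((ts, value))
        versions.sort(reverse=True)              # newest version first

    def read(row, column, n_versions=1):
        # A missing cell is simply absent -- the map is sparse.
        return webtable.get(row, {}).get(column, [])[:n_versions]

    put("com.example/index.html", "contents:", b"<html>v1</html>", ts=1)
    put("com.example/index.html", "contents:", b"<html>v2</html>", ts=2)
    print(read("com.example/index.html", "contents:"))  # newest fetch wins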

Tables are split into multiple tablets – segments of the table are split at certain row keys so that each tablet is a few hundred megabytes or a few gigabytes in size. A bigtable is somewhat like a MapReduce worker pool in that thousands to hundreds of thousands of tablet shards may be served by hundreds to thousands of Bigtable servers. When table sizes threaten to grow beyond a specified limit, the tablets may be compressed using the algorithm BMDiff[12][13] and the Zippy compression algorithm,[14] publicly known and open-sourced as Snappy,[15] which is a less space-optimal variation of LZ77 but more efficient in terms of computing time. The locations of tablets in GFS are recorded as database entries in multiple special tablets called "META1" tablets. META1 tablets are found by querying the single "META0" tablet, which typically resides on a server of its own, since it is often queried by clients for the location of the META1 tablet that in turn knows where the actual data is located. Like GFS's master server, the META0 server is not generally a bottleneck, since the processor time and bandwidth needed to discover and transmit META1 locations are minimal and clients aggressively cache locations to minimize queries.
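
This lookup path can be pictured as a three-level hierarchy with aggressive client-side caching. A minimal sketch (the names and structures below are illustrative, not Bigtable's actual interfaces):

    import bisect

    # META0 maps row-key ranges to META1 tablets; each META1 tablet maps
    # row-key ranges to the server holding the actual data tablet.
    META0 = [("k", "meta1-a"), ("z", "meta1-b")]       # (end_key, tablet)
    META1 = {
        "meta1-a": [("f", "tabletserver-1"), ("k", "tabletserver-2")],
        "meta1-b": [("z", "tabletserver-3")],
    }
    location_cache = {}              # clients cache locations aggressively

    def lookup(index, row_key):
        # Each index is sorted by end key; bisect finds the covering range
        # (the ranges are assumed to cover the whole keyspace).
        keys = [end for end, _ in index]
        return index[bisect.bisect_left(keys, row_key)][1]

    def locate_tablet(row_key):
        if row_key in location_cache:  # cache hit: no META0/META1 traffic
            return location_cache[row_key]
        meta1_tablet = lookup(META0, row_key)          # query META0 once
        server = lookup(META1[meta1_tablet], row_key)  # then META1
        location_cache[row_key] = server
        return server

    print(locate_tablet("com.example/index.html"))   # -> tabletserver-1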

References

  1. ^ "Announcing Google Cloud Bigtable: The same database that powers Google Search, Gmail and Analytics is now available on Google Cloud Platform". Google Blog. May 6, 2015. Retrieved September 21, 2016.
  2. ^ "Get started with Google Cloud Datastore - a fast, powerful, NoSQL database".
  3. ^ Kumar, Aswini; Whitchcock, Andrew, ed., Google's Bigtable, First an overview. Bigtable has been in development since early 2004 and has been in active use for about eight months (about February 2005).
  4. ^ a b Chang et al. 2006.
  5. ^ Chang et al. 2006, p. 3: ‘Bigtable can be used with MapReduce, a framework for running large-scale parallel computations developed at Google. We have written a set of wrappers that allow a Bigtable to be used both as an input source and as an output target for MapReduce jobs’
  6. ^ Whitchcock, Andrew, Google's Bigtable, There are currently around 100 cells for services such as Print, Search History, Maps, and Orkut.
  7. ^ Cordes, Kyle (2007-07-12), YouTube Scalability (talk), Their new solution for thumbnails is to use Google’s Bigtable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition.
  8. ^ "How Entities and Indexes are Stored", Google App Engine, Google Code.
  9. ^ Chang et al. 2006, Conclusion: ‘We have described Bigtable, a distributed system for storing structured data at Google... Our users like the performance and high availability provided by the Bigtable implementation, and that they can scale the capacity of their clusters by simply adding more machines to the system as their resource demands change over time... Finally, we have found that there are significant advantages to building our own storage solution at Google. We have gotten a substantial amount of flexibility from designing our own data model for Bigtable.’
  10. ^ Shute, Jeffrey ‘Jeff’; Oancea, Mircea; Ellner, Stephan; Handy, Benjamin ‘Ben’; Rollins, Eric; Samwel, Bart; Vingralek, Radek; Whipkey, Chad; Chen, Xin; Jegerlehner, Beat; Littlefield, Kyle; Tong, Phoenix (2012), "Summary; F1 — the Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business", Research (presentation), Sigmod: Google, p. 19, We've moved a large and critical application suite from MySQL to F1.
  11. ^ "Google File System and Bigtable", Radar (World Wide Web log), Database War Stories (7), O’Reilly, May 2006.
  12. ^ "Google Bigtable, Compression, Zippy and BMDiff". 2008-10-12. Archived from the original on 1 May 2013. Retrieved 14 April 2015..
  13. ^ Bentley, Jon; McIlroy, Douglas. Data compression using long common strings. DCC '99. IEEE.
  14. ^ "Google's Bigtable", Outer court (Weblog), 2005-10-23.
  15. ^ "Snappy", Code (project), Google.

Bibliography

Chang, Fay; Dean, Jeffrey; Ghemawat, Sanjay; Hsieh, Wilson C.; Wallach, Deborah A.; Burrows, Mike; Chandra, Tushar; Fikes, Andrew; Gruber, Robert E. (2006), "Bigtable: A Distributed Storage System for Structured Data", 7th USENIX Symposium on Operating Systems Design and Implementation (OSDI).

Apache Accumulo

Apache Accumulo is a highly scalable, sorted, distributed key-value store based on Google's Bigtable. It is a system built on top of Apache Hadoop, Apache ZooKeeper, and Apache Thrift. Written in Java, Accumulo features cell-level access labels and server-side programming mechanisms. According to the DB-Engines ranking, Accumulo was the third most popular NoSQL wide column store behind Apache Cassandra and HBase, and the 61st most popular database engine of any type, as of 2018.
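
Cell-level access labels attach a boolean visibility expression to each key-value pair, and a scan returns only cells whose expression is satisfied by the reader's authorizations. A toy Python model of the idea (illustrative only; Accumulo's actual ColumnVisibility labels support parenthesized, nested expressions):

    # A cell is visible if its label expression is satisfied by the
    # reader's authorizations. This sketch handles flat '&' and '|' only.
    def visible(expression, authorizations):
        if not expression:              # empty label: visible to everyone
            return True
        if "|" in expression:           # any OR'd term grants access
            return any(t in authorizations for t in expression.split("|"))
        return all(t in authorizations for t in expression.split("&"))

    cells = [
        (("row1", "cf:balance"), "admin&audit", b"1000"),
        (("row1", "cf:name"),    "public",      b"alice"),
    ]
    auths = {"public", "admin"}
    for key, label, value in cells:
        if visible(label, auths):
            print(key, value)           # only the 'public' cell prints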

Apache Cassandra

Apache Cassandra is a free and open-source, distributed, wide column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
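
Replication is configured per keyspace. A minimal sketch using the DataStax Python driver (the contact point and datacenter names are placeholders for a real deployment):

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])  # any node can coordinate: no master
    session = cluster.connect()

    # NetworkTopologyStrategy places three replicas in each named
    # datacenter, so either site can serve reads and writes locally.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {
            'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3
        }
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS demo.users (
            user_id uuid PRIMARY KEY, name text
        )
    """)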

Apache HBase

HBase is an open-source, non-relational, distributed database modeled after Google's Bigtable and written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (Hadoop Distributed File System) or Alluxio, providing Bigtable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data (small amounts of information caught within a large collection of empty or unimportant data, such as finding the 50 largest items in a group of 2 billion records, or finding the non-zero items representing less than 0.1% of a huge collection).

HBase features compression, in-memory operation, and Bloom filters on a per-column basis, as outlined in the original Bigtable paper. Tables in HBase can serve as the input and output for MapReduce jobs run in Hadoop, and may be accessed through the Java API as well as through REST, Avro, or Thrift gateway APIs. HBase is a column-oriented key-value data store and has been widely adopted because of its lineage with Hadoop and HDFS. HBase runs on top of HDFS and is well-suited for fast read and write operations on large datasets with high throughput and low input/output latency.
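
As an example of the Thrift gateway route, the community happybase Python library talks to HBase's Thrift server (a sketch; the host, table, and column-family names are placeholders, and the Thrift server is assumed to be running):

    import happybase

    connection = happybase.Connection("hbase-thrift-host")
    table = connection.table("webtable")

    # Column qualifiers live inside a declared column family ('cf' here);
    # each cell value is an uninterpreted byte string, as in Bigtable.
    table.put(b"com.example", {b"cf:contents": b"<html>...</html>"})
    row = table.row(b"com.example")
    print(row[b"cf:contents"])
    connection.close()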

HBase is not a direct replacement for a classic SQL database; however, the Apache Phoenix project provides a SQL layer for HBase as well as a JDBC driver that can be integrated with various analytics and business intelligence applications. The Apache Trafodion project provides a SQL query engine with ODBC and JDBC drivers and distributed ACID transaction protection across multiple statements, tables and rows, using HBase as a storage engine.

HBase now serves several data-driven websites, although Facebook's Messaging Platform migrated from HBase to MyRocks. Unlike relational and traditional databases, HBase does not support SQL scripting; instead, the equivalent is written in Java, much like a MapReduce application.

In the parlance of Eric Brewer’s CAP Theorem, HBase is a CP type system.

Apache Trafodion

Apache Trafodion is an open-source Top-Level Project at the Apache Software Foundation. It was originally developed by the information technology division of Hewlett-Packard Company and HP Labs to provide the SQL query language on Apache HBase targeting big data transactional or operational workloads. The project was named after the Welsh word for transactions.

Comparison of structured storage software

Structured storage is computer storage for structured data, often in the form of a distributed database. Software formally classed as structured storage includes Apache Cassandra, Google's Bigtable and Apache HBase.

DataNucleus

DataNucleus (formerly known as Java Persistent Objects, or JPOX) is an open source project (under the Apache 2 license) which provides software products around data management in Java. The project started in 2003 as JPOX and was relaunched as DataNucleus in 2008 with a broader scope.

DataNucleus Access Platform is a fully compliant implementation of the Java Data Objects (JDO) 1.0, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2 specifications (JSR 0012, JSR 0243) and the Java Persistence API (JPA) 1.0, 2.0, 2.1, 2.2 specifications (JSR 0220, JSR 0317, JSR 0338), providing transparent persistence of Java objects. It supports persistence to the widest range of datastores of any Java persistence software, supports all of the main object-relational mapping (ORM) patterns, allows querying using JDOQL, JPQL or SQL, and comes with its own byte-code enhancer. It allows persistence to relational datastores (RDBMS), object-based datastores (db4o, NeoDatis ODB), document-based storage (XML, Excel, OpenDocument spreadsheets), web-based storage (JSON, Google Storage, Amazon Simple Storage Service), map-based datastores (HBase, Google's Bigtable, Apache Cassandra), graph-based datastores (Neo4j), document stores (MongoDB) and other types of datastores (e.g. LDAP). Its plugins are OSGi-compliant, so it can be used equally well in an OSGi environment.

DataNucleus Access Platform is also utilised by the persistence layer behind Google App Engine for Java and by VMforce (a cloud offering from Salesforce.com and VMware).

Distributed data store

A distributed data store is a computer network where information is stored on more than one node, often in a replicated fashion. It is usually specifically used to refer to either a distributed database where users store information on a number of nodes, or a computer network in which users store information on a number of peer network nodes.

Google Cloud Datastore

Google Cloud Datastore (Cloud Datastore) is a highly scalable, fully managed NoSQL database service offered by Google on the Google Cloud Platform. Cloud Datastore is built upon Google's Bigtable and Megastore technology.
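
A minimal usage sketch with the google-cloud-datastore Python client (the kind and property names are placeholders, and Google Cloud credentials are assumed to be configured):

    from google.cloud import datastore

    client = datastore.Client()              # uses the default GCP project

    key = client.key("Task", "sample-task")  # kind + name identify the entity
    entity = datastore.Entity(key=key)
    entity.update({"description": "Learn Cloud Datastore", "done": False})
    client.put(entity)                       # upsert

    fetched = client.get(key)
    print(fetched["description"])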

Google File System

Google File System (GFS or GoogleFS) is a proprietary distributed file system developed by Google to provide efficient, reliable access to data using large clusters of commodity hardware. A new version of Google File System, code-named Colossus, was released in 2010.

Hypertable

Hypertable was an open-source software project to implement a database management system inspired by publications on the design of Google's Bigtable.

Hypertable runs on top of a distributed file system such as Apache HDFS, GlusterFS or the CloudStore Kosmos File System (KFS). It is written almost entirely in C++, as the developers believed this had significant performance advantages over Java. Hypertable software was originally developed at the company Zvents before 2008.

Doug Judd was a promoter of Hypertable.

In January 2009, Baidu, the Chinese language search engine, became a project sponsor.

Version 0.9.2.1 was described in a blog post in February 2009.

Development ended in March 2016.

LevelDB

LevelDB is an open-source on-disk key-value store written by Google Fellows Jeffrey Dean and Sanjay Ghemawat. Inspired by Bigtable, LevelDB is hosted on GitHub under the New BSD License and has been ported to a variety of Unix-based systems, macOS, Windows, and Android.
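
From Python, LevelDB is commonly reached through bindings such as plyvel (a sketch; the on-disk path is a placeholder):

    import plyvel

    db = plyvel.DB("/tmp/example-db", create_if_missing=True)
    db.put(b"row:com.example", b"<html>...</html>")
    print(db.get(b"row:com.example"))

    # Keys come back in sorted order, which makes prefix and range scans
    # cheap -- the access pattern LSM-backed stores are designed around.
    for key, value in db.iterator(prefix=b"row:"):
        print(key, value)
    db.close()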

Log-structured merge-tree

In computer science, the log-structured merge-tree (or LSM tree) is a data structure with performance characteristics that make it attractive for providing indexed access to files with high insert volume, such as transactional log data. LSM trees, like other search trees, maintain key-value pairs. LSM trees maintain data in two or more separate structures, each of which is optimized for its respective underlying storage medium; data is synchronized between the two structures efficiently, in batches.

One simple version of the LSM tree is a two-level LSM tree.

As described by Patrick O'Neil, a two-level LSM tree comprises two tree-like structures, called C0 and C1. C0 is smaller and entirely resident in memory, whereas C1 is resident on disk. New records are inserted into the memory-resident C0 component. If the insertion causes the C0 component to exceed a certain size threshold, a contiguous segment of entries is removed from C0 and merged into C1 on disk. The performance characteristics of LSM trees stem from the fact that each component is tuned to the characteristics of its underlying storage medium, and that data is efficiently migrated across media in rolling batches, using an algorithm reminiscent of merge sort.
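
A compact sketch of this write path, with a plain sorted list standing in for each tree-like component (illustrative; real implementations use trees and on-disk files):

    import bisect

    C0, C1 = [], []        # C0: memory-resident; C1: stand-in for disk
    C0_LIMIT = 4           # flush threshold (tiny, for illustration)

    def insert(key, value):
        bisect.insort(C0, (key, value))    # new records land in C0 first
        if len(C0) > C0_LIMIT:
            merge_into_c1()

    def merge_into_c1():
        # Migrate a contiguous batch of entries from C0 into C1, merging
        # the two sorted sequences as in merge sort.
        global C0, C1
        batch, C0 = C0[: C0_LIMIT // 2], C0[C0_LIMIT // 2 :]
        merged, i, j = [], 0, 0
        while i < len(batch) and j < len(C1):
            if batch[i] <= C1[j]:
                merged.append(batch[i]); i += 1
            else:
                merged.append(C1[j]); j += 1
        C1 = merged + batch[i:] + C1[j:]

    for k in "dbaceg":
        insert(k, k.upper())
    print(C0, C1)          # C1 now holds the oldest, smallest-keyed batch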

Most LSM trees used in practice employ multiple levels. Level 0 is kept in main memory, and might be represented using a tree. The on-disk data is organized into sorted runs of data. Each run contains data sorted by the index key. A run can be represented on disk as a single file, or alternatively as a collection of files with non-overlapping key ranges. To perform a query on a particular key to get its associated value, one must search in the Level 0 tree and each run.

A particular key may appear in several runs, and what that means for a query depends on the application. Some applications simply want the newest key-value pair with a given key. Some applications must combine the values in some way to get the proper aggregate value to return. For example, in Apache Cassandra, each value represents a row in a database, and different versions of the row may have different sets of columns. In order to keep down the cost of queries, the system must avoid a situation where there are too many runs.
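
Continuing the sketch above, a point lookup consults the memory component first and then each run from newest to oldest, returning the first hit (newest-wins semantics, as in stores such as LevelDB; combining semantics as in Cassandra would fold the matches together instead):

    # Check the in-memory level first, then each on-disk run from newest
    # to oldest, returning the first match (newest-wins semantics).
    def get(key, memtable, runs):
        if key in memtable:
            return memtable[key]
        for run in runs:              # runs ordered newest -> oldest
            if key in run:            # real stores consult Bloom filters
                return run[key]       # and key ranges to skip most runs
        return None

    memtable = {"k3": "v3-new"}
    runs = [{"k2": "v2", "k3": "v3-old"}, {"k1": "v1"}]
    print(get("k3", memtable, runs))  # -> 'v3-new' (memtable shadows runs)
    print(get("k1", memtable, runs))  # -> 'v1' (found in the oldest run)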

Extensions to the 'levelled' method to incorporate B+ tree structures have been suggested, for example bLSM and Diff-Index. LSM trees are used in data stores such as Bigtable, HBase, LevelDB, MongoDB, SQLite4, Tarantool, RocksDB, WiredTiger, Apache Cassandra, InfluxDB and VictoriaMetrics.

Mangler pattern

Mangler is a software design pattern that performs multiple operations over a series of data, similar to the MapReduce function used with Bigtable and Amazon's Dynamo. Typically, a mangler is fed a series of maps, from which it performs its internal operations and passes its internal state/data to an external filter.

A typical usage of the Mangler pattern is during internal search operations. When parsing a query from an end user, the system will try to strip out a series of unneeded tokens, reassembling the original query into a more usable, functional query.
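
One possible reading of this description, sketched in Python (entirely illustrative; the pattern as described above is not publicly specified, and the stop-token list is hypothetical):

    # Consume a series of maps, fold them through internal operations,
    # and hand the accumulated state to an external filter.
    STOP_TOKENS = {"the", "a", "of"}       # hypothetical un-needed tokens

    def mangler(maps, filter_fn):
        state = []
        for record in maps:                # fed a series of maps
            tokens = record.get("query", "").split()
            state.extend(t for t in tokens if t not in STOP_TOKENS)
        return filter_fn(state)            # pass internal state to a filter

    result = mangler(
        [{"query": "the history of bigtable"}],
        filter_fn=lambda tokens: " ".join(tokens),
    )
    print(result)                          # -> 'history bigtable'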

An important distinction between the Mangler and other patterns is the "Modify in place" optimization, pioneered by the pattern's creator.

This pattern was created by Dr. John Watson, during his tenure at TransUnion's Research and Development Lab.

Programming languages used in most popular websites

The most popular (i.e., the most visited) websites have in common that they are dynamic websites. Their development typically involves server-side coding, client-side coding and database technology. The programming languages applied to deliver similar dynamic web content however vary vastly between sites.

Data on programming languages are based on HTTP header information and requests for file types.

Sanjay Ghemawat

Sanjay Ghemawat (born 1966 in West Lafayette, Indiana) is an Indian American computer scientist and software engineer. He is currently a Senior Fellow at Google in the Systems Infrastructure Group. Ghemawat's work at Google, much of it in close collaboration with Jeff Dean, has included the big data processing model MapReduce, the Google File System, and the databases Bigtable and Spanner. Wired has described him as one of the "most important software engineers of the internet age".

Snappy (compression)

Snappy (previously known as Zippy) is a fast data compression and decompression library written in C++ by Google, based on ideas from LZ77 and open-sourced in 2011. It does not aim for maximum compression or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. Compression speed is 250 MB/s and decompression speed is 500 MB/s using a single core of a circa-2011 "Westmere" 2.26 GHz Core i7 processor running in 64-bit mode. The compression ratio is 20–100% lower than gzip. Snappy is widely used in Google projects like Bigtable and MapReduce, and for compressing data in Google's internal RPC systems. It can be used in open-source projects like MariaDB ColumnStore, Cassandra, Hadoop, LevelDB, MongoDB, RocksDB, and Lucene. Decompression is tested to detect any errors in the compressed stream. Snappy does not use inline assembler (except for some optimizations) and is portable.
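
A minimal usage sketch with the python-snappy bindings to the C++ library:

    import snappy

    data = b"wide column stores repeat column names in every row " * 100
    compressed = snappy.compress(data)
    print(len(data), "->", len(compressed))   # fast, moderate-ratio

    # Decompression validates the stream and raises on corrupt input.
    assert snappy.uncompress(compressed) == data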

Spanner (database)

Spanner is Google's globally distributed NewSQL database. Google describes Spanner as not a pure relational database system because each table must have a primary-key column.

Wide column store

A wide column store is a type of NoSQL database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table. A wide column store can be interpreted as a two-dimensional key-value store. As such two-level structures do not use a columnar data layout, wide column stores such as Bigtable and Apache Cassandra are not column stores in the original sense of the term. In genuine column stores, a columnar data layout is adopted such that each column is stored separately on disk. Wide column stores do often support the notion of column families that are stored separately. However, each such column family typically contains multiple columns that are used together, similar to traditional relational database tables. Within a given column family, all data is stored in a row-by-row fashion, such that the columns for a given row are stored together, rather than each column being stored separately. Wide column stores that support column families are also known as column family databases.
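
The row-to-variable-columns shape can be pictured as a two-level map (an illustrative sketch; the row keys and column names are invented):

    # Each row maps column names to values, and the set of columns is
    # free to differ from row to row within the same table.
    table = {
        "user:1": {"profile:name": "alice",
                   "profile:email": "a@example.com"},
        "user:2": {"profile:name": "bob",
                   "activity:last_login": "2018-01-02"},
    }

    def get_cell(row_key, column):
        return table.get(row_key, {}).get(column)  # absent cells are free

    print(get_cell("user:2", "activity:last_login"))  # exists only here
    print(get_cell("user:1", "activity:last_login"))  # -> None (sparse)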

This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.