Hypertext Transfer Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems.[1] HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate hypertext and the World Wide Web.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP still in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and then again by the RFC 7230 family of RFCs in 2014.

HTTP/2, published in 2015, is the successor to HTTP/1.1; HTTP/3, which builds on HTTP/2, is its proposed successor (currently an Internet Draft).[2][3] HTTP/2 is now supported by major web servers and browsers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation (ALPN) extension,[4] where TLS 1.2 or newer is required.[5]

Hypertext Transfer Protocol

International standard: RFC 7230
Developed by: initially CERN; IETF, W3C
Introduced: 1991
Superseded by: HTTP/2

Technical overview

A URL beginning with the HTTP scheme and the WWW domain name label (figure caption)

HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.

HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol,[6] and Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and Simple Service Discovery Protocol (SSDP).

HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. For example, including all optional components:
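
http://user:password@www.example.com:80/path/to/file.html?query=value#fragment

(an illustrative URL showing the generic URI components: scheme, user information, host, port, path, query, and fragment)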

URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents.

HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, scripts, stylesheets, etc. after the page has been delivered. HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead.

History

The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989—now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server.[7] The response from the server was always an HTML page.[8]

The first documented version of HTTP was HTTP V0.9 (1991). Dave Raggett led the HTTP Working Group (HTTP WG) in 1995 and wanted to expand the protocol with extended operations, extended negotiation, richer meta-information, tied to a security protocol that became more efficient through the addition of extra methods and header fields.[9][10] RFC 1945 officially introduced and recognized HTTP V1.0 in 1996.

The HTTP WG planned to publish new standards in December 1995,[11] and support for pre-standard HTTP/1.1, based on the then-developing RFC 2068 (called HTTP-NG), was rapidly adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena,[12] Netscape 2.0,[12] Netscape Navigator Gold 2.01,[12] Mosaic 2.7, Lynx 2.5, and in Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant.[13] The HTTP/1.1 standard as defined in RFC 2068 was officially released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999.

In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616:

  • RFC 7230, HTTP/1.1: Message Syntax and Routing
  • RFC 7231, HTTP/1.1: Semantics and Content
  • RFC 7232, HTTP/1.1: Conditional Requests
  • RFC 7233, HTTP/1.1: Range Requests
  • RFC 7234, HTTP/1.1: Caching
  • RFC 7235, HTTP/1.1: Authentication

HTTP/2 was published as RFC 7540 in May 2015.

Year  HTTP Version
1991  0.9
1996  1.0
1997  1.1
2015  2.0
2018  3.0

HTTP session

An HTTP session is a sequence of network request-response transactions. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a server (typically port 80, occasionally port 8080; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for a client's request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own. The body of this message is typically the requested resource, although an error message or other information may also be returned.[1]
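
As a minimal sketch of how such a session looks on the wire, the following Python script opens a TCP connection to a server, sends a GET request, and prints the status line. The host name is illustrative and error handling is omitted.

# Minimal HTTP/1.1 exchange over a raw TCP socket (illustrative sketch).
import socket

host = "www.example.com"  # illustrative host name

with socket.create_connection((host, 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read until the server closes the connection (Connection: close).
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line is the first CRLF-terminated line of the response.
print(response.split(b"\r\n", 1)[0].decode("ascii"))  # e.g. HTTP/1.1 200 OK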

Persistent connections

In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. In HTTP/1.1 a keep-alive mechanism was introduced, whereby a connection can be reused for more than one request. Such persistent connections reduce request latency perceptibly, because the client does not need to perform the TCP three-way handshake again after the first request has been sent. Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start mechanism.

Version 1.1 of the protocol also made bandwidth optimization improvements to HTTP/1.0. For example, HTTP/1.1 introduced chunked transfer encoding to allow content on persistent connections to be streamed rather than buffered. HTTP pipelining further reduces lag time, allowing clients to send multiple requests before waiting for each response. Another addition to the protocol was byte serving, where a server transmits just the portion of a resource explicitly requested by a client.
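
As an illustration of chunked transfer encoding, the response below streams its body as length-prefixed chunks (lengths in hexadecimal) terminated by a zero-length chunk; every line ends with CRLF, and the headers and content are made up:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

4
Wiki
5
pedia
0

The decoded body is "Wikipedia"; the client knows the message is complete when it reads the final zero-length chunk, so no Content-Length header is needed.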

HTTP session state

HTTP is a stateless protocol: it does not require the HTTP server to retain information or status about each user for the duration of multiple requests. However, some web applications implement state or server-side sessions using, for instance, HTTP cookies or hidden variables within web forms.

HTTP authentication

HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication which operate via a challenge-response mechanism whereby the server identifies and issues a challenge before serving the requested content.

HTTP provides a general framework for access control and authentication, via an extensible set of challenge-response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information.[14]
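
A hypothetical basic-authentication exchange might look as follows; the realm, path, and credentials are illustrative (the Base64 value encodes "Aladdin:open sesame", the example pair used in the RFCs). The server first challenges the request:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Protected Area"

and the client repeats the request with an Authorization header:

GET /private/index.html HTTP/1.1
Host: www.example.com
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==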

Authentication realms

The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI.[14]

Message format

The client and server communicate by sending plain-text (ASCII) messages. The client sends requests to the server and the server sends responses.

Request message

The request message consists of the following:

  • a request line (e.g., GET /images/logo.png HTTP/1.1, which requests a resource called /images/logo.png from the server.)
  • request header fields (e.g., Accept-Language: en).
  • an empty line
  • an optional message body

The request line and other header fields must each end with <CR><LF> (that is, a carriage return character followed by a line feed character). The empty line must consist of only <CR><LF> and no other whitespace.[15] In the HTTP/1.1 protocol, all header fields except Host are optional.

A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients that predate the HTTP/1.0 specification in RFC 1945.[16]

Request methods

An HTTP/1.1 request made using telnet. The request message, response header section, and response body are highlighted. (figure caption)

HTTP defines methods (sometimes referred to as verbs, though the specification never uses that term, and OPTIONS and HEAD are not verbs) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification[17] defined the GET, HEAD and POST methods, and the HTTP/1.1 specification[18] added five new methods: OPTIONS, PUT, DELETE, TRACE and CONNECT. By being specified in these documents, their semantics are well known and can be depended on. Any client can use any method, and the server can be configured to support any combination of methods. If a method is unknown to an intermediary, it is treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method.

Method names are case-sensitive,[19][20] in contrast to HTTP header field names, which are case-insensitive.[21]

GET
The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.)[1] The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."[22] See safe methods below.
HEAD
The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
POST
The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.[23]
PUT
The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.[24]
DELETE
The DELETE method deletes the specified resource.
TRACE
The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.
OPTIONS
The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.
CONNECT
The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.[25][26][27] See HTTP CONNECT method.
PATCH
The PATCH method applies partial modifications to a resource.[28]

All general-purpose HTTP servers are required to implement at least the GET and HEAD methods, and all other methods are considered optional by the specification.[29]
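
As a sketch of how these methods are issued in practice, the snippet below uses Python's third-party requests library against an illustrative host; none of the URLs are real.

# Illustrative use of common HTTP methods with the third-party
# "requests" library (pip install requests); host and paths are made up.
import requests

base = "http://www.example.com/articles"

r = requests.get(base)         # GET: retrieve a representation
r = requests.head(base)        # HEAD: same as GET but headers only
r = requests.options(base)     # OPTIONS: ask which methods are supported
print(r.headers.get("Allow"))  # e.g. "GET, HEAD, OPTIONS, POST"

r = requests.post(base, data={"title": "Hello"})     # POST: create a subordinate resource
r = requests.put(base + "/1", data={"title": "Hi"})  # PUT: store under this exact URI
r = requests.delete(base + "/1")                     # DELETE: remove the resource
print(r.status_code)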

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.[30]

One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism.[31][30]

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.[1]

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.
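
The distinction can be illustrated with a toy in-memory store (a sketch, not tied to any real server): repeating a PUT leaves the store unchanged, while repeating a POST keeps creating records.

# Toy illustration of idempotent PUT versus non-idempotent POST.
store = {}
next_id = 1

def put(uri, body):
    # PUT stores the entity under the supplied URI; repeating the
    # same request leaves the store in the same state (idempotent).
    store[uri] = body

def post(body):
    # POST creates a new subordinate resource; each repetition
    # adds another record (not idempotent).
    global next_id
    uri = "/articles/%d" % next_id
    next_id += 1
    store[uri] = body

put("/articles/1", "hello")
put("/articles/1", "hello")
print(len(store))  # 1: the second PUT changed nothing

post("draft")
post("draft")
print(len(store))  # 3: each POST created a new resource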

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration.[32] Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.[32]

HTTP method  RFC       Request has Body  Response has Body  Safe  Idempotent  Cacheable
GET          RFC 7231  Optional          Yes                Yes   Yes         Yes
HEAD         RFC 7231  Optional          No                 Yes   Yes         Yes
POST         RFC 7231  Yes               Yes                No    No          Yes
PUT          RFC 7231  Yes               Yes                No    Yes         No
DELETE       RFC 7231  Optional          Yes                No    Yes         No
CONNECT      RFC 7231  Optional          Yes                No    No          No
OPTIONS      RFC 7231  Optional          Yes                Yes   Yes         No
TRACE        RFC 7231  No                Yes                Yes   Yes         No
PATCH        RFC 5789  Yes               Yes                No    No          No

Response message

The response message consists of the following:

  • a status line which includes the status code and reason message (e.g., HTTP/1.1 200 OK, which indicates that the client's request succeeded.)
  • response header fields (e.g., Content-Type: text/html)
  • an empty line
  • an optional message body

The status line and other header fields must all end with <CR><LF>. The empty line must consist of only <CR><LF> and no other whitespace.[15] This strict requirement for <CR><LF> is relaxed somewhat within message bodies for consistent use of other system linebreaks such as <CR> or <LF> alone.[33]

Status codes

In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The way the user agent handles the response depends primarily on the code, and secondarily on the other response header fields. Custom status codes can be used, since if the user agent encounters a code it does not recognize, it can use the first digit of the code to determine the general class of the response.[34]

The standard reason phrases are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicates a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable. HTTP status codes are divided into five groups:

  • Informational 1XX
  • Successful 2XX
  • Redirection 3XX
  • Client Error 4XX
  • Server Error 5XX
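
Because only the first digit carries the class, a user agent can classify even a custom or unrecognized code, as in this small Python sketch:

# Classify an HTTP status code by its first digit (sketch).
CLASSES = {
    1: "Informational",
    2: "Successful",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code):
    return CLASSES.get(code // 100, "Unknown")

print(status_class(404))  # Client Error
print(status_class(299))  # Successful, even though 299 itself is not a standard code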

Encrypted connections

The most popular way of establishing an encrypted HTTP connection is HTTPS.[35] Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent.[36][37][38]

Example session

Below is a sample conversation between an HTTP client and an HTTP server running on www.example.com, port 80. As mentioned in the previous sections, all the data is sent in a plain-text (ASCII) encoding, using a two-byte CR LF ('\r\n') line ending at the end of each line.

Client request

GET / HTTP/1.1
Host: www.example.com

A client request (consisting in this case of the request line and only one header field) is followed by a blank line, so that the request ends with a double newline, each in the form of a carriage return followed by a line feed. The "Host" field distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A request for "/" typically returns the server's index page, such as /index.html, if one exists.)

Server response

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 138
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    <p>Hello World, this is a very simple HTML document.</p>
  </body>
</html>

The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. Content-Type specifies the Internet media type of the data conveyed by the HTTP message, while Content-Length indicates its length in bytes. The HTTP/1.1 webserver publishes its ability to respond to requests for certain byte ranges of the document by setting the field Accept-Ranges: bytes. This is useful if the client needs only certain portions[39] of a resource sent by the server, which is called byte serving. When Connection: close is sent, it means that the web server will close the TCP connection immediately after the transfer of this response.

Most of the header lines are optional. When Content-Length is missing, the length is determined in other ways: chunked transfer encoding uses a chunk size of 0 to mark the end of the content, while identity encoding without Content-Length reads content until the socket is closed.

A Content-Encoding like gzip can be used to compress the transmitted data.

Similar protocols

The Gopher protocol was a content delivery protocol that was displaced by HTTP in the early 1990s. The SPDY protocol, an alternative to HTTP developed at Google, has been superseded by HTTP/2.

References

  1. ^ a b c d Fielding, Roy T.; Gettys, James; Mogul, Jeffrey C.; Nielsen, Henrik Frystyk; Masinter, Larry; Leach, Paul J.; Berners-Lee, Tim (June 1999). Hypertext Transfer Protocol – HTTP/1.1. IETF. doi:10.17487/RFC2616. RFC 2616.
  2. ^ Bishop, Mike. "Hypertext Transfer Protocol (HTTP) over QUIC". tools.ietf.org. Retrieved 2018-11-19.
  3. ^ Cimpanu, Catalin. "HTTP-over-QUIC to be renamed HTTP/3 | ZDNet". ZDNet. Retrieved 2018-11-19.
  4. ^ "Transport Layer Security (TLS) Application-Layer Protocol Negotiation Extension". IETF. July 2014. RFC 7301.
  5. ^ Belshe, M.; Peon, R.; Thomson, M. "Hypertext Transfer Protocol Version 2, Use of TLS Features". Retrieved 2015-02-10.
  6. ^ "Overall Operation". RFC 2616. p. 12. sec. 1.4. doi:10.17487/RFC2616. RFC 2616.
  7. ^ Berners-Lee, Tim. "HyperText Transfer Protocol". World Wide Web Consortium. Retrieved 31 August 2010.
  8. ^ Tim Berners-Lee. "The Original HTTP as defined in 1991". World Wide Web Consortium. Retrieved 24 July 2010.
  9. ^ Raggett, Dave. "Dave Raggett's Bio". World Wide Web Consortium. Retrieved 11 June 2010.
  10. ^ Raggett, Dave; Berners-Lee, Tim. "Hypertext Transfer Protocol Working Group". World Wide Web Consortium. Retrieved 29 September 2010.
  11. ^ Raggett, Dave. "HTTP WG Plans". World Wide Web Consortium. Retrieved 29 September 2010.
  12. ^ a b c Simon Spero. "Progress on HTTP-NG". World Wide Web Consortium. Retrieved 11 June 2010.
  13. ^ "HTTP/1.1". Webcom.com Glossary entry. Archived from the original on 2001-11-21. Retrieved 2009-05-29.
  14. ^ a b Fielding, Roy T.; Reschke, Julian F. (June 2014). Hypertext Transfer Protocol (HTTP/1.1): Authentication. IETF. doi:10.17487/RFC7235. RFC 7235.
  15. ^ a b "HTTP Message". RFC 2616. p. 31. sec. 4. doi:10.17487/RFC2616. RFC 2616.
  16. ^ "Apache Week. HTTP/1.1". 090502 apacheweek.com
  17. ^ Berners-Lee, Tim; Fielding, Roy T.; Nielsen, Henrik Frystyk. "Method Definitions". Hypertext Transfer Protocol – HTTP/1.0. IETF. pp. 30–32. sec. 8. doi:10.17487/RFC1945. RFC 1945.
  18. ^ "Method Definitions". RFC 2616. pp. 51–57. sec. 9. doi:10.17487/RFC2616. RFC 2616.
  19. ^ RFC 7230, sec. 3.1.1
  20. ^ RFC 7231, sec. 4.1
  21. ^ RFC 7230, sec. 3.2
  22. ^ Jacobs, Ian (2004). "URIs, Addressability, and the use of HTTP GET and POST". Technical Architecture Group finding. W3C. Retrieved 26 September 2010.
  23. ^ "POST". RFC 2616. p. 54. sec. 9.5. doi:10.17487/RFC2616. RFC 2616.
  24. ^ "PUT". RFC 2616. p. 55. sec. 9.6. doi:10.17487/RFC2616. RFC 2616.
  25. ^ "CONNECT". Hypertext Transfer Protocol – HTTP/1.1. IETF. June 1999. p. 57. sec. 9.9. doi:10.17487/RFC2616. RFC 2616. Retrieved 23 February 2014.
  26. ^ Khare, Rohit; Lawrence, Scott (May 2000). Upgrading to TLS Within HTTP/1.1. IETF. doi:10.17487/RFC2817. RFC 2817.
  27. ^ "Vulnerability Note VU#150227: HTTP proxy default configurations allow arbitrary TCP connections". US-CERT. 2002-05-17. Retrieved 2007-05-10.
  28. ^ Dusseault, Lisa; Snell, James M. (March 2010). PATCH Method for HTTP. IETF. doi:10.17487/RFC5789. RFC 5789.
  29. ^ "Method". RFC 2616. p. 36. sec. 5.1.1. doi:10.17487/RFC2616. RFC 2616.
  30. ^ a b Ediger, Brad (2007-12-21). Advanced Rails: Building Industrial-Strength Web Apps in Record Time. O'Reilly Media, Inc. p. 188. ISBN 978-0596519728. A common mistake is to use GET for an action that updates a resource. [...] This problem came into the Rails public eye in 2005, when the Google Web Accelerator was released.
  31. ^ Cantrell, Christian (2005-06-01). "What Have We Learned From the Google Web Accelerator?". Adobe Blogs. Adobe. Archived from the original on 2017-08-19.
  32. ^ a b "Cross Site Tracing". OWASP. Retrieved 2016-06-22.
  33. ^ "Canonicalization and Text Defaults". RFC 2616. sec. 3.7.1. doi:10.17487/RFC2616. RFC 2616.
  34. ^ "Status-Line". RFC 2616. p. 39. sec. 6.1. doi:10.17487/RFC2616. RFC 2616.
  35. ^ Canavan, John (2001). Fundamentals of Networking Security. Norwood, MA: Artech House. pp. 82–83. ISBN 9781580531764.
  36. ^ Zalewski, Michal. "Browser Security Handbook". Retrieved 30 April 2015.
  37. ^ "Chromium Issue 4527: implement RFC 2817: Upgrading to TLS Within HTTP/1.1". Retrieved 30 April 2015.
  38. ^ "Mozilla Bug 276813 – [RFE] Support RFC 2817 / TLS Upgrade for HTTP 1.1". Retrieved 30 April 2015.
  39. ^ Luotonen, Ari; Franks, John (February 22, 1996). Byte Range Retrieval Extension to HTTP. IETF. I-D draft-ietf-http-range-retrieval-00.
  40. ^ Nottingham, Mark (October 2010). Web Linking. IETF. doi:10.17487/RFC5988. RFC 5988.
  41. ^ "Hypertext Transfer Protocol Bis (httpbis) – Charter". IETF. 2012.

Basic access authentication

In the context of an HTTP transaction, basic access authentication is a method for an HTTP user agent (e.g. a web browser) to provide a user name and password when making a request. In basic HTTP authentication, a request contains a header field of the form Authorization: Basic <credentials>, where <credentials> is the Base64 encoding of the user ID and password joined by a single colon (:).

It is specified in RFC 7617 from 2015, which obsoletes RFC 2617 from 1999.
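
A minimal Python sketch of building the header value, using the example user ID and password from the specification ("Aladdin" / "open sesame"):

# Building a Basic authentication header value (sketch).
import base64

credentials = "Aladdin:open sesame"  # id and password joined by a colon
token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print("Authorization: Basic " + token)
# prints: Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==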

cURL

cURL (pronounced 'curl') is a computer software project providing a library (libcurl) and command-line tool (curl) for transferring data using various protocols. It was first released in 1997. The name stands for "Client URL". The original author and lead developer is the Swedish developer Daniel Stenberg.
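
For example, a verbose request that prints the request and response headers alongside the body can be made with (the URL is illustrative):

curl -v http://www.example.com/

curl -I http://www.example.com/   (send a HEAD request, returning headers only)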

HTTP/2

HTTP/2 (originally named HTTP/2.0) is a major revision of the HTTP network protocol used by the World Wide Web. It was derived from the earlier experimental SPDY protocol, originally developed by Google. HTTP/2 was developed by the Hypertext Transfer Protocol working group httpbis (where bis means "second") of the Internet Engineering Task Force. HTTP/2 is the first new version of HTTP since HTTP/1.1, which was standardized in RFC 2068 in 1997. The Working Group presented HTTP/2 to the IESG for consideration as a Proposed Standard in December 2014, and the IESG approved it for publication as a Proposed Standard on February 17, 2015. The HTTP/2 specification was published as RFC 7540 in May 2015.

The standardization effort was supported by the Chrome, Opera, Firefox, Internet Explorer 11, Safari, Amazon Silk, and Edge browsers. Most major browsers had added HTTP/2 support by the end of 2015. According to W3Techs, as of May 2019, 37.3% of the top 10 million websites supported HTTP/2. HTTP/3, which builds on HTTP/2, is its proposed successor (Internet Draft).

HTTP/3

HTTP/3 is the upcoming third major version of the Hypertext Transfer Protocol used to exchange binary information on the World Wide Web. HTTP/3 is based on previous RFC draft "Hypertext Transfer Protocol (HTTP) over QUIC". QUIC is an experimental transport layer network protocol initially developed by Google where user space congestion control is used over User Datagram Protocol (UDP).

On 28 October 2018, in a mailing list discussion, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Groups, made the official request to rename HTTP-over-QUIC as HTTP/3 to "clearly identify it as another binding of HTTP semantics to the wire protocol ... so people understand its separation from QUIC" and to pass its development from the QUIC Working Group to the HTTP Working Group after finalizing and publishing the draft. In the discussions that followed, stretching over several days, Nottingham's proposal was accepted by fellow IETF members, who in November 2018 gave their official seal of approval for HTTP-over-QUIC to become HTTP/3.

HTTPS

Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS), or, formerly, its predecessor, Secure Sockets Layer (SSL). The protocol is therefore also often referred to as HTTP over TLS, or HTTP over SSL.

The principal motivation for HTTPS is authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit. It protects against man-in-the-middle attacks. The bidirectional encryption of communications between a client and server protects against eavesdropping and tampering of the communication. In practice, this provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor.

Historically, HTTPS connections were primarily used for payment transactions on the World Wide Web, e-mail, and sensitive transactions in corporate information systems. Since 2018, HTTPS has been used more often by web users than the original non-secure HTTP, primarily to protect page authenticity on all types of websites, secure accounts, and keep user communications, identity, and web browsing private.

HTTP 301

The HTTP response status code 301 Moved Permanently is used for permanent URL redirection, meaning that current links or records using the URL for which the response is received should be updated. The new URL should be provided in the Location field included with the response. The 301 redirect is considered a best practice for upgrading users from HTTP to HTTPS. RFC 2616 states that:

If a client has link-editing capabilities, it should update all references to the Request URL.

The response is cacheable unless indicated otherwise.

Unless the request method was HEAD, the entity should contain a small hypertext note with a hyperlink to the new URL(s).

If the 301 status code is received in response to a request of any type other than GET or HEAD, the client must ask the user before redirecting.
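
A hypothetical 301 response upgrading a client from HTTP to HTTPS might look like this (the host is illustrative):

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/
Content-Length: 0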

HTTP 302

The HTTP response status code 302 Found is a common way of performing URL redirection. The HTTP/1.0 specification (RFC 1945) initially defined this code, and gave it the description phrase "Moved Temporarily" rather than "Found".

An HTTP response with this status code will additionally provide a URL in the header field Location. This is an invitation to the user agent (e.g. a web browser) to make a second, otherwise identical, request to the new URL specified in the location field. The end result is a redirection to the new URL.

Many web browsers implemented this code in a manner that violated this standard, changing the request type of the new request to GET, regardless of the type employed in the original request (e.g. POST). For this reason, HTTP/1.1 (RFC 2616) added the new status codes 303 and 307 to disambiguate between the two behaviours, with 303 mandating the change of request type to GET, and 307 preserving the request type as originally sent. Despite the greater clarity provided by this disambiguation, the 302 code is still employed in web frameworks to preserve compatibility with browsers that do not implement the HTTP/1.1 specification. As a consequence, RFC 7231 (the update of RFC 2616) changed the definition to allow user agents to rewrite POST to GET.

HTTP 403

HTTP 403 is a standard HTTP status code communicated to clients by an HTTP server to indicate that access to the requested (valid) URL by the client is forbidden for some reason. The server understood the request, but will not fulfil it due to client-related issues. There are a number of sub-status error codes that provide a more specific reason for responding with the 403 status code.

HTTP 404

The HTTP 404, 404 Not Found, 404, Page Not Found, or Server Not Found error message is a Hypertext Transfer Protocol (HTTP) standard response code, in computer network communications, to indicate that the browser was able to communicate with a given server, but the server could not find what was requested.

The website hosting server will typically generate a "404 Not Found" web page when a user attempts to follow a broken or dead link; hence the 404 error is one of the most recognizable errors encountered on the World Wide Web.

HTTP location

The HTTP Location header field is returned in responses from an HTTP server under two circumstances:

  • To ask a web browser to load a different web page (URL redirection). In this circumstance, the Location header should be sent with an HTTP status code of 3xx. It is passed as part of the response by a web server when the requested URI has:
      • moved temporarily;
      • moved permanently; or
      • processed a request, e.g. a POSTed form, and is providing the result of that request at a different URI.
  • To provide information about the location of a newly created resource. In this circumstance, the Location header should be sent with an HTTP status code of 201 or 202.

An obsolete version of the HTTP/1.1 specification (IETF RFC 2616) required a complete absolute URI for redirection. The IETF HTTP working group found that the most popular web browsers tolerate the passing of a relative URL and, consequently, the updated HTTP/1.1 specification (IETF RFC 7231) relaxed the original constraint, allowing the use of relative URLs in Location headers.

HTTP persistent connection

HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair. The newer HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection.
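
As a sketch using Python's third-party requests library (the URLs are illustrative), a Session object keeps the underlying TCP connection alive and reuses it across requests:

# Reusing one kept-alive TCP connection for several requests
# (sketch; uses the third-party "requests" library).
import requests

with requests.Session() as session:
    # Both requests can travel over the same persistent connection.
    first = session.get("http://www.example.com/page1")
    second = session.get("http://www.example.com/page2")
    print(first.status_code, second.status_code)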

HTTP referer

The HTTP referer (originally a misspelling of referrer) is an optional HTTP header field that identifies the address of the webpage (i.e. the URI or IRI) that linked to the resource being requested. By checking the referrer, the new webpage can see where the request originated.

In the most common situation this means that when a user clicks a hyperlink in a web browser, the browser sends a request to the server holding the destination webpage. The request may include the referer field, which indicates the last page the user was on (the one where they clicked the link).

Referer logging is used to allow websites and web servers to identify where people are visiting them from, for promotional or statistical purposes. The default behaviour of referer leaking puts websites at risk of privacy and security breaches.

List of HTTP header fields

HTTP header fields are components of the header section of request and response messages in the Hypertext Transfer Protocol (HTTP). They define the operating parameters of an HTTP transaction.

List of HTTP status codes

This is a list of Hypertext Transfer Protocol (HTTP) response status codes. Status codes are issued by a server in response to a client's request made to the server. It includes codes from IETF Requests for Comments (RFCs), other specifications, and some additional codes used in some common applications of the Hypertext Transfer Protocol (HTTP). The first digit of the status code specifies one of five standard classes of responses. The message phrases shown are typical, but any human-readable alternative may be provided. Unless otherwise stated, the status code is part of the HTTP/1.1 standard (RFC 7231). The Internet Assigned Numbers Authority (IANA) maintains the official registry of HTTP status codes. Microsoft Internet Information Services (IIS) sometimes uses additional decimal sub-codes for more specific information; however, these sub-codes appear only in the response payload and in documentation, not in place of an actual HTTP status code.

All HTTP response status codes are separated into five classes (or categories). The first digit of the status code defines the class of response. The last two digits do not have any class or categorization role. There are five values for the first digit:

1xx (Informational): The request was received, continuing process

2xx (Successful): The request was successfully received, understood, and accepted

3xx (Redirection): Further action needs to be taken in order to complete the request

4xx (Client Error): The request contains bad syntax or cannot be fulfilled

5xx (Server Error): The server failed to fulfill an apparently valid request

POST (HTTP)

In computing, POST is a request method supported by HTTP used by the World Wide Web. By design, the POST request method requests that a web server accepts the data enclosed in the body of the request message, most likely for storing it. It is often used when uploading a file or when submitting a completed web form.

In contrast, the HTTP GET request method retrieves information from the server. As part of a GET request, some data can be passed within the URL's query string, specifying (for example) search terms, date ranges, or other information that defines the query.

As part of a POST request, an arbitrary amount of data of any type can be sent to the server in the body of the request message. A header field in the POST request usually indicates the message body's Internet media type.
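
A hypothetical POST request submitting a web form might look like the following; Content-Type identifies the body as URL-encoded form data and Content-Length gives its size in bytes (the host and field names are illustrative):

POST /form-handler HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

field1=value1&field2=value2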

Secure Hypertext Transfer Protocol

Secure Hypertext Transfer Protocol (S-HTTP) is an obsolete alternative to the HTTPS protocol for encrypting web communications carried over HTTP. It was developed by Eric Rescorla and Allan M. Schiffman, and published in 1999 as RFC 2660.

Web browsers typically use HTTP to communicate with web servers, sending and receiving information without encrypting it. For sensitive transactions, such as Internet e-commerce or online access to financial accounts, the browser and server must encrypt this information.

HTTPS and S-HTTP were both defined in the mid-1990s to address this need. S-HTTP was used by Spyglass's web server, while Netscape and Microsoft supported HTTPS rather than S-HTTP, leading to HTTPS becoming the de facto standard mechanism for securing web communications.

User agent

In computing, a user agent is software (a software agent) that is acting on behalf of a user. One common use of the term refers to a web browser that "retrieves, renders and facilitates end user interaction with Web content". There are other uses of the term "user agent". For example, an email reader is a mail user agent. In many cases, a user agent acts as a client in a network protocol used in communications within a client–server distributed computing system. In particular, the Hypertext Transfer Protocol (HTTP) identifies the client software originating the request, using a User-Agent header, even when the client is not operated by a user. The Session Initiation Protocol (SIP, based on HTTP) followed this usage. In SIP, the term user agent refers to both end points of a communications session.

Web cache

A web cache (or HTTP cache) is an information technology for the temporary storage (caching) of Web documents, such as web pages, images, and other types of Web multimedia, to reduce server lag. A web cache system stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met. A web cache system can refer either to an appliance, or to a computer program.

Webhook

A webhook in web development is a method of augmenting or altering the behavior of a web page, or web application, with custom callbacks. These callbacks may be maintained, modified, and managed by third-party users and developers who may not necessarily be affiliated with the originating website or application. The term "webhook" was coined by Jeff Lindsay in 2007 from the computer programming term hook. The payload format is usually JSON, and the request is made as an HTTP POST request. The receiving endpoint can choose to whitelist certain IP addresses from known sources. The webhook can include information about what type of event it is, and a secret or signature to verify the webhook. GitHub and Stripe sign their requests using an HMAC signature included as an HTTP header. Facebook signs its requests using SHA-1.
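
A receiving endpoint might verify such an HMAC signature along the lines of the Python sketch below; the header name, secret, and signature format are illustrative, since each provider documents its own scheme.

# Verifying a webhook's HMAC-SHA256 signature (illustrative sketch;
# real providers define their own header names and formats).
import hashlib
import hmac

SECRET = b"shared-webhook-secret"  # illustrative shared secret

def verify(body, signature_header):
    # Recompute the HMAC over the raw request body and compare it to
    # the provider-sent value in constant time.
    expected = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

print(verify(b'{"event": "push"}', "sha256=deadbeef"))  # False: signature mismatch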

