Types of mutations that can be introduced by random, site-directed, combinatorial, or insertional mutagenesis
In molecular biology, mutagenesis is an important laboratory technique whereby DNA mutations are deliberately engineered to produce libraries of mutant genes, proteins, strains of bacteria, or other genetically modified organisms.
The various constituents of a gene, as well as its regulatory elements
and its gene products, may be mutated so that the functioning of a
genetic locus, process, or product can be examined in detail. The
mutation may produce mutant proteins with interesting properties or
enhanced or novel functions that may be of commercial use. Mutant
strains may also be produced that have practical application or allow
the molecular basis of a particular cell function to be investigated.
Many methods of mutagenesis exist today. Initially, the mutations artificially induced in the laboratory were entirely random, produced by agents such as UV irradiation. Random mutagenesis cannot
target specific regions or sequences of the genome; however, with the
development of site-directed mutagenesis, more specific changes can be made. Since 2013, the development of CRISPR/Cas9 technology, based on a prokaryotic antiviral defense system, has allowed for the editing or mutagenesis of a genome in vivo.
Site-directed mutagenesis has proved useful in situations where random
mutagenesis is not. Other techniques of mutagenesis include
combinatorial and insertional mutagenesis. Mutagenesis that is not
random can be used to clone DNA, investigate the effects of mutagens, and engineer proteins.
It also has medical applications such as helping immunocompromised
patients, research and treatment of diseases including HIV and cancers,
and curing of diseases such as beta thalassemia.
Random mutagenesis
How DNA libraries generated by random mutagenesis
sample sequence space. The amino acid substituted into a given position
is shown. Each dot or set of connected dots is one member of the
library. Error-prone PCR randomly mutates some residues to other amino
acids. Alanine scanning replaces each residue of the protein with
alanine, one-by-one. Site saturation substitutes each of the 20 possible
amino acids (or some subset of them) at a single position, one-by-one.
Early approaches to mutagenesis relied on methods which produced
entirely random mutations. In such methods, cells or organisms are
exposed to mutagens such as UV radiation or mutagenic chemicals, and mutants with desired characteristics are then selected. Hermann Muller discovered in 1927 that X-rays can cause genetic mutations in fruit flies, and went on to use the mutants he created for his studies in genetics. For Escherichia coli, mutants may be selected first by exposure to UV radiation, then plated onto an agar medium. The colonies formed are then replica-plated, once onto a rich medium
and once onto a minimal medium, and mutants with specific nutritional
requirements can then be identified by their inability to grow on the
minimal medium. Similar procedures may be repeated with other types of
cells and with different media for selection.
A number of methods for generating random mutations in specific proteins were later developed to screen for mutants with interesting or improved properties. These methods may involve the use of doped nucleotides in oligonucleotide synthesis, or conducting a PCR
reaction under conditions that enhance the misincorporation of nucleotides
(error-prone PCR), for example by reducing the fidelity of replication
or by using nucleotide analogues. A variation of this method for introducing non-biased mutations into a gene is sequence saturation mutagenesis. PCR products that contain mutations are then cloned into an expression vector, and the mutant proteins produced can then be characterised.
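The sampling behaviour of such random methods can be illustrated computationally. The following Python sketch is purely illustrative: the template sequence, the per-base error rate, and the library size are hypothetical values, and real error-prone PCR has biased, non-uniform error spectra.

import random

def error_prone_copy(template, error_rate=0.005, seed=None):
    """Return a copy of `template` with random point substitutions,
    roughly imitating nucleotide misincorporation in error-prone PCR."""
    rng = random.Random(seed)
    bases = "ACGT"
    copy = []
    for base in template:
        if rng.random() < error_rate:
            copy.append(rng.choice([b for b in bases if b != base]))  # substitute with a different base
        else:
            copy.append(base)
    return "".join(copy)

# Build a small mutant library from a hypothetical 60-bp template.
template = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTTGAATTAGAT"
library = [error_prone_copy(template, error_rate=0.01, seed=i) for i in range(10)]
for i, variant in enumerate(library):
    diffs = sum(a != b for a, b in zip(template, variant))
    print(f"variant {i}: {diffs} substitution(s)")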
Under European Union law (Directive 2001/18/EC), this kind of mutagenesis may be used to produce GMOs, but the products are exempted from regulation: no labeling, no evaluation.
Prior to the development of site-directed mutagenesis techniques, all
mutations made were random, and scientists had to use selection for the
desired phenotype to find the desired mutation. Random mutagenesis
techniques have an advantage in terms of how many mutations can be
produced; however, while random mutagenesis can produce changes in
single nucleotides, it offers little control over which
nucleotide is changed.
Many researchers therefore seek to introduce selected changes to DNA in
a precise, site-specific manner. Early attempts used nucleotide analogs
and other chemicals to generate localized point mutations. Such chemicals include aminopurine, which induces an AT to GC transition, while nitrosoguanidine, bisulfite, and N4-hydroxycytidine may induce a GC to AT transition.
These techniques allow specific mutations to be engineered into a
protein; however, they are not flexible with respect to the kinds of
mutants generated, nor are they as specific as later methods of
site-directed mutagenesis and therefore have some degree of randomness.
Other technologies, such as cleavage of DNA at specific sites on the
chromosome, addition of new nucleotides, and exchange of base pairs,
now make it possible to decide where mutations are placed.
Simplified
diagram of the site directed mutagenic technique using pre-fabricated
oligonucleotides in a primer extension reaction with DNA polymerase
Current techniques for site-specific mutation originate from the
primer extension technique developed in 1978. Such techniques commonly
involve using pre-fabricated mutagenic oligonucleotides in a primer extension reaction with DNA polymerase. This method allows for point mutations or the deletion or insertion
of small stretches of DNA at specific sites. Advances in methodology
have made such mutagenesis a relatively simple and efficient
process.
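As an illustration of the oligonucleotide-directed idea, the Python sketch below builds a mutagenic primer in which the desired codon change sits in the middle of flanking sequence taken from the template. The gene, codon position, and flank length are hypothetical; real primer design must also consider melting temperature and the complementary strand.

def mutagenic_primer(template, codon_start, new_codon, flank=15):
    """Return a primer carrying `new_codon` at position `codon_start`
    (0-based, on the coding strand), flanked by matching template
    sequence so that it can anneal and be extended by DNA polymerase."""
    if len(new_codon) != 3:
        raise ValueError("new_codon must be three nucleotides")
    left = template[max(0, codon_start - flank):codon_start]
    right = template[codon_start + 3:codon_start + 3 + flank]
    return left + new_codon + right

gene = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTTGAATTAGAT"
# Replace the fourth codon (AAA, lysine) with GCG (alanine) in this hypothetical gene.
primer = mutagenic_primer(gene, codon_start=9, new_codon="GCG")
print(primer)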
Newer and more efficient methods of site directed mutagenesis are
being constantly developed. For example, a technique called "Seamless
ligation cloning extract" (or SLiCE for short) allows for the cloning of
certain sequences of DNA within the genome, and more than one DNA
fragment can be inserted into the genome at once.
Site-directed mutagenesis allows the effect of a specific mutation
to be investigated. There are numerous uses; for example, it has been
used to determine how susceptible certain species were to chemicals that
are often used in labs. The experiment used site-directed mutagenesis
to mimic the expected mutations of the specific chemical. The mutation
resulted in a change in specific amino acids, and the effects of this
mutation were analyzed.
Site
saturation mutagenesis is a type of site-directed mutagenesis. This
image shows the saturation mutagenesis of a single position in a
theoretical 10-residue protein. The wild type version of the protein is
shown at the top, with M representing the first amino acid methionine,
and * representing the termination of translation. All 19 mutants of the
isoleucine at position 5 are shown below.
The site-directed approach may be done systematically in such techniques as alanine scanning mutagenesis, whereby residues are systematically mutated to alanine in order to identify residues important to the structure or function of a protein. Another comprehensive approach is site saturation mutagenesis where one codon or a set of codons may be substituted with all possible amino acids at the specific positions.
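The systematic character of these scans is easy to express in code. The Python sketch below is illustrative only: the ten-residue "wild type" protein is hypothetical, and the functions simply enumerate an alanine-scanning library and a site-saturation library for one position.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def alanine_scan(protein):
    """Yield one variant per position, each with that residue replaced by alanine."""
    for i, residue in enumerate(protein):
        if residue != "A":
            yield protein[:i] + "A" + protein[i + 1:]

def site_saturation(protein, position):
    """Yield the 19 variants that substitute every other amino acid at `position`."""
    for aa in AMINO_ACIDS:
        if aa != protein[position]:
            yield protein[:position] + aa + protein[position + 1:]

wild_type = "MKTAIVLGSS"  # hypothetical 10-residue protein
print(list(alanine_scan(wild_type)))
print(list(site_saturation(wild_type, position=4)))  # saturate the isoleucine at position 5 (index 4)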
Combinatorial mutagenesis
Combinatorial
mutagenesis is a site-directed protein engineering technique whereby
multiple mutants of a protein can be simultaneously engineered based on
analysis of the effects of additive individual mutations. It provides a useful method to assess the combinatorial effect of a large number of mutations on protein function. Large numbers of mutants may be screened for a particular characteristic by combinatorial analysis.
In this technique, multiple positions or short sequences along a DNA
strand may be exhaustively modified to obtain a comprehensive library of
mutant proteins.
The rate of incidence of beneficial variants can be improved by
different methods for constructing mutagenesis libraries. One approach
to this technique is to extract and replace a portion of the DNA
sequence with a library of sequences containing all possible
combinations at the desired mutation site. The content of the inserted
segment can include sequences of structural significance, immunogenic
property, or enzymatic function. A segment may also be inserted randomly
into the gene in order to assess structural or functional significance
of a particular part of a protein.
The insertion of one or more base pairs, resulting in DNA mutations, is also known as insertional mutagenesis.
Engineered mutations such as these can provide important information in
cancer research, such as mechanistic insights into the development of
the disease. Retroviruses and transposons are the chief instrumental
tools in insertional mutagenesis. Retroviruses, such as the mouse
mammary tumor virus and murine leukemia virus, can be used to identify
genes involved in carcinogenesis and understand the biological pathways
of specific cancers.
Transposons, chromosomal segments that can undergo transposition, can
be designed and applied to insertional mutagenesis as an instrument for
cancer gene discovery.
These chromosomal segments allow insertional mutagenesis to be applied
to virtually any tissue of choice while also allowing for more
comprehensive, unbiased depth in DNA sequencing.
Researchers have found four mechanisms of insertional mutagenesis
that can be used in humans. The first mechanism is called enhancer
insertion. Enhancers boost transcription of a particular gene by
interacting with the promoter of that gene. This particular mechanism was
first used to help severely immunocompromised patients in need of bone
marrow. Gammaretroviruses carrying enhancers were then inserted into
patients. The second mechanism is referred to as promoter insertion.
Promoters provide our cells with the specific sequences needed to begin
transcription. Promoter insertion has helped researchers learn more about
HIV. The third mechanism is gene inactivation. An example of
gene inactivation is using insertional mutagenesis to insert a
retrovirus that disrupts the genome of the T cells of leukemia patients
and gives them a chimeric antigen receptor (CAR), allowing the T cells to
target cancer cells. The final mechanism is referred to as mRNA 3' end
substitution. Our genes occasionally undergo point mutations causing
beta-thalassemia, which interrupts red blood cell function. To fix this
problem, the correct gene sequence for the red blood cells is introduced
and a substitution is made.
Homologous recombination
Homologous recombination
can be used to produce specific mutations in an organism. A vector
containing a DNA sequence similar to the gene to be modified is introduced
into the cell, and by a process of recombination replaces the target gene
in the chromosome. This method can be used to introduce a mutation or
knock out a gene, for example as used in the production of knockout mice.
Since 2013, the development of CRISPR-Cas9
technology has allowed for the efficient introduction of different
types of mutations into the genome of a wide variety of organisms. The
method does not require a transposon insertion site, leaves no marker,
and its efficiency and simplicity have made it the preferred method for genome editing.
Gene synthesis
As the cost of DNA oligonucleotide synthesis falls, artificial synthesis of a complete gene
is now a viable method for introducing mutations into a gene. This
method allows for extensive mutation at multiple sites, including the
complete redesign of the codon usage of a gene to optimise it for a
particular organism.
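A simple form of such codon redesign can be sketched in a few lines of Python. The codon table below is a small, hypothetical set of "preferred" codons used purely for illustration; a real optimisation would use organism-specific codon-usage data and cover all residues.

# Hypothetical preferred codons for a target organism (illustration only).
PREFERRED_CODON = {
    "M": "ATG", "K": "AAA", "T": "ACC", "A": "GCC", "I": "ATC",
    "V": "GTG", "L": "CTG", "G": "GGC", "S": "AGC", "*": "TAA",
}

def back_translate(protein):
    """Return a DNA sequence encoding `protein`, using one preferred codon per residue."""
    try:
        return "".join(PREFERRED_CODON[aa] for aa in protein)
    except KeyError as err:
        raise ValueError(f"no preferred codon listed for residue {err}") from err

print(back_translate("MKTAIVLGSS*"))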
Hypertext Transfer Protocol
HTTP (Hypertext Transfer Protocol) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.
Development of HTTP was initiated by Tim Berners-Lee at CERN
in 1989 and summarized in a simple document describing the behavior of a
client and a server using the first HTTP version, named 0.9. That version was subsequently developed, eventually becoming the public 1.0.
HTTP/1 was finalized and fully documented (as version 1.0) in 1996. It evolved (as version 1.1) in 1997 and then its specifications were updated in 1999, 2014, and 2022. Its secure variant named HTTPS is used by more than 85% of websites.
HTTP/2, published in 2015, provides a more efficient expression of HTTP's semantics "on the wire". As of August 2024, it is supported by 66.2% of websites (35.3% HTTP/2 + 30.9% HTTP/3 with backwards compatibility) and supported by almost all web browsers (over 98% of users). It is also supported by major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension where TLS 1.2 or newer is required.
HTTP/3, the successor to HTTP/2, was published in 2022. As of February 2024, it is now used on 30.9% of websites and is supported by most web browsers, i.e. (at least partially) supported by 97% of users. HTTP/3 uses QUIC instead of TCP
for the underlying transport protocol. Like HTTP/2, it does not
obsolete previous major versions of the protocol. Support for HTTP/3 was
added to Cloudflare and Google Chrome first, and is also enabled in Firefox.
HTTP/3 has lower latency for real-world web pages, if enabled on the
server, and loads faster than with HTTP/2, in some cases more than three
times faster than HTTP/1.1 (which is still commonly the only version enabled).
Technical overview
HTTP functions as a request–response protocol in the client–server model. A web browser, for example, may be the client whereas a process, named web server, running on a computer hosting one or more websites may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response
message to the client. The response contains completion status
information about the request and may also contain requested content in
its message body.
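A minimal client-side illustration of this exchange, using Python's standard library (the URL is only an example):

from urllib.request import urlopen

# Send an HTTP GET request and read the response message.
with urlopen("http://www.example.com/") as response:
    print(response.status, response.reason)       # completion status, e.g. "200 OK"
    print(response.getheader("Content-Type"))     # a response header field
    body = response.read()                         # the requested content
    print(len(body), "bytes received")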
A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.
HTTP is designed to permit intermediate network elements to
improve or enable communications between clients and servers.
High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers
to improve response time. Web browsers cache previously accessed web
resources and reuse them, whenever possible, to reduce network traffic.
HTTP proxy servers at private network
boundaries can facilitate communication for clients without a globally
routable address, by relaying messages with external servers.
To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server).
In HTTP/1.0 a separate TCP connection to the same server is made for every resource request.
In HTTP/1.1 instead a TCP connection can be reused to make multiple resource requests (i.e. of HTML pages, frames, images, scripts, stylesheets, etc.).
HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead, especially under high traffic conditions.
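The reuse of a connection can be seen with Python's http.client, where a single HTTP/1.1 connection serves several requests (host and paths are illustrative):

import http.client

# One TCP connection, reused for multiple HTTP/1.1 requests.
conn = http.client.HTTPConnection("www.example.com")
for path in ("/", "/index.html"):
    conn.request("GET", path)
    response = conn.getresponse()
    print(path, response.status, len(response.read()), "bytes")  # read fully before the next request
conn.close()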
HTTP/2
is a revision of HTTP/1.1 that maintains the same
client–server model and the same protocol methods but differs
in the following ways, in order:
to use a compressed binary representation of metadata (HTTP
headers) instead of a textual one, so that headers require much less
space;
to use a single TCP/IP (usually encrypted) connection per accessed server domain instead of 2 to 8 TCP/IP connections;
to use one or more bidirectional streams per TCP/IP connection in
which HTTP requests and responses are broken down and transmitted in
small packets, largely solving the problem of head-of-line blocking (HOL blocking);
to add a push capability allowing a server application to send data to
clients whenever new data is available (without forcing clients to
poll the server periodically for new data).
HTTP/2 communications therefore experience much less latency and, in
most cases, even higher speeds than HTTP/1.1 communications.
HTTP/3 is a revision of HTTP/2 that uses QUIC
+ UDP as transport protocols instead of TCP. Before that version, TCP/IP
connections were used; now only the IP layer is used (on which UDP,
like TCP, builds). This slightly improves the average speed of
communications and avoids the occasional (very rare) problem of TCP
connection congestion that can temporarily block or slow down the data flow of all its streams (another form of "head-of-line blocking").
The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a client user interface called web browser.
Berners-Lee designed HTTP in order to help with the adoption of his
other idea: the "WorldWideWeb" project, which was first proposed in
1989, now known as the World Wide Web.
The first web server went live in 1990. The protocol used had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page.
In 1991, the first documented official version of HTTP was written as
a plain document, less than 700 words long, and this version was named
HTTP/0.9. It supported only the GET method, allowing clients to
retrieve HTML documents from the server, but not supporting any other
file formats or information upload.
HTTP/1.0-draft
Beginning in 1992, a new document was written to specify the evolution of
the basic protocol towards its next full version. It supported both the
simple request method of the 0.9 version and the full GET request that
included the client HTTP version. This was the first of the many
unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0.
W3C HTTP Working Group
After deciding that new features of the HTTP protocol were required and that they had to be fully documented as official RFCs, in early 1995 the HTTP Working Group (HTTP WG, led by Dave Raggett)
was constituted with the aim of standardizing and expanding the protocol
with extended operations, extended negotiation, richer meta-information,
and a tie-in with a security protocol, made more efficient by adding
additional methods and header fields.
The HTTP WG planned to revise and publish new versions of the
protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many
revisions, the work took much longer than one year.
The HTTP WG also planned to specify a far-future version of HTTP
called HTTP-NG (HTTP Next Generation) that would have solved the
remaining problems of previous versions related to performance and
low-latency responses, but this work started only a few years later and
was never completed.
HTTP/1.0
In May 1996, RFC1945
was published as the final HTTP/1.0 revision of what had been used over the
previous four years as a pre-standard HTTP/1.0 draft already implemented
by many web browsers and web servers.
In early 1996 developers even started to include unofficial
extensions of the HTTP/1.0 protocol (e.g. keep-alive connections)
into their products by using drafts of the upcoming HTTP/1.1
specifications.
HTTP/1.1
Since early 1996, major web browser and web server developers also
started to implement new features specified by pre-standard HTTP/1.1
draft specifications. End-user adoption of the new versions of
browsers and servers was rapid. In March 1996, one web hosting company
reported that over 40% of browsers in use on the Internet used the new
HTTP/1.1 header "Host" to enable virtual hosting, and that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant.
In January 1997, RFC2068 was officially released as the HTTP/1.1 specification.
In June 1999, RFC2616 was released to include all improvements and updates based on previous (obsolete) HTTP/1.1 specifications.
W3C HTTP-NG Working Group
Resuming the old 1995 plan of the previous HTTP Working Group, in 1997 an HTTP-NG Working Group
was formed to develop a new HTTP protocol named HTTP-NG (HTTP New
Generation). A few proposals / drafts were produced for the new
protocol to use multiplexing
of HTTP transactions inside a single TCP/IP connection, but in 1999,
the group stopped its activity, passing the technical problems to the IETF.
IETF HTTP Working Group restarted
In 2007, the IETF HTTP Working Group
(HTTP WG bis or HTTPbis) was restarted firstly to revise and clarify
previous HTTP/1.1 specifications and secondly to write and refine future
HTTP/2 specifications (named httpbis).
SPDY: an unofficial HTTP protocol developed by Google
In 2009, Google, a private company, announced that it had developed and tested a new HTTP binary protocol named SPDY. The implicit aim was to greatly speed up web traffic (especially between future web browsers and its servers).
SPDY was indeed much faster than HTTP/1.1 in many tests and so it was quickly adopted by Chromium and then by other major web browsers.
Some of the ideas about multiplexing HTTP streams over a single
TCP/IP connection were taken from various sources, including the work of
W3C HTTP-NG Working Group.
HTTP/2
In January–March 2012, the HTTP Working Group (HTTPbis) announced the
need to start to focus on a new HTTP/2 protocol (while finishing the
revision of the HTTP/1.1 specifications), perhaps taking into consideration
ideas and work done for SPDY.
After a few months of discussion about how to develop a new version of HTTP, it was decided to derive it from SPDY.
In May 2015, HTTP/2 was published as RFC7540 and quickly adopted by all web browsers already supporting SPDY and more slowly by web servers.
2014 updates to HTTP/1.1
In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting RFC2616:
In RFC7230 Appendix-A, HTTP/0.9 was deprecated for servers supporting HTTP/1.1 version (and higher):
Since
HTTP/0.9 did not support header fields in a request, there is no
mechanism for it to support name-based virtual hosts (selection of
resource by inspection of the Host header field). Any server that implements name-based virtual hosts ought to disable support for HTTP/0.9.
Most requests that appear to be HTTP/0.9 are, in fact, badly
constructed HTTP/1.x requests caused by a client failing to properly
encode the request-target.
Since 2016 many product managers and developers of user agents
(browsers, etc.) and web servers have begun planning to gradually
deprecate and dismiss support for HTTP/0.9 protocol, mainly for the
following reasons:
it is so simple that an RFC document was never written (there is only the original document);
it has no HTTP headers and lacks many other features that nowadays are required for minimal security reasons;
it has not been widespread since 1999–2000 (because of HTTP/1.0 and
HTTP/1.1) and is commonly used only by some very old network hardware,
e.g. routers.
HTTP/3
In 2020, the first drafts of HTTP/3 were published, and major web browsers and web servers started to adopt it.
On 6 June 2022, IETF standardized HTTP/3 as RFC9114.
Updates and refactoring in 2022
In June 2022, a batch of RFCs was published, deprecating many of the
previous documents and introducing a few minor changes and a refactoring
of HTTP semantics description into a separate document.
RFC9218, Extensible Prioritization Scheme for HTTP
HTTP data exchange
HTTP is a stateless application-level protocol and it requires a reliable network transport connection to exchange data between client and server. In HTTP implementations, TCP/IP connections are used using well-known ports (typically port 80 if the connection is unencrypted or port 443 if the connection is encrypted, see also List of TCP and UDP port numbers). In HTTP/2, a TCP/IP connection plus multiple protocol channels are used. In HTTP/3, the application transport protocol QUIC over UDP is used.
Request and response messages through connections
Data is exchanged through a sequence of request–response messages which are exchanged by a session layer transport connection.
An HTTP client initially tries to connect to a server establishing a
connection (real or virtual). An HTTP(S) server listening on that port
accepts the connection and then waits for a client's request message.
The client sends its HTTP request message. Upon receiving the request
the server sends back an HTTP response message, which includes header(s)
plus a body if it is required. The body of this response message is
typically the requested resource, although an error message or other
information may also be returned. At any time (for many reasons) client
or server can close the connection. Closing a connection is usually
advertised in advance by using one or more HTTP headers in the last
request/response message sent to server or client.
In HTTP/0.9, the TCP/IP connection is always closed after server response has been sent, so it is never persistent.
In HTTP/1.0, as stated in RFC 1945, the TCP/IP connection should always be closed by server after a response has been sent.
In HTTP/1.1 a keep-alive mechanism was officially
introduced so that a connection could be reused for more than one
request/response. Such persistent connections reduce request latency perceptibly because the client does not need to re-negotiate the TCP three-way handshake
after the first request has been sent. Another positive side effect is
that, in general, the connection becomes faster with time due to TCP's slow-start mechanism.
HTTP/1.1 also added HTTP pipelining
in order to further reduce lag time when using persistent connections
by allowing clients to send multiple requests before waiting for each
response. This optimization was never considered really safe because a
few web servers and many proxy servers, especially transparent proxy servers placed on the Internet or in intranets
between clients and servers, did not handle pipelined requests properly
(they served only the first request and discarded the others, they closed
the connection because they saw more data after the first request, or
some proxies even returned responses out of order, etc.). Because of
this, only HEAD and some GET requests (i.e. limited to real file
requests and so with URLs without a query string used as a command, etc.) could be pipelined in a safe and idempotent
mode. After many years of struggling with the problems introduced by
enabling pipelining, this feature was first disabled and then removed
from most browsers, also because of the announced adoption of HTTP/2.
HTTP/2 extended the usage of persistent connections by
multiplexing many concurrent requests/responses through a single TCP/IP
connection.
HTTP/3 does not use TCP/IP connections but QUIC + UDP (see also: technical overview).
Content retrieval optimizations
HTTP/0.9
A requested resource was always sent in its entirety.
HTTP/1.0
HTTP/1.0 added headers to manage resources cached by the client in order
to allow conditional GET requests; in practice a server has to return
the entire content of the requested resource only if its last-modified
time is not known by the client or if it has changed since the last full response to a
GET request. One of these headers, "Content-Encoding", was added to
specify whether the returned content of a resource was or was not compressed.
If the total length of the content of a resource was not known in
advance (e.g. because it was dynamically generated), then the
header "Content-Length: number" was not present in the HTTP
headers and the client assumed that when the server closed the connection,
the content had been sent in its entirety. This mechanism could not
distinguish between a resource transfer successfully completed and an
interrupted one (because of a server or network error or something else).
HTTP/1.1
HTTP/1.1 introduced:
new headers to better manage the conditional retrieval of cached resources.
chunked transfer encoding
to allow content to be streamed in chunks in order to reliably send it
even when the server does not know its length in advance (e.g. because
it is dynamically generated).
byte range serving,
where a client can request only one or more portions (ranges of bytes)
of a resource (e.g. the first part, a part in the middle, or the end
of the entire content) and the server usually sends only the
requested part(s). This is useful to resume an interrupted download
(when a file is very large), or when only a part of the content has to be
shown, or dynamically added to the already visible part, by a browser
(e.g. only the first or the following n comments of a web page) in order
to spare time, bandwidth and system resources; a sketch of such a request follows this list.
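A byte-range request of the kind described above can be issued with a Range header; a 206 (Partial Content) status and a Content-Range header indicate that the server honoured it. Host and path below are illustrative:

import http.client

conn = http.client.HTTPConnection("www.example.com")
# Ask only for the first 100 bytes of the resource.
conn.request("GET", "/", headers={"Range": "bytes=0-99"})
response = conn.getresponse()
print(response.status)                        # 206 if the range was served, 200 otherwise
print(response.getheader("Content-Range"))    # e.g. "bytes 0-99/1256"
print(len(response.read()), "bytes in the body")
conn.close()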
HTTP/2, HTTP/3
Both HTTP/2 and HTTP/3 have kept the above mentioned features of HTTP/1.1.
HTTP authentication
HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication
which operate via a challenge–response mechanism whereby the server
identifies and issues a challenge before serving the requested content.
HTTP provides a general framework for access control and
authentication, via an extensible set of challenge–response
authentication schemes, which can be used by a server to challenge a
client request and by a client to provide authentication information.
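For example, with basic access authentication the client answers the server's challenge by sending the user name and password, base64-encoded, in an Authorization header. The credentials and URL in this sketch are placeholders:

import base64
import http.client

credentials = base64.b64encode(b"aladdin:opensesame").decode("ascii")  # placeholder user:password
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/protected/", headers={"Authorization": "Basic " + credentials})
response = conn.getresponse()
# 401 means the challenge was not satisfied; the WWW-Authenticate header carries the challenge.
print(response.status, response.getheader("WWW-Authenticate"))
conn.close()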
The authentication mechanisms described above belong to the HTTP
protocol and are managed by client and server HTTP software (if
configured to require authentication before allowing client access to
one or more web resources), and not by the web applications using a web application session.
Authentication realms
The HTTP Authentication specification also provides an arbitrary,
implementation-specific construct for further dividing resources common
to a given root URI.
The realm value string, if present, is combined with the canonical root
URI to form the protection space component of the challenge. This in
effect allows the server to define separate authentication scopes under
one root URI.
HTTP application session
HTTP is a stateless protocol.
A stateless protocol does not require the web server to retain
information or status about each user for the duration of multiple
requests.
To start an application user session, an interactive authentication via web application login must be performed. To stop a user session, a logout operation must be requested by the user. These kinds of operations do not use HTTP authentication but custom-managed web application authentication.
HTTP/1.1 request messages
Request messages are sent by a client to a target server.
Request syntax
A client sends request messages to the server, which consist of:
a request line, consisting of the case-sensitive request method, a space, the requested URI, another space, the protocol version, a carriage return, and a line feed, e.g.:
GET /images/logo.png HTTP/1.1
zero or more request header fields
(at least 1 or more headers in case of HTTP/1.1), each consisting of
the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.:
Host: www.example.com
Accept-Language: en
an empty line, consisting of a carriage return and a line feed;
an optional message body.
In the HTTP/1.1 protocol, all header fields except Host: hostname are optional.
A request line containing only the path name is accepted by
servers to maintain compatibility with HTTP clients before the HTTP/1.0
specification in RFC1945.
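Because a request message is just lines of text separated by CRLF, it can be written directly over a socket. The Python sketch below sends the example request from this section to www.example.com and prints the status line of the reply; a Connection: close header is added (beyond the section's example) so the read loop terminates.

import socket

request = (
    "GET /images/logo.png HTTP/1.1\r\n"   # request line
    "Host: www.example.com\r\n"           # the only mandatory HTTP/1.1 header
    "Accept-Language: en\r\n"
    "Connection: close\r\n"
    "\r\n"                                 # empty line ends the header section
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk
print(reply.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 404 Not Found"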
Request methods
An HTTP/1.1 request made using telnet. The request message, response header section, and response body are highlighted.
HTTP defines methods (sometimes referred to as verbs, but nowhere in the specification does it mention verb)
to indicate the desired action to be performed on the identified
resource. What this resource represents, whether pre-existing data or
data that is generated dynamically, depends on the implementation of the
server. Often, the resource corresponds to a file or the output of an
executable residing on the server. The HTTP/1.0 specification
defined the GET, HEAD, and POST methods as well as listing the PUT,
DELETE, LINK and UNLINK methods under additional methods. However, the
HTTP/1.1 specification
formally defined and added five new methods: PUT, DELETE, CONNECT,
OPTIONS, and TRACE. Any client can use any method and the server can be
configured to support any combination of methods. If a method is unknown
to an intermediate, it will be treated as an unsafe and non-idempotent
method. There is no limit to the number of methods that can be defined,
which allows for future methods to be specified without breaking
existing infrastructure. For example, WebDAV defined seven new methods and RFC5789 specified the PATCH method.
Method names are case sensitive. This is in contrast to HTTP header field names which are case-insensitive.
GET
The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) For retrieving resources without making changes, GET is preferred over POST, as they can be addressed through a URL. This enables bookmarking and sharing and makes GET responses eligible for caching, which can save bandwidth. The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations." See safe methods below.
HEAD
The HEAD method requests that the target resource transfer a
representation of its state, as for a GET request, but without the
representation data enclosed in the response body. This is useful for
retrieving the representation metadata in the response header, without
having to transfer the entire representation. Uses include checking
whether a page is available through the status code and quickly finding the size of a file (Content-Length).
POST
The POST method
requests that the target resource process the representation enclosed
in the request according to the semantics of the target resource. For
example, it is used for posting a message to an Internet forum, subscribing to a mailing list, or completing an online shopping transaction.
PUT
The PUT method requests that the target resource create or update
its state with the state defined by the representation enclosed in the
request. A distinction from POST is that the client specifies the target
location on the server.
DELETE
The DELETE method requests that the target resource delete its state.
CONNECT
The CONNECT method requests that the intermediary establish a TCP/IP tunnel to the origin server identified by the request target. It is often used to secure connections through one or more HTTP proxies with TLS. See HTTP CONNECT method.
OPTIONS
The OPTIONS method requests that the target resource transfer the
HTTP methods that it supports. This can be used to check the
functionality of a web server by requesting '*' instead of a specific
resource.
TRACE
The TRACE method requests that the target resource transfer the
received request in the response body. That way a client can see what
(if any) changes or additions have been made by intermediaries.
PATCH
The PATCH method
requests that the target resource modify its state according to the
partial update defined in the representation enclosed in the request.
This can save bandwidth by updating a part of a file or document without
having to transfer it entirely.
All general-purpose web servers are required to implement at least
the GET and HEAD methods, and all other methods are considered optional
by the specification.
A request method is safe if a request with that method has no
intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE
are defined as safe. In other words, safe methods are intended to be read-only. Safe methods can still have side effects not seen by the client, such as appending request information to a log file or charging an advertising account.
In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH
are not safe. They may modify the state of the server or have other
effects such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.
Despite the prescribed safety of GET requests, in practice their
handling by the server is not technically limited in any way. Careless
or deliberately irregular programming can allow GET requests to cause
non-trivial changes on the server. This is discouraged because of the
problems which can occur when web caching, search engines,
and other automated agents make unintended changes on the server. For
example, a website might allow deletion of a resource through a URL such
as https://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article. A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not make.
One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism.
A request method is idempotent if multiple identical requests
with that method have the same effect as a single such request. The
methods PUT and DELETE, and safe methods are defined as idempotent. Safe
methods are trivially idempotent, since they are intended to have no
effect on the server whatsoever; the PUT and DELETE methods, meanwhile,
are idempotent because repeating an identical request leaves the resource in the same state. A
website might, for instance, set up a PUT endpoint to modify a user's
recorded email address. If this endpoint is configured correctly, any
requests which ask to change a user's email address to the same email
address which is already recorded—e.g. duplicate requests following a
successful request—will have no effect. Similarly, a request to DELETE a
certain user will have no effect if that user has already been deleted.
In contrast, the methods POST, CONNECT, and PATCH are not
necessarily idempotent, and therefore sending an identical POST request
multiple times may further modify the state of the server or have
further effects, such as sending multiple emails.
In some cases this is the desired effect, but in other cases it may
occur accidentally. A user might, for example, inadvertently send
multiple POST requests by clicking a button again if they were not given
clear feedback that the first click was being processed. While web browsers may show alert dialog boxes
to warn users in some cases where reloading a page may re-submit a POST
request, it is generally up to the web application to handle cases
where a POST request should not be submitted more than once.
Note that whether or not a method is idempotent is not enforced
by the protocol or web server. It is perfectly possible to write a web
application in which (for example) a database insert or other
non-idempotent action is triggered by a GET or other request. To do so
against recommendations, however, may result in undesirable
consequences, if a user agent assumes that repeating the same request is safe when it is not.
A request method is cacheable if responses to requests with
that method may be stored for future reuse. The methods GET, HEAD, and
POST are defined as cacheable.
In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable.
Request header fields allow the client to pass additional information
beyond the request line, acting as request modifiers (similarly to the
parameters of a procedure). They give information about the client,
about the target resource, or about the expected handling of the
request.
HTTP/1.1 response messages
A response message is sent by a server to a client as a reply to its former request message.
Response syntax
A server sends response messages to the client, which consist of:
a status line, consisting of the protocol version, a space, the response status code, another space, a possibly empty reason phrase, a carriage return, and a line feed, e.g.:
HTTP/1.1 200 OK
zero or more response header fields, each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.:
Content-Type: text/html
an empty line, consisting of a carriage return and a line feed;
an optional message body.
In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase
(such as "Not Found"). The response status code is a three-digit
integer code representing the result of the server's attempt to
understand and satisfy the client's corresponding request. The way the
client handles the response depends primarily on the status code, and
secondarily on the other response header fields. Clients may not
understand all registered status codes but they must understand their
class (given by the first digit of the status code) and treat an
unrecognized status code as being equivalent to the x00 status code of
that class.
The standard reason phrases are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the reason phrase
to the user to provide further information about the nature of the
problem. The standard also allows the user agent to attempt to interpret
the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.
The first digit of the status code defines its class:
1XX (informational)
The request was received, continuing process.
2XX (successful)
The request was successfully received, understood, and accepted.
3XX (redirection)
Further action needs to be taken in order to complete the request.
4XX (client error)
The request contains bad syntax or cannot be fulfilled.
5XX (server error)
The server failed to fulfill an apparently valid request.
The response header fields allow the server to pass additional
information beyond the status line, acting as response modifiers. They
give information about the server or about further access to the target
resource or related resources.
Each response header field has a defined meaning which can be
further refined by the semantics of the request method or response
status code.
HTTP/1.1 example of request / response transaction
Below is a sample HTTP transaction between an HTTP/1.1 client and an HTTP/1.1 server running on www.example.com, port 80.
A client request (consisting in this case of the request line and a few headers that can be reduced to only the "Host: hostname" header) is followed by a blank line, so that the request ends with a double end of line, each in the form of a carriage return followed by a line feed. The "Host: hostname" header value distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A "/" (slash) will usually fetch a /index.html file if there is one.)
GET / HTTP/1.1
Host: www.example.com
Server response
HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 155
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    <p>Hello World, this is a very simple HTML document.</p>
  </body>
</html>
The ETag
(entity tag) header field is used to determine if a cached version of
the requested resource is identical to the current version of the
resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. The HTTP/1.1 webserver publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful, if the client needs to have only certain portions of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response.
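An ETag received in one response can be sent back in an If-None-Match header; if the cached copy is still current, the server can answer 304 Not Modified with no body. Host and path in this sketch are illustrative:

import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/")
first = conn.getresponse()
etag = first.getheader("ETag")
first.read()  # drain the body so the connection can be reused

# Conditional re-request: the server may reply 304 Not Modified if nothing changed.
conn.request("GET", "/", headers={"If-None-Match": etag} if etag else {})
second = conn.getresponse()
print(second.status, len(second.read()), "bytes")
conn.close()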
Most of the header lines are optional but some are mandatory. When header "Content-Length: number"
is missing in a response with an entity body then this should be
considered an error in HTTP/1.0 but it may not be an error in HTTP/1.1
if header "Transfer-Encoding: chunked" is present. Chunked
transfer encoding uses a chunk size of 0 to mark the end of the content.
Some old implementations of HTTP/1.0 omitted the header "Content-Length"
when the length of the body entity was not known at the beginning of
the response and so the transfer of data to client continued until
server closed the socket.
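The chunked framing itself is simple: each chunk is its size in hexadecimal, a CRLF, the chunk data, and another CRLF, and a final chunk of size 0 ends the body. A minimal decoder is sketched below for illustration; real clients should rely on an HTTP library.

def decode_chunked(raw: bytes) -> bytes:
    """Decode a body sent with Transfer-Encoding: chunked."""
    body = b""
    while True:
        size_line, raw = raw.split(b"\r\n", 1)
        size = int(size_line.split(b";")[0], 16)        # chunk size, ignoring chunk extensions
        if size == 0:                                   # zero-size chunk marks the end of the content
            break
        body, raw = body + raw[:size], raw[size + 2:]   # skip the CRLF that follows the chunk data
    return body

example = b"4\r\nWiki\r\n6\r\npedia \r\nE\r\nin \r\n\r\nchunks.\r\n0\r\n\r\n"
print(decode_chunked(example))  # b'Wikipedia in \r\n\r\nchunks.'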
A "Content-Encoding: gzip" can be used to inform the client that the body entity part of the transmitted data is compressed by gzip algorithm.
Encrypted connections
The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent.
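To application code, an encrypted connection looks much like a plain one, only carried over TLS to port 443. A sketch with Python's standard library (URL illustrative):

import http.client
import ssl

context = ssl.create_default_context()                 # verifies the server certificate
conn = http.client.HTTPSConnection("www.example.com", 443, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
print(conn.sock.version())                             # peeks at the underlying TLS socket, e.g. "TLSv1.3"
conn.close()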
Similar protocols
The Gopher protocol is a content delivery protocol that was displaced by HTTP in the early 1990s.
The SPDY protocol is an alternative to HTTP developed at Google, superseded by HTTP/2.
The Gemini protocol is a Gopher-inspired protocol which mandates privacy-related features.