Monday, February 17, 2025

HTTP

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/HTTP

International standard
Developed by: initially CERN; IETF, W3C
Introduced: 1991; 34 years ago
Website: httpwg.org/specs/

HTTP (Hypertext Transfer Protocol) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP version, named 0.9. That version was subsequently developed, eventually becoming the public 1.0.

Development of early HTTP Requests for Comments (RFCs) started a few years later in a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.

HTTP/1 was finalized and fully documented (as version 1.0) in 1996. It evolved (as version 1.1) in 1997 and then its specifications were updated in 1999, 2014, and 2022. Its secure variant named HTTPS is used by more than 85% of websites.

HTTP/2, published in 2015, provides a more efficient expression of HTTP's semantics "on the wire". As of August 2024, it is supported by 66.2% of websites (35.3% HTTP/2 + 30.9% HTTP/3 with backwards compatibility) and supported by almost all web browsers (over 98% of users). It is also supported by major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension where TLS 1.2 or newer is required.

HTTP/3, the successor to HTTP/2, was published in 2022. As of February 2024, it is used on 30.9% of websites and is supported by most web browsers, i.e. (at least partially) supported by 97% of users. HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. When enabled on the server, HTTP/3 has lower latency for real-world web pages and loads them faster than HTTP/2, in some cases over three times faster than HTTP/1.1 (which is still commonly the only version enabled). Support for HTTP/3 was added to Cloudflare and Google Chrome first, and is also enabled in Firefox.

Technical overview

HTTP functions as a request–response protocol in the client–server model. A web browser, for example, may be the client whereas a process, named web server, running on a computer hosting one or more websites may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, whenever possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.

To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server).
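As an illustration, consider how a forwarding proxy might separate the two kinds of headers. The following sketch (Python; the header set is the classic HTTP/1.1 hop-by-hop list, and the sample dictionary is hypothetical) strips the hop-by-hop fields and keeps the end-to-end ones:

# Headers managed hop-by-hop in HTTP/1.1; an intermediary must not
# forward these. All other headers are end-to-end.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate",
    "proxy-authorization", "te", "trailers",
    "transfer-encoding", "upgrade",
}

def end_to_end_headers(headers):
    """Return only the headers a proxy should relay unchanged."""
    # Field names listed in the Connection header are hop-by-hop too.
    listed = {h.strip().lower()
              for h in headers.get("Connection", "").split(",") if h.strip()}
    return {name: value for name, value in headers.items()
            if name.lower() not in HOP_BY_HOP | listed}

sample = {
    "Host": "www.example.com",    # end-to-end
    "Connection": "keep-alive",   # hop-by-hop
    "Accept-Language": "en",      # end-to-end
}
print(end_to_end_headers(sample))
# {'Host': 'www.example.com', 'Accept-Language': 'en'}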

HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol. HTTP/3 no longer uses the Transmission Control Protocol (TCP), but the older versions, which are still more widely used, most commonly run over TCP. HTTP has also been adapted to use unreliable protocols such as the User Datagram Protocol (UDP), on which HTTP/3 also (indirectly) builds, for example in HTTPU and the Simple Service Discovery Protocol (SSDP).

HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifiers (URIs) schemes http and https. As defined in RFC 3986, URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents.

In HTTP/1.0 a separate TCP connection to the same server is made for every resource request.

In HTTP/1.1, by contrast, a TCP connection can be reused to make multiple resource requests (for HTML pages, frames, images, scripts, stylesheets, etc.).

HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead, especially under high traffic conditions.

HTTP/2 is a revision of HTTP/1.1 that maintains the same client–server model and the same protocol methods but introduces the following differences:

  • it uses a compressed binary representation of metadata (HTTP headers) instead of a textual one, so that headers require much less space;
  • it uses a single (usually encrypted) TCP/IP connection per accessed server domain instead of 2 to 8 TCP/IP connections;
  • it uses one or more bidirectional streams per TCP/IP connection, in which HTTP requests and responses are broken down and transmitted in small packets, largely solving the problem of head-of-line blocking (HOLB);
  • it adds a push capability that allows the server application to send data to clients whenever new data is available (without forcing clients to poll the server periodically for new data).

HTTP/2 communications therefore experience much less latency and, in most cases, even higher speeds than HTTP/1.1 communications.
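As a practical sketch, the multiplexed HTTP/2 connection can be exercised from Python with the third-party httpx library (an assumption: it is installed with its optional HTTP/2 extra, e.g. pip install "httpx[http2]"); this is one possible client, not the only way to use HTTP/2:

import httpx

# One (usually TLS-encrypted) connection per host; concurrent requests
# would be multiplexed as HTTP/2 streams over it.
with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com/")
    # Reports the protocol negotiated via the TLS ALPN extension:
    # "HTTP/2" if the server agreed, otherwise "HTTP/1.1".
    print(response.http_version, response.status_code)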

HTTP/3 is a revision of HTTP/2 that uses QUIC + UDP as its transport protocols instead of TCP. Before that version, TCP/IP connections were used; now only the IP layer (on which UDP, like TCP, builds) is retained. This slightly improves the average speed of communications and avoids the occasional (very rare) problem of TCP connection congestion that can temporarily block or slow down the data flow of all its streams (another form of head-of-line blocking).
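Servers that speak HTTP/3 typically advertise it in an Alt-Svc response header (token h3) sent over an ordinary TCP connection, letting clients upgrade on a later request. A standard-library sketch that checks for this advertisement (the host is just an example and the header may be absent):

import http.client

# Ask an HTTPS (HTTP/1.1 over TCP) server whether it advertises HTTP/3.
conn = http.client.HTTPSConnection("cloudflare.com")
conn.request("HEAD", "/")
response = conn.getresponse()
# A value such as 'h3=":443"; ma=86400' means the same service is also
# reachable over HTTP/3 (QUIC on UDP port 443).
print(response.status, response.getheader("Alt-Svc"))
conn.close()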

History


The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a client user interface called web browser. Berners-Lee designed HTTP in order to help with the adoption of his other idea: the "WorldWideWeb" project, which was first proposed in 1989, now known as the World Wide Web.

The first web server went live in 1990. The protocol used had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page.

Summary of HTTP milestone versions

Version    Year introduced   Current status   Usage in August 2024   Support in August 2024
HTTP/0.9   1991              Obsolete         0%                     100%
HTTP/1.0   1996              Obsolete         0%                     100%
HTTP/1.1   1997              Standard         33.8%                  100%
HTTP/2     2015              Standard         35.3%                  66.2%
HTTP/3     2022              Standard         30.9%                  30.9%

HTTP/0.9

In 1991, the first documented official version of HTTP was written as a plain document of fewer than 700 words. This version was named HTTP/0.9 and supported only the GET method, allowing clients to retrieve HTML documents from the server, with no support for any other file formats or for information upload.

HTTP/1.0-draft

Starting in 1992, a new document was written to specify the evolution of the basic protocol towards its next full version. It supported both the simple request method of the 0.9 version and the full GET request that included the client HTTP version. This was the first of the many unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0.

W3C HTTP Working Group

After it was decided that new features of the HTTP protocol were required and had to be fully documented as official RFCs, the HTTP Working Group (HTTP WG, led by Dave Raggett) was constituted in early 1995 with the aim of standardizing and expanding the protocol with extended operations, extended negotiation, richer meta-information, and a tie-in with a security protocol, made more efficient by additional methods and header fields.

The HTTP WG planned to revise and publish new versions of the protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many revisions, the work took much longer than one year.

The HTTP WG also planned to specify a far-future version of HTTP called HTTP-NG (HTTP Next Generation) that would have solved all remaining problems of previous versions related to performance, low-latency responses, etc. This work started only a few years later, however, and was never completed.

HTTP/1.0

In May 1996, RFC 1945 was published as the final HTTP/1.0 revision of what had been used over the previous four years as a pre-standard HTTP/1.0 draft, already adopted by many web browsers and web servers.

In early 1996 developers even started to include unofficial extensions of the HTTP/1.0 protocol (e.g. keep-alive connections) in their products, using drafts of the upcoming HTTP/1.1 specifications.

HTTP/1.1

Since early 1996, major web browser and web server developers also started to implement new features specified by pre-standard HTTP/1.1 draft specifications. End-user adoption of the new versions of browsers and servers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet used the new HTTP/1.1 header "Host" to enable virtual hosting, and that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant.

In January 1997, RFC 2068 was officially released as the HTTP/1.1 specification.

In June 1999, RFC 2616 was released to include all improvements and updates based on the previous (now obsolete) HTTP/1.1 specification.

W3C HTTP-NG Working Group

Resuming the old 1995 plan of the previous HTTP Working Group, in 1997 an HTTP-NG Working Group was formed to develop a new protocol named HTTP-NG (HTTP Next Generation). A few proposals and drafts were produced for the new protocol to use multiplexing of HTTP transactions inside a single TCP/IP connection, but in 1999 the group stopped its activity, passing the technical problems to the IETF.

IETF HTTP Working Group restarted

In 2007, the IETF HTTP Working Group (HTTP WG bis or HTTPbis) was restarted, firstly to revise and clarify the previous HTTP/1.1 specifications and secondly to write and refine the future HTTP/2 specification (the renewed working group being named httpbis).

SPDY: an unofficial HTTP protocol developed by Google

In 2009, Google, a private company, announced that it had developed and tested a new binary HTTP protocol named SPDY. The implicit aim was to greatly speed up web traffic (especially between future web browsers and Google's servers).

SPDY was indeed much faster than HTTP/1.1 in many tests and so it was quickly adopted by Chromium and then by other major web browsers.

Some of the ideas about multiplexing HTTP streams over a single TCP/IP connection were taken from various sources, including the work of W3C HTTP-NG Working Group.

HTTP/2

In January–March 2012, the HTTP Working Group (HTTPbis) announced the need to start focusing on a new HTTP/2 protocol (while finishing the revision of the HTTP/1.1 specifications), possibly taking into consideration ideas and work done for SPDY.

After a few months of discussion about how to develop a new version of HTTP, it was decided to derive it from SPDY.

In May 2015, HTTP/2 was published as RFC 7540 and quickly adopted by all web browsers already supporting SPDY and more slowly by web servers.

2014 updates to HTTP/1.1

In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting RFC 2616:

  • RFC 7230, HTTP/1.1: Message Syntax and Routing
  • RFC 7231, HTTP/1.1: Semantics and Content
  • RFC 7232, HTTP/1.1: Conditional Requests
  • RFC 7233, HTTP/1.1: Range Requests
  • RFC 7234, HTTP/1.1: Caching
  • RFC 7235, HTTP/1.1: Authentication

HTTP/0.9 Deprecation

In RFC 7230 Appendix A, HTTP/0.9 was deprecated for servers supporting HTTP/1.1 (and higher):

Since HTTP/0.9 did not support header fields in a request, there is no mechanism for it to support name-based virtual hosts (selection of resource by inspection of the Host header field). Any server that implements name-based virtual hosts ought to disable support for HTTP/0.9. Most requests that appear to be HTTP/0.9 are, in fact, badly constructed HTTP/1.x requests caused by a client failing to properly encode the request-target.

Since 2016 many product managers and developers of user agents (browsers, etc.) and web servers have begun planning to gradually deprecate and dismiss support for the HTTP/0.9 protocol, mainly for the following reasons:

  • it is so simple that an RFC document was never written for it (there is only the original document);
  • it has no HTTP headers and lacks many other features that nowadays are required for minimal security reasons;
  • it has not been in widespread use since 1999–2000 (because of HTTP/1.0 and HTTP/1.1) and is commonly used only by some very old network hardware, i.e. routers, etc.

HTTP/3

In 2020, the first drafts of HTTP/3 were published, and major web browsers and web servers started to adopt it.

On 6 June 2022, IETF standardized HTTP/3 as RFC 9114.

Updates and refactoring in 2022

In June 2022, a batch of RFCs was published, deprecating many of the previous documents and introducing a few minor changes and a refactoring of HTTP semantics description into a separate document.

HTTP data exchange

HTTP is a stateless application-level protocol and it requires a reliable network transport connection to exchange data between client and server. In HTTP implementations, TCP/IP connections are used on well-known ports (typically port 80 if the connection is unencrypted or port 443 if the connection is encrypted; see also List of TCP and UDP port numbers). In HTTP/2, a TCP/IP connection plus multiple protocol channels are used. In HTTP/3, the application transport protocol QUIC over UDP is used.

Request and response messages through connections

Data is exchanged through a sequence of request–response messages exchanged over a transport-layer connection. An HTTP client initially tries to connect to a server, establishing a connection (real or virtual). An HTTP(S) server listening on that port accepts the connection and then waits for the client's request message. The client sends its HTTP request message. Upon receiving the request, the server sends back an HTTP response message, which includes header(s) plus a body if one is required. The body of this response message is typically the requested resource, although an error message or other information may also be returned. At any time (for many reasons) the client or the server can close the connection. Closing a connection is usually advertised in advance by using one or more HTTP headers in the last request/response message sent to the server or client.

Persistent connections

In HTTP/0.9, the TCP/IP connection is always closed after the server response has been sent, so it is never persistent.

In HTTP/1.0, as stated in RFC 1945, the TCP/IP connection should always be closed by the server after a response has been sent.

In HTTP/1.1, a keep-alive mechanism was officially introduced so that a connection could be reused for more than one request/response. Such persistent connections reduce request latency perceptibly because the client does not need to repeat the TCP three-way handshake after the first request has been sent. Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start mechanism.
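A persistent connection can be observed with Python's standard http.client module, which keeps the underlying TCP socket open between requests (a sketch; the host and paths are illustrative):

import http.client

# One TCP connection (one three-way handshake) serves several requests.
conn = http.client.HTTPConnection("www.example.com")

for path in ("/", "/"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()      # drain the body before reusing the socket
    print(response.status, len(body))

conn.close()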

HTTP/1.1 also added HTTP pipelining in order to further reduce lag time when using persistent connections, by allowing clients to send multiple requests before waiting for each response. This optimization was never considered really safe because a few web servers and many proxy servers, especially transparent proxy servers placed on the Internet or in intranets between clients and servers, did not handle pipelined requests properly (they served only the first request and discarded the others, they closed the connection because they saw more data after the first request, or some proxies even returned responses out of order, etc.). Because of this, only HEAD and some GET requests (i.e. limited to real file requests, with URLs without a query string used as a command, etc.) could be pipelined in a safe and idempotent mode. After many years of struggling with the problems introduced by enabling pipelining, this feature was first disabled and then removed from most browsers, also because of the announced adoption of HTTP/2.

HTTP/2 extended the usage of persistent connections by multiplexing many concurrent requests/responses through a single TCP/IP connection.

HTTP/3 does not use TCP/IP connections but QUIC + UDP (see also: technical overview).

Content retrieval optimizations

HTTP/0.9
A requested resource was always sent in its entirety.
HTTP/1.0
HTTP/1.0 added headers to manage resources cached by the client in order to allow conditional GET requests; in practice a server has to return the entire content of the requested resource only if its last-modified time is not known by the client or if it has changed since the last full response to a GET request. One of these headers, "Content-Encoding", was added to specify whether the returned content of a resource was compressed.
If the total length of the content of a resource was not known in advance (e.g. because it was dynamically generated), then the header "Content-Length: number" was not present in the HTTP headers and the client assumed that when the server closed the connection the content had been sent in its entirety. This mechanism could not distinguish between a resource transfer that completed successfully and one that was interrupted (because of a server or network error, or something else).
HTTP/1.1
HTTP/1.1 introduced:
  • new headers to better manage the conditional retrieval of cached resources.
  • chunked transfer encoding to allow content to be streamed in chunks in order to reliably send it even when the server does not know its length in advance (e.g. because it is dynamically generated);
  • byte range serving, where a client can request one or more portions (ranges of bytes) of a resource (e.g. the first part, or a part in the middle or at the end of the entire content) and the server usually sends only the requested part(s). This is useful to resume an interrupted download (when a file is very large), or when only a part of the content has to be shown or dynamically added to the already visible part by a browser (e.g. only the first or the following n comments of a web page), in order to save time, bandwidth, and system resources.
HTTP/2, HTTP/3
Both HTTP/2 and HTTP/3 have kept the above mentioned features of HTTP/1.1.
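Byte range serving can be tried by adding a Range header to an ordinary GET; a server that supports it replies 206 Partial Content with only the requested bytes. A sketch (example host; a server that ignores the header simply answers 200 with the full body):

import http.client

conn = http.client.HTTPSConnection("www.example.com")
# Ask only for the first 100 bytes of the resource.
conn.request("GET", "/", headers={"Range": "bytes=0-99"})
response = conn.getresponse()
# 206 Partial Content plus a Content-Range header if ranges are
# supported; 200 OK with the whole resource otherwise.
print(response.status, response.getheader("Content-Range"))
print(len(response.read()))
conn.close()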

HTTP authentication

HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication which operate via a challenge–response mechanism whereby the server identifies and issues a challenge before serving the requested content.

HTTP provides a general framework for access control and authentication, via an extensible set of challenge–response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information.

The authentication mechanisms described above belong to the HTTP protocol and are managed by client and server HTTP software (if configured to require authentication before allowing client access to one or more web resources), and not by the web applications using a web application session.
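As a concrete instance of this framework, basic access authentication simply base64-encodes "user:password" into an Authorization request header. A sketch (the credentials, host, and path are made up; basic authentication is only reasonable over an encrypted connection):

import base64
import http.client

# Build the Authorization header for basic access authentication.
token = base64.b64encode(b"alice:secret").decode("ascii")

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/protected/",
             headers={"Authorization": "Basic " + token})
response = conn.getresponse()
# 401 with a WWW-Authenticate challenge means the server wants (other)
# credentials for this realm; 2xx means the request was authorized.
print(response.status, response.getheader("WWW-Authenticate"))
conn.close()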

Authentication realms

The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI.

HTTP application session

HTTP is a stateless protocol. A stateless protocol does not require the web server to retain information or status about each user for the duration of multiple requests.

Some web applications need to manage user sessions, so they implement states, or server side sessions, using for instance HTTP cookies or hidden variables within web forms.

To start an application user session, an interactive authentication via web application login must be performed. To stop a user session, a logout operation must be requested by the user. These kinds of operations do not use HTTP authentication but a custom-managed web application authentication.
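A hypothetical exchange shows how such a session is layered on top of stateless HTTP using cookies: the server issues a session identifier after login, and the client returns it with every later request.

POST /login HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 28

user=alice&password=secret12

HTTP/1.1 200 OK
Set-Cookie: sessionid=a3fWa; HttpOnly; Secure

GET /account HTTP/1.1
Host: www.example.com
Cookie: sessionid=a3fWa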

HTTP/1.1 request messages

Request messages are sent by a client to a target server.

Request syntax

A client sends request messages to the server, which consist of:

  • a request line, consisting of the case-sensitive request method, a space, the requested URI, another space, the protocol version, a carriage return, and a line feed, e.g.:
GET /images/logo.png HTTP/1.1
  • zero or more request header fields (at least one, in the case of HTTP/1.1), each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, optional trailing whitespace, and ending with a carriage return and a line feed, e.g.:
Host: www.example.com
Accept-Language: en
  • an empty line, consisting of a carriage return and a line feed;
  • an optional message body.

In the HTTP/1.1 protocol, all header fields except Host: hostname are optional.

A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients that predate the HTTP/1.0 specification in RFC 1945.
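The CR+LF framing described above can be made explicit in code. A sketch that assembles the example request byte for byte (Python; it only builds the message, it does not send it):

# Each line ends with CRLF; a bare CRLF (an empty line) terminates the
# header section. This request has no message body.
request = (
    "GET /images/logo.png HTTP/1.1\r\n"   # request line
    "Host: www.example.com\r\n"           # the only mandatory header
    "Accept-Language: en\r\n"             # optional header field
    "\r\n"                                # empty line: end of headers
).encode("ascii")

print(request)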

Request methods

Figure: an HTTP/1.1 request made using telnet, showing the request message, the response header section, and the response body.

HTTP defines methods (sometimes referred to as verbs, although the specification never uses that term) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification defined the GET, HEAD, and POST methods, as well as listing the PUT, DELETE, LINK, and UNLINK methods under additional methods. The HTTP/1.1 specification then formally defined and added five new methods: PUT, DELETE, CONNECT, OPTIONS, and TRACE. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method.

Method names are case sensitive. This is in contrast to HTTP header field names which are case-insensitive.

GET
The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) For retrieving resources without making changes, GET is preferred over POST, as they can be addressed through a URL. This enables bookmarking and sharing and makes GET responses eligible for caching, which can save bandwidth. The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations." See safe methods below.

HEAD
The HEAD method requests that the target resource transfer a representation of its state, as for a GET request, but without the representation data enclosed in the response body. This is useful for retrieving the representation metadata in the response header, without having to transfer the entire representation. Uses include checking whether a page is available through the status code and quickly finding the size of a file (Content-Length).

POST
The POST method requests that the target resource process the representation enclosed in the request according to the semantics of the target resource. For example, it is used for posting a message to an Internet forum, subscribing to a mailing list, or completing an online shopping transaction.

PUT
The PUT method requests that the target resource create or update its state with the state defined by the representation enclosed in the request. A distinction from POST is that the client specifies the target location on the server.

DELETE
The DELETE method requests that the target resource delete its state.

CONNECT
The CONNECT method requests that the intermediary establish a TCP/IP tunnel to the origin server identified by the request target. It is often used to secure connections through one or more HTTP proxies with TLS. See HTTP CONNECT method.

OPTIONS
The OPTIONS method requests that the target resource transfer the HTTP methods that it supports. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

TRACE
The TRACE method requests that the target resource transfer the received request in the response body. That way a client can see what (if any) changes or additions have been made by intermediaries.

PATCH
The PATCH method requests that the target resource modify its state according to the partial update defined in the representation enclosed in the request. This can save bandwidth by updating a part of a file or document without having to transfer it entirely.

All general-purpose web servers are required to implement at least the GET and HEAD methods, and all other methods are considered optional by the specification.

Properties of request methods
Request method   RFC        Request has payload body   Response has payload body   Safe   Idempotent   Cacheable
GET              RFC 9110   Optional                   Yes                         Yes    Yes          Yes
HEAD             RFC 9110   Optional                   No                          Yes    Yes          Yes
POST             RFC 9110   Yes                        Yes                         No     No           Yes
PUT              RFC 9110   Yes                        Yes                         No     Yes          No
DELETE           RFC 9110   Optional                   Yes                         No     Yes          No
CONNECT          RFC 9110   Optional                   Yes                         No     No           No
OPTIONS          RFC 9110   Optional                   Yes                         Yes    Yes          No
TRACE            RFC 9110   No                         Yes                         Yes    Yes          No
PATCH            RFC 5789   Yes                        Yes                         No     No           No

Safe methods

A request method is safe if a request with that method has no intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE are defined as safe. In other words, safe methods are intended to be read-only. Safe methods can still have side effects not seen by the client, such as appending request information to a log file or charging an advertising account.

In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH are not safe. They may modify the state of the server or have other effects such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Careless or deliberately irregular programming can allow GET requests to cause non-trivial changes on the server. This is discouraged because of the problems which can occur when web caching, search engines, and other automated agents make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as https://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article. A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not make.

One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism.

Idempotent methods

A request method is idempotent if multiple identical requests with that method have the same effect as a single such request. The methods PUT and DELETE, and safe methods are defined as idempotent. Safe methods are trivially idempotent, since they are intended to have no effect on the server whatsoever; the PUT and DELETE methods, meanwhile, are idempotent since successive identical requests will be ignored. A website might, for instance, set up a PUT endpoint to modify a user's recorded email address. If this endpoint is configured correctly, any requests which ask to change a user's email address to the same email address which is already recorded—e.g. duplicate requests following a successful request—will have no effect. Similarly, a request to DELETE a certain user will have no effect if that user has already been deleted.

In contrast, the methods POST, CONNECT, and PATCH are not necessarily idempotent, and therefore sending an identical POST request multiple times may further modify the state of the server or have further effects, such as sending multiple emails. In some cases this is the desired effect, but in other cases it may occur accidentally. A user might, for example, inadvertently send multiple POST requests by clicking a button again if they were not given clear feedback that the first click was being processed. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether or not a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. To do so against recommendations, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

Cacheable methods

A request method is cacheable if responses to requests with that method may be stored for future reuse. The methods GET, HEAD, and POST are defined as cacheable.

In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable.

Request header fields

Request header fields allow the client to pass additional information beyond the request line, acting as request modifiers (similarly to the parameters of a procedure). They give information about the client, about the target resource, or about the expected handling of the request.

HTTP/1.1 response messages

A response message is sent by a server to a client as a reply to its former request message.

Response syntax

A server sends response messages to the client, which consist of:

  • a status line, consisting of the protocol version, a space, the response status code, another space, a possibly empty reason phrase, a carriage return and a line feed, e.g.:
    HTTP/1.1 200 OK
    
  • zero or more response header fields, each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.:
    Content-Type: text/html
    
  • an empty line, consisting of a carriage return and a line feed;
  • an optional message body.

Response status codes

In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The response status code is a three-digit integer code representing the result of the server's attempt to understand and satisfy the client's corresponding request. The way the client handles the response depends primarily on the status code, and secondarily on the other response header fields. Clients may not understand all registered status codes but they must understand their class (given by the first digit of the status code) and treat an unrecognized status code as being equivalent to the x00 status code of that class.

The standard reason phrases are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.

The first digit of the status code defines its class:

1XX (informational)
The request was received, continuing process.
2XX (successful)
The request was successfully received, understood, and accepted.
3XX (redirection)
Further action needs to be taken in order to complete the request.
4XX (client error)
The request contains bad syntax or cannot be fulfilled.
5XX (server error)
The server failed to fulfill an apparently valid request.
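A sketch of the class-fallback rule described above, in which a client maps any status code it does not recognize to the x00 code of the same class (the KNOWN set here is deliberately tiny and hypothetical):

# Status codes this hypothetical client understands.
KNOWN = {100, 200, 204, 301, 302, 304, 400, 401, 403, 404, 500, 503}

def effective_status(code: int) -> int:
    """Map an unrecognized status code to the x00 code of its class."""
    if code in KNOWN:
        return code
    return (code // 100) * 100    # e.g. 418 -> 400, 599 -> 500

print(effective_status(404))  # 404 (recognized)
print(effective_status(418))  # 400 (unrecognized 4XX -> client error)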

Response header fields

The response header fields allow the server to pass additional information beyond the status line, acting as response modifiers. They give information about the server or about further access to the target resource or related resources.

Each response header field has a defined meaning which can be further refined by the semantics of the request method or response status code.

HTTP/1.1 example of request / response transaction

Below is a sample HTTP transaction between an HTTP/1.1 client and an HTTP/1.1 server running on www.example.com, port 80.

Client request

GET / HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

A client request (consisting in this case of the request line and a few headers that can be reduced to only the "Host: hostname" header) is followed by a blank line, so that the request ends with a double end of line, each in the form of a carriage return followed by a line feed. The "Host: hostname" header value distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A "/" (slash) will usually fetch a /index.html file if there is one.)
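Because HTTP/1.1 messages are plain text over TCP, this transaction can be reproduced at socket level without any HTTP library. A minimal sketch (it sends a slightly shortened version of the request above and prints the raw reply):

import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"      # ask the server to close after replying
    "\r\n"
).encode("ascii")

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):   # read until the server closes
        response += chunk

# Status line, header fields, empty line, then the HTML body.
print(response.decode("utf-8", errors="replace"))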

Server response

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 155
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    <p>Hello World, this is a very simple HTML document.</p>
  </body>
</html>

The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. The HTTP/1.1 web server publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful if the client needs to have only certain portions of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response.

Most of the header lines are optional but some are mandatory. When header "Content-Length: number" is missing in a response with an entity body then this should be considered an error in HTTP/1.0 but it may not be an error in HTTP/1.1 if header "Transfer-Encoding: chunked" is present. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content. Some old implementations of HTTP/1.0 omitted the header "Content-Length" when the length of the body entity was not known at the beginning of the response and so the transfer of data to client continued until server closed the socket.
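A hypothetical chunked response illustrates the framing: each chunk is preceded by its size in hexadecimal on a line of its own, and a chunk of size 0 terminates the content.

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

1a
This content is sent in tw
9
o chunks.
0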

A "Content-Encoding: gzip" can be used to inform the client that the body entity part of the transmitted data is compressed by gzip algorithm.

Encrypted connections

The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent.

Similar protocols

  • The Gopher protocol is a content delivery protocol that was displaced by HTTP in the early 1990s.
  • The SPDY protocol is an alternative to HTTP developed at Google, superseded by HTTP/2.
  • The Gemini protocol is a Gopher-inspired protocol which mandates privacy-related features.

Sunday, February 16, 2025

Designer drug

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Designer_drug

A designer drug is a structural or functional analog of a controlled substance that has been designed to mimic the pharmacological effects of the original drug, while avoiding classification as illegal and/or detection in standard drug tests. Designer drugs include psychoactive substances that have been designated by the European Union, Australia, and New Zealand, as new psychoactive substances (NPS) as well as analogs of performance-enhancing drugs such as designer steroids.

Some of these designer drugs were originally synthesized by academic or industrial researchers in an effort to discover more potent derivatives with fewer side effects and shorter duration (and possibly also because it is easier to apply for patents for new molecules) and were later co-opted for recreational use. Other designer drugs were prepared for the first time in clandestine laboratories. Because the efficacy and safety of these substances have not been thoroughly evaluated in animal and human trials, the use of some of these drugs may result in unexpected side effects.

The development of designer drugs may be considered a subfield of drug design. The exploration of modifications to known active drugs—such as their structural analogues, stereoisomers, and derivatives—yields drugs that may differ significantly in effects from their "parent" drug (e.g., showing increased potency, or decreased side effects). In some instances, designer drugs have similar effects to other known drugs, but have completely dissimilar chemical structures (e.g. JWH-018 vs THC). Despite being a very broad term, applicable to almost every synthetic drug, it is often used to connote synthetic recreational drugs, sometimes even those which have not been designed at all (e.g. LSD, the psychedelic side effects of which were discovered unintentionally).

In some jurisdictions, drugs that are highly similar in structure to a prohibited drug are illegal to trade regardless of that drug's legal status (or indeed whether or not the structurally similar analogue has similar pharmacological effects). In other jurisdictions, their trade is a legal grey area, making them grey market goods. Some jurisdictions may have analogue laws which ban drugs similar in chemical structure to other prohibited drugs, while some designer drugs may be prohibited irrespective of the legal status of structurally similar drugs; in both cases, their trade may take place on the black market.

History

United States

1920s–1930s

Following the passage of the second International Opium Convention in 1925, which specifically banned morphine and the diacetyl ester of morphine, heroin, a number of alternative esters of morphine quickly started to be manufactured and sold. The most notable of these were dibenzoylmorphine and acetylpropionylmorphine, which have virtually identical effects to heroin but were not covered by the Opium Convention. This then led the Health Committee of the League of Nations to pass several resolutions attempting to bring these new drugs under control, ultimately leading in 1930 to the first broad analogues provisions extending legal control to all esters of morphine, oxycodone, and hydromorphone. Another early example of what could loosely be termed designer drug use was during the Prohibition era in the 1930s, when diethyl ether was sold and used as an alternative to illegal alcoholic beverages in a number of countries.

1960s–1970s

During the 1960s and 1970s, a number of new synthetic hallucinogens were introduced, with a notable example being the sale of highly potent tablets of DOM in San Francisco in 1967. There was little scope to prosecute people over drug analogues at this time, with new compounds instead being added to the controlled drug schedules one by one as they became a problem. One significant court case from this period was in 1973, when Tim Scully and Nicholas Sand were prosecuted for making the acetyl amide of LSD, known as ALD-52. At this time ALD-52 was not a controlled drug, but they were convicted on the grounds that in order to make ALD-52, they would have had to be in possession of LSD, which was illegal. The late 1970s also saw the introduction of various analogues of phencyclidine (PCP) to the illicit market.

1980s–early 1990s

The modern use of the term designer drug was coined in the 1980s to refer to various synthetic opioid drugs, based mostly on the fentanyl molecule (such as α-methylfentanyl). The term gained widespread popularity when MDMA (ecstasy) experienced a popularity boom in the mid-1980s. When the term was coined in the 1980s, a wide range of narcotics were being sold as heroin on the black market. Many were based on fentanyl or meperidine. One, MPPP, was found in some cases to contain an impurity called MPTP, which caused brain damage that could result in a syndrome identical to late stage Parkinson's disease, from only a single dose. Other problems were highly potent fentanyl analogues that caused many accidental overdoses.

Because the government was powerless to prosecute people for these drugs until after they had been marketed successfully, laws were passed to give the DEA power to emergency schedule chemicals for a year, with an optional 6-month extension, while gathering evidence to justify permanent scheduling, as well as the analogue laws mentioned previously. Emergency-scheduling power was used for the first time for MDMA. In this case, the DEA scheduled MDMA as a Schedule I drug and retained this classification after review, even though their own judge ruled that MDMA should be classified Schedule III on the basis of its demonstrated uses in medicine. The emergency scheduling power has subsequently been used for a variety of other drugs including 2C-B, AMT, and BZP. In 2004, a piperazine drug, TFMPP, became the first drug that had been emergency-scheduled to be denied permanent scheduling and revert to legal status.

The late 1980s and early 1990s also saw the re-emergence of methamphetamine in the United States as a widespread public health issue, leading to increasing controls on precursor chemicals in an attempt to cut down on domestic manufacture of the drug. This led to several alternative stimulant drugs emerging, the most notable ones being methcathinone and 4-methylaminorex, but, despite attracting enough attention from authorities to provoke legal scheduling of these compounds, their distribution was relatively limited in extent and methamphetamine continued to dominate the illicit synthetic stimulant market overall.

Late 1990s–2004

In the late 1990s and early 2000s, there was a huge explosion in designer drugs being sold over the internet. The term and concept of "research chemicals" were coined by some marketers of designer drugs (in particular, of psychedelic drugs in the tryptamine and phenethylamine families). The idea was that, by selling the chemicals for "scientific research" rather than human consumption, the intent clause of the U.S. analogue drug laws would be avoided. Nonetheless, the DEA raided multiple suppliers, first JLF Primary Materials and then multiple vendors (such as RAC Research) several years later in Operation Web Tryp. This process was accelerated greatly when vendors began advertising via search engines like Google by linking their sites to searches on key words such as chemical names and terms like psychedelic or hallucinogen. Widespread discussion of consumptive use and the sources for the chemicals in public forums also drew the attention of the media and authorities.

In 2004, the US Drug Enforcement Administration raided and shut down several Internet-based research chemical vendors in an operation called Web Tryp. With help from the authorities in India and China, two chemical manufacturers were also closed. Many other internet-based vendors promptly stopped doing business, even though their products were still legal throughout much of the world.

Most substances that were sold as "research chemicals" in this period of time are hallucinogens and bear a chemical resemblance to drugs such as psilocybin and mescaline. As with other hallucinogens, these substances are often taken for the purposes of facilitating spiritual processes, mental reflection or recreation. Some research chemicals on the market were not psychoactive, but can be used as precursors in the synthesis of other potentially psychoactive substances, for example, 2C-H, which could be used to make 2C-B and 2C-I among others. Extensive surveys of structural variations have been conducted by pharmaceutical corporations, universities and independent researchers over the last century, from which some of the presently available research chemicals derive. One particularly notable researcher is Alexander Shulgin, who presented syntheses and pharmacological explorations of hundreds of substances in the books TiHKAL and PiHKAL (co-authored with Ann Shulgin), and served as an expert witness for the defense in several court cases against manufacturers of psychoactive drugs.

The majority of chemical suppliers sold research chemicals in bulk form as powder, not as pills, as selling in pill form would invalidate the claims that they were being sold for non-consumptive research. Active dosages vary widely from substance to substance, ranging from micrograms to hundreds of milligrams, so it is critical for the end user to weigh doses with a precision scale instead of guessing ("eyeballing"). Many users did not do this, which led to many emergency room visits and several deaths, and these were a prominent factor leading to the emergency scheduling of several substances and eventually to Operation Web Tryp. Some compounds, such as 2C-B and 5-MeO-DiPT, did eventually increase in popularity to the point that they were sold in pill form to reach a wider market, and acquired popular street names ("Nexus" and "Foxy", respectively). Once a chemical reaches this kind of popularity, it is usually just a matter of time before it is added to the list of scheduled (i.e., illegal) drugs.

The late 1990s and early 2000s also saw the first widespread use of novel anabolic steroids by athletes in competition. Steroids had been banned by the International Olympic Committee since 1976, but due to the large number of different anabolic agents available for human and veterinary use, the ability of laboratories to test for all available drugs had always lagged behind the ability of athletes to find new compounds to use. The introduction of increasingly formalised testing procedures, especially with the creation of the World Anti-Doping Agency in 1999, made it much more difficult for athletes to get away with using these drugs without detection, which then led to the synthesis of novel and potent anabolic steroid drugs such as tetrahydrogestrinone (THG), which were not detectable by the standard tests.

2005–2021

While through recent history most designer drugs had been either opioids, hallucinogens, or anabolic steroids, the range of possible compounds is limited only by the scientific and patent literature, and recent years have been characterised by a broadening of the range of compounds sold as designer drugs. These have included a wide variety of designer stimulants such as geranamine, mephedrone, MDPV and desoxypipradrol, several designer sedatives such as methylmethaqualone and premazepam, and designer analogues of sildenafil (Viagra), which have been reported as active compounds in "herbal" aphrodisiac products. Designer cannabinoids are another recent development, with two compounds JWH-018 and (C8)-CP 47,497 initially found in December 2008 as active components of "herbal smoking blends" sold as legal alternatives to marijuana. Subsequently, a growing range of synthetic cannabinoid agonists have continued to appear, including by 2010, novel compounds such as RCS-4, RCS-8, and AB-001, which had never been reported in the literature, and appear to have been invented by designer drug manufacturers themselves. Another novel development is the use of research ligands for cosmetic rather than strictly recreational purposes, such as grey-market internet sales of the non-approved alpha-melanocyte-stimulating hormone tanning drugs known as melanotan peptides.

"...what is new is the wide range of substances now being explored, the aggressive marketing of products that have been intentionally mislabelled, the growing use of the internet, and the speed at which the market reacts to control measures."

— EMCDDA director Wolfgang Goetz (November 2009).

Mephedrone and the cathinones marked something of a turning point for designer drugs, turning them from little-known, ineffective substances sold in head shops into powerful substances able to compete with classical drugs on the black market. Mephedrone especially experienced a somewhat meteoric rise in popularity in 2009, and the resulting media panic resulted in its prohibition in multiple countries. Following this there was a considerable emergence of other cathinones that attempted to mimic the effects of mephedrone and, with a newly attracted customer base, there was plenty of money to drive innovation.

Subsequently, the market rapidly expanded, with more and more substances being detected every year. In 2009, the EMCDDA's early warning system discovered 24 new drugs. In 2010, it found another 41; in 2011, another 49; and in 2012, there were 73 more. In 2013, a further 81 were identified: a total of 268 new drugs in just four years. These have not been limited to cathinones, with 35% being cannabinoids and the rest being composed of stimulants, benzodiazepines, psychedelics, dissociatives and, to a lesser extent, every other class of drugs, even ibogoids and nootropics. The largest group of drugs being monitored by the EMCDDA is synthetic cannabinoids, with 209 different synthetic cannabinoids reported between 2008 and 2021, including 11 new cannabinoids identified for the first time in 2020.

2022–present

In the early 2020s, the safety and legal difficulty of regulating peptides spurred the growth of grey-market synthetic peptide hormone vendors. These peptides are marketed as non-recreational and sold for their purported anti-aging, performance-enhancing, and cosmetic benefits; such vendors may employ medical professionals and rely on legal ambiguity for their operations.

Terminology

Many terms other than "designer drug" exist, often depending on the context and geographical region. For example, the term new psychoactive substance (NPS) is more commonly used in academic settings and in regions such as Australia, New Zealand, and the European Union, including the United Kingdom (UK).

Common names

In the UK, to avoid being controlled by the Medicines Act, designer drugs such as mephedrone have been described as "plant food", despite the compounds having no history of being used for these purposes.

In the US, similar descriptions ("bath salts" is the most common) have been used to describe mephedrone as well as methylone and methylenedioxypyrovalerone (MDPV). Combined with labeling that they are "not for human consumption," these descriptions are an attempt to skirt the Federal Analog Act which forbids drugs that are "substantially similar" to already classified drugs from being sold for human use.

Synthetic cannabinoids are known under a variety of names including K2, Spice, Black Mamba, Bombay Blue, Genie, Zohai, Banana Cream Nuke, Krypton, and Lava Red. They are often called "synthetic marijuana," "herbal incense," or "herbal smoking blends" and often labeled "not for human consumption."

Safety

The safety of research chemicals is untested and little if any research has been done on the toxicology or pharmacology of most of these drugs. Few, if any, human or animal studies have been done. Many research compounds have produced unexpected side-effects and adverse incidents due to the lack of screening for off-target effects prior to marketing; both bromo-dragonfly and mephedrone seem to be capable of producing pronounced vasoconstriction under some circumstances, which has resulted in several deaths, although the mechanism remains unclear. Substituted phenethylamines such as the 2C family and substituted amphetamines such as the DOx family have also caused a limited number of deaths.

Law

Due to the recent development of many designer drugs, laws banning or regulating their use have not been developed yet, and in recent cases novel drugs have appeared directly in response to legislative action, to replace a similar compound that had recently been banned. Many of the chemicals fall under the various drug analogue legislations in certain countries, but most countries have no general analogue act or equivalent legislation and so novel compounds may fall outside of the law after only minor structural modifications.

In the United States, the Controlled Substances Act was amended by the Controlled Substance Analogue Enforcement Act of 1986, which attempted to ban designer drugs pre-emptively by making it illegal to manufacture, sell, or possess chemicals that were substantially similar in chemistry and pharmacology to Schedule I or Schedule II drugs.

Other countries have dealt with the issue differently. In some, the new drugs are banned as they become a concern, as in Germany, Canada, the United Kingdom, and Sweden. In Sweden, the police and customs may also seize drugs that are not on the list of drugs covered by the anti-drug laws if the police suspect that the purpose of the holding is related to drug abuse. Following a decision by a prosecutor, the police may destroy the seized drugs.

In Ireland, the Criminal Justice (Psychoactive Substances) Act 2010 bans substances based on their psychoactive effect, and was introduced as a catch-all to address the time lag between new substances appearing and their being banned individually. In the United Kingdom, the Psychoactive Substances Act 2016 adopts a similar approach.

Some countries, such as Australia, have enacted generic bans based on chemical structure rather than psychoactive effect: if a chemical fits a set of rules regarding substitutions and alterations of an already-banned drug, then it too is banned. Brazil adopted the same model as Australia in a recent ruling from ANVISA, the agency responsible for defining what constitutes a drug.

Temporary class drug

A temporary class drug is a relatively new status for controlled drugs, which has been adopted in some jurisdictions, notably New Zealand and the United Kingdom, to attempt to bring newly synthesized designer drugs under legal control. The controlled drug legislation in these jurisdictions requires drug scheduling decisions to follow an evidence-based process, where the harms of the drug are assessed and reviewed so that an appropriate legal status can be assigned. Since many designer drugs sold in recent years have had little or no published research that could help inform such a decision, they have been widely sold as "legal highs", often for months, before sufficient evidence accumulates to justify placing them on the controlled drug schedules.


Homophobia

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Homophobia ...