
Sunday, February 2, 2020

Internet protocol suite (updated)

From Wikipedia, the free encyclopedia

The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). During its development, versions of it were known as the Department of Defense (DoD) model because the development of the networking method was funded by the United States Department of Defense through DARPA. Its implementation is a protocol stack.

The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.

The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.

History


Early research

Diagram of the first internetworked connection
 
An SRI International Packet Radio Van, used for the first three-way internetworked transmission.

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who helped develop the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Hubert Zimmermann, Gérard Le Lann  and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974.

Initially, the Transmission Control Program managed both datagram transmissions and routing, but as experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development, and the research group of Robert Metcalfe at Xerox PARC. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. The Transmission Control Program was split into two distinct protocols, the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested.

DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the protocol still in use on the Internet alongside its successor, Internet Protocol version 6 (IPv6). 

Early implementation

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983.

A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.

Adoption

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol.

The migration of the ARPANET to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.

In 1985, the Internet Advisory Board (later the Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. The same year, the first Interop conference was held, focused on promoting network interoperability through broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations such as IBM and DEC attended the meeting.

IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing proprietary protocols. At IBM, Barry Appelman's group carried out TCP/IP development from 1984. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.

Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984, Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).

The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. Microsoft released a native TCP/IP stack in Windows 95. This event helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).

The British academic network JANET converted to TCP/IP in 1991.

Formal specification and standards

The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF). 

The characteristic architecture of the Internet Protocol Suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specification of the suite is RFC 1122, which broadly outlines four abstraction layers. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet Protocol Suite predates the OSI model, a more comprehensive reference framework for general networking systems. 

Key architectural principles


Conceptual data flow in a simple network topology of two hosts (A and B) connected by a link between their respective routers. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. After establishment of this pipe, most details of the communication are hidden from each process, as the underlying principles of communication are implemented in the lower protocol layers. In analogy, at the transport layer the communication appears as host-to-host, without knowledge of the application data structures and the connecting routers, while at the internetworking layer, individual network boundaries are traversed at each router.
 
Encapsulation of application data descending through the layers described in RFC 1122

The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.

The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
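
As a loose illustration of this principle (not drawn from the article), the following hypothetical Python sketch handles a simple "Key: Value" text format: the sender emits only the canonical form, while the receiver tolerates benign variations such as extra whitespace and mixed case, rejecting only lines whose meaning is unclear.

    # Hypothetical sketch of the robustness principle for a "Key: Value" header format.
    def send_header(fields: dict) -> str:
        # Conservative sender: emit one canonical form only ("Key: Value\r\n").
        return "".join(f"{key}: {value}\r\n" for key, value in fields.items())

    def parse_header(raw: str) -> dict:
        # Liberal receiver: accept extra whitespace, blank lines, and any key case,
        # as long as the meaning (a key separated from a value by a colon) is clear.
        fields = {}
        for line in raw.splitlines():
            if not line.strip():
                continue
            key, sep, value = line.partition(":")
            if sep == "":                      # no colon at all: meaning unclear, skip the line
                continue
            fields[key.strip().lower()] = value.strip()
        return fields

    print(parse_header("Host :  example.com \n\nCONTENT-LENGTH:42"))
    # {'host': 'example.com', 'content-length': '42'}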

Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level. 
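
As a rough sketch of this nesting (illustrative only, not the article's own example), the Python fragment below wraps an application payload in a UDP header and then in a minimal IPv4 header. The checksum fields are left at zero for brevity; a real stack computes the ones'-complement Internet checksum and normally builds these headers inside the kernel.

    import struct

    payload = b"hello, world"                    # application layer data

    # Transport layer: UDP header = source port, destination port, length, checksum.
    src_port, dst_port = 12345, 53
    udp_segment = struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), 0) + payload

    # Internet layer: minimal 20-byte IPv4 header (no options), protocol 17 = UDP.
    version_ihl = (4 << 4) | 5                   # version 4, header length 5 * 4 bytes
    total_len = 20 + len(udp_segment)
    ipv4_header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len, 0, 0,         # version/IHL, DSCP/ECN, total length, id, flags/fragment offset
        64, 17, 0,                               # TTL, protocol (17 = UDP), header checksum (left 0 in this sketch)
        bytes([192, 0, 2, 1]),                   # source address (documentation range 192.0.2.0/24)
        bytes([192, 0, 2, 2]),                   # destination address
    )
    ipv4_packet = ipv4_header + udp_segment      # the link layer would now frame this for transmission
    print(len(ipv4_packet), "bytes handed to the link layer")

On the receiving host the process runs in reverse: each layer strips its own header and passes the remaining data upward.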

An early architectural document, RFC 1122, titled Host Requirements, emphasizes architectural principles over layering: it is structured in paragraphs referring to layers, but it also covers many other architectural principles and does not dwell on the layer structure itself. It loosely defines a four-layer model, with the layers having names, not numbers, as follows (a brief summary sketch follows the list):
  • The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client-server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
  • The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
  • The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
  • The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet layer datagrams to next-neighbor hosts.
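
The following small Python summary is illustrative only; the protocol placements follow the descriptions above rather than any table in RFC 1122.

    # Illustrative summary of the four RFC 1122 layers, with example protocols at each scope.
    LAYERS = [
        ("application", "process-to-process data exchange",        ["HTTP", "SMTP", "FTP", "SSH", "DNS"]),
        ("transport",   "host-to-host channels identified by ports", ["TCP", "UDP"]),
        ("internet",    "datagrams across network boundaries",     ["IP", "ICMP", "IGMP"]),
        ("link",        "delivery within one local network link",  ["Ethernet", "ARP", "PPP"]),
    ]

    for name, scope, examples in LAYERS:         # listed from highest to lowest, as above
        print(f"{name:12} {scope:42} e.g. {', '.join(examples)}")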

Link layer

The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations, but also virtual link layers such as virtual private networks and networking tunnels.

The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the software device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions such as framing to prepare the Internet layer packets for transmission, and finally transmit the frames over a physical medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model. 
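
As a hypothetical illustration of link-layer framing, the Python sketch below unpacks the fixed 14-byte Ethernet II header, which carries the destination and source MAC addresses and the EtherType that identifies the encapsulated Internet-layer protocol; the frame bytes here are made up.

    import struct

    def parse_ethernet_header(frame: bytes):
        # Ethernet II header: destination MAC (6 bytes), source MAC (6 bytes), EtherType (2 bytes).
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
        return as_mac(dst), as_mac(src), hex(ethertype)

    # A made-up broadcast frame carrying an IPv4 packet (EtherType 0x0800).
    frame = bytes.fromhex("ffffffffffff" "021122334455" "0800") + b"...payload..."
    print(parse_ethernet_header(frame))
    # ('ff:ff:ff:ff:ff:ff', '02:11:22:33:44:55', '0x800')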

The link layer in the TCP/IP model has corresponding functions in Layer 2 of the Open Systems Interconnection (OSI) model. 

Internet layer

The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing.

The Internet Protocol performs two basic functions:
  • Host addressing and identification: This is accomplished with a hierarchical IP addressing system.
  • Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by forwarding them to the next network router closer to the final destination.
The internet layer is not only agnostic of data structures at the transport layer, but it also does not distinguish between operation of the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
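
A minimal, hypothetical Python sketch of these two ideas, built on the standard ipaddress module (the forwarding table and addresses are invented): protocol numbers select the upper-layer protocol carried by a datagram, and forwarding picks the most specific matching route toward the destination.

    import ipaddress

    # A few well-known IP protocol numbers (IANA-assigned).
    PROTOCOL_NUMBERS = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP"}

    # A made-up forwarding table: prefix -> next-hop router.
    ROUTES = {
        ipaddress.ip_network("0.0.0.0/0"):         "192.0.2.1",   # default route
        ipaddress.ip_network("198.51.100.0/24"):   "192.0.2.2",
        ipaddress.ip_network("198.51.100.128/25"): "192.0.2.3",
    }

    def next_hop(destination: str) -> str:
        # Longest-prefix match: the most specific matching route wins.
        addr = ipaddress.ip_address(destination)
        matches = [net for net in ROUTES if addr in net]
        return ROUTES[max(matches, key=lambda net: net.prefixlen)]

    print(PROTOCOL_NUMBERS[17], "->", next_hop("198.51.100.200"))   # UDP -> 192.0.2.3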

Some of the protocols carried by IP, such as ICMP which is used to transmit diagnostic information, and IGMP which is used to manage IP Multicast data, are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model. The TCP/IP model's internet layer corresponds to layer three of the Open Systems Interconnection (OSI) model, where it is referred to as the network layer. 

The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6), which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
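
The difference in address space is easy to see with Python's standard ipaddress module; the snippet below is purely illustrative.

    import ipaddress

    ipv4_space = ipaddress.ip_network("0.0.0.0/0")   # all 32-bit IPv4 addresses
    ipv6_space = ipaddress.ip_network("::/0")        # all 128-bit IPv6 addresses

    print(f"IPv4: {ipv4_space.num_addresses:,} addresses (2**32)")
    print(f"IPv6: {ipv6_space.num_addresses:.3e} addresses (2**128)")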

Transport layer

The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity, meaning it provides end-to-end message transfer services that are independent of the structure of user data and the logistics of exchanging information for any particular purpose, and independent of the underlying network. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers). End-to-end message transmission or connecting applications at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP.

For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service announcements or directory services.
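
A minimal, hypothetical sketch of the port mechanism using Python's socket API (loopback only; port 5005 is an arbitrary choice): the "server" process binds a known port, and the "client" reaches the service by addressing that port, while the operating system assigns the client an ephemeral source port.

    import socket

    SERVER = ("127.0.0.1", 5005)                 # arbitrary port chosen for this sketch

    # "Server" process: bind a UDP socket to the known port and wait for a datagram.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(SERVER)

    # "Client" process: the OS assigns an ephemeral source port automatically on sendto().
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"ping", SERVER)

    data, client_addr = server.recvfrom(1024)
    server.sendto(b"pong", client_addr)          # reply to the client's ephemeral port
    print(data, client.recvfrom(1024)[0])        # b'ping' b'pong'

    client.close(); server.close()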

Because IP provides only a best effort delivery, some transport layer protocols offer reliability. However, IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC).

For example, TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:
  • data arrives in-order
  • data has minimal error (i.e., correctness)
  • duplicate data is discarded
  • lost or discarded packets are resent
  • includes traffic congestion control
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented—not byte-stream-oriented like TCP—and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

The User Datagram Protocol is a connectionless datagram protocol. Like IP, it is a best effort, "unreliable" protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is designed for real-time data such as streaming audio and video.
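
The weak checksum referred to here is the 16-bit ones'-complement Internet checksum (RFC 1071), used by the IPv4 header as well as by TCP and UDP. A short illustrative Python implementation, assuming the caller has already included any required pseudo-header in the input:

    def internet_checksum(data: bytes) -> int:
        # 16-bit ones'-complement sum of all 16-bit words (RFC 1071), then complemented.
        if len(data) % 2:
            data += b"\x00"                      # pad odd-length input with a zero byte
        total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
        while total >> 16:                       # fold carries back into the low 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # Example: this byte sequence yields checksum 0x220d.
    print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))

A receiver verifies a datagram by running the same routine over the data with the checksum field included; an undamaged datagram then sums to zero.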

The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.

The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the Open Systems Interconnection (OSI) model, also called the transport layer. 

Application layer

The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower level protocols. This may include some basic network support services such as protocols for routing and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP or UDP messages), which in turn use lower layer protocols to effect actual data transfer.

The TCP/IP model does not consider the specifics of formatting and presenting data, and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). Such functions are the realm of libraries and application programming interfaces.

Application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport layer connection such as the end point IP addresses and port numbers. Application layer protocols are often associated with particular client-server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
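
A small illustrative Python sketch of the two kinds of port numbers (the name lookups depend on the host's services database, so treat their results as assumptions): well-known server ports can be looked up by service name, while binding to port 0 asks the operating system for an ephemeral port.

    import socket

    # Well-known server ports, looked up from the local services database.
    print("http   ->", socket.getservbyname("http", "tcp"))     # typically 80
    print("telnet ->", socket.getservbyname("telnet", "tcp"))   # typically 23

    # Ephemeral client port: bind to port 0 and let the OS pick a free port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    print("ephemeral port assigned by the OS:", s.getsockname()[1])
    s.close()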

The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications must interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for network address translator (NAT) traversal to consider the application payload.

The application layer in the TCP/IP model is often compared as equivalent to a combination of the fifth (Session), sixth (Presentation), and the seventh (Application) layers of the Open Systems Interconnection (OSI) model.

Furthermore, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.

Comparison of TCP/IP and OSI layering

The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model, which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose a monolithic architecture above the transport layer. For example, the NFS application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.

Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or the entire TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether a hardware layer is assumed below the link layer.

Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.

The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered Harmful".

For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session layer functionality is also realized with the port numbering of the TCP and UDP protocols, which cover the transport layer in the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange. 
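
As an illustrative aside on MIME's presentation role (a sketch using Python's standard email library; the addresses are placeholders), the headers MIME generates describe how the payload is represented, while SMTP merely carries the resulting byte stream:

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"            # placeholder addresses
    msg["To"] = "bob@example.org"
    msg["Subject"] = "MIME as presentation-layer glue"
    msg.set_content("Plain-text body.")
    msg.add_attachment(b"\x00\x01\x02", maintype="application",
                       subtype="octet-stream", filename="data.bin")

    # The generated Content-Type and Content-Transfer-Encoding headers describe the
    # representation of the data; the transport of the message is a separate concern.
    print(msg.as_string()[:300])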

Conflicts are apparent also in the original OSI model, ISO 7498, when not considering the annexes to this model, e.g., the ISO 7498/4 Management Framework, or the ISO 8648 Internal Organization of the Network layer (IONL). When the IONL and Management Framework documents are considered, ICMP and IGMP are defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP.

IETF protocols can be encapsulated recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunneling at the network layer. 

Implementations

The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and IGMPv6 and is often accompanied by an integrated IPSec security layer. 

Application programmers are typically concerned only with interfaces in the application layer and often also in the transport layer, while the layers below are services provided by the TCP/IP stack in the operating system. Most IP implementations are accessible to programmers through sockets and APIs.
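
For instance, a minimal TCP client written against the socket API (shown here through Python's wrapper over BSD sockets; the host and request are only an example) needs just a few calls, with everything below the application layer handled by the operating system's stack.

    import socket

    # Connect to a web server's well-known port; the OS stack handles TCP, IP, and the link layer.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := conn.recv(4096):          # read until the server closes the connection
            response += chunk

    print(response.decode("latin-1").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"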

Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.

Microcontroller firmware in the network adapter typically handles link issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical components below the link layer, typically using an application-specific integrated circuit (ASIC) chipset for each network interface or other physical standard. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out link level switching.

Submarine sandwich

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Submarine_sandwich#Hoagie
 
Submarine sandwich
A submarine sandwich
Place of origin: United States
Region or state: Northeast
Main ingredients: Multiple
Variations: Multiple

A submarine sandwich, also known as a sub, hoagie, hero, or grinder, is a type of sandwich consisting of a length of bread or roll split lengthwise and filled with a variety of meats, cheeses, vegetables, and condiments. The sandwich has no standardized name, with over a dozen variations used around the world.

The terms submarine and sub are widespread and not assignable to any certain region, though many of the localized terms are clustered in the northeastern United States.

History and etymology

The Italian sandwich originated in several different Italian American communities in the Northeastern United States from the late 19th to mid-20th centuries. Portland, Maine, claims to be the birthplace of the Italian sandwich and it is considered Maine's signature sandwich. The popularity of this Italian-American cuisine has grown from its origins in Connecticut, Pennsylvania, Delaware, Maryland, New York, New Jersey, Massachusetts, and Rhode Island to most parts of the United States and Canada, and with the advent of chain restaurants, is now available in many parts of the world.

Submarine

The use of the term "submarine" or "sub" (after the resemblance of the roll to the shape of a submarine) is widespread. While some accounts source the name as originating in New London, Connecticut (site of the United States Navy's primary submarine base) during World War II, written advertisements from 1940 in Wilmington, Delaware, indicate the term originated prior to the United States' entry into World War II.

Fenian Ram submarine, c. 1920
 
One theory says the submarine was brought to the U.S. by Dominic Conti (1874–1954), an Italian immigrant who came to New York in the early 1900s. He is said to have named it after seeing the recovered 1901 submarine called Fenian Ram in the Paterson Museum of New Jersey in 1928. His granddaughter has stated the following:
My grandfather came to this country circa 1895 from Montella, Italy. Around 1910, he started his grocery store, called Dominic Conti's Grocery Store, on Mill Street in Paterson, New Jersey where he was selling the traditional Italian sandwiches. His sandwiches were made from a recipe he brought with him from Italy, which consisted of a long crust roll, filled with cold cuts, topped with lettuce, tomatoes, peppers, onions, oil, vinegar, Italian herbs and spices, salt, and pepper. The sandwich started with a layer of cheese and ended with a layer of cheese (this was so the bread wouldn't get soggy).

Hoagie

Workers read the Hog Island News
 
Salami, ham and cheeses on a hoagie roll
 
The term hoagie originated in the Philadelphia area. The Philadelphia Bulletin reported, in 1953, that Italians working at the World War I–era shipyard in Philadelphia known as Hog Island, where emergency shipping was produced for the war effort, introduced the sandwich by putting various meats, cheeses, and lettuce between two slices of bread. This became known as the "Hog Island" sandwich; shortened to "Hoggies", then the "hoagie". 

Dictionary.com offers the following origin of the term hoagie: an American English (originally Philadelphia) word for "hero, large sandwich made from a long, split roll"; originally hoggie (c. 1936), traditionally said to be named for Big Band songwriter Hoagland Howard "Hoagy" Carmichael (1899–1981), but the use of the word predates his celebrity and the original spelling seems to suggest another source (perhaps "hog"). The modern spelling dates to c. 1945 and may have been altered by influence of Carmichael's nickname.

The Philadelphia Almanac and Citizen's Manual offers a different explanation, that the sandwich was created by early-twentieth-century street vendors called "hokey-pokey men", who sold antipasto salad, meats, cookies and buns with a cut in them. When Gilbert and Sullivan's operetta H.M.S. Pinafore opened in Philadelphia in 1879, bakeries produced a long loaf called the pinafore. Entrepreneurial "hokey-pokey men" sliced the loaf in half, stuffed it with antipasto salad, and sold the world's first "hoagie".

Another explanation is that the word hoagie arose in the late 19th to early 20th century, among the Italian community in South Philadelphia, when "on the hoke" meant that someone was destitute. Deli owners would give away scraps of cheeses and meats in an Italian bread-roll known as a "hokie", but the Italian immigrants pronounced it "hoagie".

Shortly after World War II, there were numerous varieties of the term in use throughout Philadelphia. By the 1940s, the spelling "hoagie" had come to dominate less-used variations like "hoogie" and "hoggie". It is never spelled hoagy. By 1955, restaurants throughout the area were using the term hoagie. Listings in Pittsburgh show hoagies arriving in 1961 and becoming widespread in that city by 1966.

Former Philadelphia mayor (and later Pennsylvania governor) Ed Rendell declared the hoagie the "Official Sandwich of Philadelphia". However, there are claims that the hoagie was actually a product of nearby Chester, Pennsylvania. DiCostanza's in Boothwyn, Pennsylvania, claims that the mother of DiCostanza's owner originated the hoagie in 1925 in Chester. DiCostanza's relates the story that a customer came into the family deli and, through an exchange matching the customer's requests and the deli's offerings, the hoagie was created.

Woolworth's to-go sandwich was called a hoagie in all U.S. stores.
Bánh mì sandwiches are sometimes referred to as "Vietnamese hoagies" in Philadelphia.

Hero

New York style meatball hero with mozzarella
 
The New York term hero is first attested in 1937. The name is sometimes credited to the New York Herald Tribune food writer Clementine Paddleford in the 1930s, but there is no good evidence for this. It is also sometimes claimed that it is related to the gyro, but this is unlikely as the gyro was unknown in the United States until the 1960s.

Hero (plural usually heros, not heroes) remains the prevailing New York City term for most sandwiches on an oblong roll with a generally Italian flavor, in addition to the original described above. Pizzeria menus often include eggplant parmigiana, chicken parmigiana, and meatball heros, each served with sauce. 

Grinder

Pastrami grinder
 
A common term in New England is grinder, but its origin has several possibilities. One theory has the name coming from Italian-American slang for a dock worker, among whom the sandwich was popular. Others say that it was called a grinder because it took a lot of chewing to eat the hard crust of the bread used.

In Pennsylvania, New York, Delaware, and parts of New England, the term grinder usually refers to a hot submarine sandwich (meatball, sausage, etc.), whereas a cold sandwich (e.g., cold cuts) is usually called a "sub". In the Philadelphia area, the term grinder is also applied to any hoagie that is toasted in the oven after assembly, whether or not it is made with traditionally hot ingredients. 

Wedge

The term wedge is used in Westchester County, New York, Putnam County, New York, Dutchess County, New York, and Fairfield County, Connecticut – four counties directly north of New York City.
Some base the name wedge on a diagonal cut in the middle of the sandwich, creating two halves or "wedges", or a "wedge" cut out of the top half of the bread with the fillings "wedged" in between, or a sandwich that is served between two "wedges" of bread. It has also been said wedge is just short for "sandwich", with the name having originated from an Italian deli owner located in Yonkers, who got tired of saying the whole word.

Spukie

The term spukie ("spukkie" or "spuckie") is unique to the city of Boston and derives from the Italian word spuccadella, meaning "long roll". The word spuccadella is not typically found in Italian dictionaries, which may suggest that it is a regional Italian dialect word, or possibly a Boston Italian-American innovation. Spukie is typically heard in parts of Dorchester and South Boston. Some bakeries in Boston's North End neighborhood have homemade spuccadellas for sale.

Other types

A Gatsby sandwich

Popularity and availability

Rolls filled with condiments have been common in several European countries for more than a century, notably in France and Scotland.

In the United States, from its origins with the Italian American labor force in the northeast, the sub began to show up on menus of local pizzerias. As time went on and popularity grew, small restaurants, called hoagie shops and sub shops, that specialized in the sandwich began to open.
Pizzerias may have been among the first Italian-American eateries, but even at the turn of the [20th] century distinctions were clear-cut as to what constituted a true ristorante. To be merely a pizza-maker was to be at the bottom of the culinary and social scale; so many pizzeria owners began offering other dishes, including the hero sandwich (also, depending on the region of the United States, called a 'wedge,' a 'hoagie,' a 'sub,' or a 'grinder') made on an Italian loaf of bread with lots of salami, cheese, and peppers.
— John Mariani, America Eats Out, p. 66
Subs or their national equivalents were already popular in many European, Asian and Australasian countries when late 20th-century franchisee chain restaurants and fast food made them even more popular and increased the prevalence of the word sub. Many outlets offer non-traditional ingredient combinations. 

In the United States, there are many chain restaurants that specialize in subs. Major international chains include Firehouse Subs, Quiznos, Mr. Sub and the largest restaurant chain in the world, Subway. The sandwich is also often available at supermarkets, local delis, and convenience stores such as Wawa, which annually runs a summer sub promotion called Hoagiefest.

Cheesesteak

From Wikipedia, the free encyclopedia
 
Cheesesteak
A cheesesteak sandwich
Alternative names: Philadelphia cheesesteak, Philly cheesesteak
Course: Main course
Place of origin: United States
Region or state: Philadelphia, Pennsylvania
Created by: Pat & Harry Olivieri
Serving temperature: Hot
Main ingredients: Sliced steak, cheese, bread
Variations: Multiple

A cheesesteak (also known as a Philadelphia cheesesteak, Philly cheesesteak, cheesesteak sandwich, cheese steak, or steak and cheese) is a sandwich made from thinly sliced pieces of beefsteak and melted cheese in a long hoagie roll. A popular regional fast food, it has its roots in the U.S. city of Philadelphia, Pennsylvania.
 
 

History

The cheesesteak was developed in the early 20th century "by combining frizzled beef, onions, and cheese in a small loaf of bread", according to a 1987 exhibition catalog published by the Library Company of Philadelphia and the Historical Society of Pennsylvania.

Philadelphians Pat and Harry Olivieri are often credited with inventing the sandwich by serving chopped steak on an Italian roll in the early 1930s. The exact story behind its creation is debated, but in some accounts, Pat and Harry Olivieri originally owned a hot dog stand, and on one occasion, decided to make a new sandwich using chopped beef and grilled onions. While Pat was eating the sandwich, a cab driver stopped by and was interested in it, so he requested one for himself. After eating it, the cab driver suggested that Olivieri quit making hot dogs and instead focus on the new sandwich. They began selling this variation of steak sandwiches at their hot dog stand near South Philadelphia's Italian Market. They became so popular that Pat opened up his own restaurant which still operates today as Pat's King of Steaks. The sandwich was originally prepared without cheese; Olivieri said provolone cheese was first added by Joe "Cocky Joe" Lorenza, a manager at the Ridge Avenue location.

Cheesesteaks have become popular at restaurants and food carts throughout the city with many locations being independently owned, family-run businesses. Variations of cheesesteaks are now common in several fast food chains. Versions of the sandwich can also be found at high-end restaurants. Many establishments outside of Philadelphia refer to the sandwich as a "Philly cheesesteak".

Description


Meat

The meat traditionally used is thinly sliced rib-eye or top round, although other cuts of beef are also used. On a lightly oiled griddle at medium temperature, the steak slices are quickly browned and then scrambled into smaller pieces with a flat spatula. Slices of cheese are then placed over the meat, letting it melt, and then the roll is placed on top of the cheese. The mixture is then scooped up with a spatula and pressed into the roll, which is then cut in half.

Common additions include sautéed onions, sweet and hot peppers, mushrooms, ketchup, hot sauce, salt, and black pepper.

Bread

In Philadelphia, cheesesteaks are invariably served on hoagie rolls. Among several brands, perhaps the most famous are Amoroso rolls; these rolls are long, soft, and slightly salted. One source writes that "a proper cheesesteak consists of provolone or Cheez Whiz slathered on an Amoroso roll and stuffed with thinly shaved grilled meat," while a reader's letter to an Indianapolis magazine, lamenting the unavailability of good cheesesteaks, wrote that "the mention of the Amoroso roll brought tears to my eyes." After commenting on the debates over types of cheese and "chopped steak or sliced", Risk and Insurance magazine declared "The only thing nearly everybody can agree on is that it all has to be piled onto a fresh, locally baked Amoroso roll."

Cheese

A cheesesteak from Pat's King of Steaks with Cheez Whiz and onions
 
American cheese, Cheez Whiz, and provolone are the cheeses or cheese products most commonly used on the Philly cheesesteak.

White American cheese and provolone are the favorites due to their mild flavor and medium consistency. Some establishments melt the American cheese to achieve the creamy consistency, while others place slices over the meat, letting them melt slightly under the heat. Philadelphia Inquirer restaurant critic Craig LaBan says "Provolone is for aficionados, extra-sharp for the most discriminating among them." Geno's owner, Joey Vento, said, "We always recommend the provolone. That's the real cheese."

Cheez Whiz, first marketed in 1952, was not yet available for the original 1930 version, but has spread in popularity. A 1986 New York Times article called Cheez Whiz "the sine qua non of cheesesteak connoisseurs." In a 1985 interview, Pat Olivieri's nephew Frank Olivieri said that he uses "the processed cheese spread familiar to millions of parents who prize speed and ease in fixing the children's lunch for the same reason, because it is fast." Cheez Whiz is "overwhelmingly the favorite" at Pat's, outselling runner-up American by a ratio of eight or ten to one, while Geno's claims to go through eight to ten cases of Cheez Whiz a day.

In 2003, while running for President of the United States, John Kerry made what was considered a major faux pas when campaigning in Philadelphia and went to Pat's King of Steaks and ordered a cheesesteak with Swiss.

Variations

  • A chicken cheesesteak is made with chicken instead of beef.
  • A pizza steak is a cheesesteak topped with marinara sauce and mozzarella cheese and may be toasted in a broiler.
  • A cheesesteak hoagie contains lettuce and tomato in addition to the ingredients found in the traditional steak sandwich, and may contain other elements often served in a hoagie.
  • A vegan cheesesteak is a sandwich that replaces steak and cheese with vegan ingredients, such as seitan or mushrooms for the steak, and soy-based cheese.
  • The Heater, served at Phillies baseball games at Citizens Bank Park, is a spicy variation topped with jalapeños, Buffalo sauce, and jalapeño cheddar.

Engaged theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Engaged_theory

Engaged theory is a methodological framework for understanding social complexity. It takes social life or social relations as its base category, with 'the social' always understood as grounded in 'the natural', including humans as embodied beings. Engaged theory provides a framework that moves from detailed empirical analysis about things, people and processes in the world to abstract theory about the constitution and social framing of those things, people and processes.

Engaged theory is one approach within the broader tradition of critical theory. Engaged theory crosses the fields of sociology, anthropology, political studies, history, philosophy, and global studies. At its most general, the term engaged theory is used to describe theories that provide a tool box for engaging with the world while seeking to change it.

One lineage of engaged theory is called the 'constitutive abstraction' approach associated with a group of writers publishing in Arena Journal such as John Hinkson, Geoff Sharp (1926–2015), and Simon Cooper.

A related lineage of engaged theory has been developed by researchers who began their association through the Centre for Global Research at Royal Melbourne Institute of Technology in Australia – scholars such as Manfred Steger, Paul James and Damian Grenfell – drawing upon a range of writers from Pierre Bourdieu to Benedict Anderson and Charles Taylor. A group of researchers at Western Sydney University describe their work as 'Engaged Research'.

The politics of engagement

For all of its concern for epistemological grounding (see below), Engaged theory is an approach that is 'in the world'. All theory in some way affects what happens in the world, but it does not always theorize its own place in the constitution of ideas and practices. Anthony Giddens calls this movement a double hermeneutic. Engaged theory is more explicit than most about its political standpoint. Carol J. Adams expresses one dimension of this when she writes: 


However, the other important dimension is that any theory needs to be aware of its own tendencies to be ideologically driven by dominant concerns of its day. Liberalism, for example, with its reductive advocacy of the ideology of 'freedom', fails to be reflexive about this dimension. Similarly, critical theory sometimes fails to be reflexive of what it means to be critical or advocate social change.

The grounding of analysis

All social theories are dependent upon a process of abstraction. This is what philosophers call epistemological abstraction. However, they do not characteristically theorize their own bases for establishing their standpoint. Engaged theory does. By comparison, Grounded theory, a very different approach, suggests that empirical data collection is a neutral process that gives rise to theoretical claims out of that data. Engaged theory, to the contrary, treats such a claim to value neutrality as naively unsustainable. Engaged theory is thus reflexive in a number of ways:
  • Firstly, it recognises that doing something as basic as collecting data already entails making theoretical presuppositions.
  • Secondly, it names the levels of analysis from which theoretical claims are made. Engaged theory works across four levels of theoretical abstraction. (See below: Modes of Analysis.)
  • Thirdly, it makes a clear distinction between theory and method, suggesting that a social theory is an argument about a social phenomenon, while an analytical method or set of methods is defined as a means of substantiating that theory. Engaged theory in these terms works as a 'Grand method', but not a 'Grand theory'. It provides an integrated set of methodological tools for developing different theories of things and processes in the world.
  • Fourthly, it seeks to understand both its own epistemological basis, while treating knowledge formation as one of the basic ontological categories of human practice.
  • Fifthly, it treats history as a modern way of understanding temporal change, and therefore as ontologically different from a tribal saga or cosmological narrative. In other words, it provides a meta-standpoint on its own capacity to historicize.

The modes of analysis

In the version of Engaged theory developed by an Australian-based group of writers, analysis moves from the most concrete form of analysis – empirical generalization – to more abstract modes of analysis. Each subsequent mode of analysis is more abstract than the previous one moving across the following themes: 1. doing, 2. acting, 3. relating, 4. being.

This leads to the 'levels' approach as set out below: 

1. Empirical analysis (ways of doing)

The method begins by emphasizing the importance of a first-order abstraction, here called empirical analysis. It entails drawing out and generalizing from on-the-ground detailed descriptions of history and place. This first level either involves generating empirical description based on observation, experience, recording or experiment—in other words, abstracting evidence from that which exists or occurs in the world—or it involves drawing upon the empirical research of others. The first level of analytical abstraction is an ordering of ‘things in the world’, in a way that does not depend upon any kind of further analysis being applied to those ‘things’.

For example, the Circles of Sustainability approach is a form of engaged theory distinguishing (at the level of empirical generalization) between different domains of social life. It can be used for understanding and assessing quality of life. Although that approach is also analytically defended through more abstract theory, the claim that economics, ecology, politics and culture can be distinguished as central domains of social practice has to be defensible at an empirical level. It needs to be useful in analysing situations on the ground.

The success or otherwise of the method can be assessed by examining how it is used. One example of use of the method was a project on Papua New Guinea called Sustainable Communities, Sustainable Development.

2. Conjunctural analysis (ways of acting)

This second level of analysis, conjunctural analysis, involves identifying and, more importantly, examining the intersection (the conjunctures) of various patterns of action (practice and meaning). Here the method draws upon established sociological, anthropological and political categories of analysis such as production, exchange, communication, organization and inquiry. 

3. Integrational analysis (ways of relating)

This third level of entry into discussing the complexity of social relations examines the intersecting modes of social integration and differentiation. These different modes of integration are expressed here in terms of different ways of relating to and distinguishing oneself from others—from the face-to-face to the disembodied. Here we see a break with the dominant emphases of classical social theory and a movement towards a post-classical sensibility. In relation to the nation-state, for example, we can ask how it is possible to explain a phenomenon that, at least in its modern variant, subjectively explains itself by reference to face-to-face metaphors of blood and place—ties of genealogy, kinship and ethnicity—when the objective ‘reality’ of all nation-states is that they are disembodied communities of abstracted strangers who will never meet. This accords with Benedict Anderson's conception of 'imagined communities', but recognizes the contradictory formation of that kind of community.

4. Categorical analysis (ways of being)

This level of enquiry is based upon an exploration of the ontological categories (categories of being such as time and space). If the previous form of analysis emphasizes the different modes through which people live their commonalities with or differences from others, those same themes are examined through more abstract analytical lenses of different grounding forms of life: respectively, embodiment, spatiality, temporality, performativity and epistemology. At this level, generalizations can be made about the dominant modes of categorization in a social formation or in its fields of practice and discourse. It is only at this level that it makes sense to generalize across modes of being and to talk of ontological formations, societies as formed in the uneven dominance of formations of tribalism, traditionalism, modernism or postmodernism.

Applied linguistics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Applied_linguistics

Applied linguistics is an interdisciplinary field which identifies, investigates, and offers solutions to language-related real-life problems. Some of the academic fields related to applied linguistics are education, psychology, communication research, anthropology, and sociology.

History

The tradition of applied linguistics established itself in part as a response to the narrowing of focus in linguistics with the advent in the late 1950s of generative linguistics, and has always maintained a socially-accountable role, demonstrated by its central interest in language problems.

Although the field of applied linguistics started in Europe and the United States, it rapidly flourished in the international context.

Applied linguistics first concerned itself with principles and practices on the basis of linguistics. In the early days, applied linguistics was thought of as “linguistics-applied”, at least from outside the field. In the 1960s, however, applied linguistics expanded to include language assessment, language policy, and second language acquisition. As early as the 1970s, applied linguistics became a problem-driven field rather than a branch of theoretical linguistics, concerned with the solution of language-related problems in the real world. By the 1990s, applied linguistics had broadened to include critical studies and multilingualism. Research in applied linguistics shifted to "the theoretical and empirical investigation of real world problems in which language is a central issue."

In the United States, applied linguistics also began narrowly as the application of insights from structural linguistics—first to the teaching of English in schools and subsequently to second and foreign language teaching. The linguistics-applied approach to language teaching was promulgated most strenuously by Leonard Bloomfield, who developed the foundation for the Army Specialized Training Program, and by Charles C. Fries, who established the English Language Institute (ELI) at the University of Michigan in 1941. In 1946, applied linguistics became a recognized field of study at that university. In 1948, the Research Club at Michigan established Language Learning: A Journal of Applied Linguistics, the first journal to bear the term applied linguistics. In the late 1960s, applied linguistics began to establish its own identity as an interdisciplinary field of linguistics concerned with real-world language issues. The new identity was solidified by the creation of the American Association for Applied Linguistics in 1977.

Associations

The International Association of Applied Linguistics was founded in France in 1964, where it is better known as Association Internationale de Linguistique Appliquée, or AILA. AILA has affiliates in more than thirty countries, some of which are listed below. 

Australia

Australian applied linguistics took as its target the applied linguistics of mother tongue teaching and teaching English to immigrants. The Australian tradition shows a strong influence of continental Europe and of the USA, rather than of Britain. The Applied Linguistics Association of Australia (ALAA) was established at a national congress of applied linguists held in August 1976. ALAA holds a joint annual conference in collaboration with the Association for Applied Linguistics in New Zealand (ALANZ). 

Canada

The Canadian Association of Applied Linguistics / L’Association canadienne de linguistique appliquée (CAAL/ACLA), is an officially bilingual (English and French) scholarly association with approximately 200 members. They produce the Canadian Journal of Applied Linguistics and hold an annual conference.

Ireland

The Irish Association for Applied Linguistics/Cumann na Teangeolaíochta Feidhmí (IRAAL) was founded in 1975. They produce the journal Teanga, the Irish word for 'language'.

Japan

In 1982, the Japan Association of Applied Linguistics (JAAL) was established in the Japan Association of College English Teachers (JACET) in order to engage in activities on a more international scale. In 1984, JAAL became an affiliate of the International Association of Applied Linguistics (AILA).

New Zealand

The Applied Linguistics Association of New Zealand (ALANZ) produces the journal New Zealand Studies in Applied Linguistics and has been collaborating with the Applied Linguistics Association of Australia in a combined annual conference since 2010, with the Association for Language Testing and Assessment of Australia and New Zealand (ALTAANZ) later joining the now three-way conference collaboration.

South Africa

The Southern African Applied Linguistics Association (SAALA) was founded in 1980. There are currently four publications associated with SAALA including the Southern African Linguistics and Applied Language Studies Journal (SAJALS).

United Kingdom

The British Association for Applied Linguistics (BAAL) was established in 1967. Its mission is "the advancement of education by fostering and promoting, by any lawful charitable means, the study of language use, language acquisition and language teaching and the fostering of interdisciplinary collaboration in this study [...]". BAAL hosts an annual conference, as well as many additional smaller conferences and events organised by its Special Interest Groups (SIGs). 

United States

The American Association for Applied Linguistics (AAAL) was founded in 1977. AAAL holds an annual conference, usually in March or April, in the United States or Canada.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...