Tuesday, May 30, 2023

Boltzmann distribution

From Wikipedia, the free encyclopedia
Boltzmann's distribution is an exponential distribution.
Boltzmann factor (vertical axis) as a function of temperature T for several energy differences εi − εj.

In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:

    pi ∝ exp(−εi / kT)

where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and the thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant).

The term system here has a wide meaning; it can range from a single atom or a collection of a 'sufficient number' of atoms to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.

The ratio of probabilities of two states is known as the Boltzmann factor and characteristically depends only on the states' energy difference:

    pi / pj = exp((εj − εi) / kT)

The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.

The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.

The distribution

The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as

    pi = (1/Q) exp(−εi / kT) = exp(−εi / kT) / Σj exp(−εj / kT)

where the sum runs over all M states j.

where:

  • exp() is the exponential function,
  • pi is the probability of state i,
  • εi is the energy of state i,
  • k is the Boltzmann constant,
  • T is the absolute temperature of the system,
  • M is the number of all states accessible to the system of interest,
  • Q (denoted by Z by some authors) is the normalization denominator, which is the canonical partition function:
    Q = Σj exp(−εj / kT)  (sum over all M states j)
    It results from the constraint that the probabilities of all accessible states must add up to 1.

The Boltzmann distribution is the distribution that maximizes the entropy

    H(p1, p2, ..., pM) = −Σi pi log pi

subject to the normalization constraint that Σi pi = 1 and the constraint that Σi pi εi equals a particular mean energy value (which can be proven using Lagrange multipliers).

The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
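
As an illustrative sketch (not part of the original article), the partition function and state probabilities can be computed directly from a list of state energies; the energies and temperature below are arbitrary placeholder values:

    import math

    K_B = 8.617333262e-5  # Boltzmann constant in eV/K

    def boltzmann_probabilities(energies_eV, temperature_K):
        """Return pi = exp(-ei/kT) / Q for each state i."""
        kT = K_B * temperature_K
        # Shift by the minimum energy for numerical stability; the shift
        # cancels between numerator and denominator, so the pi are unchanged.
        e_min = min(energies_eV)
        weights = [math.exp(-(e - e_min) / kT) for e in energies_eV]
        Q = sum(weights)  # partition function (up to the constant shift factor)
        return [w / Q for w in weights]

    # Hypothetical three-level system at room temperature (illustrative values only)
    print(boltzmann_probabilities([0.0, 0.05, 0.10], 300.0))
    # Lower-energy states always come out more probable, as stated above.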

The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of two states being occupied. The ratio of probabilities for states i and j is given as

    pi / pj = exp((εj − εi) / kT)

where:

  • pi is the probability of state i,
  • pj the probability of state j,
  • εi is the energy of state i,
  • εj is the energy of state j.

The corresponding ratio of populations of energy levels must also take their degeneracies into account.

The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over the bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i:

    pi = Ni / N

where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is

    Ni / N = exp(−εi / kT) / Σj exp(−εj / kT)

This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
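
As a rough worked example (the numbers here are chosen purely for illustration and are not from the article): for a state lying 0.1 eV above the ground state at T = 300 K, kT ≈ 0.026 eV, so the relative population is exp(−0.1/0.026) ≈ 0.02. Only about 2% as many particles occupy the upper state as the ground state, so a line originating from that state would be correspondingly weak at room temperature.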

The softmax function commonly used in machine learning is related to the Boltzmann distribution:

    (p1, ..., pM) = softmax(−ε1/kT, ..., −εM/kT)
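
A minimal numerical check of this correspondence (placeholder energies and temperature; not from the original article): the Boltzmann probabilities are exactly the softmax of the scaled negative energies −εi/kT.

    import math

    def softmax(xs):
        """Standard softmax: exp(xi) / sum of exp(xj), shifted for stability."""
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    K_B = 8.617333262e-5          # Boltzmann constant, eV/K
    energies = [0.0, 0.05, 0.10]  # placeholder state energies in eV
    kT = K_B * 300.0

    print(softmax([-e / kT for e in energies]))
    # Matches the Boltzmann probabilities of the same states at 300 K.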

Generalized Boltzmann distribution

A distribution of the form

is called the generalized Boltzmann distribution by some authors.

The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. It is usually derived from the principle of maximum entropy, but there are other derivations.

The generalized Boltzmann distribution has the following properties:

In statistical mechanics

The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:

Canonical ensemble (general case)
The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form.
Statistical frequencies of subsystems' states (in a non-interacting collection)
When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form.
Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles)
In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form.

Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:

  • When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead.
  • If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and may not even have an analytical solution. The canonical ensemble can, however, still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium.
  • With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively.

In mathematics

In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, the restricted Boltzmann machine, energy-based models and the deep Boltzmann machine. The Boltzmann machine is considered an unsupervised learning model. As the number of nodes in a Boltzmann machine grows, implementing it in real-time applications becomes increasingly difficult, so a restricted architecture, the restricted Boltzmann machine, was introduced.

In economics

The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.

The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.

Super grid

From Wikipedia, the free encyclopedia
One conceptual plan of a super grid linking renewable sources across North Africa, the Middle East and Europe. (DESERTEC)

A super grid or supergrid is a wide-area transmission network, generally trans-continental or multinational, that is intended to make possible the trade of high volumes of electricity across great distances. It is sometimes also referred to as a "mega grid". Super grids are typically proposed to use high-voltage direct current (HVDC) to transmit electricity over long distances. The latest generation of HVDC power lines can transmit energy with losses of only 1.6% per 1,000 km.
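
Taking the quoted loss figure at face value, the end-to-end efficiency of a longer link can be roughly estimated by compounding the loss per 1,000 km segment; the sketch below illustrates that arithmetic only and is not a statement about any particular line:

    # Rough illustration: compound the quoted 1.6% loss per 1,000 km.
    # Real losses depend on line design, voltage and loading.
    LOSS_PER_1000_KM = 0.016

    def delivered_fraction(distance_km):
        return (1.0 - LOSS_PER_1000_KM) ** (distance_km / 1000.0)

    for d in (1000, 3000, 5000):
        print(f"{d} km: {delivered_fraction(d):.1%} delivered")
    # About 95% of the sent energy arrives over 3,000 km under this simple model.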

Super grids could support a global energy transition by smoothing local fluctuations of wind energy and solar energy. In this context they are considered as a key technology to mitigate global warming.

History

The idea of creating long-distance transmission lines in order to take advantage of renewable sources distantly located is not new. In the US in the 1950s, a proposal was made to ship hydroelectric power from dams being constructed in the Pacific Northwest to consumers in Southern California, but it was opposed and scrapped. In 1961, U.S. president John F. Kennedy authorized a large public works project using new high-voltage, direct current technology from Sweden. The project was undertaken as a close collaboration between General Electric of the U.S. and ASEA of Sweden, and the system was commissioned in 1970. With several upgrades of the converter stations in the intervening decades, the system now has a capacity of 3,100 MW and is known as the Pacific DC Intertie.

The concept of a "super grid" dates back to the 1960s and was used to describe the emerging unification of the Great Britain grid. In the code that governs the British Grid, the Grid Code, the Supergrid is currently defined – and has been since this code was first written, in 1990 – as referring to those parts of the British electricity transmission system that are connected at voltages in excess of 200 kV (200,000 volts). British power system planners and operational staff therefore invariably speak of the Supergrid in this context; in practice the definition used captures all of the equipment owned by the National Grid company in England and Wales, and no other equipment.

What has changed during the past 40 years is the scale of energy and distances that are imagined possible in a super grid. Europe began unifying its grids in the 1950s and its largest unified grid is the synchronous grid of Continental Europe serving 24 countries. Serious work is being conducted on unification of this synchronous European grid (previously known as the UCTE grid), with the neighboring synchronous transmission grid of some CIS countries, the IPS/UPS grid. If completed, the resulting massive grid would span 13 time zones stretching from the Atlantic to the Pacific.

While such grids cover great distances, the capacity to transmit large volumes of electricity remains limited due to congestion and control issues. The SuperSmart Grid (Europe) and the Unified Smart Grid (US) specify major technological upgrades that proponents claim are necessary to assure the practical operation and promised benefits of such transcontinental mega grids.

Concept

In current usage, "super grid" has two senses: one of being a superstructure layer overlaid or superimposed upon existing regional transmission grids, and the second of having some set of superior abilities exceeding those of even the most advanced grids.

Mega grid

In the "overlay", or "superstructure" meaning, a super grid is a very long-distance equivalent of a wide area synchronous network capable of large-scale transmission of renewable electricity. In some conceptions, a transmission grid of HVDC transmission lines forms a layer that is distinctly separate in the way that a superhighway system is separate from the system of city streets and regional highways. In more conventional conceptions such as the proposed unification of the synchronous European grid UCTE and the IPS/UPS system of the CIS, such a mega grid is no different from typical wide area synchronous transmission systems where electricity takes an ad hoc transit route directly through local utility transmission lines or HVDC lines as required. Studies for such continental sized systems report there are scaling problems as a result of network complexity, transmission congestion, and the need for rapid diagnostic, coordination and control systems. Such studies observe that transmission capacity would need to be significantly higher than current transmission systems in order to promote unimpeded energy trading across distances unbounded by state, regional or national, or even continental borders. As a practical matter, it has become necessary to incorporate smart grid features such as wide area sensor networks (WAMS) into even modest-sized regional grids in order to avert major power outages such as the Northeast Blackout of 2003. Dynamic interactions between power generation groups are increasingly complex, and transient disturbances that cascade across neighboring utilities can be sudden, large and violent, accompanied by abrupt changes in the network topology as operators attempt to manually stabilize the network.

Superior grid

In the second sense of an advanced grid, the super grid is superior not only because it is a wide area mega grid, but also because it is highly coordinated, from a macro level spanning nations and continents all the way down to the micro level of scheduling low-priority loads such as water heaters and refrigeration. In the European SuperSmart Grid proposal and the US Unified Smart Grid concept, such super grids have intelligence features in the wide-area transmission layer which integrate the local smart grids into a single wide-area super grid. This is similar to how the Internet bound together multiple small networks into a single ubiquitous network.

Wide area transmission can be viewed as a horizontal extension of the smart grid. In a paradigm shift, the distinction between transmission and distribution blurs with the integration as energy flow becomes bidirectional. For example, distribution grids in rural areas might generate more energy than they use, turning the local smart grid into a virtual power plant, or a city's fleet of one million electric vehicles could be used to trim peaks in transmission supply by integrating them to the smart grid using vehicle to grid technology.

A 765 kV AC transmission grid designed to carry 400 GW of wind power to cities from the Midwest, at a cost of $60 billion.

One advantage of such a geographically dispersed and dynamically balanced system is that the need for baseload generation is significantly reduced, since the intermittency of some sources such as ocean, solar, and wind can be smoothed. A series of detailed modeling studies by Dr. Gregor Czisch, which looked at the Europe-wide adoption of renewable energy and the interlinking of power grids using HVDC cables, indicates that Europe's entire power usage could come from renewables, with 70% of total energy from wind, at a level of cost the same as or lower than at present.

To some critics, such a wide area transmission layer is not novel; they point out that the technology has little difference from that used for regional and national power transmission networks. Proponents respond that beyond the qualitative smart grid features that allow instantaneous coordination and balancing of intermittent power sources across international boundaries, the quantitative comprehensiveness has a quality all its own. The claim is made that super grids open up markets. In the same way that freeways revolutionized interstate transport and the Internet revolutionized online commerce when comprehensive high-capacity networks were built, it is argued that a high capacity super grid must be built in order to provide a distribution network so comprehensive and with such available capacity that energy trading is only limited by how much electricity entrepreneurs can bring to market.

Technology

Wide-area super grid plans typically call for bulk transmission using high-voltage direct current lines. Europe's SuperSmart Grid proposal relies on HVDC, and in the US, key decision makers such as Steven Chu favor a national long-distance DC grid system. There are also industry advocates of high-voltage alternating current (HVAC). Although flexible alternating current transmission systems (FACTS) have drawbacks for long distances, American Electric Power has championed a 765 kV super grid they call I-765 that would provide the 400 GW of extra transmission capacity required for producing 20% of US energy from wind farms based in the Midwest (see figure above). Advocates of HVAC systems point out that HVDC systems are oriented toward point-to-point bulk transmission, and multiple connections to them would require expensive, complex communication and control equipment, as opposed to the simple step-up transformers needed if AC lines were used. Currently, there is only one multipoint long-distance HVDC transmission system. In the more distant future, the resistive losses of current methods could be avoided using experimental superconducting "SuperGrid" technology, in which the transmission cable is cooled by a liquid hydrogen pipeline that is also used to move energy nationwide. The energy losses for creating, containing, and re-cooling liquid hydrogen would need to be accounted for.

Coordination and control of the network would use smart grid technologies such as phasor measurement units to rapidly detect imbalances in the network caused by fluctuating renewable energy sources and potentially respond instantaneously with programmed automatic protection schemes to reroute, reduce load, or reduce generation in response to network disturbances.

Government policy

China supports the idea of a global, intercontinental super grid. For a super grid in the US, one study estimated an 80% reduction of greenhouse gas emissions in combination with the installation of renewable energy; such a grid is currently in the planning stage.

Significant scale

One study for a European super grid estimates that as much as 750 GW of extra transmission capacity would be required – capacity that would be accommodated in increments of 5 GW with HVDC lines. A recent proposal by Transcanada priced a 1,600-km, 3 GW HVDC line at US$3 billion; it would require a corridor 60 meters wide. In India, a recent 6 GW, 1,850-km proposal was priced at $790 million and would require a 69 meter wide right of way. With 750 GW of new HVDC transmission capacity required for a European super grid, the land and money needed for new transmission lines would be considerable.
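
Using only the figures quoted above, a back-of-the-envelope comparison of transmission cost per unit of capacity and distance is possible; the calculation below is illustrative and not from the original article:

    # Cost per GW-km implied by the two quoted proposals.
    proposals = {
        "Transcanada, 1,600 km, 3 GW": (3_000_000_000, 3, 1600),
        "India, 1,850 km, 6 GW":       (790_000_000, 6, 1850),
    }
    for name, (cost_usd, gw, km) in proposals.items():
        print(f"{name}: ${cost_usd / (gw * km):,.0f} per GW-km")
    # The two quoted proposals differ by nearly an order of magnitude on this metric.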

Energy independence

In Europe, the energy security implication of a super grid has been discussed as a way in part to prevent Russian energy hegemony. In the US, advocates such as T. Boone Pickens have promoted the idea of a national transmission grid in order to promote United States energy independence. Al Gore advocates the Unified Smart Grid which has comprehensive super grid capabilities. Gore and other advocates such as James E. Hansen believe super grids are essential for the eventual complete replacement of the greenhouse gas producing fossil fuel use that feeds global warming.

Permits for corridors

Large amounts of land would be required for the electricity transmission corridors used by the new transmission lines of a super grid. There can be significant opposition to the siting of power lines out of concerns about visual impact, anxiety over perceived health issues, and environmental concerns. The US has a process of designating National Interest Electric Transmission Corridors, and it is likely that this process would be used to specify the pathways for a super grid in that country. In the EU, permitting for new overhead lines can easily take 10 years. In some cases, this has made underground cable more expedient: since the land required can be one fifth of that needed for overhead lines and the permitting process can be significantly faster, underground cable can be more attractive despite its weaknesses of being more expensive, lower in capacity, shorter-lived, and subject to significantly longer down times.

Business interests

Siting

Just as superhighways change land valuations by bringing the ability to transport valuable commodities close at hand, businesses are strongly motivated to influence the siting of a super grid to their benefit. The cost of alternative power is the delivered price of electricity, and if the production of electricity from North Dakota wind or Arizona solar is to be competitive, the distance of the connection from the wind farm to the interstate transmission grid must not be great. This is because the feeder line from the generator to the transmission lines is usually paid for by the owner of the generation. Some localities will help pay for the cost of these lines, at the price of accepting local regulation such as that of a public utilities commission. T. Boone Pickens' project has chosen to pay for the feeder lines privately. Some localities, such as Texas, give such projects the power of eminent domain, which allows companies to seize land in the path of the planned construction.

Technology preferences

Energy producers are interested in whether the super grid employs HVDC technology or AC, because the cost of connecting to an HVDC line is generally greater than the cost of connecting to an AC line. The Pickens plan favors 765 kV AC transmission, which is considered less efficient for long-distance transmission.

Competition

In the 1960s, private California power companies opposed the Pacific Intertie project with a set of technical objections that were overruled. When the project was completed, consumers in Los Angeles saved approximately U.S. $600,000 per day by use of electric power from projects on the Columbia River rather than local power companies burning more expensive fossil fuel.

Firewalking

From Wikipedia, the free encyclopedia
Firewalking in Sri Lanka

Firewalking is the act of walking barefoot over a bed of hot embers or stones. It has been practiced by many people and cultures in many parts of the world, with the earliest known reference dating from Iron Age India c. 1200 BCE. It is often used as a rite of passage, as a test of strength and courage, and in religion as a test of faith.

Modern physics has explained the phenomenon, concluding that the foot does not touch the hot surface long enough to burn and that embers are poor conductors of heat.

History

Walking on fire has existed for several thousand years, with records dating back to 1200 BCE. Cultures across the globe use firewalking for rites of healing, initiation, and faith.

Firewalking is also practiced by:

Persistence and functions

Social theorists have long argued that the performance of intensely arousing collective events such as firewalking persists because it serves some basic socialising function, such as social cohesion, team building, and so on. Émile Durkheim attributed this effect to the theorized notion of collective effervescence, whereby collective arousal results in a feeling of togetherness and assimilation. A scientific study conducted during a fire-walking ritual at the village of San Pedro Manrique, Spain, showed synchronized heart rate rhythms between performers of the firewalk and non-performing spectators. Notably, levels of synchronicity also depended on social proximity. This research suggests that there is a physiological foundation for collective religious rituals, through the alignment of emotional states, which strengthens group dynamics and forges a common identity amongst participants.

Explanation

Per the second law of thermodynamics, when two bodies of different temperatures meet, the hotter body will cool off, and the cooler body will heat up, until they are separated or until they meet at a temperature in between. What that temperature is, and how quickly it is reached, depends on the thermodynamic properties of the two bodies. The important properties are temperature, density, specific heat capacity, and thermal conductivity.

The square root of the product of thermal conductivity, density, and specific heat capacity is called thermal effusivity, and tells how much heat energy the body absorbs or releases in a certain amount of time per unit area when its surface is at a certain temperature. Since the heat taken in by the cooler body must be the same as the heat given by the hotter one, the surface temperature must lie closer to the temperature of the body with the greater thermal effusivity. The bodies in question here are human feet (which mainly consist of water) and burning coals.
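
A minimal sketch of this argument, assuming the standard effusivity formula e = sqrt(k·ρ·c) and the usual effusivity-weighted contact temperature for two bodies pressed together; the material properties below are rough illustrative values, not figures from the article:

    import math

    def effusivity(k, rho, cp):
        """Thermal effusivity e = sqrt(k * rho * cp)."""
        return math.sqrt(k * rho * cp)

    def contact_temperature(t1, e1, t2, e2):
        """Effusivity-weighted surface temperature when two bodies touch."""
        return (e1 * t1 + e2 * t2) / (e1 + e2)

    # Rough assumed properties: water-like foot tissue vs. glowing wood charcoal.
    # k in W/(m*K), density in kg/m^3, specific heat in J/(kg*K)
    e_foot = effusivity(0.6, 1000.0, 4184.0)  # roughly 1600
    e_coal = effusivity(0.1, 300.0, 1000.0)   # roughly 170

    print(contact_temperature(37.0, e_foot, 538.0, e_coal))
    # Roughly 86 C: the contact surface sits far closer to the foot's temperature
    # than to the coal's, and the brief contact time limits heat transfer further.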

Based on these properties, David Willey, professor of physics at the University of Pittsburgh at Johnstown, points out that firewalking is explainable in terms of basic physics and is neither supernatural nor paranormal. Willey notes that most fire-walks occur on coals that measure about 1,000 °F (538 °C), but he once recorded someone walking on 1,800 °F (980 °C) coals.

Additionally, Jearl Walker has postulated that walking over hot coals with wet feet may insulate the feet due to the Leidenfrost effect.

Factors that prevent burning

  • Water has a very high specific heat capacity (4.184 J/(g·K)), whereas embers have a very low one. Therefore, the foot's temperature tends to change less than the coal's.
  • Water also has a high thermal conductivity, and on top of that, the rich blood flow in the foot will carry away the heat and spread it. On the other hand, embers have a poor thermal conductivity, so the hotter body consists only of the parts of the embers which are close to the foot.
  • When the embers cool down, their temperature sinks below the flash point, so they stop burning, and no new heat is generated.
  • Firewalkers do not spend very much time on the embers, and they keep moving.

Risks when firewalking

  • People have burned their feet when they remained in the fire for too long, giving the embers' slow heat conduction time to catch up.
  • One is more likely to be burned when running through the embers since running pushes one's feet deeper into the embers, resulting in the top of the feet being burnt.
  • Foreign objects in the embers may result in burns. Metal is especially dangerous since it has a high thermal conductivity.
  • Embers which have not burned long enough can burn feet more quickly, because they still contain water, which increases both their heat capacity and their thermal conductivity. The water must already have evaporated by the time the firewalk starts.
  • Wet feet can cause embers to cling to them, increasing the exposure time.

A persistent myth is that safe firewalking requires the aid of a supernatural force, strong faith, or an individual's ability to focus on "mind over matter".

Since the 20th century, this practice has often been used in corporate and team-building seminars and self-help workshops as a confidence-building exercise.

 

Wireless LAN

From Wikipedia, the free encyclopedia
This notebook computer is connected to a wireless access point using a PC Card wireless card.
 
An example of a Wi-Fi network

A wireless LAN (WLAN) is a wireless computer network that links two or more devices using wireless communication to form a local area network (LAN) within a limited area such as a home, school, computer laboratory, campus, or office building. This gives users the ability to move around within the area and remain connected to the network. Through a gateway, a WLAN can also provide a connection to the wider Internet.

Wireless LANs based on the IEEE 802.11 standards are the most widely used computer networks in the world. These are commonly called Wi-Fi, which is a trademark belonging to the Wi-Fi Alliance. They are used for home and small office networks that link together laptop computers, printers, smartphones, Web TVs and gaming devices with a wireless router, which links them to the internet. Hotspots provided by routers at restaurants, coffee shops, hotels, libraries, and airports allow consumers to access the internet with portable wireless devices.

History

Norman Abramson, a professor at the University of Hawaii, developed the world's first wireless computer communication network, ALOHAnet. The system became operational in 1971 and included seven computers deployed over four islands that communicated with the central computer on the island of Oahu without using phone lines.

54 Mbit/s WLAN PCI Card (802.11g)

Wireless LAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by technical standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name).

Beginning in 1991, a European alternative known as HiperLAN/1 was pursued by the European Telecommunications Standards Institute (ETSI) with a first version approved in 1996. This was followed by a HiperLAN/2 functional specification with ATM influences accomplished February 2000. Neither European standard achieved the commercial success of 802.11, although much of the work on HiperLAN/2 has survived in the physical specification (PHY) for IEEE 802.11a, which is nearly identical to the PHY of HiperLAN/2.

In 2009 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are dual-band and able to utilize both wireless bands. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band also has more channels than the 2.4 GHz band, permitting a greater number of devices to share the space. Not all WLAN channels are available in all regions.

A HomeRF group formed in 1997 to promote a technology aimed at residential use, but it disbanded in January 2003.

Architecture

Stations

All components that can connect into a wireless medium in a network are referred to as stations. All stations are equipped with wireless network interface controllers. Wireless stations fall into two categories: wireless access points (WAPs) and clients. WAPs are base stations for the wireless network. They transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, VoIP phones and other smartphones, or non-portable devices such as desktop computers, printers, and workstations that are equipped with a wireless network interface.

Service set

The basic service set (BSS) is the set of all stations that can communicate with each other at the PHY layer. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.

There are two types of BSS: the independent BSS (also referred to as IBSS) and the infrastructure BSS. An independent BSS (IBSS) is an ad hoc network that contains no access points, which means it cannot connect to any other basic service set. In an IBSS the STAs are configured in ad hoc (peer-to-peer) mode.

An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID which is a 32-byte (maximum) character string.

A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells. DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or Mesh protocols, though other systems are in use.

Types of wireless LANs

The IEEE 802.11 has two basic modes of operation: infrastructure and ad hoc mode. In ad hoc mode, mobile units communicate directly peer-to-peer. In infrastructure mode, mobile units communicate through a wireless access point (WAP) that also serves as a bridge to other networks such as a local area network or the Internet.

Since wireless communication uses a more open medium than wired LANs, the 802.11 designers also included encryption mechanisms to secure wireless computer networks: Wired Equivalent Privacy (WEP), which is no longer considered secure, and Wi-Fi Protected Access (WPA, WPA2, WPA3). Many access points also offer Wi-Fi Protected Setup, a quick but no longer secure method of joining a new device to an encrypted network.

Infrastructure

Most Wi-Fi networks are deployed in infrastructure mode. In infrastructure mode, wireless clients, such as laptops and smartphones, connect to the WAP to join the network. The WAP usually has a wired network connection and may have permanent wireless connections to other WAPs.

WAPs are usually fixed and provide service to their client nodes within range. Some networks will have multiple WAPs using the same SSID and security arrangement. In that case, connecting to any WAP on that network joins the client to the network, and the client software will try to choose the WAP that gives the best service, such as the WAP with the strongest signal.

Peer-to-peer

Peer-to-Peer or ad hoc wireless LAN

An ad hoc network is a network where stations communicate only peer-to-peer (P2P). There is no base and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS). A Wi-Fi Direct network is a different type of wireless network where stations communicate peer-to-peer. In a peer-to-peer network wireless devices within range of each other can discover and communicate directly without involving central access points.

In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in a Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as autonomous group owner (autonomous GO). In the second method, called negotiation-based group creation, two devices compete based on the group owner intent value. The device with the higher intent value becomes the group owner and the second device becomes a client. The group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, the available power in the wireless device, whether the wireless device is already a group owner in another group, or the received signal strength of the first wireless device.
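
A highly simplified sketch of the negotiation-based method just described (it models only the comparison of intent values, not the actual Wi-Fi Direct frame exchange; the device names, values, and random tie-breaker are assumptions for illustration):

    import random
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        go_intent: int  # group owner intent value advertised during negotiation

    def negotiate_group_owner(a, b):
        """Negotiation-based group creation: the higher intent value wins.
        The random tie-breaker is an assumption made for this sketch."""
        if a.go_intent != b.go_intent:
            return a if a.go_intent > b.go_intent else b
        return random.choice([a, b])

    phone = Device("phone", go_intent=7)
    printer = Device("printer", go_intent=3)
    owner = negotiate_group_owner(phone, printer)
    print(f"{owner.name} becomes group owner; the other device joins as a client")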

Hidden node problem: Devices A and C are both communicating with B, but are unaware of each other

IEEE 802.11 defines the PHY and medium access control (MAC) layers based on carrier-sense multiple access with collision avoidance (CSMA/CA). This is in contrast to Ethernet which uses carrier-sense multiple access with collision detection (CSMA/CD). The 802.11 specification includes provisions designed to minimize collisions because mobile units have to contend with the hidden node problem where two mobile units may both be in range of a common access point, but out of range of each other.
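
A highly simplified sketch of the carrier-sense-and-random-backoff idea behind CSMA/CA (interframe spacing, acknowledgements, and the RTS/CTS handshake that addresses the hidden node problem are all omitted; the channel model and parameters are illustrative assumptions):

    import random

    def try_to_send(channel_is_busy, max_attempts=5, cw_min=15, cw_max=1023):
        """Sense the medium; if busy, back off for a random number of slots
        drawn from a contention window that roughly doubles on each retry."""
        cw = cw_min
        waited_slots = 0
        for attempt in range(1, max_attempts + 1):
            if not channel_is_busy():              # carrier sense: is the medium idle?
                return f"transmitted on attempt {attempt} after {waited_slots} backoff slots"
            waited_slots += random.randint(0, cw)  # random backoff avoids synchronized retries
            cw = min(2 * cw + 1, cw_max)           # contention window grows after each failure
        return "gave up: the medium stayed busy"

    # Toy channel that is busy about 70% of the time
    print(try_to_send(lambda: random.random() < 0.7))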

Bridge

A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the wireless LAN.

Wireless distribution system

A wireless distribution system (WDS) enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of a WDS over some other solutions is that it preserves the MAC addresses of client packets across links between access points.

An access point can be either a main, relay, or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients, or other relay stations, and either a main base station or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations.

Because data is forwarded wirelessly, consuming wireless bandwidth, throughput in this method is halved for wireless clients not connected to a main base station. Connections between base stations are done at layer-2 and do not involve or require layer-3 IP addresses. WDS capability may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging).

All base stations in a WDS must be configured to use the same radio channel and to share WEP keys or WPA keys if they are used. They can be configured to use different service set identifiers. WDS also requires every base station to be configured to forward to others in the system, as mentioned above.

Roaming

Roaming among Wireless Local Area Networks

There are two definitions for wireless LAN roaming:

  1. Internal roaming: The mobile station (MS) moves from one access point (AP) to another AP within a home network if the signal strength is too weak. An authentication server performs the re-authentication of MS via 802.1x (e.g. with PEAP). The billing of QoS is in the home network. A MS roaming from one access point to another often interrupts the flow of data between the MS and an application connected to the network. The MS, for instance, periodically monitors the presence of alternative APs (ones that will provide a better connection). At some point, based on proprietary mechanisms, the MS decides to re-associate with an AP having a stronger wireless signal. The MS, however, may lose a connection with an AP before associating with another access point. To provide reliable connections with applications, the MS must generally include software that provides session persistence.
  2. External roaming: The MS (client) moves into a WLAN of another wireless Internet service provider (WISP) and takes their services. The user can use a foreign network independently from their home network, provided that the foreign network allows visiting users on their network. There must be special authentication and billing systems for mobile services in a foreign network.

Online machine learning

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Online_machine_learning In computer sci...