Wednesday, March 23, 2022

Multi-state modeling of biomolecules

From Wikipedia, the free encyclopedia

Multi-state modeling of biomolecules refers to a series of techniques used to represent and compute the behaviour of biological molecules or complexes that can adopt a large number of possible functional states.

Biological signaling systems often rely on complexes of biological macromolecules that can undergo several functionally significant modifications that are mutually compatible. Thus, they can exist in a very large number of functionally different states. Modeling such multi-state systems poses two problems: the problem of how to describe and specify a multi-state system (the "specification problem") and the problem of how to use a computer to simulate the progress of the system over time (the "computation problem"). To address the specification problem, modelers have in recent years moved away from explicit specification of all possible states and towards rule-based modeling, which allows for implicit model specification; approaches include the κ-calculus, BioNetGen, the Allosteric Network Compiler and others. To tackle the computation problem, they have turned to particle-based methods that have in many cases proved more computationally efficient than population-based methods based on ordinary differential equations, partial differential equations, or the Gillespie stochastic simulation algorithm. Given current computing technology, particle-based methods are sometimes the only possible option. Particle-based simulators further fall into two categories: non-spatial simulators such as StochSim, DYNSTOC, RuleMonkey, and NFSim, and spatial simulators, including Meredys, SRSim and MCell. Modelers can thus choose from a variety of tools, with the best choice depending on the particular problem. Development of faster and more powerful methods is ongoing, promising the ability to simulate ever more complex signaling processes in the future.

Introduction

Multi-state biomolecules in signal transduction

In living cells, signals are processed by networks of proteins that can act as complex computational devices. These networks rely on the ability of single proteins to exist in a variety of functionally different states achieved through multiple mechanisms, including post-translational modifications, ligand binding, conformational change, or formation of new complexes. Similarly, nucleic acids can undergo a variety of transformations, including protein binding, binding of other nucleic acids, conformational change and DNA methylation.

In addition, several types of modifications can co-exist, exerting a combined influence on a biological macromolecule at any given time. Thus, a biomolecule or complex of biomolecules can often adopt a very large number of functionally distinct states. The number of states scales exponentially with the number of possible modifications, a phenomenon known as "combinatorial explosion". This is of concern for computational biologists who model or simulate such biomolecules, because it raises questions about how such large numbers of states can be represented and simulated.

Examples of combinatorial explosion

Biological signaling networks incorporate a wide array of reversible interactions, post-translational modifications and conformational changes. Furthermore, it is common for a protein to be composed of several - identical or nonidentical - subunits, and for several proteins and/or nucleic acid species to assemble into larger complexes. A molecular species with several of those features can therefore exist in a large number of possible states.

For instance, it has been estimated that the yeast scaffold protein Ste5 can be part of 25,666 unique protein complexes. In E. coli, chemotaxis receptors of four different kinds interact in groups of three, and each individual receptor can exist in at least two possible conformations and has up to eight methylation sites, resulting in billions of potential states. The protein kinase CaMKII is a dodecamer: its twelve catalytic subunits are arranged in two hexameric rings. Each subunit can exist in at least two distinct conformations, and each subunit features various phosphorylation and ligand binding sites. A recent model incorporated conformational states, two phosphorylation sites and two modes of binding calcium/calmodulin, for a total of around one billion possible states per hexameric ring. A model of coupling of the EGF receptor to a MAP kinase cascade presented by Danos and colleagues accounts for a vast number of distinct molecular species, yet the authors note several points at which the model could be further extended. A more recent model of ErbB receptor signalling even accounts for more than one googol (10^100) distinct molecular species. The problem of combinatorial explosion is also relevant to synthetic biology, with a recent model of a relatively simple synthetic eukaryotic gene circuit featuring 187 species and 1,165 reactions.

Of course, not all of the possible states of a multi-state molecule or complex will necessarily be populated. Indeed, in systems where the number of possible states is far greater than that of molecules in the compartment (e.g. the cell), they cannot be. In some cases, empirical information can be used to rule out certain states if, for instance, some combinations of features are incompatible. In the absence of such information, however, all possible states need to be considered a priori. In such cases, computational modeling can be used to uncover to what extent the different states are populated.

The existence (or potential existence) of such large numbers of molecular species is a combinatorial phenomenon: It arises from a relatively small set of features or modifications (such as post-translational modification or complex formation) that combine to dictate the state of the entire molecule or complex, in the same way that the existence of just a few choices in a coffee shop (small, medium or large, with or without milk, decaf or not, extra shot of espresso) quickly leads to a large number of possible beverages (24 in this case; each additional binary choice will double that number). Although it is difficult for us to grasp the total numbers of possible combinations, it is usually not conceptually difficult to understand the (much smaller) set of features or modifications and the effect each of them has on the function of the biomolecule. The rate at which a molecule undergoes a particular reaction will usually depend mainly on a single feature or a small subset of features. It is the presence or absence of those features that dictates the reaction rate. The reaction rate is the same for two molecules that differ only in features which do not affect this reaction. Thus, the number of parameters will be much smaller than the number of reactions. (In the coffee shop example, adding an extra shot of espresso will cost 40 cents, no matter what size the beverage is and whether or not it has milk in it). It is such "local rules" that are usually discovered in laboratory experiments. Thus, a multi-state model can be conceptualised in terms of combinations of modular features and local rules. This means that even a model that can account for a vast number of molecular species and reactions is not necessarily conceptually complex.
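
The scale of this combinatorial growth is easy to check directly. The following short Python sketch (using a hypothetical molecule with only binary modification sites, plus the coffee-shop analogy from above) simply multiplies out the choices:

    # Each independent binary feature doubles the number of possible states.
    for n_sites in (2, 4, 8, 16, 32):
        print(f"{n_sites} binary sites -> {2 ** n_sites} states")

    # The coffee-shop analogy: 3 sizes x (milk or not) x (decaf or not) x (extra shot or not)
    print(3 * 2 * 2 * 2)  # 24 possible beverages; one more binary choice would give 48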

Specification vs computation

An overview of the tools discussed, used for rule-based specification and particle-based evaluation (spatial or non-spatial) of multi-state biomolecules.

The combinatorial complexity of signaling systems involving multi-state proteins poses two kinds of problems. The first problem is concerned with how such a system can be specified; i.e. how a modeler can specify all complexes, all changes those complexes undergo and all parameters and conditions governing those changes in a robust and efficient way. This problem is called the "specification problem". The second problem concerns computation. It asks questions about whether a combinatorially complex model, once specified, is computationally tractable, given the large number of states and the even larger number of possible transitions between states, whether it can be stored electronically, and whether it can be evaluated in a reasonable amount of computing time. This problem is called the "computation problem". Among the approaches that have been proposed to tackle combinatorial complexity in multi-state modeling, some are mainly concerned with addressing the specification problem, some are focused on finding effective methods of computation. Some tools address both specification and computation. The sections below discuss rule-based approaches to the specification problem and particle-based approaches to solving the computation problem. A wide range of computational tools exist for multi-state modeling.

The specification problem

Explicit specification

The most naïve way of specifying, e.g., a protein in a biological model is to specify each of its states explicitly and use each of them as a molecular species in a simulation framework that allows transitions from state to state. For instance, if a protein can be ligand-bound or not, exist in two conformational states (e.g. open or closed) and be located in two possible subcellular areas (e.g. cytosolic or membrane-bound), then the eight possible resulting states can be explicitly enumerated as:

  • bound, open, cytosol
  • bound, open, membrane
  • bound, closed, cytosol
  • bound, closed, membrane
  • unbound, open, cytosol
  • unbound, open, membrane
  • unbound, closed, cytosol
  • unbound, closed, membrane

Enumerating all possible states is a lengthy and potentially error-prone process. For macromolecular complexes that can adopt multiple states, enumerating each state quickly becomes tedious, if not impossible. Moreover, adding a single further modification or feature to the model of the complex under investigation will double the number of possible states (if the modification is binary), and it will more than double the number of transitions that need to be specified.
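
As a minimal illustration, the eight states listed above can be generated mechanically; the Python sketch below uses itertools to spell out every combination, which is exactly the enumeration that explicit specification requires:

    from itertools import product

    ligand = ("bound", "unbound")
    conformation = ("open", "closed")
    location = ("cytosol", "membrane")

    # Explicit specification: every combination becomes its own molecular species.
    species = [", ".join(state) for state in product(ligand, conformation, location)]
    for s in species:
        print(s)
    print(len(species))  # 8 species; adding one more binary feature would give 16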

Rule-based model specification

It is clear that an explicit description, which lists all possible molecular species (including all their possible states), all possible reactions or transitions these species can undergo, and all parameters governing these reactions, very quickly becomes unwieldy as the complexity of the biological system increases. Modelers have therefore looked for implicit, rather than explicit, ways of specifying a biological signaling system. An implicit description is one that groups reactions and parameters that apply to many types of molecular species into one reaction template. It might also add a set of conditions that govern reaction parameters, i.e. the likelihood or rate at which a reaction occurs, or whether it occurs at all. Only properties of the molecule or complex that matter to a given reaction (either affecting the reaction or being affected by it) are explicitly mentioned, and all other properties are ignored in the specification of the reaction.

For instance, the rate of ligand dissociation from a protein might depend on the conformational state of the protein, but not on its subcellular localization. An implicit description would therefore list two dissociation processes (with different rates, depending on conformational state), but would ignore attributes referring to subcellular localization, because they do not affect the rate of ligand dissociation, nor are they affected by it. This specification rule has been summarized as "Don't care, don't write".
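
A minimal sketch of this idea, written in Python rather than in any particular tool's syntax: each rule constrains only the attributes it cares about, and a molecule matches a rule whenever those attributes agree, regardless of any other attributes (such as subcellular localization). The attribute names and rate constants below are hypothetical.

    # Each rule: (pattern to match, change to apply, rate constant) - all values assumed.
    dissociation_rules = [
        ({"ligand": "bound", "conformation": "open"},   {"ligand": "unbound"}, 0.1),
        ({"ligand": "bound", "conformation": "closed"}, {"ligand": "unbound"}, 0.01),
    ]

    def matching_rules(molecule, rules):
        """Return the rules whose patterns match the molecule; attributes that a
        pattern does not mention ("don't care, don't write") are simply ignored."""
        return [r for r in rules if all(molecule.get(k) == v for k, v in r[0].items())]

    mol = {"ligand": "bound", "conformation": "open", "location": "membrane"}
    print(matching_rules(mol, dissociation_rules))  # first rule matches, whatever the location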

Since it is not written in terms of reactions, but in terms of more general "reaction rules" encompassing sets of reactions, this kind of specification is often called "rule-based". This description of the system in terms of modular rules relies on the assumption that only a subset of features or attributes are relevant for a particular reaction rule. Where this assumption holds, a set of reactions can be coarse-grained into one reaction rule. This coarse-graining preserves the important properties of the underlying reactions. For instance, if the reactions are based on chemical kinetics, so are the rules derived from them.

Many rule-based specification methods exist. In general, the specification of a model is a separate task from the execution of the simulation. Therefore, among the existing rule-based model specification systems, some concentrate on model specification only, allowing the user to then export the specified model into a dedicated simulation engine. However, many solutions to the specification problem also contain a method of interpreting the specified model. This is done by providing a method to simulate the model or a method to convert it into a form that can be used for simulations in other programs.

An early rule-based specification method is the κ-calculus, a process algebra that can be used to encode macromolecules with internal states and binding sites and to specify rules by which they interact. The κ-calculus is merely concerned with providing a language to encode multi-state models, not with interpreting the models themselves. A simulator compatible with Kappa is KaSim.

BioNetGen is a software suite that provides both specification and simulation capacities. Rule-based models can be written down using a specified syntax, the BioNetGen language (BNGL). The underlying concept is to represent biochemical systems as graphs, where molecules are represented as nodes (or collections of nodes) and chemical bonds as edges. A reaction rule, then, corresponds to a graph rewriting rule. BNGL provides a syntax for specifying these graphs and the associated rules as structured strings. BioNetGen can then use these rules to generate ordinary differential equations (ODEs) to describe each biochemical reaction. Alternatively, it can generate a list of all possible species and reactions in SBML, which can then be exported to simulation software packages that can read SBML. One can also make use of BioNetGen's own ODE-based simulation software and its capability to generate reactions on-the-fly during a stochastic simulation. In addition, a model specified in BNGL can be read by other simulation software, such as DYNSTOC, RuleMonkey, and NFSim.
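
The graph idea can be sketched in a few lines of Python (an illustration of the concept only, not BNGL syntax; molecule and site names are hypothetical): molecule components carry named binding sites, and a binding rule rewrites any matching pair of free sites by giving them a shared bond identifier, i.e. by adding an edge.

    # Each molecule instance: (type name, {site name: bond id, or None if the site is free}).
    molecules = [
        ("Ligand",   {"r": None}),
        ("Receptor", {"l": None, "d": None}),
    ]

    def apply_binding_rule(mols, type_a, site_a, type_b, site_b, bond_id):
        """Bind the first free site_a on a type_a molecule to the first free
        site_b on a type_b molecule, ignoring every other site."""
        a = next(m for m in mols if m[0] == type_a and m[1][site_a] is None)
        b = next(m for m in mols if m[0] == type_b and m[1][site_b] is None)
        a[1][site_a] = b[1][site_b] = bond_id  # adding an edge = forming a chemical bond

    apply_binding_rule(molecules, "Ligand", "r", "Receptor", "l", bond_id=1)
    print(molecules)  # the shared bond id 1 marks the new Ligand-Receptor complex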

Another tool that generates full reaction networks from a set of rules is the Allosteric Network Compiler (ANC). Conceptually, ANC sees molecules as allosteric devices with a Monod-Wyman-Changeux (MWC) type regulation mechanism, whose interactions are governed by their internal state, as well as by external modifications. A very useful feature of ANC is that it automatically computes dependent parameters, thereby imposing thermodynamic correctness.

An extension of the κ-calculus is provided by React(C). The authors of React(C) show that it can express the stochastic π-calculus. They also provide a stochastic simulation algorithm, based on the Gillespie stochastic algorithm, for models specified in React(C).

ML-Rules is similar to React(C), but provides the added possibility of nesting: A component species of the model, with all its attributes, can be part of a higher-order component species. This enables ML-Rules to capture multi-level models that can bridge the gap between, for instance, a series of biochemical processes and the macroscopic behaviour of a whole cell or group of cells. For instance, a proof-of-concept model of cell division in fission yeast includes cyclin/cdc2 binding and activation, pheromone secretion and diffusion, cell division and movement of cells. Models specified in ML-Rules can be simulated using the James II simulation framework. A similar nested language to represent multi-level biological systems has been proposed by Oury and Plotkin. A specification formalism based on the molecular finite automata (MFA) framework can be used to generate and simulate a system of ODEs or for stochastic simulation using a kinetic Monte Carlo algorithm.

Some rule-based specification systems and their associated network generation and simulation tools have been designed to accommodate spatial heterogeneity, in order to allow for the realistic simulation of interactions within biological compartments. For instance, the Simmune project includes a spatial component: Users can specify their multi-state biomolecules and interactions within membranes or compartments of arbitrary shape. The reaction volume is then divided into interfacing voxels, and a separate reaction network generated for each of these subvolumes.

The Stochastic Simulator Compiler (SSC) allows for rule-based, modular specification of interacting biomolecules in regions of arbitrarily complex geometries. Again, the system is represented using graphs, with chemical interactions or diffusion events formalised as graph-rewriting rules. The compiler then generates the entire reaction network before launching a stochastic reaction-diffusion algorithm.

A different approach is taken by PySB, where model specification is embedded in the programming language Python. A model (or part of a model) is represented as a Python program. This allows users to store higher-order biochemical processes such as catalysis or polymerisation as macros and re-use them as needed. The models can be simulated and analysed using Python libraries, but PySB models can also be exported into BNGL, Kappa, and SBML.

Models involving multi-state and multi-component species can also be specified in Level 3 of the Systems Biology Markup Language (SBML) using the multi package. A draft specification is available.

Thus, by only considering states and features important for a particular reaction, rule-based model specification eliminates the need to explicitly enumerate every possible molecular state that can undergo a similar reaction, and thereby allows for efficient specification.

The computation problem

When running simulations on a biological model, any simulation software evaluates a set of rules, starting from a specified set of initial conditions, and usually iterating through a series of time steps until a specified end time. One way to classify simulation algorithms is by looking at the level of analysis at which the rules are applied: they can be population-based, single-particle-based or hybrid.

Population-based rule evaluation

In population-based rule evaluation, rules are applied to populations. All molecules of the same species in the same state are pooled together. Application of a specific rule reduces or increases the size of one of the pools, possibly at the expense of another.

Some of the best-known classes of simulation approaches in computational biology belong to the population-based family, including those based on the numerical integration of ordinary and partial differential equations and the Gillespie stochastic simulation algorithm.

Differential equations describe changes in molecular concentrations over time in a deterministic manner. Simulations based on differential equations usually do not attempt to solve those equations analytically, but employ a suitable numerical solver.
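
As a minimal population-based sketch, the reversible binding reaction A + B ⇌ AB can be written as three coupled ODEs and handed to a numerical solver, here SciPy's solve_ivp; the rate constants and initial concentrations below are assumed purely for illustration.

    from scipy.integrate import solve_ivp

    kon, koff = 1.0e6, 0.1  # assumed association (M^-1 s^-1) and dissociation (s^-1) rates

    def rhs(t, y):
        a, b, ab = y
        v = kon * a * b - koff * ab     # net rate of complex formation
        return [-v, -v, v]

    sol = solve_ivp(rhs, (0.0, 100.0), y0=[1e-6, 1e-6, 0.0], method="LSODA")
    print(sol.y[2, -1])                 # concentration of the AB complex at the final time point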

The stochastic Gillespie algorithm changes the composition of pools of molecules through a progression of random reaction events, the probability of each event being computed from reaction rates and from the numbers of molecules, in accordance with the stochastic master equation.
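
A minimal sketch of this algorithm for the same reversible binding system, now tracking discrete numbers of molecules in each pool (rate constants and molecule counts are assumed):

    import random

    def gillespie(a, b, ab, kon, koff, t_end):
        """Direct-method Gillespie simulation of A + B <-> AB, on molecule counts."""
        t = 0.0
        while t < t_end:
            propensities = [kon * a * b, koff * ab]         # binding, unbinding
            total = sum(propensities)
            if total == 0.0:
                break
            t += random.expovariate(total)                  # waiting time to the next event
            if random.random() * total < propensities[0]:   # pick which reaction fires
                a, b, ab = a - 1, b - 1, ab + 1
            else:
                a, b, ab = a + 1, b + 1, ab - 1
        return a, b, ab

    print(gillespie(100, 100, 0, kon=0.001, koff=0.1, t_end=10.0))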

In population-based approaches, one can think of the system being modeled as being in a given state at any given time point, where a state is defined according to the nature and size of the populated pools of molecules. This means that the space of all possible states can become very large. With some simulation methods implementing numerical integration of ordinary and partial differential equations or the Gillespie stochastic algorithm, all possible pools of molecules and the reactions they undergo are defined at the start of the simulation, even if they are empty. Such "generate-first" methods scale poorly with increasing numbers of molecular states. For instance, it has recently been estimated that even for a simple model of CaMKII with just 6 states per subunit and 10 subunits, it would take 290 years to generate the entire reaction network on a 2.54 GHz Intel Xeon processor. In addition, the model generation step in generate-first methods does not necessarily terminate, for instance when the model includes assembly of proteins into complexes of arbitrarily large size, such as actin filaments. In these cases, a termination condition needs to be specified by the user.

Even if a large reaction system can be successfully generated, its simulation using population-based rule evaluation can run into computational limits. In a recent study, a powerful computer was shown to be unable to simulate a protein with more than 8 phosphorylation sites (more than 2^8 = 256 phosphorylation states) using ordinary differential equations.

Methods have been proposed to reduce the size of the state space. One is to consider only the states adjacent to the present state (i.e. the states that can be reached within the next iteration) at each time point. This eliminates the need for enumerating all possible states at the beginning. Instead, reactions are generated "on-the-fly" at each iteration. These methods are available both for stochastic and deterministic algorithms. These methods still rely on the definition of an (albeit reduced) reaction network - in contrast to the "network-free" methods discussed below.

Even with "on-the-fly" network generation, networks generated for population-based rule evaluation can become quite large, and thus difficult - if not impossible - to handle computationally. An alternative approach is provided by particle-based rule evaluation.

Particle-based rule evaluation

Principles of particle-based modeling. In particle-based modeling, each particle is tracked individually through the simulation. At any point, a particle only "sees" the rules that apply to it. This figure follows two molecular particles (one of type A in red, one of type B in blue) through three steps in a hypothetical simulation following a simple set of rules (given on the right). At each step, the rules that potentially apply to the particle under consideration are highlighted in that particle's colour.

In particle-based (sometimes called "agent-based") simulations, proteins, nucleic acids, macromolecular complexes or small molecules are represented as individual software objects, and their progress is tracked through the course of the entire simulation. Because particle-based rule evaluation keeps track of individual particles rather than populations, it comes at a higher computational cost when modeling systems with a high total number of particles, but a small number of kinds (or pools) of particles. In cases of combinatorial complexity, however, the modeling of individual particles is an advantage because, at any given point in the simulation, only existing molecules, their states and the reactions they can undergo need to be considered. Particle-based rule evaluation does not require the generation of complete or partial reaction networks at the start of the simulation or at any other point in the simulation and is therefore called "network-free".

This method reduces the complexity of the model at the simulation stage, and thereby saves time and computational power. The simulation follows each particle, and at each simulation step, a particle only "sees" the reactions (or rules) that apply to it. This depends on the state of the particle and, in some implementations, on the states of its neighbours in a holoenzyme or complex. As the simulation proceeds, the states of particles are updated according to the rules that are fired.
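
A minimal sketch of this idea (all attributes, rules and probabilities below are hypothetical): each particle is a small record, and at every step only the rules whose conditions hold for the chosen particle are even considered, so no reaction network ever has to be written down.

    import random

    # One record per particle; only particles that actually exist are stored.
    particles = [{"ligand": "unbound", "conformation": "open"} for _ in range(1000)]

    rules = [  # (condition on the particle, effect on the particle, probability per step)
        (lambda p: p["ligand"] == "unbound",    lambda p: p.update(ligand="bound"),        0.01),
        (lambda p: p["conformation"] == "open", lambda p: p.update(conformation="closed"), 0.001),
    ]

    for step in range(10_000):
        p = random.choice(particles)                       # follow one particle at a time
        for condition, effect, prob in rules:
            if condition(p) and random.random() < prob:    # only applicable rules can fire
                effect(p)

    print(sum(p["ligand"] == "bound" for p in particles), "of", len(particles), "are bound")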

Some particle-based simulation packages use an ad-hoc formalism for specification of reactants, parameters and rules. Others can read files in a recognised rule-based specification format such as BNGL.

Non-spatial particle-based methods

StochSim is a particle-based stochastic simulator used mainly to model chemical reactions and other molecular transitions. The algorithm used in StochSim is different from the more widely known Gillespie stochastic algorithm in that it operates on individual entities, not entity pools, making it particle-based rather than population-based.

In StochSim, each molecular species can be equipped with a number of binary state flags representing a particular modification. Reactions can be made dependent on a set of state flags set to particular values. In addition, the outcome of a reaction can include a state flag being changed. Moreover, entities can be arranged in geometric arrays (for instance, for holoenzymes consisting of several subunits), and reactions can be "neighbor-sensitive", i.e. the probability of a reaction for a given entity is affected by the value of a state flag on a neighboring entity. These properties make StochSim ideally suited to modeling multi-state molecules arranged in holoenzymes or complexes of specified size. Indeed, StochSim has been used to model clusters of bacterial chemotactic receptors, and CaMKII holoenzymes.
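
The neighbour-sensitive behaviour can be sketched as follows (a hypothetical ring of six subunits, each carrying one binary flag, with assumed per-step probabilities): the chance that a subunit becomes active is higher when its left-hand neighbour is already active.

    import random

    n_subunits = 6
    active = [False] * n_subunits        # one binary state flag per subunit, arranged in a ring

    P_BASAL, P_NEIGHBOUR = 0.001, 0.05   # assumed activation probabilities per step

    for step in range(5_000):
        i = random.randrange(n_subunits)                    # pick one subunit
        p = P_NEIGHBOUR if active[(i - 1) % n_subunits] else P_BASAL
        if random.random() < p:
            active[i] = True                                # flip the subunit's state flag

    print(active)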

An extension to StochSim is the particle-based simulator DYNSTOC, which uses a StochSim-like algorithm to simulate models specified in the BioNetGen language (BNGL), and improves the handling of molecules within macromolecular complexes.

Another particle-based stochastic simulator that can read BNGL input files is RuleMonkey. Its simulation algorithm differs from the algorithms underlying both StochSim and DYNSTOC in that the simulation time step is variable.

The Network-Free Stochastic Simulator (NFSim) differs from those described above by allowing for the definition of reaction rates as arbitrary mathematical or conditional expressions, thereby facilitating selective coarse-graining of models. RuleMonkey and NFSim implement distinct but related simulation algorithms. A detailed review and comparison of both tools is given by Yang and Hlavacek.

It is easy to imagine a biological system where some components are complex multi-state molecules, whereas others have few possible states (or even just one) and exist in large numbers. A hybrid approach has been proposed to model such systems: Within the Hybrid Particle/Population (HPP) framework, the user can specify a rule-based model, but can designate some species to be treated as populations (rather than particles) in the subsequent simulation. This method combines the computational advantages of particle-based modeling for multi-state systems with relatively low molecule numbers and of population-based modeling for systems with high molecule numbers and a small number of possible states. Specification of HPP models is supported by BioNetGen, and simulations can be performed with NFSim.

Spatial particle-based methods

Screenshot from an MCell simulation of calcium signaling within the spine. Although other types of calcium-regulated molecules were included in the simulations, only CaMKII molecules are visualized. They are shown in red when bound to calmodulin and in black when unbound. The simulation compartment is a reconstruction of a dendritic spine. The area of the postsynaptic density is shown in red, the spine head and neck in gray, and the parent dendrite in yellow. The figure was generated by visualizing the simulation results in Blender.

Spatial particle-based methods differ from the methods described above by their explicit representation of space.

One example of a particle-based simulator that allows for a representation of cellular compartments is SRSim. SRSim is integrated in the LAMMPS molecular dynamics simulator and allows the user to specify the model in BNGL. SRSim allows users to specify the geometry of the particles in the simulation, as well as interaction sites. It is therefore especially good at simulating the assembly and structure of complex biomolecular complexes, as evidenced by a recent model of the inner kinetochore.

MCell allows individual molecules to be traced in arbitrarily complex geometric environments which are defined by the user. This allows for simulations of biomolecules in realistic reconstructions of living cells, including cells with complex geometries like those of neurons. The reaction compartment in the simulation shown above, for example, is a reconstruction of a dendritic spine.

MCell uses an ad-hoc formalism within MCell itself to specify a multi-state model: In MCell, it is possible to assign "slots" to any molecular species. Each slot stands for a particular modification, and any number of slots can be assigned to a molecule. Each slot can be occupied by a particular state. The states are not necessarily binary. For instance, a slot describing binding of a particular ligand to a protein of interest could take the states "unbound", "partially bound", and "fully bound".

The slot-and-state syntax in MCell can also be used to model multimeric proteins or macromolecular complexes. When used in this way, a slot is a placeholder for a subunit or a molecular component of a complex, and the state of the slot will indicate whether a specific protein component is absent or present in the complex. A way to think about this is that MCell macromolecules can have several dimensions: A "state dimension" and one or more "spatial dimensions". The "state dimension" is used to describe the multiple possible states making up a multi-state protein, while the spatial dimension(s) describe topological relationships between neighboring subunits or members of a macromolecular complex. One drawback of this method for representing protein complexes, compared to Meredys, is that MCell does not allow for the diffusion of complexes, and hence, of multi-state molecules. This can in some cases be circumvented by adjusting the diffusion constants of ligands that interact with the complex, by using checkpointing functions or by combining simulations at different levels.

Second Cold War

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Second_Cold_War

The Second Cold War, Cold War II, or the New Cold War are terms that refer to heightened political, social, ideological, informational, and military tensions in the 21st century between the United States and China. The terms are also used to describe such tensions between the United States and Russia, the primary successor state of the former Soviet Union, which was one of the major parties of the original Cold War until its dissolution in 1991. Some commentators have used the terms as a comparison to the original Cold War, while others have doubted that either tension would lead to another "cold war" or have discouraged using the terms to refer to either or both tensions.

Past usages

Past sources, such as academics Fred Halliday, Alan M. Wald, and David S. Painter, used the terms interchangeably to refer to the 1979–1985 and/or 1985–1991 phases of the Cold War. Some other sources used similar terms to refer to the Cold War of the mid-1970s. Columnist William Safire argued in a 1975 New York Times editorial that the Nixon administration's policy of détente with the Soviet Union had failed and that "Cold War II" was then underway. Academic Gordon H. Chang in 2007 used the term "Cold War II" to refer to the Cold War period after the 1972 meeting in China between US President Richard Nixon and Chinese Communist Party chairman Mao Zedong.

In 1998, George Kennan described the US Senate vote to expand NATO to include Poland, Hungary, and the Czech Republic as "the beginning of a new cold war", and predicted that "the Russians will gradually react quite adversely and it will affect their policies".

The journalist Edward Lucas wrote his 2008 book The New Cold War: How the Kremlin Menaces both Russia and the West, claiming that a new cold war between Russia and the West had begun already.

"New Cold War"

In June 2019, University of Southern California (USC) professors Steven Lamy and Robert D. English agreed that the "new Cold War" would distract political parties from bigger issues such as globalization, global warming, global poverty, increasing inequality, and far-right populism. However, Lamy said that the new Cold War had not yet begun, while English said that it already had. English further said that China poses a far greater threat than Russia in cyberwarfare but not as much as far-right populism does from within liberal states like the US.

In his September 2021 speech to the United Nations General Assembly, US President Joe Biden said that the US is "not seeking a new Cold War or a world divided into rigid blocs." Biden further said that the US would cooperate "with any nation that steps up and pursues peaceful resolution to shared challenges," despite "intense disagreement in other areas, because we'll all suffer the consequences of our failure."

Sino-American tensions

US senior defence official Jed Babbin, Yale University professor David Gelernter, Firstpost editor R. Jagannathan, Subhash Kapila of the South Asia Analysis Group, former Australian Prime Minister Kevin Rudd, and some other sources have used the term (occasionally the "Pacific Cold War") to refer to tensions between the United States and China in the 2000s and 2010s.

Talk of a "new Cold War" between a United States-led bloc of countries on the one hand and the putative Beijing-Moscow bloc, including explicit references to it in the official PRC's media, intensified in the summer of 2016 as a result of the territorial dispute in the South China Sea, when China defied the Permanent Court of Arbitration′s ruling against China on the South China Sea dispute, and the US announcing in July 2016 it would deploy the Terminal High Altitude Area Defense (THAAD) in South Korea, a move resented by China as well as Russia and North Korea.

Donald Trump, who was inaugurated as US president on 20 January 2017, had repeatedly said during his presidential campaign that he considered China a threat, a stance that heightened speculation about the possibility of a "new cold war with China". Claremont McKenna College professor Minxin Pei said that Trump's election win and "ascent to the presidency" might increase the chances of such a cold war. In March 2017, the self-described socialist magazine Monthly Review said, "With the rise of the Trump administration, the new Cold War with Russia has been put on hold", and also said that the Trump administration planned to shift its focus from Russia to China as its main competitor.

In July 2018, Michael Collins, deputy assistant director of the CIA's East Asia mission center, told the Aspen Security Forum in Colorado that he believed China under paramount leader and general secretary Xi Jinping, while unwilling to go to war, was waging a "quiet kind of cold war" against the United States, seeking to replace the US as the leading global power. He further elaborated: "What they're waging against us is fundamentally a cold war — a cold war not like we saw during [the] Cold War (between the U.S. and the Soviet Union) but a cold war by definition". In October 2018, Zhang Baohui, a professor at Hong Kong's Lingnan University, told The New York Times that a speech by United States Vice President Mike Pence at the Hudson Institute "will look like the declaration of a new Cold War".

In January 2019, Robert D. Kaplan of the Center for a New American Security wrote that "it is nothing less than a new cold war: The constant, interminable Chinese computer hacks of American warships’ maintenance records, Pentagon personnel records, and so forth constitute war by other means. This situation will last decades and will only get worse".

In February 2019, Joshua Shifrinson, an associate professor at Boston University, criticised the concerns about tensions between China and the US as "overblown", saying that the relationship between the two countries is different from US–Soviet relations during the original Cold War, that it is uncertain whether the world is heading towards another era of bipolarity, and that ideology plays a less prominent role between China and the US.

In June 2019, academic Stephen Wertheim called President Trump a "xenophobe" and criticised Trump's foreign policy toward China for heightening risks of a new Cold War, which Wertheim wrote "could plunge the United States back into gruesome proxy wars around the world and risk a still deadlier war among the great powers."

In August 2019, Yuan Peng of the China Institute of International Studies said that the financial crisis of 2007–2008 "initiated a shift in the global order." Yuan predicted the possibility of a new cold war between the two countries, with their global power competition turning "from 'superpower vs. major power' to 'No. 1 vs. No. 2'." On the other hand, scholar Zhu Feng said that their "strategic competition" would not lead to a new Cold War. Zhu said that US–China relations have progressed positively and remained "stable", despite disputes in the South China Sea and Taiwan Strait and US President Trump's aggressive approach toward China.

In January 2020, columnist and historian Niall Ferguson opined that China is one of the major players of this Cold War, whose powers are "economic rather than military", and that Russia's role is "quite small". Ferguson also wrote: "[C]ompared with the 1950s, the roles have been reversed. China is now the giant, Russia the mean little sidekick. China under Xi remains strikingly faithful to the doctrine of Marx and Lenin. Russia under Putin has reverted to Tsarism." Ferguson further wrote that this Cold War is different from the original Cold War because the US "is so intertwined with China" at the point where "decoupling" is as others argued "a delusion" and because "America's traditional allies are much less eager to align themselves with Washington and against Beijing." He further wrote that the new Cold War "shifted away from trade to technology" when both the US and China signed their Phase One trade deal. In a February 2020 interview with The Japan Times, Ferguson suggested that, to "contain China", the US "work intelligently with its Asian and European allies", as the US had done in the original Cold War, rather than on its own and perform something more effective than "tariffs, which are a very blunt instrument." He also said that the US under Trump has been "rather poor" at making foreign relations.

On May 24, 2020, Chinese Foreign Minister Wang Yi said that relations with the U.S. were on the "brink of a new Cold War", fuelled by tensions over the COVID-19 pandemic. In June 2020, Boston College political scientist Robert S. Ross wrote that the US and China "are destined to compete [but] not destined for violent conflict or a cold war." The following month, Ross said that the Trump "administration would like to fully decouple from China. No trade, no cultural exchanges, no political exchanges, no cooperation on anything that resembles common interests."

In August 2020, La Trobe University professor Nick Bisley wrote that the US–China rivalry "will be no Cold War" but rather will "be more complex, harder to manage, and last much longer." He further wrote that comparing the old Cold War to the ongoing rivalry "is a risky endeavour."

In September 2020, the UN Secretary General António Guterres warned that the increasing tensions between the US under Trump and China under Xi were leading to "a Great Fracture" which would become costly to the world. Xi Jinping replied by saying that "China has no intention to fight either a Cold War or a hot one with any country."

In March 2021, Columbia University professor Thomas J. Christensen wrote that the cold war between the US and China "is unlikely" in comparison to the original Cold War, citing China's prominence in the "global production chain" and absence of the authoritarianism vs. liberal democracy dynamic. Christensen further advised those concerned about the tensions between the two nations to research China's role in the global economy and its "foreign policy toward international conflicts and civil wars" between liberal and authoritarian forces. He further noted newly elected US President Joe Biden's planned different approach from predecessor Donald Trump.

In September 2021, former Portuguese defence and foreign minister Paulo Portas described the announcement of the AUKUS security pact and the ensuing unprecedented diplomatic crisis between the signatories (Australia, the United Kingdom, and the United States) and France (which has several territories in the Indo-Pacific) as a possible formal starting point of a New Cold War.

On 7 November 2021, President Joe Biden's national security adviser Jake Sullivan stated that the US no longer pursues system change in China, marking a clear break from the China policy pursued by previous US administrations. Sullivan said that the US is not seeking a new Cold War with China, but is looking for a system of peaceful coexistence.

In November 2021, Hal Brands and Yale professor John Lewis Gaddis wrote in their Foreign Affairs article that China and the US have been entering "a new cold war", meaning "a protracted international rivalry, for cold wars in this sense are as old as history itself." Brands and Gaddis further wrote that this has not been "the Cold War" and that "the context is quite different". Both authors differentiated the "Soviet–American Cold War" from the "Sino–American cold war".

Russo-American tensions

Sources disagree as to whether a period of global tension analogous to the Cold War is possible in the future; some have used the term to describe the ongoing renewed tensions, hostilities, and political rivalries that intensified dramatically in 2014 between Russia, the United States and their respective allies.

In 2013, writing in RealClearPolitics, Michael Klare compared tensions between Russia and the West to the ongoing proxy conflict between Saudi Arabia and Iran. Oxford professor Philip N. Howard argued that a new cold war was being fought via the media, information warfare, and cyberwar. In 2014, notable figures such as Mikhail Gorbachev warned, against the backdrop of a confrontation between Russia and the West over the Russo-Ukrainian War, that the world was on the brink of a new cold war, or that it was already occurring. The American political scientist Robert Legvold also believes it started in 2013, during the Ukraine crisis. Others argued that the term did not accurately describe the nature of relations between Russia and the West.

Stephen F. Cohen, Robert D. Crane, and Alex Vatanka have all referred to a "US–Russian Cold War". Andrew Kuchins, an American political scientist and Kremlinologist speaking in 2016, believed the term was "unsuited to the present conflict" as it may be more dangerous than the Cold War.

While the new tensions between Russia and the West have similarities with those during the Cold War, there are also major differences. Modern Russia has far greater economic ties with the outside world, which may constrain its actions but also provide it with new avenues for exerting influence, for instance in Belarus and Central Asia, which have not seen the type of direct military action that Russia took in less cooperative former Soviet states such as Ukraine and the Caucasus region. The term "Cold War II" has therefore been described as a misnomer.

The term "Cold War II" gained currency and relevance as tensions between Russia and the West escalated throughout the 2014 pro-Russian unrest in Ukraine followed by the Russian military intervention and especially the downing of Malaysia Airlines Flight 17 in July 2014. By August 2014, both sides had implemented economic, financial, and diplomatic sanctions upon each other: virtually all Western countries, led by the US and European Union, imposed punitive measures on Russia, which introduced retaliatory measures.

Some observers, including Syrian President Bashar al-Assad, judged the Syrian civil war to be a proxy war between Russia and the United States, and even a "proto-world war". In January 2016, senior UK government officials were reported to have registered their growing fears that "a new cold war" was now unfolding in Europe: "It really is a new Cold War out there. Right across the EU we are seeing alarming evidence of Russian efforts to unpick the fabric of European unity on a whole range of vital strategic issues".

In an interview with Time magazine in December 2014, Gorbachev said that the US under Barack Obama was dragging Russia into a new cold war. In February 2016, at the Munich Security Conference, NATO Secretary General Jens Stoltenberg said that NATO and Russia were "not in a cold-war situation but also not in the partnership that we established at the end of the Cold War", while Russian Prime Minister Dmitry Medvedev, speaking of what he called NATO's "unfriendly and opaque" policy on Russia, said "One could go as far as to say that we have slid back to a new Cold War". In October 2016 and March 2017, Stoltenberg said that NATO did not seek "a new Cold War" or "a new arms race" with Russia.

In February 2016, Yuval Weber, an academic at the Higher School of Economics and a visiting scholar at Harvard University, wrote on E-International Relations that "the world is not entering Cold War II", asserting that the current tensions and ideologies of both sides are not similar to those of the original Cold War, that situations in Europe and the Middle East do not destabilise other areas geographically, and that Russia "is far more integrated with the outside world than the Soviet Union ever was". In September 2016, when asked if he thought the world had entered a new cold war, Russian Foreign Minister Sergey Lavrov argued that current tensions were not comparable to the Cold War. He noted the lack of an ideological divide between the United States and Russia, saying that conflicts were no longer ideologically bipolar.

In August 2016, Daniel Larison of The American Conservative magazine wrote that tensions between Russia and the United States would not "constitute a 'new Cold War'", arguing that the divide between democracy and authoritarianism in the 2010s is more limited than, and not as significant as, that of the Soviet era.

In October 2016, John Sawers, a former MI6 chief, said he thought the world was entering an era that was possibly "more dangerous" than the Cold War, as "we do not have that focus on a strategic relationship between Moscow and Washington". Similarly, Igor Zevelev, a fellow at the Wilson Center, said that "it's not a Cold War [but] a much more dangerous and unpredictable situation". CNN opined: "It's not a new Cold War. It's not even a deep chill. It's an outright conflict".

In January 2017, former US government adviser Molly K. McKew wrote in Politico that the US would win a new cold war. The New Republic editor Jeet Heer dismissed the possibility as "equally troubling[,] reckless threat inflation, wildly overstating the extent of Russian ambitions and power in support of a costly policy", and as too centred on Russia while "ignoring the rise of powers like China and India". Heer also criticised McKew for suggesting the possibility. Jeremy Shapiro, a senior fellow at the Brookings Institution, wrote in his blog post at RealClearPolitics, referring to US–Russia relations: "A drift into a new Cold War has seemed the inevitable result".

In August 2017, Russian Deputy Foreign Minister Sergei Ryabkov denied claims that the US and Russia were engaged in another cold war, despite ongoing tensions between the two countries and newer US sanctions against Russia. In 2017, University of East Anglia graduate student Oliver Steward and Casimir Pulaski Foundation senior fellow Stanisław Koziej described Zapad 2017, a Russian military exercise, as part of the new Cold War. In March 2018, Russian President Vladimir Putin told journalist Megyn Kelly in an interview: "My point of view is that the individuals that have said that a new Cold War has started are not analysts. They do propaganda." Michael Kofman, a senior research scientist at the CNA Corporation and a fellow at the Wilson Center's Kennan Institute, said that the new cold war for Russia "is about its survival as a power in the international order, and also about holding on to the remnants of the Russian empire". Lyle Goldstein, a research professor at the US Naval War College, claimed that the situations in Georgia and Ukraine "seemed to offer the requisite storyline for new Cold War".

In March 2018, Harvard University professor Stephen Walt and historian Odd Arne Westad, then also at Harvard, criticised the application of the term to increasing tensions between Russia and the West as "misleading", "distract[ing]", and too simplistic to describe the more complicated contemporary international politics.

In April 2018, relations deteriorated over a potential US-led military strike in the Middle East after the Douma chemical attack in Syria, which was attributed to the Syrian Army by rebel forces in Douma, and over the poisoning of the Skripals in the UK. The Secretary-General of the United Nations, António Guterres, told a meeting of the UN Security Council that "the Cold War was back with a vengeance". He suggested the dangers were even greater, as the safeguards that existed to manage such a crisis "no longer seem to be present". Dmitri Trenin supported Guterres' statement, but added that the new cold war had begun in 2014 and had been intensifying since, resulting in the US-led strikes on the Syrian government on 13 April 2018.

Russian news agency TASS reported Russian Foreign Minister Sergei Lavrov as saying "I don't think that we should talk about a new Cold War", while adding that the US development of low-yield nuclear warheads (the first of which entered production in January 2019) had increased the potential for the use of nuclear weapons.

In October 2018, Russian military analyst Pavel Felgenhauer told Deutsche Welle that the new Cold War would make the Intermediate-Range Nuclear Forces (INF) Treaty and other Cold War-era treaties "irrelevant because they correspond to a totally different world situation." In February 2019, Russian Foreign Minister Sergey Lavrov stated that the withdrawal from the INF treaty would not lead to "a new Cold War".

Speaking to the press in Berlin on 8 November 2019, a day before the 30th anniversary of the fall of the Berlin Wall, U.S. secretary of state Mike Pompeo warned of the dangers posed by Russia and China and specifically accused Russia, "led by a former KGB officer once stationed in Dresden", of invading its neighbours and crushing dissent. Jonathan Marcus of the BBC opined that Pompeo's words "appeared to be declaring the outbreak of a second [Cold War]".

In February 2022, journalist Marwan Bishara held the US and Russia responsible for pursuing "their own narrow interests", including then-US President Trump's recognition of Jerusalem as capital of Israel and Putin's 2022 Russian invasion of Ukraine, and for "pav[ing] the way for, well, another Cold War". That same period, journalist H. D. S. Greenway cited the Russian invasion of Ukraine and 4 February joint statement between Russia and China (under Putin and Xi Jinping) as one of the signs that Cold War II had officially begun.

In March 2022, Yale historian Arne Westad and Harvard historian Fredrik Logevall asserted in a videotelephony conversation "that the global showdown over Ukraine" would "not signal a second Cold War". Furthermore, Westad said that Putin's words about Ukraine resembled, as Harvard journalist James F. Smith summarized, "some of the colonial racial arguments of imperial powers of the past, ideas from the late 19th and early 20th century rather than the Cold War."

Digital television

From Wikipedia, the free encyclopedia
 
A map depicting digital terrestrial television standards
 

Digital television (DTV) is the transmission of television signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high-definition television (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio (commonly 16:9) in contrast to the narrower format of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000. Different digital television broadcasting standards have been adopted in different parts of the world; the most widely used are ATSC, DVB, ISDB, and DTMB.

History

Background

Digital television's roots have been tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility. Digital television was previously not practically feasible because of the impractically high bandwidth requirements of uncompressed digital video, which requires around 200 Mbit/s (25 MB/s) for a standard-definition television (SDTV) signal and over 1 Gbit/s for high-definition television (HDTV).
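
The quoted figures can be reproduced with a rough back-of-the-envelope calculation (assuming 8-bit samples with 4:2:2 chroma subsampling, i.e. 16 bits per pixel, and no compression):

    def uncompressed_mbit_per_s(width, height, frames_per_s, bits_per_pixel=16):
        """Raw bit rate of an uncompressed digital video stream, in Mbit/s."""
        return width * height * frames_per_s * bits_per_pixel / 1e6

    print(uncompressed_mbit_per_s(720, 576, 25))    # SDTV (576i): roughly 170 Mbit/s
    print(uncompressed_mbit_per_s(1920, 1080, 30))  # HDTV (1080): roughly 1000 Mbit/s, i.e. ~1 Gbit/s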

Development

In the mid-1980s, Toshiba released a television set with digital capabilities, using integrated circuit chips such as a microprocessor to convert analog television broadcast signals to digital video signals, enabling features such as freezing pictures and showing two channels at once. In 1986, Sony and NEC Home Electronics announced their own similar TV sets with digital video capabilities. However, they still relied on analog TV broadcast signals, with true digital TV broadcasts not yet being available at the time.

A digital TV broadcast service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunications (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not practical to implement such a digital TV service until the adoption of discrete cosine transform (DCT) video compression technology made it possible in the early 1990s.

In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, and as the MUSE analog format was proposed by Japan's public broadcaster NHK as a worldwide standard, Japanese advancements were seen as pacesetters that threatened to eclipse U.S. electronics companies. Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration.

Between 1988 and 1991, several European organizations were working on DCT-based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that broadcast SDTV at 34 Mbit/s and near-studio-quality HDTV at about 70–140 Mbit/s. RAI demonstrated this with a 1990 FIFA World Cup broadcast. An American company, General Instrument, also demonstrated the feasibility of a digital television signal in 1990. This led the FCC to delay its decision on an ATV standard until a digitally based standard could be developed.

In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new TV standard must be more than an enhanced analog signal: it had to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.

The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequence, from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet, and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format.

Inaugural launches

DirecTV in the U.S. launched the first commercial digital satellite platform in May 1994, using the Digital Satellite System (DSS) standard. Digital cable broadcasts were tested and launched in the U.S. in 1996 by TCI and Time Warner. The first digital terrestrial platform was launched in November 1998 as ONdigital in the United Kingdom, using the DVB-T standard.

Technical information

Formats and bandwidth

Comparison of image quality between ISDB-T (1080i broadcast, top) and NTSC (480i transmission, bottom)

Digital television supports many different picture formats, defined by the broadcast television systems, which are combinations of image size and aspect ratio (width-to-height ratio).

With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high-definition television (HDTV) for the transmission of high-definition video, and standard-definition television (SDTV) for lower resolutions. These terms by themselves are not very precise, and many subtle intermediate cases exist.

Among the HDTV formats that can be transmitted over DTV are 1280 × 720 pixels in progressive scan mode (abbreviated 720p) and 1920 × 1080 pixels in interlaced video mode (1080i). Each of these uses a 16:9 aspect ratio. HDTV cannot be transmitted over analog television channels because of channel capacity issues.

SDTV, by comparison, may use one of several different formats taking the form of various aspect ratios depending on the technology used in the country of broadcast. In terms of rectangular pixels, NTSC countries can deliver a 640 × 480 resolution in 4:3 and 854 × 480 in 16:9, while PAL can give 768 × 576 in 4:3 and 1024 × 576 in 16:9. However, broadcasters may choose to reduce these resolutions to reduce bit rate (e.g., many DVB-T channels in the United Kingdom use a horizontal resolution of 544 or 704 pixels per line).
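
To make the arithmetic behind these formats concrete, the following sketch (an illustration only; the format list simply restates the figures above, and 854 × 480 is itself only an approximation of 16:9) computes the pixel count and reduced aspect ratio of each one.

```python
# Illustrative sketch: pixel counts and reduced aspect ratios for the DTV
# formats listed above. This is not part of any broadcast standard's code;
# note that 854 x 480 only approximates 16:9 (it reduces to 427:240).
from fractions import Fraction

formats = {
    "720p (HDTV)":    (1280, 720),
    "1080i (HDTV)":   (1920, 1080),
    "NTSC SDTV 4:3":  (640, 480),
    "NTSC SDTV 16:9": (854, 480),
    "PAL SDTV 4:3":   (768, 576),
    "PAL SDTV 16:9":  (1024, 576),
}

for name, (width, height) in formats.items():
    ratio = Fraction(width, height)        # reduced width:height ratio
    megapixels = width * height / 1e6      # pixels per full frame grid
    print(f"{name:15s} {width}x{height}  {ratio.numerator}:{ratio.denominator}  {megapixels:.2f} Mpx")
```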

Each commercial terrestrial DTV broadcast channel in North America is permitted to be broadcast at a bit rate of up to 19 megabits per second. However, the broadcaster does not need to use this entire bandwidth for just one broadcast channel. Instead, the broadcast can use the channel to include PSIP and can also be subdivided across several video subchannels (a.k.a. feeds) of varying quality and compression rates, including non-video datacasting services that allow one-way high-bit-rate streaming of data to computers, such as National Datacast.

A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into multiple digital subchannels (similar to what most FM radio stations offer with HD Radio), providing multiple feeds of entirely different television programming on the same channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is often referred to as distributing one's "bit budget" or multicasting. This can sometimes be arranged automatically, using a statistical multiplexer (or "stat-mux"). With some implementations, image resolution may be less directly limited by bandwidth; for example, in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option to reduce the transmission bit rate and make reception easier for more distant or mobile viewers.
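
The bit-budget idea above can be made concrete with a small allocation check. This is a hypothetical sketch: the subchannel names and bit rates are invented for the example, and the 19.39 Mbit/s figure is the approximate ATSC payload corresponding to the roughly 19 Mbit/s mentioned above.

```python
# Hypothetical "bit budget" check: does a chosen mix of subchannels fit in the
# roughly 19 Mbit/s available on one ATSC terrestrial channel? The subchannel
# names and rates below are invented for illustration.
CHANNEL_CAPACITY_MBPS = 19.39   # approximate ATSC payload bit rate
PSIP_ALLOWANCE_MBPS = 0.25      # assumed allowance for PSIP/guide tables

subchannels_mbps = {
    "x.1 HD main service": 12.0,
    "x.2 SD subchannel":    3.5,
    "x.3 SD subchannel":    2.5,
    "datacast feed":        1.0,
}

used = PSIP_ALLOWANCE_MBPS + sum(subchannels_mbps.values())
print(f"Requested {used:.2f} Mbit/s of {CHANNEL_CAPACITY_MBPS} Mbit/s")
if used > CHANNEL_CAPACITY_MBPS:
    print("Over budget: lower some bit rates or rely on a statistical multiplexer.")
else:
    print(f"Headroom: {CHANNEL_CAPACITY_MBPS - used:.2f} Mbit/s")
```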

Receiving digital signal

There are several different ways to receive digital television. One of the oldest means of receiving DTV (and TV in general) is from terrestrial transmitters using an antenna (known as an aerial in some countries). This method is known as digital terrestrial television (DTT). With DTT, viewers are limited to channels that have a terrestrial transmitter in range of their antenna.

Other ways have been devised to receive digital television. Among the most familiar are digital cable and digital satellite. In some countries where transmissions of TV signals are normally achieved by microwaves, digital MMDS is used. Other standards, such as digital multimedia broadcasting (DMB) and DVB-H, have been devised to allow handheld devices such as mobile phones to receive TV signals. Another method is IPTV, that is, receiving TV via Internet Protocol, relying on a digital subscriber line (DSL) or optical cable line. Finally, an alternative way is to receive digital TV signals via the open Internet (Internet television), whether from a central streaming service or a P2P (peer-to-peer) system.

Some signals carry encryption and specify use conditions (such as "may not be recorded" or "may not be viewed on displays larger than 1 m in diagonal measure") backed up with the force of law under the World Intellectual Property Organization Copyright Treaty (WIPO Copyright Treaty) and national legislation implementing it, such as the U.S. Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable smart card, for example via the Common Interface (DVB-CI) standard in Europe or, in the United States, via the Point of Deployment (POD) module, also known as CableCARD.

Protection parameters for terrestrial DTV broadcasting

Digital television signals must not interfere with each other, and they must also coexist with analog television until it is phased out. The following table gives allowable signal-to-noise and signal-to-interference ratios for various interference scenarios. This table is a crucial regulatory tool for controlling the placement and power levels of stations. Digital TV is more tolerant of interference than analog TV, and this is the reason a smaller range of channels can carry an all-digital set of television stations.

System parameters (protection ratios)       | Canada [13]         | USA [5]   | EBU [9, 12] ITU-mode M3 | Japan & Brazil [36, 37]
C/N for AWGN channel                        | +19.5 dB (16.5 dB)  | +15.19 dB | +19.3 dB                | +19.2 dB
Co-channel DTV into analog TV               | +33.8 dB            | +34.44 dB | +34 to +37 dB           | +38 dB
Co-channel analog TV into DTV               | +7.2 dB             | +1.81 dB  | +4 dB                   | +4 dB
Co-channel DTV into DTV                     | +19.5 dB (16.5 dB)  | +15.27 dB | +19 dB                  | +19 dB
Lower adjacent channel DTV into analog TV   | −16 dB              | −17.43 dB | −5 to −11 dB            | −6 dB
Upper adjacent channel DTV into analog TV   | −12 dB              | −11.95 dB | −1 to −10 dB            | −5 dB
Lower adjacent channel analog TV into DTV   | −48 dB              | −47.33 dB | −34 to −37 dB           | −35 dB
Upper adjacent channel analog TV into DTV   | −49 dB              | −48.71 dB | −38 to −36 dB           | −37 dB
Lower adjacent channel DTV into DTV         | −27 dB              | −28 dB    | −30 dB                  | −28 dB
Upper adjacent channel DTV into DTV         | −27 dB              | −26 dB    | −30 dB                  | −29 dB
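
As a rough illustration of how such protection ratios are applied, the sketch below checks a measured desired-to-undesired (D/U) signal ratio against a required value taken from the USA column of the table above; the scenario names and the checking function are made up for the example and are not a planning or regulatory tool.

```python
# Illustrative check of a measured desired-to-undesired (D/U) ratio against a
# required protection ratio from the table above (USA column). Values in dB.
PROTECTION_RATIOS_DB = {
    "co-channel DTV into DTV": 15.27,
    "co-channel analog TV into DTV": 1.81,
    "upper adjacent DTV into DTV": -26.0,
}

def interference_ok(scenario: str, measured_du_db: float) -> bool:
    """Return True if the measured D/U ratio meets or exceeds the required ratio."""
    return measured_du_db >= PROTECTION_RATIOS_DB[scenario]

# Example: a desired DTV signal 18 dB above a co-channel DTV interferer passes;
# one only 12 dB above it does not.
print(interference_ok("co-channel DTV into DTV", 18.0))   # True
print(interference_ok("co-channel DTV into DTV", 12.0))   # False
```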

Interaction

People can interact with a DTV system in various ways. One can, for example, browse the electronic program guide. Modern DTV systems sometimes use a return path providing feedback from the end user to the broadcaster. This is possible with a coaxial or fiber-optic cable, a dial-up modem, or an Internet connection, but is not possible with a standard antenna.

Some of these systems support video on demand using a communication channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite).

1-segment broadcasting

1seg (1-segment) is a special form of ISDB. Each channel is further divided into 13 segments. Twelve of these segments are allocated for HDTV, and the remaining segment, the 13th, is used for narrow-band receivers such as mobile television or cell phones.
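
The segment arithmetic behind 1seg can be illustrated with a short calculation. This sketch assumes the 6 MHz channel used in Japan and Brazil, where ISDB-T splits the channel into 14 equal slices, 13 of which carry data and one of which serves as guard band.

```python
# Rough ISDB-T segment arithmetic (sketch, assuming a 6 MHz channel as used in
# Japan and Brazil: 14 equal slices, 13 data segments plus one slice of guard band).
CHANNEL_MHZ = 6.0
SLICES = 14
DATA_SEGMENTS = 13
HDTV_SEGMENTS = 12                                  # bulk of the channel carries HDTV
ONESEG_SEGMENTS = DATA_SEGMENTS - HDTV_SEGMENTS     # the single "1seg" segment

segment_khz = CHANNEL_MHZ * 1000 / SLICES
print(f"Segment width: {segment_khz:.1f} kHz")
print(f"HDTV service:  {HDTV_SEGMENTS} segments ≈ {HDTV_SEGMENTS * segment_khz / 1000:.2f} MHz")
print(f"1seg service:  {ONESEG_SEGMENTS} segment  ≈ {ONESEG_SEGMENTS * segment_khz / 1000:.2f} MHz")
```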

Timeline of transition

Comparison of analog vs digital

DTV has several advantages over analog TV, the most significant being that digital channels take up less bandwidth and that the bandwidth needs are continuously variable, at a corresponding reduction in image quality depending on the level of compression and the resolution of the transmitted image. This means that digital broadcasters can provide more digital channels in the same space, provide high-definition television service, or provide other non-television services such as multimedia or interactivity. DTV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source.

Digital and analog signals react to interference differently. For example, common problems with analog television include ghosting of images, noise from weak signals, and many other potential problems which degrade the quality of the image and sound, although the program material may still be watchable. With digital television, the audio and video must be synchronized digitally, so reception of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable. Short of this complete failure, "blocky" video is seen when the digital signal experiences interference.

Analog TV began with monophonic sound, and later developed multichannel television sound with two independent audio signal channels. DTV allows up to 5 audio signal channels plus a subwoofer bass channel, with broadcasts similar in quality to movie theaters and DVDs.

Digital TV signals require less transmission power than analog TV signals to be broadcast and received satisfactorily.

Compression artifacts, picture quality monitoring, and allocated bandwidth

DTV images have some picture defects that are not present on analog television or motion picture cinema, because of present-day limitations of bit rate and compression algorithms such as MPEG-2. One such defect is sometimes referred to as "mosquito noise".

Because of the way the human visual system works, defects in an image that are localized to particular features of the image or that come and go are more perceptible than defects that are uniform and constant. However, the DTV system is designed to take advantage of other limitations of the human visual system to help mask these flaws, e.g. by allowing more compression artifacts during fast motion where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still backgrounds that may be closely examined in a scene (since time allows).

Broadcast, cable, satellite, and Internet DTV operators control the picture quality of television signal encodes using sophisticated, neuroscience-based algorithms, such as the structural similarity (SSIM) video quality measurement tool, which earned each of its inventors a Primetime Emmy Award because of its global use. Another tool, called Visual Information Fidelity (VIF), is a top-performing algorithm at the core of the VMAF video quality monitoring system used by Netflix, whose streams account for about 35% of all U.S. bandwidth consumption.
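
As an illustration of the kind of full-reference quality metric described above, the sketch below computes SSIM between a reference frame and a decoded frame using scikit-image. The synthetic frames are stand-ins; this is not the operators' actual monitoring pipeline.

```python
# Minimal SSIM example using scikit-image's structural_similarity. The frames
# below are synthetic stand-ins for a source frame and its decoded version;
# real monitoring systems score actual video frames, not random data.
import numpy as np
from skimage.metrics import structural_similarity

def frame_quality(reference: np.ndarray, decoded: np.ndarray) -> float:
    """Return SSIM (1.0 means identical) for two 8-bit grayscale frames."""
    return structural_similarity(reference, decoded, data_range=255)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
noise = rng.integers(-8, 9, size=reference.shape)           # mild "compression" error
decoded = np.clip(reference.astype(int) + noise, 0, 255).astype(np.uint8)

print(f"SSIM: {frame_quality(reference, decoded):.3f}")
```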

Effects of poor reception

Changes in signal reception from factors such as degrading antenna connections or changing weather conditions may gradually reduce the quality of analog TV. The nature of digital TV results in a perfectly decodable video signal initially, until the receiving equipment starts picking up interference that overpowers the desired signal or the signal becomes too weak to decode. Some equipment will show a garbled picture with significant damage, while other devices may go directly from perfectly decodable video to no video at all, or lock up. This phenomenon is known as the digital cliff effect.

Block error may occur when transmission is done with compressed images. A block error in a single frame often results in black boxes in several subsequent frames, making viewing difficult.

For remote locations, distant channels that, as analog signals, were previously usable in a snowy and degraded state may, as digital signals, be perfectly decodable or may become completely unavailable. The use of higher frequencies adds to these problems, especially where a clear line of sight from the receiving antenna to the transmitter is not available, because higher-frequency signals usually cannot pass through obstacles as easily.

Effect on old analog technology

Television sets with only analog tuners cannot decode digital transmissions. When analog broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of programming (e.g. cable, recorded media) or may purchase set-top converter boxes to tune in the digital signals. In the United States, a government-sponsored coupon was available to offset the cost of an external converter box. Analog switch-off of full-power stations took place on December 11, 2006 in the Netherlands; June 12, 2009 in the United States (with Class A stations following on September 1, 2016); July 24, 2011 in Japan; August 31, 2011 in Canada; February 13, 2012 in Arab states; May 1, 2012 in Germany; October 24, 2012 in the United Kingdom and Ireland; October 31, 2012 in selected Indian cities; and December 10, 2013 in Australia. Completion of analog switch-off was scheduled for December 31, 2017 in the whole of India, December 2018 in Costa Rica, and around 2020 for the Philippines.

Disappearance of TV-audio receivers

Prior to the conversion to digital TV, analog television broadcast audio for TV channels on a separate FM carrier signal from the video signal. This FM audio signal could be heard using standard radios equipped with the appropriate tuning circuits.

However, after the transition of many countries to digital TV, no portable radio manufacturer has yet developed an alternative method for portable radios to play just the audio signal of digital TV channels; DTV radio is not the same thing.

Environmental issues

The adoption of a broadcast standard incompatible with existing analog receivers has created the problem of large numbers of analog receivers being discarded during the digital television transition. One superintendent of public works was quoted in 2009 as saying: "some of the studies I've read in the trade magazines say up to a quarter of American households could be throwing a TV out in the next two years following the regulation change". In 2009, an estimated 99 million analog TV receivers were sitting unused in homes in the US alone and, while some obsolete receivers are being retrofitted with converters, many more are simply dumped in landfills, where they represent a source of toxic metals such as lead as well as lesser amounts of materials such as barium, cadmium and chromium.

According to one campaign group, a CRT computer monitor or TV contains an average of 8 pounds (3.6 kg) of lead. According to another source, the lead in the glass of a CRT varies from 1.08 lb (0.49 kg) to 11.28 lb (5.12 kg), depending on screen size and type, but the lead is in the form of "stable and immobile" lead oxide mixed into the glass. It is claimed that the lead can have long-term negative effects on the environment if dumped as landfill. However, the glass envelope can be recycled at suitably equipped facilities. Other portions of the receiver may be subject to disposal as hazardous material.

Local restrictions on disposal of these materials vary widely; in some cases second-hand stores have refused to accept working color television receivers for resale due to the increasing costs of disposing of unsold TVs. Thrift stores that still accept donated TVs have reported significant increases in good-condition, working used television receivers abandoned by viewers who often expect them to stop working after the digital transition.

In Michigan in 2009, one recycler estimated that as many as one household in four would dispose of or recycle a TV set in the following year. The digital television transition, migration to high-definition television receivers and the replacement of CRTs with flatscreens are all factors in the increasing number of discarded analog CRT-based television receivers.
