Wednesday, August 30, 2023

Dependency theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Dependency_theory

Dependency theory is the idea that resources flow from a "periphery" of poor and underdeveloped states to a "core" of wealthy states, enriching the latter at the expense of the former. A central contention of dependency theory is that poor states are impoverished and rich ones enriched by the way poor states are integrated into the "world system". The theory was formally developed in the late 1960s, in the decades following World War II, as scholars sought to explain the persistent lack of development in Latin America.

The theory arose as a reaction to modernization theory, an earlier theory of development which held that all societies progress through similar stages of development, that today's underdeveloped areas are thus in a similar situation to that of today's developed areas at some time in the past, and that, therefore, the task of helping the underdeveloped areas out of poverty is to accelerate them along this supposed common path of development, by various means such as investment, technology transfers, and closer integration into the world market. Dependency theory rejected this view, arguing that underdeveloped countries are not merely primitive versions of developed countries, but have unique features and structures of their own; and, importantly, are in the situation of being the weaker members in a world market economy.

Some writers have argued for its continuing relevance as a conceptual orientation to the global division of wealth. Dependency theorists can typically be divided into two categories: liberal reformists and neo-Marxists. Liberal reformists typically advocate for targeted policy interventions, while the neo-Marxists believe in a command-centered economy.

Basics

The premises of dependency theory are that:

  1. Poor nations provide natural resources, cheap labour, a destination for obsolete technology, and markets for developed nations, without which the latter could not have the standard of living they enjoy.
  2. Wealthy nations actively perpetuate a state of dependence by various means. This influence may be multifaceted, involving economics, media control, politics, banking and finance, education, culture, and sport.

History

Dependency theory originates with two papers published in 1949, one by Hans Singer and one by Raúl Prebisch, in which the authors observe that the terms of trade for underdeveloped countries relative to the developed countries had deteriorated over time: the underdeveloped countries were able to purchase fewer and fewer manufactured goods from the developed countries in exchange for a given quantity of their raw materials exports. This idea is known as the Prebisch–Singer thesis. Prebisch, an Argentine economist at the United Nations Economic Commission for Latin America (ECLA), went on to conclude that the underdeveloped nations must employ some degree of protectionism in trade if they were to enter a self-sustaining development path. He argued that import-substitution industrialisation (ISI), not a trade-and-export orientation, was the best strategy for underdeveloped countries. The theory was developed from a Marxian perspective by Paul A. Baran in 1957 with the publication of his The Political Economy of Growth. Dependency theory shares many points with earlier, Marxist, theories of imperialism by Rosa Luxemburg and Vladimir Lenin, and has attracted continued interest from Marxists. Some authors identify two main streams in dependency theory: the Latin American Structuralist, typified by the work of Prebisch, Celso Furtado, and Aníbal Pinto at the United Nations Economic Commission for Latin America (ECLAC, or, in Spanish, CEPAL); and the American Marxist, developed by Paul A. Baran, Paul Sweezy, and Andre Gunder Frank.

Using the Latin American dependency model, the Guyanese Marxist historian Walter Rodney, in his book How Europe Underdeveloped Africa, described in 1972 an Africa that had been consciously exploited by European imperialists, leading directly to the modern underdevelopment of most of the continent.

The theory was popular in the 1960s and 1970s as a criticism of modernization theory, which was falling increasingly out of favor because of continued widespread poverty in much of the world. At that time the assumptions of liberal theories of development were under attack. It was also used to explain overurbanization, the phenomenon of urbanization rates outpacing industrial growth in several developing countries.

The Latin American Structuralist and the American Marxist schools had significant differences but, according to economist Matias Vernengo, they agreed on some basic points:

[B]oth groups would agree that at the core of the dependency relation between center and periphery lays [lies] the inability of the periphery to develop an autonomous and dynamic process of technological innovation. Technology – the Promethean force unleashed by the Industrial Revolution – is at the center of the stage. The Center countries controlled the technology and the systems for generating technology. Foreign capital could not solve the problem, since it only led to limited transmission of technology, but not the process of innovation itself. Baran and others frequently spoke of the international division of labour – skilled workers in the center; unskilled in the periphery – when discussing key features of dependency.

Baran placed surplus extraction and capital accumulation at the center of his analysis. Development depends on a population's producing more than it needs for bare subsistence (a surplus). Further, some of that surplus must be used for capital accumulation – the purchase of new means of production – if development is to occur; spending the surplus on things like luxury consumption does not produce development. Baran noted two predominant kinds of economic activity in poor countries. In the older of the two, plantation agriculture, which originated in colonial times, most of the surplus goes to the landowners, who use it to emulate the consumption patterns of wealthy people in the developed world; much of it thus goes to purchase foreign-produced luxury items – automobiles, clothes, etc. – and little is accumulated for investing in development. The more recent kind of economic activity in the periphery is industry—but of a particular kind. It is usually carried out by foreigners, although often in conjunction with local interests. It is often under special tariff protection or other government concessions. The surplus from this production mostly goes to two places: part of it is sent back to the foreign shareholders as profit; the other part is spent on conspicuous consumption in a similar fashion to that of the plantation aristocracy. Again, little is used for development. Baran thought that political revolution was necessary to break this pattern.

In the 1960s, members of the Latin American Structuralist school argued that there is more latitude in the system than the Marxists believed. They argued that it allows for partial development or "dependent development" – development, but still under the control of outside decision makers. They cited the partly successful attempts at industrialisation in Latin America around that time (Argentina, Brazil, Mexico) as evidence for this hypothesis. They were led to the position that dependency is not a relation between commodity exporters and industrialised countries, but between countries with different degrees of industrialisation. In their approach, there is a distinction made between the economic and political spheres: economically, one may be developed or underdeveloped; but even if (somewhat) economically developed, one may be politically autonomous or dependent. More recently, Guillermo O'Donnell has argued that constraints placed on development by neoliberalism were lifted by the military coups in Latin America that came to promote development in authoritarian guise (O'Donnell, 1982).

The importance of multinational corporations and state promotion of technology were emphasised by the Latin American Structuralists.

Fernando Fajnzylber has made a distinction between systemic or authentic competitiveness, which is the ability to compete based on higher productivity, and spurious competitiveness, which is based on low wages.

The third-world debt crisis of the 1980s and continued stagnation in Africa and Latin America in the 1990s caused some doubt as to the feasibility or desirability of "dependent development".

For some later dependency theorists, the sine qua non of the dependency relationship is not the difference in technological sophistication, as traditional dependency theorists believe, but rather the difference in financial strength between core and peripheral countries – particularly the inability of peripheral countries to borrow in their own currency. On this view, the hegemonic position of the United States is very strong because of the importance of its financial markets and because it controls the international reserve currency – the US dollar. The end of the Bretton Woods international financial agreements in the early 1970s considerably strengthened the United States' position because it removed some constraints on its financial actions.

"Standard" dependency theory differs from Marxism, in arguing against internationalism and any hope of progress in less developed nations towards industrialization and a liberating revolution. Theotonio dos Santos described a "new dependency", which focused on both the internal and external relations of less-developed countries of the periphery, derived from a Marxian analysis. Former Brazilian President Fernando Henrique Cardoso (in office 1995–2002) wrote extensively on dependency theory while in political exile during the 1960s, arguing that it was an approach to studying the economic disparities between the centre and periphery. Cardoso summarized his version of dependency theory as follows:

  • there is a financial and technological penetration by the developed capitalist centers of the countries of the periphery and semi-periphery;
  • this produces an unbalanced economic structure both within the peripheral societies and between them and the centers;
  • this leads to limitations on self-sustained growth in the periphery;
  • this favors the appearance of specific patterns of class relations;
  • these require modifications in the role of the state to guarantee both the functioning of the economy and the political articulation of a society, which contains, within itself, foci of inarticulateness and structural imbalance.

The analysis of development patterns in the 1990s and beyond is complicated by the fact that capitalism develops not smoothly, but with very strong and self-repeating ups and downs, called cycles. Relevant results are given in studies by Joshua Goldstein, Volker Bornschier, and Luigi Scandella.

With the economic growth of India and some East Asian economies, dependency theory has lost some of its former influence. It still influences some NGO campaigns, such as Make Poverty History and the fair trade movement.

Other theorists and related theories

Two other early writers relevant to dependency theory were François Perroux and Kurt Rothschild. Other leading dependency theorists include Herb Addo, Walden Bello, Ruy Mauro Marini, Enzo Faletto, Armando Cordova, Ernest Feder, Pablo González Casanova, Keith Griffin, Kunibert Raffer, Paul Israel Singer, and Osvaldo Sunkel. Many of these authors focused their attention on Latin America; dependency theory in the Islamic world was primarily refined by the Egyptian economist Samir Amin.

Tausch, based on works of Amin from 1973 to 1997, lists the following main characteristics of periphery capitalism:

  1. Regression in both agriculture and small scale industry characterizes the period after the onslaught of foreign domination and colonialism
  2. Unequal international specialization of the periphery leads to the concentration of activities in export-oriented agriculture and/or mining. Some industrialization of the periphery is possible under the condition of low wages, which, together with rising productivity, determine that unequal exchange sets in (double factoral terms of trade < 1.0; see Raffer, 1987, and the sketch following this list)
  3. These structures determine in the long run a rapidly growing tertiary sector with hidden unemployment and the rising importance of rent in the overall social and economic system
  4. Chronic current account balance deficits, re-exported profits of foreign investments, and deficient business cycles at the periphery that provide important markets for the centers during world economic upswings
  5. Structural imbalances in the political and social relationships, inter alia a strong 'compradore' element and the rising importance of state capitalism and an indebted state class
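
As a rough aid to point 2 above, the standard definition of the double factoral terms of trade can be sketched as follows (this formalization is an editorial addition, not drawn from Amin or Tausch directly): the net barter terms of trade are corrected for relative productivity in the export sectors of the periphery and the center, and unequal exchange is said to set in when the resulting index falls below one.

    N = \frac{P_x}{P_m}              % net barter terms of trade (export vs. import price index)
    D = N \cdot \frac{Z_x}{Z_m}      % double factoral terms of trade; Z_x, Z_m = productivity indices
    \text{unequal exchange sets in when } D < 1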

The American sociologist Immanuel Wallerstein refined the Marxist aspect of the theory and expanded on it, to form world-systems theory. World-systems theory (WST) aligns closely with the idea that the "rich get richer and the poor get poorer". Wallerstein states that the poor and peripheral nations continue to get poorer as the developed core nations use their resources to become richer. Wallerstein developed world-systems theory drawing on dependency theory along with the ideas of Marx and the Annales School. This theory postulates a third category of countries, the semi-periphery, intermediate between the core and periphery. Wallerstein believed in a tri-modal rather than a bi-modal system because he viewed the world-systems as more complicated than a simplistic classification as either core or periphery nations. To Wallerstein, many nations do not fit into one of these two categories, so he proposed the idea of a semi-periphery as an in-between state within his model. In this model, the semi-periphery is industrialized, but with less sophistication of technology than in the core; and it does not control finances. The rise of one group of semi-peripheries tends to be at the cost of another group, but the unequal structure of the world economy based on unequal exchange tends to remain stable. Tausch traces the beginnings of world-systems theory to the writings of the Austro-Hungarian socialist Karl Polanyi after the First World War, but its present form is usually associated with the work of Wallerstein.

Dependency theorists hold that short-term spurts of growth notwithstanding, long-term growth in the periphery will be imbalanced and unequal, and will tend towards high negative current account balances. Cyclical fluctuations also have a profound effect on cross-national comparisons of economic growth and societal development in the medium and long run. What seemed like spectacular long-run growth may in the end turn out to be just a short run cyclical spurt after a long recession. Cycle time plays an important role. Giovanni Arrighi believed that the logic of accumulation on a world scale shifts over time, and that the 1980s and beyond once more showed a deregulated phase of world capitalism with a logic, characterized – in contrast to earlier regulatory cycles – by the dominance of financial capital.

Criticism

Economic policies based on dependency theory have been criticized by free-market economists such as Peter Bauer and Martin Wolf and others:

  • Lack of competition: by subsidizing in-country industries and preventing outside imports, the protected firms may have less incentive to improve their products, to try to become more efficient in their processes, to please customers, or to research new innovations.
  • Sustainability: industries reliant on government support may not be sustainable for very long, particularly in poorer countries and countries which largely depend on foreign aid from more developed countries.
  • Domestic opportunity costs: subsidies on domestic industries come out of state coffers and therefore represent money not spent in other ways, like development of domestic infrastructure, seed capital or need-based social welfare programs. At the same time, the higher prices caused by tariffs and restrictions on imports require the people either to forgo these goods altogether or buy them at higher prices, forgoing other goods.

Market economists cite a number of examples in their arguments against dependency theory. The improvement of India's economy after it moved from state-controlled business to open trade is one of the most often cited (see also economy of India, The Commanding Heights). India's example seems to contradict dependency theorists' claims concerning comparative advantage and mobility, since much of its economic growth originated from movements such as outsourcing – one of the most mobile forms of capital transfer. In Africa, states that have emphasized import-substitution development, such as Zimbabwe, have typically been among the worst performers, while the continent's most successful non-oil based economies, such as Egypt, South Africa, and Tunisia, have pursued trade-based development.

According to economic historian Robert C. Allen, dependency theory's claims are "debatable" due to the fact that the protectionism that was implemented in Latin America as a solution ended up failing. The countries incurred too much debt and Latin America went into a recession. One of the problems was that the Latin American countries simply had too small national markets to be able to efficiently produce complex industrialized goods, such as automobiles.

Examples of dependency theory

Many nations have experienced both the positive and negative effects described by dependency theory. The idea of one nation's dependency on another is not a new concept, even though dependency theory itself is relatively new. Dependency is perpetuated through capitalism and finance: the dependent nations come to owe the developed nations so much money and capital that escaping the debt becomes impossible, continuing the dependency for the foreseeable future.

An example of dependency theory is that between roughly 1650 and 1900, European nations such as Britain and France took over or colonized other nations, using their superior military technology and naval strength. This established an economic system in which the Americas, Africa, and Asia exported their raw materials to Europe; Britain and the other European countries manufactured products from these materials and then sold them back to the colonized parts of the Americas, Africa, and Asia. The result was a transfer of wealth to Europe, which controlled both the materials and the finished products. Dependency theory is considered rather controversial, and many say it is no longer in effect. Some scholars and politicians claim that with the decline of colonialism, dependency has been erased. Other scholars counter this approach, stating that our society still has national powerhouses, such as the United States, European nations such as Germany and Britain, China, and a rising India, on which hundreds of other nations rely for military aid, economic investments, and more.

Aid dependency

Aid dependency is an economic problem described as the reliance of less developed countries (LDCs) on more developed countries (MDCs) for financial aid and other resources. More specifically, aid dependency refers to the proportion of government spending that is provided by foreign donors; a ratio of roughly 15–20% or higher is generally considered to have negative effects on the country. Dependency arises from the inhibition of development and of economic and political reform that results from trying to use aid as a long-term solution for poverty-ridden countries: after long-term provision of aid, the receiving country becomes accustomed to it and develops a dependency syndrome. Aid dependency is most common today in Africa. The top donors as of 2013 were the United States, the United Kingdom, and Germany, while the top receivers were Afghanistan, Vietnam, and Ethiopia.
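
As a minimal illustration of the ratio described above (the function name and the budget figures are hypothetical, not taken from any particular country):

    # Hypothetical illustration of an aid dependency ratio:
    # the share of government spending financed by foreign donors.
    def aid_dependency_ratio(donor_financed_spending: float,
                             total_government_spending: float) -> float:
        """Return donor-financed spending as a share of total spending."""
        return donor_financed_spending / total_government_spending

    # Example: donors finance 300 of a 1,500 (in millions) budget -> 0.20,
    # i.e. 20%, at the threshold often described as problematic (15-20% or higher).
    ratio = aid_dependency_ratio(300.0, 1_500.0)
    print(f"Aid dependency ratio: {ratio:.0%}")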

History of aid dependence

International development aid became widely popularized after World War II, driven by first-world countries' efforts to create a more open world economy as well as by Cold War competition. In 1970, the United Nations agreed on 0.7% of gross national income per country as the target for how much should be dedicated to international aid. In his book “Ending Aid Dependence”, Yash Tandon describes how organizations like the International Monetary Fund (IMF) and the World Bank (WB) have driven many African countries into dependency. During the economic crises of the 1980s and 1990s, a great number of Sub-Saharan African countries saw an influx of aid money, which in turn resulted in dependency over the next few decades. These countries became so dependent that the President of Tanzania, Benjamin W. Mkapa, stated that “Development aid has taken deep root to the psyche of the people, especially in the poorer countries of the South. It is similar to drug addiction.”

Motives for giving aid

While the widespread belief is that aid is motivated only by assisting poor countries, and this is true in some cases, there is substantial evidence suggesting that the strategic, political, and welfare interests of the donors are driving forces behind aid. Maizels and Nissanke (1984) and McKinlay and Little (1977) conducted studies to analyze donors' motives. They found that US aid flows are influenced by military as well as strategic factors, and that British and French aid goes to countries that were former colonies and to countries in which they have significant investment interests and strong trade relations.

Stunted economic growth

A main concern revolving around foreign aid is that citizens in the country benefiting from aid lose motivation to work after receiving it. In addition, some citizens will deliberately work less, resulting in a lower income, which in turn qualifies them for aid provision. Aid-dependent countries are associated with a poorly motivated workforce, a result of being accustomed to constant aid, and are therefore less likely to make economic progress or to see living standards improve. A country with long-term aid dependency remains unable to be self-sufficient and is less likely to achieve meaningful GDP growth that would allow it to rely less on aid from richer countries. Food aid has been criticized heavily, along with other aid imports, for the damage it does to the domestic economy. A higher dependency on aid imports results in a decline in domestic demand for those products. In the long run, the agricultural industry in LDCs grows weaker due to the long-term decline in demand caused by food aid. When aid is later decreased, many LDCs' agricultural markets remain underdeveloped, and it is therefore cheaper to import agricultural products. This occurred in Haiti, where 80% of grain stocks come from the United States even after a large decrease in aid. In countries with a primary-product dependency on an item being imported as aid, such as wheat, economic shocks can occur and push the country further into crisis.

Political dependency

Political dependency occurs when donors have too much influence in the governance of the receiving country. Many donors maintain a strong say in the government due to the country's reliance on their money, causing a decrease in the effectiveness and democratic quality of the government. This results in the receiving country's government making policy that the donor agrees with and supports, rather than what the people of the country desire. Government corruptibility increases as a result and inhibits reform of the government and political process in the country. These donors can include other countries or organizations with underlying intentions that may not be in favor of the people. Political dependency is an even stronger negative effect of aid dependency in countries where many of the problems stem from already corrupt politics and a lack of civil rights. For example, Zimbabwe and the Democratic Republic of the Congo both have extremely high aid dependency ratios and have experienced political turmoil. The politics of the Democratic Republic of the Congo have involved civil war and changes of regime in the 21st century, and the country has one of the highest aid dependency ratios in Africa.

As aid dependence can shift accountability away from the public and to being between state and donors, “presidentialism” can arise. Presidentialism is when the president and the cabinet within a political system have the power in political decision-making. In a democracy, budgets and public investment plans are to be approved by parliament. It is common for donors to fund projects outside of this budget and therefore go without parliament review. This further reinforces presidentialism and establishes practices that undermine democracy. Disputes over taxation and use of revenues are important in a democracy and can lead to better lives for citizens, but this cannot happen if citizens and parliaments don't know the complete proposed budget and spending priorities.

Aid dependency also compromises ownership which is marked by the ability of a government to implement its own ideas and policies. In aid dependent countries, the interests and ideas of aid agencies start to become priority and therefore erode ownership.

Corruption

Aid-dependent countries rank worse in terms of level of corruption than countries that are not dependent. Foreign aid is a potential source of rents, and rent-seeking can manifest as increased public sector employment. As public firms displace private investment, there is less pressure on the government to remain accountable and transparent as a result of the weakened private sector. Aid thus assists corruption, which then fosters more corruption and creates a cycle. Foreign aid provides corrupt governments with free cash flow, which further facilitates the corruption. Corruption works against economic growth and development, holding these poor countries down.

Efforts to end aid dependence

Since 2000, aid dependency has decreased by about one-third. This can be seen in countries like Ghana, whose aid dependency decreased from 47% to 27%, as well as in Mozambique, where aid dependency decreased from 74% to 58%. Target areas for decreasing aid dependence include job creation, regional integration, and commercial engagement and trade. Long-term investment in agriculture and infrastructure are key requirements for ending aid dependency, as they allow a country to slowly decrease the amount of food aid it receives, develop its own agricultural economy, and address food insecurity.

Countering political corruption

Political corruption has been a strong force associated with maintaining dependency and preventing economic growth. During the Obama administration, Congress claimed that the anti-corruption criteria used by the Millennium Challenge Corporation (MCC) were not strict enough and were one of the obstacles to decreasing aid dependence. Often, in countries with a high corruption perception index, aid money is captured by government officials in the public sector or by corrupt individuals in the private sector. Withholding aid from countries where corruption is very prevalent has been a common tool used by organizations and governments to ensure that funding is used properly, but also to pressure other countries to address corruption.

Other methods of aid

Foreign aid can be useful in the long run when it is directed towards the appropriate sector and managed accordingly. Specific pairing between organizations and donors with similar goals has produced more success in decreasing dependency than the traditional form of international aid, which involves government-to-government transfers. Botswana is a successful example of this. Botswana first began receiving aid in 1966; it decided which areas needed aid and found donors accordingly, rather than simply accepting aid from other countries whose governments had a say in where the money would be distributed. Recipient-led cases such as Botswana are more effective partly because they reduce the donor's incentive to report short-term output figures (such as food distributed) and instead focus on long-term growth and development, directed more towards infrastructure, education, and job creation.

Magnetoencephalography

From Wikipedia, the free encyclopedia
 
[Image: person undergoing an MEG scan. MeSH: D015225]

Magnetoencephalography (MEG) is a functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain, using very sensitive magnetometers. Arrays of SQUIDs (superconducting quantum interference devices) are currently the most common type of magnetometer, while SERF (spin exchange relaxation-free) magnetometers are being investigated for future machines. Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback. This can be applied in a clinical setting to find locations of abnormalities as well as in an experimental setting to simply measure brain activity.

History

[Image: Dr. Cohen's shielded room at MIT, in which the first MEG was measured with a SQUID]

MEG signals were first measured by University of Illinois physicist David Cohen in 1968, before the availability of the SQUID, using a copper induction coil as the detector. To reduce the magnetic background noise, the measurements were made in a magnetically shielded room. The coil detector was barely sensitive enough, resulting in poor, noisy MEG measurements that were difficult to use. Later, Cohen built a much better shielded room at MIT, and used one of the first SQUID detectors, just developed by James E. Zimmerman, a researcher at Ford Motor Company, to again measure MEG signals. This time the signals were almost as clear as those of EEG. This stimulated the interest of physicists who had been looking for uses of SQUIDs. Subsequent to this, various types of spontaneous and evoked MEGs began to be measured.

At first, a single SQUID detector was used to successively measure the magnetic field at a number of points around the subject's head. This was cumbersome, and, in the 1980s, MEG manufacturers began to arrange multiple sensors into arrays to cover a larger area of the head. Present-day MEG arrays are set in a helmet-shaped vacuum flask that typically contains 300 sensors, covering most of the head. In this way, MEGs of a subject or patient can now be accumulated rapidly and efficiently.

Recent developments attempt to increase portability of MEG scanners by using spin exchange relaxation-free (SERF) magnetometers. SERF magnetometers are relatively small, as they do not require bulky cooling systems to operate. At the same time, they feature sensitivity equivalent to that of SQUIDs. In 2012, it was demonstrated that MEG could work with a chip-scale atomic magnetometer (CSAM, type of SERF). More recently, in 2017, researchers built a working prototype that uses SERF magnetometers installed into portable individually 3D-printed helmets, which they noted in interviews could be replaced with something easier to use in future, such as a bike helmet.

The basis of the MEG signal

Synchronized neuronal currents induce weak magnetic fields. The brain's magnetic field, measuring at 10 femtotesla (fT) for cortical activity and 10³ fT for the human alpha rhythm, is considerably smaller than the ambient magnetic noise in an urban environment, which is on the order of 10⁸ fT or 0.1 μT. The essential problem of biomagnetism is, thus, the weakness of the signal relative to the sensitivity of the detectors, and to the competing environmental noise.
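
A rough worked ratio using the figures above shows the scale of the problem: environmental noise exceeds the brain's signal by roughly five to seven orders of magnitude, which is the suppression factor that shielding and noise rejection must jointly provide.

    \frac{B_\text{noise}}{B_\text{signal}} \approx \frac{10^{8}\ \text{fT}}{10^{1}\text{--}10^{3}\ \text{fT}} \approx 10^{5}\text{--}10^{7}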

[Image: origin of the brain's magnetic field. The electric current also produces the EEG signal.]

The MEG (and EEG) signals derive from the net effect of ionic currents flowing in the dendrites of neurons during synaptic transmission. In accordance with Maxwell's equations, any electrical current will produce a magnetic field, and it is this field that is measured. The net currents can be thought of as current dipoles, i.e. currents with a position, orientation, and magnitude, but no spatial extent. According to the right-hand rule, a current dipole gives rise to a magnetic field that points around the axis of its vector component.
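
As a minimal sketch of how such a dipole generates a measurable field (a simplification that ignores the volume currents and conductor geometry handled by real forward models): a current dipole \mathbf{Q} at position \mathbf{r}_Q in an infinite homogeneous medium produces a field of Biot–Savart form,

    \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,
        \frac{\mathbf{Q} \times (\mathbf{r} - \mathbf{r}_Q)}{\lvert \mathbf{r} - \mathbf{r}_Q \rvert^{3}}

In a real head, return (volume) currents and the shape of the conductor modify the field actually seen at the sensors, which is why source localization relies on explicit forward models.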

To generate a signal that is detectable, approximately 50,000 active neurons are needed. Since current dipoles must have similar orientations to generate magnetic fields that reinforce each other, it is often the layer of pyramidal cells, which are situated perpendicular to the cortical surface, that gives rise to measurable magnetic fields. Bundles of these neurons that are orientated tangentially to the scalp surface project measurable portions of their magnetic fields outside of the head, and these bundles are typically located in the sulci. Researchers are experimenting with various signal processing methods in the search for methods that detect deep brain (i.e., non-cortical) signal, but no clinically useful method is currently available.

It is worth noting that action potentials do not usually produce an observable field, mainly because the currents associated with action potentials flow in opposite directions and the magnetic fields cancel out. However, action fields have been measured from the peripheral nervous system.

Magnetic shielding

Since the magnetic signals emitted by the brain are on the order of a few femtoteslas, shielding from external magnetic signals, including the Earth's magnetic field, is necessary. Appropriate magnetic shielding can be obtained by constructing rooms made of aluminium and mu-metal for reducing high-frequency and low-frequency noise, respectively.

[Image: entrance to an MSR, showing the separate shielding layers]

Magnetically shielded room (MSR)

A magnetically shielded room (MSR) model consists of three nested main layers. Each of these layers is made of a pure aluminium layer plus a high-permeability ferromagnetic layer, similar in composition to molybdenum permalloy. The ferromagnetic layer is supplied as 1 mm sheets, while the innermost layer is composed of four sheets in close contact, and the outer two layers are composed of three sheets each. Magnetic continuity is maintained by overlay strips. Insulating washers are used in the screw assemblies to ensure that each main layer is electrically isolated. This helps eliminate radio frequency radiation, which would degrade SQUID performance. Electrical continuity of the aluminium is also maintained by aluminium overlay strips to ensure AC eddy current shielding, which is important at frequencies greater than 1 Hz. The junctions of the inner layer are often electroplated with silver or gold to improve conductivity of the aluminium layers.

Active shielding system

Active systems are designed for three-dimensional noise cancellation. To implement an active system, low-noise fluxgate magnetometers are mounted at the center of each surface and oriented orthogonally to it. Their output is fed back negatively to a DC amplifier through a low-pass network with a slow falloff to minimize positive feedback and oscillation. Built into the system are shaking and degaussing wires. Shaking wires increase the magnetic permeability, while the permanent degaussing wires are applied to all surfaces of the inner main layer to degauss the surfaces. Moreover, noise cancellation algorithms can reduce both low-frequency and high-frequency noise. Modern systems have a noise floor of around 2–3 fT/√Hz above 1 Hz.

Source localization

The inverse problem

The challenge posed by MEG is to determine the location of electric activity within the brain from the induced magnetic fields outside the head. Problems such as this, where model parameters (the location of the activity) have to be estimated from measured data (the SQUID signals) are referred to as inverse problems (in contrast to forward problems where the model parameters (e.g. source location) are known and the data (e.g. the field at a given distance) is to be estimated.) The primary difficulty is that the inverse problem does not have a unique solution (i.e., there are infinite possible "correct" answers), and the problem of defining the "best" solution is itself the subject of intensive research. Possible solutions can be derived using models involving prior knowledge of brain activity.

The source models can be either over-determined or under-determined. An over-determined model may consist of a few point-like sources ("equivalent dipoles"), whose locations are then estimated from the data. Under-determined models may be used in cases where many different distributed areas are activated ("distributed source solutions"): there are infinitely many possible current distributions explaining the measurement results, but the most likely is selected. Localization algorithms make use of given source and head models to find a likely location for an underlying focal field generator.

One type of localization algorithm for overdetermined models operates by expectation-maximization: the system is initialized with a first guess. A loop is started, in which a forward model is used to simulate the magnetic field that would result from the current guess. The guess is adjusted to reduce the discrepancy between the simulated field and the measured field. This process is iterated until convergence.
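
The loop described above can be sketched as follows. This is a minimal illustration rather than the algorithm of any particular MEG package: the forward model is a stand-in (a dipole in an infinite homogeneous medium, measuring only one field component), and the expectation-maximization step is replaced here by a generic nonlinear least-squares fit over the dipole parameters.

    import numpy as np
    from scipy.optimize import minimize

    def forward_model(location, moment, sensor_positions):
        """Hypothetical stand-in forward model: field of a current dipole in an
        infinite homogeneous medium, evaluated at each sensor (no head model)."""
        mu0_over_4pi = 1e-7
        r = sensor_positions - location                     # (n_sensors, 3)
        dist = np.linalg.norm(r, axis=1, keepdims=True)
        b = mu0_over_4pi * np.cross(moment, r) / dist**3
        return b[:, 2]                                      # keep one field component

    def fit_dipole(measured_field, sensor_positions, initial_location):
        """Iteratively adjust a dipole until the simulated field matches the data."""
        def misfit(params):
            location, moment = params[:3], params[3:]
            simulated = forward_model(location, moment, sensor_positions)
            return np.sum((simulated - measured_field) ** 2)
        x0 = np.concatenate([initial_location, np.zeros(3)])
        result = minimize(misfit, x0, method="Nelder-Mead")
        return result.x[:3], result.x[3:]                   # fitted location and moment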

Another common technique is beamforming, wherein a theoretical model of the magnetic field produced by a given current dipole is used as a prior, along with second-order statistics of the data in the form of a covariance matrix, to calculate a linear weighting of the sensor array (the beamformer) via the Backus-Gilbert inverse. This is also known as a linearly constrained minimum variance (LCMV) beamformer. When the beamformer is applied to the data, it produces an estimate of the power in a "virtual channel" at the source location.
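
A hedged sketch of the LCMV weighting described above, assuming a single-orientation lead field vector for the candidate location and a sensor data array are already available (both hypothetical inputs here, normally supplied by a forward model and the recording):

    import numpy as np

    def lcmv_virtual_channel(data, leadfield, regularization=1e-10):
        """Minimal LCMV beamformer sketch for one source location and orientation.

        data      : (n_sensors, n_samples) MEG measurements
        leadfield : (n_sensors,) modeled field of a unit dipole at the target location
        Returns the virtual-channel time course and its power estimate.
        """
        cov = np.cov(data)                                   # sensor covariance matrix
        cov_inv = np.linalg.inv(cov + regularization * np.eye(cov.shape[0]))
        gain = leadfield @ cov_inv @ leadfield               # scalar L^T C^-1 L
        weights = (cov_inv @ leadfield) / gain               # w = C^-1 L / (L^T C^-1 L)
        virtual_channel = weights @ data                     # (n_samples,)
        power = 1.0 / gain                                   # standard LCMV power estimate
        return virtual_channel, power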

The extent to which the constraint-free MEG inverse problem is ill-posed cannot be overemphasized. If one's goal is to estimate the current density within the human brain with, say, 5 mm resolution, then it is well established that the vast majority of the information needed to perform a unique inversion must come not from the magnetic field measurement but rather from the constraints applied to the problem. Furthermore, even when a unique inversion is possible in the presence of such constraints, said inversion can be unstable. These conclusions are easily deduced from published works.

Magnetic source imaging

The source locations can be combined with magnetic resonance imaging (MRI) images to create magnetic source images (MSI). The two sets of data are combined by measuring the location of a common set of fiducial points marked during MRI with lipid markers and marked during MEG with electrified coils of wire that give off magnetic fields. The locations of the fiducial points in each data set are then used to define a common coordinate system so that superimposing the functional MEG data onto the structural MRI data ("coregistration") is possible.
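
The coregistration step can be illustrated with a standard rigid-body (Kabsch/Procrustes) alignment of the matched fiducial coordinates. This is a generic sketch, not the procedure of any specific MEG/MRI software, and the fiducial arrays are assumed inputs:

    import numpy as np

    def rigid_coregistration(fiducials_meg, fiducials_mri):
        """Find rotation R and translation t mapping MEG coordinates to MRI
        coordinates from matched fiducial points (n_points x 3 arrays)."""
        centroid_meg = fiducials_meg.mean(axis=0)
        centroid_mri = fiducials_mri.mean(axis=0)
        a = fiducials_meg - centroid_meg
        b = fiducials_mri - centroid_mri
        u, _, vt = np.linalg.svd(a.T @ b)                    # Kabsch algorithm
        d = np.sign(np.linalg.det(vt.T @ u.T))               # guard against reflections
        rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        translation = centroid_mri - rotation @ centroid_meg
        return rotation, translation                         # x_mri ~= R @ x_meg + t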

A criticism of the use of this technique in clinical practice is that it produces colored areas with definite boundaries superimposed upon an MRI scan: the untrained viewer may not realize that the colors do not represent a physiological certainty but rather a probability cloud derived from statistical processes, in part because of the relatively low spatial resolution of MEG. However, when the magnetic source image corroborates other data, it can be of clinical utility.

Dipole model source localization

A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs), which assumes the underlying neuronal sources to be focal. This dipole fitting procedure is non-linear and over-determined, since the number of unknown dipole parameters is smaller than the number of MEG measurements. Automated multiple dipole model algorithms such as multiple signal classification (MUSIC) and multi-start spatial and temporal modeling (MSST) are applied to the analysis of MEG responses. The limitations of dipole models for characterizing neuronal responses are (1) difficulties in localizing extended sources with ECDs, (2) problems with accurately estimating the total number of dipoles in advance, and (3) dependency on dipole location, especially depth in the brain.
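
A minimal sketch of the subspace-scanning idea behind MUSIC (the grid of candidate lead fields and the choice of signal-subspace dimension are assumptions here, normally supplied by a forward model and a rank estimate):

    import numpy as np

    def music_scan(data, leadfields, n_sources):
        """Rank candidate source locations by correlation with the signal subspace.

        data       : (n_sensors, n_samples) MEG measurements
        leadfields : (n_locations, n_sensors) modeled field per candidate location
        Returns a score in [0, 1] per location; peaks suggest likely dipole sites.
        """
        u, _, _ = np.linalg.svd(data, full_matrices=False)
        signal_subspace = u[:, :n_sources]                    # dominant spatial components
        scores = []
        for lf in leadfields:
            lf = lf / np.linalg.norm(lf)
            projection = signal_subspace.T @ lf
            scores.append(np.linalg.norm(projection))         # subspace correlation
        return np.array(scores)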

Distributed source models

Unlike multiple-dipole modeling, distributed source models divide the source space into a grid containing a large number of dipoles. The inverse problem is to obtain the dipole moments for the grid nodes. As the number of unknown dipole moments is much greater than the number of MEG sensors, the inverse solution is highly underdetermined, so additional constraints are needed to reduce ambiguity of the solution. The primary advantage of this approach is that no prior specification of the source model is necessary. However, the resulting distributions may be difficult to interpret, because they only reflect a "blurred" (or even distorted) image of the true neuronal source distribution. The matter is complicated by the fact that spatial resolution depends strongly on various parameters such as brain area, depth, orientation, number of sensors etc.
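
One commonly used constraint of this kind is the regularized minimum-norm estimate, sketched below. The lead field matrix and regularization value are hypothetical placeholders for what a real forward model and noise estimate would provide:

    import numpy as np

    def minimum_norm_estimate(data, leadfield, regularization=0.1):
        """Distributed inverse sketch: choose the smallest-norm current distribution
        consistent (in a regularized sense) with the measurements.

        data      : (n_sensors, n_samples) MEG measurements
        leadfield : (n_sensors, n_dipoles) forward model for a dense source grid
        Returns estimated dipole moments, shape (n_dipoles, n_samples).
        """
        gram = leadfield @ leadfield.T                        # (n_sensors, n_sensors)
        reg = regularization * np.trace(gram) / gram.shape[0]
        inverse_operator = leadfield.T @ np.linalg.inv(gram + reg * np.eye(gram.shape[0]))
        return inverse_operator @ data                        # J = L^T (L L^T + lambda I)^-1 B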

Independent component analysis (ICA)

Independent component analysis (ICA) is another signal processing solution that separates different signals that are statistically independent in time. It is primarily used to remove artifacts such as blinking, eye muscle movement, facial muscle artifacts, cardiac artifacts, etc. from MEG and EEG signals that may be contaminated with outside noise. However, ICA has poor resolution of highly correlated brain sources.
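
A hedged sketch of the artifact-removal use described above, written with a generic FastICA implementation; the data array and the choice of which components count as artifacts (hard-coded here) are assumptions, since in practice components are identified by inspection or by correlation with ECG/EOG channels:

    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_artifact_components(data, n_components, artifact_indices):
        """Decompose sensor data into independent components, zero out the
        components judged to be artifacts (e.g. blinks, cardiac activity),
        and reconstruct the cleaned sensor signals.

        data : (n_samples, n_sensors) array of MEG/EEG measurements
        """
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(data)             # (n_samples, n_components)
        sources[:, artifact_indices] = 0.0            # drop the artifact components
        return ica.inverse_transform(sources)         # cleaned (n_samples, n_sensors)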

Use in the field

Over 100 MEG systems are known to operate worldwide, with Japan possessing the greatest number of MEG systems per capita and the United States possessing the greatest overall number of MEG systems. A very small number of systems worldwide are designed for infant and/or fetal recordings.

In research, MEG's primary use is the measurement of time courses of activity. MEG can resolve events with a precision of 10 milliseconds or faster, while functional magnetic resonance imaging (fMRI), which depends on changes in blood flow, can at best resolve events with a precision of several hundred milliseconds. MEG also accurately pinpoints sources in primary auditory, somatosensory, and motor areas. For creating functional maps of human cortex during more complex cognitive tasks, MEG is most often combined with fMRI, as the methods complement each other. Neuronal (MEG) and hemodynamic fMRI data do not necessarily agree, in spite of the tight relationship between local field potentials (LFP) and blood oxygenation level-dependent (BOLD) signals. MEG and BOLD signals may originate from the same source (though the BOLD signals are filtered through the hemodynamic response).

MEG is also being used to better localize responses in the brain. The openness of the MEG setup allows external auditory and visual stimuli to be easily introduced. Some movement by the subject is also possible as long as it does not jar the subject's head. The responses in the brain before, during, and after the introduction of such stimuli/movement can then be mapped with greater spatial resolution than was previously possible with EEG. Psychologists are also taking advantage of MEG neuroimaging to better understand relationships between brain function and behavior. For example, a number of studies have been done comparing the MEG responses of patients with psychological troubles to those of control subjects. There has been great success isolating unique responses in patients with schizophrenia, such as auditory gating deficits to human voices. MEG is also being used to correlate standard psychological responses, such as the emotional dependence of language comprehension.

Recent studies have reported successful classification of patients with multiple sclerosis, Alzheimer's disease, schizophrenia, Sjögren's syndrome, chronic alcoholism, facial pain and thalamocortical dysrhythmias. MEG can be used to distinguish these patients from healthy control subjects, suggesting a future role of MEG in diagnostics.

A large part of the difficulty and cost of using MEG is the need for manual analysis of the data. Progress has been made in analysis by computer, comparing a patient's scans with those drawn from a large database of normal scans, with the potential to reduce cost greatly.

Brain connectivity and neural oscillations

Owing to its excellent temporal resolution, magnetoencephalography (MEG) is now heavily used to study oscillatory activity in the brain, both in terms of local neural synchrony and cross-area synchronisation. As an example of local neural synchrony, MEG has been used to investigate alpha rhythms in various targeted brain regions, such as in visual or auditory cortex. Other studies have used MEG to study the neural interactions between different brain regions (e.g., between frontal cortex and visual cortex). Magnetoencephalography can also be used to study changes in neural oscillations across different stages of consciousness, such as in sleep.

Focal epilepsy

The clinical uses of MEG are in detecting and localizing pathological activity in patients with epilepsy, and in localizing eloquent cortex for surgical planning in patients with brain tumors or intractable epilepsy. The goal of epilepsy surgery is to remove the epileptogenic tissue while sparing healthy brain areas. Knowing the exact position of essential brain regions (such as the primary motor cortex and primary sensory cortex, visual cortex, and areas involved in speech production and comprehension) helps to avoid surgically induced neurological deficits. Direct cortical stimulation and somatosensory evoked potentials recorded on electrocorticography (ECoG) are considered the gold standard for localizing essential brain regions. These procedures can be performed either intraoperatively or from chronically indwelling subdural grid electrodes. Both are invasive.

Noninvasive MEG localizations of the central sulcus obtained from somatosensory evoked magnetic fields show strong agreement with these invasive recordings. MEG studies assist in clarifying the functional organization of primary somatosensory cortex and in delineating the spatial extent of hand somatosensory cortex by stimulation of the individual digits. This agreement between invasive localization of cortical tissue and MEG recordings shows the effectiveness of MEG analysis and indicates that MEG may substitute for invasive procedures in the future.

Fetal

MEG has been used to study cognitive processes such as vision, audition, and language processing in fetuses and newborns. Only two bespoke MEG systems, designed specifically for fetal recordings, operate worldwide. The first was installed at the University of Arkansas in 2000, and the second was installed at the University of Tübingen in 2008. Both devices are referred to as SQUID arrays for reproductive assessment (SARA) and utilize a concave sensor array whose shape complements the abdomen of a pregnant woman. Fetal recordings of cortical activity are feasible with a SARA device from a gestational age of approximately 25 weeks onward until birth. Although built for fetal recordings, SARA systems can also record from infants placed in a cradle head-first toward the sensory array. While only a small number of devices worldwide are capable of fetal MEG recordings as of 2023, the proliferation of optically pumped magnetometers for MEG in neuroscience research will likely result in a greater number of research centers capable of recording and publishing fetal MEG data in the near future.

Traumatic brain injury

MEG can be used to identify traumatic brain injury, which is particularly common among soldiers exposed to explosions. Such injuries are not easily diagnosed by other methods, and are often misdiagnosed as post-traumatic stress disorder (PTSD).

Comparison with related techniques

MEG has been in development since the 1960s but has been greatly aided by recent advances in computing algorithms and hardware, and promises improved spatial resolution coupled with extremely high temporal resolution (better than 1 ms). Since the MEG signal is a direct measure of neuronal activity, its temporal resolution is comparable with that of intracranial electrodes.

MEG complements other brain activity measurement techniques such as electroencephalography (EEG), positron emission tomography (PET), and fMRI. Its strengths consist in independence of head geometry compared to EEG (unless ferromagnetic implants are present), non-invasiveness, use of no ionizing radiation (as opposed to PET), and high temporal resolution (as opposed to fMRI).

MEG in comparison to EEG

Although EEG and MEG signals originate from the same neurophysiological processes, there are important differences. Magnetic fields are less distorted than electric fields by the skull and scalp, which results in a better spatial resolution of the MEG. Whereas scalp EEG is sensitive to both tangential and radial components of a current source in a spherical volume conductor, MEG detects only its tangential components. Scalp EEG can, therefore, detect activity both in the sulci and at the top of the cortical gyri, whereas MEG is most sensitive to activity originating in sulci. EEG is, therefore, sensitive to activity in more brain areas, but activity that is visible in MEG can also be localized with more accuracy.

Scalp EEG is sensitive to extracellular volume currents produced by postsynaptic potentials. MEG detects intracellular currents associated primarily with these synaptic potentials because the field components generated by volume currents tend to cancel out in a spherical volume conductor. The decay of magnetic fields as a function of distance is more pronounced than for electric fields. Therefore, MEG is more sensitive to superficial cortical activity, which makes it useful for the study of neocortical epilepsy. Finally, MEG is reference-free, while scalp EEG relies on a reference that, when active, makes interpretation of the data difficult.

Democide

From Wikipedia, the free encyclopedia

Democide refers to "the intentional killing of an unarmed or disarmed person by government agents acting in their authoritative capacity and pursuant to government policy or high command." The term was coined by the political scientist and statistician R. J. Rummel in his book Death by Government, and has been described by the Holocaust historian Yehuda Bauer as a better term than genocide for certain types of mass killing. According to Rummel, this definition covers a wide range of deaths, including forced labor and concentration camp victims, extrajudicial summary killings, and mass deaths due to governmental acts of criminal omission and neglect, such as in deliberate famines like the Holodomor, as well as killings by de facto governments, i.e. killings during a civil war. This definition covers any murder of any number of persons by any government.

Rummel created democide as an extended term to include forms of government murder not covered by genocide. According to Rummel, democide surpassed war as the leading cause of non-natural death in the 20th century.

Definition

Democide is the murder of any person or people by their government, including genocide, politicide, and mass murder. Democide is not necessarily the elimination of entire cultural groups but rather groups within the country that the government feels need to be eradicated for political reasons and due to claimed future threats.

According to Rummel, genocide has three different meanings. The ordinary meaning is murder by government of people due to their national, ethnic, racial or religious group membership. The legal meaning of genocide refers to the international treaty on genocide, the Convention on the Prevention and Punishment of the Crime of Genocide; this also includes nonlethal acts that in the end eliminate or greatly hinder the group. Looking back on history, one can see the different variations of democide that have occurred, but it always consists of acts of killing or mass murder. The generalized meaning of genocide is similar to the ordinary meaning but also includes government killings of political opponents or otherwise intentional murder. In order to avoid confusion over which meaning is intended, Rummel created democide for this third meaning.

In "How Many Did Communist Regimes Murder?", Rummel wrote:

First, however, I should clarify the term democide. It means for governments what murder means for an individual under municipal law. It is the premeditated killing of a person in cold blood, or causing the death of a person through reckless and wanton disregard for their life. Thus, a government incarcerating people in a prison under such deadly conditions that they die in a few years is murder by the state—democide—as would parents letting a child die from malnutrition and exposure be murder. So would government forced labor that kills a person within months or a couple of years be murder. So would government created famines that then are ignored or knowingly aggravated by government action be murder of those who starve to death. And obviously, extrajudicial executions, death by torture, government massacres, and all genocidal killing be murder. However, judicial executions for crimes that internationally would be considered capital offenses, such as for murder or treason (as long as it is clear that these are not fabricated for the purpose of executing the accused, as in communist show trials), are not democide. Nor is democide the killing of enemy soldiers in combat or of armed rebels, nor of noncombatants as a result of military action against military targets.

In his work and research, Rummel distinguished between colonial, democratic, and authoritarian and totalitarian regimes. He defined totalitarianism as follows:

There is much confusion about what is meant by totalitarian in the literature, including the denial that such systems even exist. I define a totalitarian state as one with a system of government that is unlimited constitutionally or by countervailing powers in society (such as by a church, rural gentry, labor unions, or regional powers); is not held responsible to the public by periodic secret and competitive elections; and employs its unlimited power to control all aspects of society, including the family, religion, education, business, private property, and social relationships. Under Stalin, the Soviet Union was thus totalitarian, as was Mao's China, Pol Pot's Cambodia, Hitler's Germany, and U Ne Win's Burma. Totalitarianism is then a political ideology for which a totalitarian government is the agency for realizing its ends. Thus, totalitarianism characterizes such ideologies as state socialism (as in Burma), Marxism-Leninism as in former East Germany, and Nazism. Even revolutionary Moslem Iran since the overthrow of the Shah in 1978–79 has been totalitarian—here totalitarianism was married to Moslem fundamentalism. In short, totalitarianism is the ideology of absolute power. State socialism, communism, Nazism, fascism, and Moslem fundamentalism have been some of its recent raiments. Totalitarian governments have been its agency. The state, with its international legal sovereignty and independence, has been its base. As will be pointed out, mortacracy is the result.

Estimates

In his estimates, Rudolph Rummel relied mostly on historical accounts, an approach that rarely achieves the accuracy of contemporary scholarly estimates. In the case of Mexican democide, Rummel wrote that while "these figures amount to little more than informed guesses", he thought "there is enough evidence to at least indict these authoritarian regimes for megamurder." According to Rummel, his research showed that the death toll from democide is far greater than the death toll from war. After studying over 8,000 reports of government-caused deaths, Rummel estimated that there have been 262 million victims of democide in the last century. According to his figures, six times as many people have died from the actions of people working for governments than have died in battle. One of his main findings was that democracies have much less democide than authoritarian regimes. Rummel argued that there is a relation between political power and democide. Political mass murder grows increasingly common as political power becomes unconstrained. At the other end of the scale, where power is diffuse, checked, and balanced, political violence is a rarity. According to Rummel, "[t]he more power a regime has, the more likely people will be killed. This is a major reason for promoting freedom." Rummel argued that "concentrated political power is the most dangerous thing on earth."

Rummel's estimates, especially of Communist democide, typically span a wide range and cannot be considered determinative. Rummel calculated nearly 43 million deaths due to democide inside and outside the Soviet Union during Stalin's regime. This is much higher than the often-quoted figure of 20 million in the popular press, or a 2010s scholarly figure of 9 million. Rummel responded that the 20 million estimate is based on a figure from Robert Conquest's The Great Terror and that Conquest's qualifier "almost certainly too low" is usually forgotten. For Rummel, Conquest's calculations excluded camp deaths before 1936 and after 1950, executions (1939–1953), the forced population transfers in the Soviet Union (1939–1953), the deportation of minorities within the Soviet Union (1941–1944), and those executed by the Soviet Red Army and the NKVD (the secret police) throughout Eastern Europe after its conquest during 1944–1945. Moreover, the Holodomor, which according to Rummel killed 5 million in 1932–1934, is also not included. According to Rummel, forced labor, executions, and concentration camps were responsible for over one million deaths in the Democratic People's Republic of Korea from 1948 to 1987. After decades of research in the state archives, most scholars say that Stalin's regime killed between 6 and 9 million, considerably fewer than originally thought, while Nazi Germany killed at least 11 million, in line with previous estimates.

Application

Authoritarian and totalitarian regimes

Communist regimes

The concept of democide has been applied by Rummel to Communist regimes. In his book Death by Government and subsequent revisions, Rummel estimated that 148 million people were killed by Communist governments from 1917 to 1987. The list of Communist countries with more than 1 million estimated victims included:

In 1993, Rummel wrote: "Even were we to have total access to all communist archives we still would not be able to calculate precisely how many the communists murdered. Consider that even in spite of the archival statistics and detailed reports of survivors, the best experts still disagree by over 40 percent on the total number of Jews killed by the Nazis. We cannot expect near this accuracy for the victims of communism. We can, however, get a probable order of magnitude and a relative approximation of these deaths within a most likely range." In 1994, Rummel updated his estimate for Communist regimes to about 110 million people, foreign and domestic, killed by Communist democide from 1900 to 1987. In light of additional information about Mao Zedong's culpability in the Great Chinese Famine presented in Mao: The Unknown Story, a 2005 book by Jung Chang and Jon Halliday, Rummel revised his total for Communist democide upward to about 148 million, adopting their estimate of 38 million famine deaths.

Rummel's figures for Communist governments have been criticized both for the methodology he used to arrive at them and for being higher than the figures given by most scholars (for example, The Black Book of Communism estimates the number killed in the USSR at 20 million).

Right-wing authoritarian, fascist, and feudal regimes

Estimates by Rummel for fascist or right-wing authoritarian regimes include:

Estimates for other regime-types include:

Rummel characterized Communist China, Nationalist China, Nazi Germany, and the Soviet Union as "deka-megamurderers" (128,168,000 killed), Cambodia, Japan, Pakistan, Poland, Turkey, Vietnam, and Yugoslavia as the "lesser megamurderers" (19,178,000), and Mexico, North Korea, and feudal Russia as "suspected megamurderers" (4,145,000). Rummel wrote that "even though the Nazis hardly matched the democide of the Soviets and Communist Chinese", they "proportionally killed more".

Colonial regimes

In response to David Stannard's figures about what he terms "the American Holocaust", Rummel estimated that over the centuries of European colonization about 2 million to 15 million American indigenous people were victims of democide, a figure that under Rummel's definition excludes deaths in military battles and unintentional deaths. Rummel wrote that "[e]ven if these figures are remotely true, then this still make this subjugation of the Americas one of the bloodier, centuries long, democides in world history."

  • Rummel stated that his estimate for those killed by colonialism in the 20th century was 50,000,000 persons, revised upwards from his initial estimate of 815,000 dead.

Democratic regimes

While Rummel considered democratic regimes the least likely to commit democide and to engage in wars, per the democratic peace theory, he wrote that

  • "democracies themselves are responsible for some of this democide. Detailed estimates have yet to be made, but preliminarily work suggests that some 2,000,000 foreigners have been killed in cold blood by democracies."

Foreign policy and secret services of democratic regimes "may also carry on subversive activities in other states, support deadly coups, and actually encourage or support rebel or military forces that are involved in democidal activities. Such was done, for example, by the American CIA in the 1953 coup against Iran Prime Minister Mossadeq and the 1973 coup against Chile's democratically elected President Allende by General Pinochet. Then there was the secret support given the military in El Salvador and Guatemala although they were slaughtering thousands of presumed communist supporters, and that of the Contras in their war against the Sandinista government of Nicaragua in spite of their atrocities. Particularly reprehensible was the covert support given to the Generals in Indonesia as they murdered hundreds of thousands of communists and others after the alleged attempted communist coup in 1965, and the continued secret support given to General Agha Mohammed Yahya Khan of Pakistan even as he was involved in murdering over a million Bengalis in East Pakistan (now Bangladesh)."

According to Rummel, examples of democratic democide would include "those killed in indiscriminate or civilian targeted city bombing, as of Germany and Japan in World War II. It would include the large scale massacres of Filipinos during the bloody American colonization of the Philippines at the beginning of this century, deaths in British concentration camps in South Africa during the Boer War, civilian deaths due to starvation during the British blockade of Germany in and after World War I, the rape and murder of helpless Chinese in and around Peking in 1900, the atrocities committed by Americans in Vietnam, the murder of helpless Algerians during the Algerian War by the French, and the unnatural deaths of German prisoners of war in French and American POW camps after World War II."
