
Thursday, July 26, 2018

Bootstrapping our way to an ageless future

September 19, 2007 by Aubrey de Grey
Original link:  http://www.kurzweilai.net/bootstrapping-our-way-to-an-ageless-future
An excerpt from Ending Aging, St. Martin’s Press, Sept. 2007, Chapter 14. 

Biomedical gerontologist Aubrey de Grey expects many people alive today to live to 1000 years of age and to avoid age-related health problems even at that age. In this excerpt from his just-published, much-awaited book, Ending Aging, he explains how.
 
I have a confession to make. In Chapters 5 through 12, where I explained the details of SENS, I elided one rather important fact—a fact that the biologists among my audience will very probably have spotted. I’m going to address that omission in this chapter, building on a line of reasoning that I introduced in an ostensibly quite circumscribed context towards the end of Chapter 9.
It is this: the therapies that we develop in a decade or so in mice, and those that may come only a decade or two later for humans, will not be perfect. Other things being equal, there will be a residual accumulation of damage within our bodies, however frequently and thoroughly we apply these therapies, and we will eventually experience age-related decline and death just as now, only at a greater age. Probably not all that much greater either — probably only 30-50 years older than today.

But other things won’t be equal. In this chapter, I’m going to explain why not—and why, as you may already know from other sources, I expect many people alive today to live to 1000 years of age and to avoid age-related health problems even at that age.

I’ll start by describing why it’s unrealistic to expect these therapies to be perfect.

MUST WE AGE?

A long life in a healthy, vigorous, youthful body has always been one of humanity’s greatest dreams. Recent progress in genetic manipulations and calorie-restricted diets in laboratory animals holds forth the promise that someday science will enable us to exert total control over our own biological aging.

Nearly all scientists who study the biology of aging agree that we will someday be able to substantially slow down the aging process, extending our productive, youthful lives. Dr. Aubrey de Grey is perhaps the most bullish of all such researchers. As has been reported in media outlets ranging from 60 Minutes to The New York Times, Dr. de Grey believes that the key biomedical technology required to eliminate aging-derived debilitation and death entirely—technology that would not only slow but periodically reverse age-related physiological decay, leaving us biologically young into an indefinite future—is now within reach.

In Ending Aging, Dr. de Grey and his research assistant Michael Rae describe the details of this biotechnology. They explain that the aging of the human body, just like the aging of man-made machines, results from an accumulation of various types of damage. As with man-made machines, this damage can periodically be repaired, leading to indefinite extension of the machine’s fully functional lifetime, just as is routinely done with classic cars. We already know what types of damage accumulate in the human body, and we are moving rapidly toward the comprehensive development of technologies to remove that damage. By demystifying aging and its postponement for the nonspecialist reader, de Grey and Rae systematically dismantle the fatalist presumption that aging will forever defeat the efforts of medical science.

Evolution didn’t leave notes

I emphasised in Chapter 3 that the body is a machine, and that that’s both why it ages and why it can in principle be maintained. I made a comparison with vintage cars, which are kept fully functional even 100 years after they were built, using the same maintenance technologies that kept them going 50 years ago when they were already far older than they were ever designed to be. More complex machines can also be kept going indefinitely, though the expense and expertise involved may mean that this never happens in practice because replacing the machine is a reasonable alternative. This sounds very much like a reason to suppose that the therapies we develop to stave off aging for a few decades will indeed be enough to stave it off indefinitely.

But actually that’s overoptimistic. All we can reliably infer from a comparison with man-made machines is that a truly comprehensive panel of therapies, which truly repairs everything that goes wrong with us as a result of aging, is possible in principle— not that it is foreseeable. And in fact, if we look back at the therapies I’ve described in this book, we can see that actually one thing about them is very unlike maintenance of a man-made machine: these therapies strive to minimally alter metabolism itself, and target only the initially inert side-effects of metabolism, whereas machine maintenance may involve adding extra things to the machinery itself (to the fuel or the oil of a car, for example). We can get away with this sort of invasive maintenance of man-made machines because we (well, some of us!) know how they work right down to the last detail, so we can be adequately sure that our intervention won’t have unforeseen side-effects. With the body—even the body of a mouse—we are still profoundly ignorant of the details, so we have to sidestep our ignorance by interfering as little as possible.

What that means for efficacy of therapies is that, as we fix more and more aspects of aging, you can bet that new aspects will be unmasked. These new things—eighth and subsequent items to add to the “seven deadly things” listed in this book—will not be fatal at a currently normal age, because if they were, we’d know about them already. But they’ll be fatal eventually, unless we work out how to fix them too.

It’s not just “eighth things” we have to worry about, either. Within each of the seven existing categories, there are some subcategories that will be easier to fix than others. For example, there are lots of chemically distinct cross-links responsible for stiffening our arteries; some of them may be broken with ALT-711 and related molecules, but others will surely need more sophisticated agents that have not yet been developed. Another example: obviating mitochondrial DNA by putting modified copies of it into the cell’s chromosomes requires gene therapy, and thus far we have no gene therapy delivery system (“vector”) that can safely get into all cells, so for the foreseeable future we’ll probably only be able to protect a subset of cells from mtDNA mutations. Much better vectors will be needed if we are to reach all cells.

In practice, therefore, therapies that rejuvenate 60-year-olds by 20 years will not work so well the second time around. When the therapies are applied for the first time, the people receiving them will have 60 years of “easy” damage (the types that the therapies can remove) and also 60 years of “difficult” damage. But by the time beneficiaries of these therapies have returned to biologically 60 (which, let’s presume, will happen when they’re chronologically about 80), the damage their bodies contain will consist of 20 years of “easy” damage and 80 years of “difficult” damage. Thus, the therapies will only rejuvenate them by a much smaller amount, say ten years. So they’ll have to come back sooner for the third treatment, but that will benefit them even less… and very soon, just like Achilles catching up with the tortoise in Zeno’s paradox, aging will get the better of them. See Figure 1.

Figure 1. The diminishing returns delivered by repeated application of a rejuvenation regime.
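
To see how quickly a fixed-capability therapy loses ground, here is a minimal Python sketch of the scenario just described. The numbers are illustrative assumptions, not figures from the book: damage accrues at a combined rate of one “year” per year, one third of it is “easy” (chosen so that the first treatment rejuvenates a 60-year-old by 20 years, matching the example above), and each treatment removes all the easy damage present.

    # Toy model behind Figure 1: a therapy of fixed capability, applied
    # whenever biological age climbs back to 60. Illustrative numbers only.
    EASY_FRACTION = 1.0 / 3.0   # share of each year's damage the therapy can remove

    easy = difficult = 0.0      # accumulated damage, measured in years of aging
    for chrono in range(1, 121):
        easy += EASY_FRACTION
        difficult += 1.0 - EASY_FRACTION
        if easy + difficult >= 60.0 - 1e-9:   # biologically 60: time for therapy
            print(f"age {chrono}: therapy removes {easy:4.1f} years of damage")
            easy = 0.0                        # all easy damage repaired; the
                                              # difficult damage stays behind

Each successive round buys roughly a third as much rejuvenation as the one before, so the treatments crowd together, and once the difficult damage alone passes the threshold the therapy can no longer hold biological age down at all.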

Back in Chapters 3 and 4 I explained that, contrary to one’s intuition, rejuvenation may actually be easier than retardation. Now it’s time to introduce an even more counterintuitive fact: that, even though it will be much harder to double a middle-aged human’s remaining lifespan than a middle-aged mouse’s, multiplying that remaining lifespan by much larger factors—ten or 30, say—will be much easier in humans than in mice.

The two-speed pace of technology

I’m now going to switch briefly from science to the history of science, or more precisely the history of technology.

It was well before recorded history that people began to take an interest in the possibility of flying: indeed, this may be a desire almost as ancient as the desire to live forever. Yet, with the notable but sadly unreproduced exception of Daedalus and Icarus, no success in this area was achieved until about a century ago. (If we count balloons then we must double that, but really only airships—balloons that can control their direction of travel reasonably well—should be counted, and they only emerged at around the same time as the aircraft.) Throughout the previous few centuries, engineers from Leonardo on devised ways to achieve controlled powered flight, and we must presume that they believed their designs to be only a few decades (at most) from realisation. But they were wrong.

Ever since the Wright brothers flew at Kitty Hawk, however, things have been curiously different. Having mastered the basics, aviation engineers seem to have progressed to ever greater heights (literally as well as metaphorically!) at an almost serenely smooth pace. To pick a representative selection of milestones: Lindbergh flew the Atlantic 24 years after the first powered flight occurred, the first commercial jetliner (the Comet) debuted 22 years after that, and the first supersonic airliner (Concorde) followed after a further 20 years.

This stark contrast between fundamental breakthroughs and incremental refinements of those breakthroughs is, I would contend, typical of the history of technological fields. Further, I would argue that it’s not surprising: both psychologically and scientifically, bigger advances are harder to estimate the difficulty of.

I mention all this, of course, because of what it tells us about the likely future progress of life extension therapies. Just as people were wrong for centuries about how hard it was to fly but eventually cracked it, we’ve been wrong since time immemorial about how hard aging is to combat, but we’ll eventually crack it too. But just as people have been pretty reliably correct about how to make better and better aircraft once they had the first one, we can expect to be pretty reliably correct about how to repair the damage of aging more and more comprehensively once we can do it a little.

That’s not to say it’ll be easy, though. It’ll take time, just as it took time to get from the Wright Flyer to Concorde. And that is why, if you want to live to 1000, you can count yourself lucky that you’re a human and not a mouse. Let me take you through the scenario, step by step.

Suppose we develop Robust Mouse Rejuvenation in 2016, and we take a few dozen two-year-old mice and duly treble their one-year remaining lifespans. That will mean that, rather than dying in 2017 as they otherwise would, they’ll die in 2019. Well, maybe not—in particular, not if we can develop better therapies by 2018 that re-treble their remaining lifespan (which will by now be down to one year again). But remember, they’ll be harder to repair the second time: their overall damage level may be the same as before they received the first therapies, but a higher proportion of that damage will be of types that those first therapies can’t fix. So we’ll only be able to achieve that re-trebling if the therapies we have available by 2018 are considerably more powerful than those that we had in 2016. And to be honest, the chance that we’ll improve the relevant therapies that much in only two years is really pretty slim. In fact, the likely amount of progress in just two years is so small that it might as well be considered zero. Thus, our murine heroes will indeed die in 2019 (or 2020 at best), despite our best efforts.

But now, suppose we develop Robust Human Rejuvenation in 2031, and we take a few dozen 60-year-old humans and duly double their 30-year remaining lifespans. By the time they come back in (say) 2051, biologically 60 again but chronologically 80, they’ll need better therapies, just as the mice did in 2018. But luckily for them, we’ll have had not two but twenty years to improve the therapies. And 20 years is a very respectable period of time in technology—long enough, in fact, that we will with very high probability have succeeded in developing sufficient improvements to the 2031 therapies so that those 80-year-olds can indeed be restored from biologically 60 to biologically 40, or even a little younger, despite their enrichment (relative to 2031) in harder-to-repair types of damage. So unlike the mice, these humans will have just as many years (20 or more) of youth before they need third-generation treatments as they did before the second.

And so on …. See Figure 2.

Figure 2. How the diminishing returns depicted in Figure 1 are avoided by repeated application of a rejuvenation regime that is sufficiently more effective each time than the previous time.
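
A companion sketch for Figure 2, adding the assumption (quantified later in this chapter) that the unrepairable fraction of damage halves with every 40 years of progress. The 2031 start date, the one-third repairable share of the first-generation therapy, and re-treatment at biological age 60 match the examples above; everything else is illustrative.

    # Toy model behind Figure 2: therapies whose reach improves with time.
    def unrepairable_fraction(year):
        # 2/3 of damage resists the first (2031) therapy; that fraction
        # halves with every 40 years of progress (illustrative assumption).
        return (2.0 / 3.0) * 0.5 ** ((year - 2031.0) / 40.0)

    birth_year = 1971.0     # so the patient is 60 when therapy arrives in 2031
    year = 2031.0
    while year < 2200.0:
        bio_after = (year - birth_year) * unrepairable_fraction(year)
        print(f"{year:6.1f}: treated at bio 60, leaves biologically {bio_after:4.1f}")
        year += 60.0 - bio_after    # next visit when biological age hits 60 again

Because the exponential improvement in the therapies outruns the linear accumulation of chronological age, the residual damage after each treatment shrinks instead of growing: the patient has escaped. A mouse re-treated every year or two gets essentially the same therapy each time, with no time for improvement, so its residual climbs instead.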

Longevity Escape Velocity

The key conclusion of the logic I’ve set out above is that there is a threshold rate of biomedical progress that will allow us to stave off aging indefinitely, and that that rate is implausible for mice but entirely plausible for humans. If we can make rejuvenation therapies work well enough to give us time to make them work better, that will give us enough additional time to make them work better still, which will … you get the idea. This will allow us to escape age-related decline indefinitely, however old we become in purely chronological terms. I think the term “longevity escape velocity” (LEV) sums that up pretty well.[1]

One feature of LEV that’s worth pointing out is that we can accumulate lead-time. What I mean is that if we have a period in which we improve the therapies faster than we need to, that will allow us to have a subsequent period in which we don’t improve them so fast. It’s only the average rate of improvement, starting from the arrival of the first therapies that give us just 20 or 30 extra years, that needs to stay above the LEV threshold.

In case you’re having trouble assimilating all this, let me describe it in terms of the physical state of the body. Throughout this book, I’ve been discussing aging as the accumulation of molecular and cellular “damage” of various types, and I’ve highlighted the fact that a modest quantity of damage is no problem—metabolism just works around it, in the same way that a household only needs to put out the garbage once a week, not every hour. In those terms, the attainment and maintenance of escape velocity simply means that our best therapies must improve fast enough to outweigh the progressive shift in the composition of our aging damage to more repair-resistant forms, as the forms that are easier to repair are progressively eliminated by our therapies. If we can do this, the total amount of damage in each category can be kept permanently below the level that initiates functional decline.

Another, perhaps simpler, way of looking at this is to consider the analogy with literal escape velocity, i.e. the overcoming of gravity. Suppose you’re at the top of a cliff and you jump off. Your remaining life expectancy is short—and it gets shorter as you descend to the rocks below. This is exactly the same as with aging: the older you get, the less remaining time you can expect to live. The situation with the periodic arrival of ever better rejuvenation therapies is then a bit like jumping off a cliff with a jet-pack on your back. Initially the jetpack is turned off, but as you fall, you turn it on and it gives you a boost, slowing your fall. As you fall further, you turn up the power on the jetpack, and eventually you start to pull out of the dive and even start shooting upwards. And the further up you go, the easier it is to go even further.

The political and social significance of discussing LEV

I’ve had a fairly difficult time convincing my colleagues in biogerontology of the feasibility of the various SENS components, but in general I’ve been successful once I’ve been given enough time to go through the details. When it comes to LEV, on the other hand, the reception to my proposals can best be described as blank incomprehension. This is not too surprising, in hindsight, because the LEV concept is even further distant from the sort of scientific thinking that my colleagues normally do than my other ideas are: it’s not only an area of science that’s distant from mainstream gerontology, it’s not even science at all in the strict sense, but rather the history of technology. But I regard that as no excuse. The fact is, the history of technology is evidence, just like any other evidence, and scientists have no right to ignore it.

Another big reason for my colleagues’ resistance to the LEV concept is, of course, that if I’m seen to be right that achievement of LEV is foreseeable, they can no longer go around saying that they’re working on postponing aging by a decade or two but no more. As I outlined in Chapter 13, there is an intense fear within the senior gerontology community of being seen as having anything to do with radical life extension, with all the uncertainties that it will surely herald. They want nothing to do with such talk.

You might think that my reaction to this would be to focus on the short term: to avoid antagonising my colleagues with the LEV concept and its implications of four-digit lifespans, in favour of increased emphasis on the fine details of getting the SENS strands to work in a first-generation form. But this is not an option for me, for one very simple and incontrovertible reason: I’m in this business to save lives. In order to maximise the number of lives saved—healthy years added to people’s lives, if you’d prefer a more precise measure—I need to address the whole picture. And that means ensuring that you, dear reader—the general public—appreciate the importance of this work enough to motivate its funding.

Now, your first thought may be: hang on, if indefinite life extension is so unpalatable, wouldn’t funding be attracted more easily by keeping quiet about it? Well, no—and for a pretty good reason.

The world’s richest man, Bill Gates, set up a foundation a few years ago whose primary mission is to address health issues in the developing world.[2] This is a massively valuable humanitarian effort, which I wholeheartedly support, even though it doesn’t directly help SENS at all. I’m not the only person who supports it, either: in 2006 the world’s second richest man, Warren Buffett, committed a large proportion of his fortune to be donated in annual increments to the Gates Foundation.[3]

The eagerness of extremely wealthy individuals to contribute to world health is, in more general terms, an enormous boost for SENS. This is mainly because a rising tide raises all boats: once it has become acceptable (even meritorious) among that community to be seen as a large-scale health philanthropist, those with “only” a billion or two to their name will be keener to join the trend than if it is seen as a crazy way to spend your hard-earned money.

But there’s a catch. That logic only works if the moral status of SENS is seen to compare with that of the efforts that are now being funded so well. And that’s where LEV makes all the difference.

SENS therapies will be expensive to develop and expensive to administer, at least at first. Let’s consider how the prospect of spending all that money might be received if the ultimate benefit would be only to add a couple of decades to the lives of people who are already living longer than most in the developing world, after which those people would suffer the same duration of functional decline that they do now.

It’s not exactly the world’s most morally imperative action, is it?

Indeed, I would go so far as to say that, if I were in control of a few billion dollars, I would be quite hesitant to spend it on such a marginal improvement in the overall quality and quantity of life of those who are already doing better in that respect than most, when the alternative exists of making a similar or greater improvement to the quality and quantity of life of the world’s less fortunate inhabitants.

The LEV concept doesn’t make much difference in the short term to who would benefit from these therapies, of course: it will necessarily be those who currently die of aging, so in the first instance it will predominantly be those in wealthy nations. But there is a very widespread appreciation in the industrialised world—an appreciation that, I feel, extends to the wealthy sectors of society—that progress in the long term relies on aiming high, and in particular that the moral imperative to help those at the rear of the field to catch up is balanced by the moral imperative to maximise the average rate of progress across the whole population, which initially means helping those who are already ahead. The fact that SENS is likely to lead to LEV means that developing SENS gives a huge boost to the quality and quantity of life of whoever receives it: so huge, in fact, that there is no problem justifying it in comparison with the alternative uses to which a similar sum of money might be put. The fact that lifespan is extended indefinitely rather than by only a couple of decades is only part of the difference that LEV makes, of course: arguably an even more important difference in terms of the benefit that SENS gives is that the whole of that life will be youthful, right up until a beneficiary mistimes the speed of an oncoming truck. The average quality of life, therefore, will rise much more than if all that was in prospect were a shift from (say) 7:1 to 9:1 in the ratio of healthy life to frail life.

Quantifying longevity escape velocity more precisely

This chapter has, I hope, closed down the escape routes that might still have remained for those seeking ways to defend a rejection of the SENS agenda. I have shown that SENS can be functionally equivalent to a way to eliminate aging completely, even though in actual therapeutic terms it will only be able to postpone aging by a finite amount at any given moment in time. I’ve also shown that this makes it morally just as desirable—imperative, even—as the many efforts into which a large amount of private philanthropic funding is already being injected.

I’m not complacent though: I know that people are quite ingenious when it comes to finding ways to avoid combating aging. Thus, in order to keep a few steps ahead, I have recently embarked on a collaboration with a stupendous programmer and futurist named Chris Phoenix, in which we are determining the precise degree of healthy life extension that one can expect from a given rate of progress in improving the SENS therapies. This is leading to a series of publications highlighting a variety of scenarios, but the short answer is that no wool has been pulled over your eyes above: the rate of progress we need to achieve starts out at roughly a doubling of the efficacy of the SENS therapies every 40 years and actually declines thereafter. By “doubling of efficacy” I mean a halving of the amount of damage that still cannot be repaired.
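
That rate has a neat consequence worth checking: if damage accrues at a constant rate and the unrepairable fraction halves every 40 years, the lifetime total of unrepairable damage is a convergent series, bounded no matter how long you live. A quick Python check, reusing the illustrative starting fraction of two-thirds from the sketches above:

    # If u(t) = u0 * 2**(-t/40) is the fraction of each year's damage that
    # still cannot be repaired t years after the first therapies arrive, the
    # total unrepairable damage ever accumulated is a convergent sum.
    import math

    u0, halving = 2.0 / 3.0, 40.0
    total = sum(u0 * 0.5 ** (t / halving) for t in range(100_000))
    print(f"summed over 100,000 years: {total:.1f} units")               # ~38.8
    print(f"closed form u0*40/ln 2:    {u0 * halving / math.log(2):.1f}") # ~38.5

About 39 “years” worth of unrepairable damage in total, accrued ever more slowly, which is why a declining rate of progress can still suffice.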

So there you have it. We will almost certainly take centuries to reach the level of control over aging that we have over the aging of vintage cars—totally comprehensive, indefinite maintenance of full function—but because longevity escape velocity is not very fast, we will probably achieve something functionally equivalent within only a few decades from now, at the point where we have therapies giving middle-aged people 30 extra years of youthful life.

I think we can call that the fountain of youth, don’t you?



Notes

1. I first used the phrase “escape velocity” in print in the paper arising from the second SENS workshop—de Grey ADNJ, Baynes JW, Berd D, Heward CB, Pawelec G, Stock G. Is human aging still mysterious enough to be left only to scientists? BioEssays 2002;24(7):667-676. My first thorough description of the concept, however, didn’t appear until two years later: de Grey ADNJ. Escape velocity: why the prospect of extreme human life extension matters now. PLoS Biology 2004;2(6):e187.

2. Gates disburses these funds through the Bill and Melinda Gates Foundation, http://www.gatesfoundation.org

3. Buffett’s decision to donate most of his wealth to the Gates Foundation was announced in June 2006 and is the largest act of charitable giving in United States history.

© 2007 Aubrey de Grey
Ending Aging by Aubrey de Grey with Michael Rae, St. Martin’s Press, Sept. 4, 2007, ISBN: 0312367066

Rent-seeking

From Wikipedia, the free encyclopedia

In public choice theory and in economics, rent-seeking involves seeking to increase one's share of existing wealth without creating new wealth. Rent-seeking results in reduced economic efficiency through poor allocation of resources, reduced actual wealth-creation, lost government revenue, increased income inequality, and (potentially) national decline.

Attempts at capture of regulatory agencies to gain a coercive monopoly can result in advantages for the rent seeker in a market while imposing disadvantages on (incorrupt) competitors. This constitutes one of many possible forms of rent-seeking behavior.

Description

The idea of rent-seeking was developed by Gordon Tullock in 1967,[2] while the expression rent-seeking itself was coined in 1974 by Anne Krueger.[3] The word "rent" does not refer specifically to payment on a lease but rather to Adam Smith's division of incomes into profit, wage, and rent.[4] The origin of the term refers to gaining control of land or other natural resources.

Georgist economic theory describes rent-seeking in terms of land rent, where the value of land largely comes from government infrastructure and services (e.g. roads, public schools, maintenance of peace and order, etc.) and the community in general, rather than from the actions of any given landowner, in their role as mere titleholder. This role must be separated from the role of a property developer, which need not be the same person.

Rent-seeking is an attempt to obtain economic rent (i.e., the portion of income paid to a factor of production in excess of what is needed to keep it employed in its current use) by manipulating the social or political environment in which economic activities occur, rather than by creating new wealth. Rent-seeking implies extraction of uncompensated value from others without making any contribution to productivity. The classic example of rent-seeking, according to Robert Shiller, is that of a feudal lord who installs a chain across a river that flows through his land and then hires a collector to charge passing boats a fee (or rent of the section of the river for a few minutes) to lower the chain. There is nothing productive about the chain or the collector. The lord has made no improvements to the river and is not adding value in any way, directly or indirectly, except for himself. All he is doing is finding a way to make money from something that used to be free.[5]

In many market-driven economies, much of the competition for rents is legal, regardless of harm it may do to an economy. However, some rent-seeking competition is illegal – such as bribery or corruption.

Rent-seeking is distinguished in theory from profit-seeking, in which entities seek to extract value by engaging in mutually beneficial transactions.[6] Profit-seeking in this sense is the creation of wealth, while rent-seeking is "profiteering" by using social institutions, such as the power of the state, to redistribute wealth among different groups without creating new wealth.[7] In a practical context, income obtained through rent-seeking may contribute to profits in the standard, accounting sense of the word.

Tullock paradox

The Tullock paradox refers to the apparent paradox, described by Tullock, of the low costs of rent-seeking relative to the gains from rent-seeking.[8][9]

The paradox is that rent-seekers wanting political favors can bribe politicians at a cost much lower than the value of the favor to the rent-seeker. For instance, a rent-seeker who hopes to gain a billion dollars from a particular political policy may need to bribe politicians only to the tune of ten million dollars, which is about 1% of the gain to the rent-seeker. Luigi Zingales frames it by asking, "Why is there so little money in politics?" A naive model of political bribery and campaign spending would predict that beneficiaries of government subsidies should be willing to spend an amount up to the value of the subsidies themselves, when in fact only a small fraction of that is spent.

Possible explanations

Several possible explanations have been offered for the Tullock paradox:[10]
  1. Voters may punish politicians who take large bribes, or live lavish lifestyles. This makes it hard for politicians to demand large bribes from rent-seekers.
  2. Competition between different politicians eager to offer favors to rent-seekers may bid down the cost of rent-seeking.
  3. Lack of trust between the rent-seekers and the politicians, due to the inherently underhanded nature of the deal and the unavailability of both legal recourse and reputational incentives to enforce compliance, pushes down the price that politicians can demand for favors.

Examples

An example of rent-seeking in a modern economy is spending money on lobbying for government subsidies in order to be given wealth that has already been created, or to impose regulations on competitors, in order to increase market share.[11] Another example of rent-seeking is the limiting of access to lucrative occupations, as by medieval guilds or modern state certifications and licensures. Taxi licensing is a textbook example of rent-seeking.[12] To the extent that the issuing of licenses constrains the overall supply of taxi services (rather than ensuring competence or quality), forbidding competition by livery vehicles, unregulated taxis, and/or illegal taxis renders the (otherwise consensual) transaction of taxi service a forced transfer of part of the fee from customers to taxi business proprietors.

The concept of rent-seeking would also apply to corruption of bureaucrats who solicit and extract "bribe" or "rent" for applying their legal but discretionary authority for awarding legitimate or illegitimate benefits to clients.[13] For example, tax officials may take bribes for lessening the tax burden of the taxpayers.

Regulatory capture is a related term for the collusion between firms and the government agencies assigned to regulate them, which is seen as enabling extensive rent-seeking behavior, especially when the government agency must rely on the firms for knowledge about the market. Studies of rent-seeking focus on efforts to capture special monopoly privileges such as manipulating government regulation of free enterprise competition.[14] The term monopoly privilege rent-seeking is an often-used label for this particular type of rent-seeking. Often-cited examples include a lobby that seeks economic regulations such as tariff protection, quotas, subsidies,[15] or extension of copyright law.[16] Anne Krueger concludes that "empirical evidence suggests that the value of rents associated with import licenses can be relatively large, and it has been shown that the welfare cost of quantitative restrictions equals that of their tariff equivalents plus the value of the rents".[17]

Economists such as Lord Adair Turner, chair of the British financial regulator the Financial Services Authority, have argued that innovation in the financial industry is often a form of rent-seeking.[18][19]

Development of theory

The phenomenon of rent-seeking in connection with monopolies was first formally identified in 1967 by Gordon Tullock.[20]

Recent studies have shown that the incentives for policy-makers to engage in rent-provision are conditional on the institutional incentives they face, with elected officials in stable high-income democracies the least likely to indulge in such activities vis-à-vis entrenched bureaucrats and/or their counterparts in young and quasi-democracies.[21]

Criticism

Critics of the concept point out that, in practice, there may be difficulties distinguishing between beneficial profit-seeking and detrimental rent-seeking.[22]

Often a further distinction is drawn between rents obtained legally through political power and the proceeds of private common-law crimes such as fraud, embezzlement and theft. This viewpoint sees "profit" as obtained consensually, through a mutually agreeable transaction between two entities (buyer and seller), and the proceeds of common-law crime non-consensually, by force or fraud inflicted on one party by another. Rent, by contrast with these two, is obtained when a third party deprives one party of access to otherwise accessible transaction opportunities, making nominally "consensual" transactions a rent-collection opportunity for the third party. The high profits of the illegal drug trade are considered rents by this definition, as they are neither legal profits nor the proceeds of common-law crimes.

People accused of rent-seeking typically argue that they are indeed creating new wealth (or preventing the reduction of old wealth) by improving quality controls, guaranteeing that charlatans do not prey on a gullible public, and preventing bubbles.

Possible consequences

From a theoretical standpoint, the moral hazard of rent-seeking can be considerable. If "buying" a favorable regulatory environment seems cheaper than building more efficient production, a firm may choose the former option, reaping incomes entirely unrelated to any contribution to total wealth or well-being. This results in a sub-optimal allocation of resources – money spent on lobbyists and counter-lobbyists rather than on research and development, on improved business practices, on employee training, or on additional capital goods – which retards economic growth. Claims that a firm is rent-seeking therefore often accompany allegations of government corruption, or the undue influence of special interests.[23]

Rent-seeking can prove costly to economic growth; high rent-seeking activity makes more rent-seeking attractive because of the natural and growing returns that one sees as a result of rent-seeking. Thus organizations value rent-seeking over productivity. In this case there are very high levels of rent-seeking with very low levels of output.[citation needed] Rent-seeking may grow at the cost of economic growth because rent-seeking by the state can easily hurt innovation. Ultimately, public rent-seeking hurts the economy the most because innovation drives economic growth.[24]

Government agents may initiate rent-seeking – such agents soliciting bribes or other favors from the individuals or firms that stand to gain from having special economic privileges, which opens up the possibility of exploitation of the consumer.[25] It has been shown that rent-seeking by bureaucracy can push up the cost of production of public goods.[26] It has also been shown that rent-seeking by tax officials may cause loss in revenue to the public exchequer.[13]

Mancur Olson traced the historic consequences of rent seeking in The Rise and Decline of Nations. As a country becomes increasingly dominated by organized interest groups, it loses economic vitality and falls into decline. Olson argued that countries that have a collapse of the political regime and the interest groups that have coalesced around it can radically improve productivity and increase national income because they start with a clean slate in the aftermath of the collapse. An example of this is Japan after World War Two. But new coalitions form over time, once again shackling society in order to redistribute wealth and income to themselves. However, social and technological changes have allowed new enterprises and groups to emerge in the past.[27]

A study by David Laband and John Sophocleus in 1988[28] estimated that rent-seeking had decreased total income in the USA by 45 percent. Both Dougan and Tullock affirm the difficulty of finding the cost of rent-seeking. Rent-seekers of government-provided benefits will in turn spend up to the value of those benefits in order to gain them, in the absence of, for example, the collective-action constraints highlighted by Olson. Similarly, taxpayers lobby for loopholes and will spend up to the value of those loopholes to obtain them (again absent collective-action constraints). The total waste from rent-seeking is then the total spent pursuing government-provided benefits and tax avoidance (valuing the benefits and avoided taxes themselves at zero). Dougan says that the "total rent-seeking costs equal the sum of aggregate current income plus the net deficit of the public sector".[29]

Mark Gradstein writes about rent-seeking in relation to public goods provision, and says that public goods are determined by rent seeking or lobbying activities. But the question is whether private provision with free-riding incentives or public provision with rent-seeking incentives is more inefficient in its allocation.[30]

The economist Joseph Stiglitz has argued that rent-seeking contributes significantly to income inequality in the United States through lobbying for government policies that let the wealthy and powerful get income, not as a reward for creating wealth, but by grabbing a larger share of the wealth that would otherwise have been produced without their effort.[31][32] Piketty, Saez, and Stantcheva have analyzed international economies and their changes in tax rates to conclude that much of income inequality is a result of rent-seeking among wealthy taxpayers.

Communicating with the universe

 
Over the next million years, a descendant of the Internet will maintain contact with inhabited planets throughout our galaxy and begin to spread out into the larger universe, linking up countless new or existing civilizations into the Universenet, a network of ultimate intelligence. (updated)

July 4, 2010 by Amara D. Angelica
Original link:  http://www.kurzweilai.net/communicating-with-the-universe
Originally published in Year Million: Science at the Far Edge of Knowledge.



The Earth has already input information to the Universenet. Whenever microwave towers or satellites send Internet traffic, some of the energy leaks off, transmitting data unintentionally into space. The first email messages transmitted via microwave towers in 1969 by the predecessor of the Internet, ARPANET, have (theoretically) traveled thirty-nine light-years so far, way past the nearest star system, Alpha Centauri, four light-years away. In practice, such feeble signals are probably buried in cosmic radio noise.

Now NASA plans to do it intentionally. The Interplanetary Internet (IPI) should allow NASA to link up the Internets of Earth, spacecraft, and eventually Moon, Mars, and beyond.[1] By the Year Million, billions of “smart dust” sensors will be connected to a distant descendant of the IPI, exchanging data in real time or via store-and-forward protocol or wireless mesh (a network that handles many-to-many connections and is capable of dynamically updating and optimizing these connections), on planets and in spacecraft.[2]
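
The store-and-forward idea is simple enough to sketch in a few lines of Python. This is a toy illustration of the principle only, not NASA’s actual Bundle Protocol: each node banks incoming data until a link to the next hop opens, so no end-to-end path ever needs to exist at any one moment.

    # Toy store-and-forward relay in the spirit of the Interplanetary
    # Internet (illustrative only; not the real DTN Bundle Protocol).
    from collections import deque

    class Node:
        def __init__(self, name):
            self.name = name
            self.stored = deque()               # bundles waiting for a link

        def receive(self, bundle):
            self.stored.append(bundle)          # store...

        def forward(self, next_hop, link_up):
            while link_up and self.stored:      # ...and forward when possible
                next_hop.receive(self.stored.popleft())

    earth, orbiter, mars = Node("Earth"), Node("Orbiter"), Node("Mars")
    earth.receive("command: image crater 47")
    earth.forward(orbiter, link_up=True)        # Earth-orbiter window is open
    orbiter.forward(mars, link_up=False)        # orbiter-Mars link down: bundle waits
    orbiter.forward(mars, link_up=True)         # link restored: bundle delivered
    print(list(mars.stored))                    # ['command: image crater 47']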

Meanwhile, one important near-future use will be for tracking asteroids, comets, and space junk, exchanging three-dimensional position location and time data (similar to GPS on Earth) via multiple hops between sensors. Once affordable personal space travel is available, the IPI could serve as the core of an interplanetary version of air traffic control. The IPI scheme could also become the standard communications protocol as we expand out beyond the solar system’s planets, and then beyond the stars and to other galaxies. We could start with possibly habitable planets beyond the solar system, such as Gliese 581d, the third planet of the red dwarf star Gliese 581 (about twenty light-years away from Earth), if we detect signs of intelligent life there.

But using radio waves or lasers to communicate with civilizations around other stars, let alone in other galaxies, requires huge amounts of energy. Exactly how much energy? That depends mainly on distance, frequency, directional efficiency of antennas, and the assumed ability of the receiving civilization to detect signals amid the extreme electromagnetic noise of space. In 1974, the Arecibo telescope beamed a 210-byte radio message aimed at the globular star cluster M13, some twenty-five thousand light-years away. It was transmitted with a power of one megawatt (enough energy to power about one thousand homes), using a narrow beam to achieve an EIRP (effective isotropic radiated power) of 20 trillion watts. That made it the strongest human-made signal ever sent. (It has gone 0.14 percent of the way, so far.)
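
Those numbers are easy to sanity-check. In the sketch below, the 2,380 MHz transmitter frequency and the roughly 50 percent aperture efficiency are standard published values for the 1974 Arecibo message, assumptions added here rather than taken from the text above.

    # Back-of-envelope check of the Arecibo EIRP and trip-distance figures.
    import math

    freq = 2.38e9                        # Hz, Arecibo's planetary-radar band
    wavelength = 3.0e8 / freq            # ~0.126 m
    diameter = 305.0                     # m, the Arecibo dish
    efficiency = 0.5                     # assumed aperture efficiency
    gain = efficiency * (math.pi * diameter / wavelength) ** 2
    eirp = 1.0e6 * gain                  # 1 MW into the dish
    print(f"EIRP ~ {eirp:.1e} W")        # ~2.9e13 W, the tens-of-trillions order quoted
    print(f"fraction travelled: {(2008 - 1974) / 25000:.2%}")   # ~0.14%, as quoted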

Arecibo uses a large dish. Another way to create a narrow beam of high-power microwave radio energy is to build a phased-array antenna with multiple dishes spread out over a large area. These could be located on the Moon or at a Lagrange point (one of the gravitationally stable locations in the Earth-Moon-Sun system). Or a high-powered laser could be used. How highly powered? Looking toward the Year Million, as we reach out to communication nodes orbiting more distant stars, or in other galaxies, we will need to use a lot of power, as much as the entire power of the Sun. A civilization able to do that kind of cosmic engineering is referred to as Kardashev Type II, or KT-II.

By modest contrast, our civilization used power at a rate of about fifteen terawatts in 2004 (a terawatt is one billion kilowatts).[3] New York University Physics Professor Emeritus Martin Hoffert and other scientists calculate that if our power consumption grows by just two percent per year, then in just four hundred years we will need all the solar power received by the Earth (10^16 watts = 10,000 terawatts). And in a thousand years, we’ll require all of the power of the Sun (4×10^26 watts).[4] Hoffert and other scientists propose space-based solar power as one major future solution. Solar flux is eight times higher in space than the surface average on cloudy Earth, and it is available 24 hours a day, unlike solar energy panels on Earth. Power satellites located in geosynchronous orbit (like communication satellites) would use a bank of photovoltaic receptors to convert the Sun’s energy to radio waves. This energy would be beamed wirelessly down to a large “rectenna” (rectifying antenna), where the incoming microwave energy would be rectified (converted) into electricity for distribution on the power grid on Earth. Alternatively, laser beams could replace radio-frequency signals.[5] Once the infrastructure is in place for economically launching space-based solar power satellites, the same types of microwave or laser systems could be aimed at the stars for communicating elsewhere.
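
The growth arithmetic is worth making explicit. Under the stated assumptions (roughly fifteen terawatts in 2004, compounding at two percent a year), a few lines of Python reproduce the order of magnitude of these projections; the exact year counts depend on the baseline chosen.

    # Years of 2%/year growth needed to reach astronomical power levels,
    # starting from ~15 TW (illustrative baseline).
    import math

    start = 1.5e13                       # W, civilization circa 2004
    rate = 0.02
    targets = [(1.0e16, "all solar power reaching Earth"),
               (4.0e26, "entire output of the Sun")]
    for power, label in targets:
        years = math.log(power / start) / math.log(1.0 + rate)
        print(f"{label}: ~{years:.0f} years")
    # ~330 and ~1,560 years with these inputs; the projections quoted above
    # evidently assume a somewhat different baseline or growth rate.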

Eventually, when we have become first a KT-I and then a KT-II civilization, we will reach even farther out, to supergalaxies and even to clusters of supergalaxies, which could require a Type III civilization: one capable of controlling the power of an entire galaxy, some 10^36 watts. The communication latencies (transmission delays) for such a system would be millions or even hundreds of millions of years. (Two-way latency is already a problem for astronauts in the solar system, increasing as we transmit information to places farther from the Earth, or wherever humans and posthumans end up, perhaps uploaded into a Matrioshka brain that will have replaced the existing solar system.) Even the nearest star, Alpha Centauri, could not reply to a message sooner than eight years after it was sent. Talk about bad netiquette.

Possibly the denizens of the Year Million will solve this time lag with extreme cosmic engineering feats such as wormholes, or even communication via parallel universes.[6] One intriguing possibility is the use of quantum entanglement: allowing an entangled atom or photon to carry information across a distance, theoretically anywhere in the universe (once the initial photons have been received), or “spooky action at a distance,” as Einstein called it.[7] An experiment testing the possibility of communication using this principle is in progress in the Laser Physics Facility at the University of Washington, led by Professor John G. Cramer.[8] Cramer astonished physicists at a joint American Institute of Physics/American Association for the Advancement of Science conference in 2006 by presenting experimental evidence that the outcome of a laser experiment could be affected by a future measurement: a message was sent to a time fifty microseconds in the past.[9] This leads to an even more bizarre idea: retrocausal communication, the future affecting the past, as theoretical physicist Jack Sarfatti (the inspiration for Doc in the movie Back to the Future) has proposed.[10] So in principle, perhaps one could bypass the speed-of-light limitation and have messages show up in a distant galaxy long before they could have been received by radio or laser transmission, or even before they were sent!

Web to ET: Download This

Humans might not be the first technological species to explore the galaxy. Suppose alien probes await us in orbit or on the Moon (like the obelisk in Arthur C. Clarke’s “The Sentinel” and its movie version, 2001: A Space Odyssey) or at Lagrange points.[11] If so, we might only need to respond with the right signals to trigger a connection-similar to logging on to an FTP server with the right IP address, user name, and password. Such probes might even now be scattered around the solar system as smart dust particles that we haven’t yet analyzed. IBM has developed a prototype of a molecular switch that could replace current silicon-based chip technology with atom-based processors, making it theoretically possible to run a supercomputer on a chip the size of a speck of dust. IBM is also developing technology to store a bit on a single atom, portending hard drives that can pack up to a thousand times as much information on a hard disk as current technologies.[12]

Instead of transmitting via radio or laser, sending a physical data spore might be a simpler and more effective alternative. Rutgers University electrical engineer Christopher Rose has shown that for long messages conveyed across long distances (where transmitting a signal would be extremely expensive, have limited range, or be too hard to find), it is more effective to send physical messages than to transmit them. That was one rationale for sending the greeting plaques on Pioneers 10 and 11 in 1972 and 1973, and a more complex inscribed disk on the Voyager probes in 1977. Rose thinks there could be such inscribed objects now orbiting planets in our solar system, or on asteroids.[13]

But transmitting information into space still fires up the imagination of several scientists. SETI senior astronomer Seth Shostak has proposed that, rather than sending simple coded messages, we simply feed the Google servers into the transmitter and send the aliens the entire Web. It would take about half a year to transmit the Web in the microwave region at one megabyte a second; infrared lasers operating at a gigabyte per second would shorten the broadcast time to no more than two days.[14] Transmitting the Web into space could also serve as a backup for civilization. William E. Burrows has suggested creating a self-sufficient colony on the Moon where a “backup drive” could store the history and wisdom of civilization in case a calamity strikes Earth.[15] To achieve this, Burrows set up an organization, the Alliance to Rescue Civilization (ARC), subsequently absorbed by the Lifeboat Foundation, which is developing solutions to prevent the extinction of mankind. Acquiring knowledge from ancient extraterrestrial civilizations could be critical to our long-term human survival, says Lifeboat Foundation president Eric Klien. “The Universenet could give us the final signals of a civilization right before it destroyed itself,” he wrote in a Skype message. “We could use this information to avoid our own destruction, perhaps the most important reason to continue the SETI project. If we learned that a civilization was destroyed by, say, nanoweapons, we could start creating defenses against this situation.”[16]

Such signals might not be obvious. For example, pulsars, discovered in 1967, are rotating neutron stars that emit electromagnetic waves; their rapid rotation causes their radiation to be pulsed. Could this radiation be modulated deliberately to form a sort of cosmic transmitter? Astronomers at first thought the pulses meant they might be ET; so far, they haven’t found any evidence of an actual message. Or are there encoded messages too subtle to detect? And pulsars are far from the most powerful possible signal sources from space. Quasars can release energy equal to that of hundreds of average galaxies combined, equivalent to one trillion suns. Could they be galactic Web sites run by Type III civilizations? (Unlikely, since most quasars are very far away, which means distant in time, and seem to have formed not long after the emergence of the universe from the Big Bang.)

Computer scientist Stephen Wolfram believes current methods used in SETI are inefficient and unlikely to produce reliable results because our detection methods seek to detect only regular patterns. A more efficient method would use sophisticated, noise-immune coding, producing something similar to spread spectrum signals. To SETI’s present system of analysis this kind of signal sounds and looks like random noise, and would be overlooked and discarded.[17] Wolfram suggests we need more sophisticated software-based signal processing. Maybe we need someone like Hedy Lamarr, the brilliant actress who famously said, “Any girl can be glamorous; all she has to do is stand still and look stupid,” and then went on to invent spread spectrum technology. Could ET be using it? There’s no way to know with the current SETI technology. Complex artifacts made by an advanced civilization could look very much like natural objects, Wolfram argues. Could the stars themselves be extraterrestrial artifacts? “They could have been built for a purpose,” says Wolfram. “It’s extremely difficult to rule it out.”[18]
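
Wolfram’s point is easy to demonstrate. Below is a toy direct-sequence spread-spectrum link in Python (illustrative parameters throughout, not any real SETI or cellular system): a signal buried some 18 dB below the noise looks like pure static to an energy detector, yet correlating against the right pseudo-random code recovers every bit.

    # Why noise-immune coding defeats a naive search: a toy spread-spectrum link.
    import numpy as np

    rng = np.random.default_rng(0)
    chips_per_bit = 4096
    code = rng.choice([-1.0, 1.0], size=chips_per_bit)    # shared PN code
    bits = rng.choice([-1.0, 1.0], size=16)               # the "message"

    signal = np.concatenate([b * code for b in bits])     # spread the bits
    noisy = signal + 8.0 * rng.standard_normal(signal.size)

    # An energy detector sees a signal ~18 dB below the noise: just static.
    print(f"signal/noise power: {signal.var() / (noisy - signal).var():.3f}")

    # Correlating each bit period against the right code recovers it all.
    decoded = np.sign(noisy.reshape(-1, chips_per_bit) @ code)
    print("message recovered:", np.array_equal(decoded, bits))

Nothing in the raw samples looks artificial; only a receiver that already knows (or exhaustively guesses) the spreading code sees the structure.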

Is Alien Intelligence Hidden in Junk DNA?

Cardiff University astronomer and mathematics professor Chandra Wickramasinghe, a long-time collaborator with the late cosmologist Sir Fred Hoyle, has suggested that life on this planet began on comets, since their combination of clay and water is an ideal breeding ground for life. He believes that explanation to be a quadrillion times more likely than Earth’s having spawned life.[19] If that’s the case, then comets and asteroids could be carrying physical messages, a sort of “sneakernet” (physical file sharing in the interests of added security) for the Universenet.

Astrobiologist Paul Davies, now at Arizona State University, suggests that ET could embed messages in highly conserved sections of viral DNA, most likely in its so-called “junk” sections, and send them out as hitchhikers on asteroids or comets. (Genomics researchers at the Lawrence Berkeley National Laboratory in California, who compared human and mouse DNA, have reported millions of base pairs of highly conserved sequences of junk DNA, meaning they have a survival value.)[20] These messages could even have been incorporated into terrestrial life, Davies thinks, and lurk in our DNA, awaiting interpretation. (There could be an interesting d-mail, or DNA-mail, waiting to be discovered as we search through the decoded genome.) Rather than beaming information randomly in the hope that somewhere, someday, an intelligent species will decode them, this method would use a pre-existing “legion of small, cheap, self-repairing and self-replicating machines that can keep editing and copying information and perpetuate themselves over immense durations in the face of unforeseen environmental hazards. Fortunately, such machines already exist. They are called living cells.”[21]

Transmitting People to the Stars

Futurist/inventor Ray Kurzweil has suggested that once the intelligent life on a planet invents machine computation, it is only a matter of a few centuries before its intelligence saturates the matter and energy in its vicinity. At that point, he suggests, nanobots will be dispersed like the spores of plants. This colonization will eventually expand outward, approaching the speed of light (as discussed in Robin Hanson’s Chapter 9).[22] In Fred Hoyle and John Elliot’s 1962 novel A For Andromeda, a radio signal from the direction of the galaxy M31 in Andromeda gives scientists a computer program for the creation of a living organism, adapting borrowed human DNA. They name this young cloned woman Andromeda, and through her agency the computer tries to take over the world.[23] Author James Gardner has seriously suggested a version of such “interstellar cloning”: an advanced civilization could transmit a software program to us with instructions on replicating its own inhabitants-even an entire civilization.[24]

Dr. Martine Rothblatt, who founded Sirius Satellite and other satellite companies, has suggested a related method for connecting with the Universenet: sending bemes, or units of being (highly individual elements of personality, mannerisms, feelings, recollections, beliefs, values, and attitudes). Bemes are fundamental, transmittable, mutable units of being-ness in the spirit of memes (Richard Dawkins’s term for the replicators of cultural information that a mind transmits, verbally or by demonstration, to another mind). The main difference is that memes are culturally transmittable elements that have common meanings, whereas bemes reflect individual characteristics.

Rothblatt suggests that a new Beme Neural Architecture (BNA) will outcompete DNA in populating the universe. “At any moment, and certainly at some moment, a giant star in our general stellar neighborhood will blow up and thereby fry everything in its vicinity,” she points out. Some of these explosions, known as gamma-ray bursts, are so violent that they damage everything within hundreds of light-years. Yet there are two or three gamma-ray bursts somewhere in the observable universe every day, and about one thousand less-explosive but still life-ending supernovae every day throughout the galaxies that we can observe. One explanation for the Fermi Paradox (why is there no evidence of ET, although the galaxy seems capable of supporting so many extraterrestrial civilizations?) is that sooner or later a supernova nabs everyone’s life zone. “Perhaps the only way we can survive the risk of astrobiological or mega-volcanic catastrophe is to spread ourselves out among the stars,” Rothblatt suggests. And as self-replicating code, bemes are much more quickly assembled, replicated, and transported than genes strung along chromosomes and transmitted by sex. Computer technology is vastly more efficient than wet biology at copying information. Expressed in digital bits rather than in nucleotide base pairs, information can be transported farther (beyond Earth, to evade killer asteroid impacts) and faster (at the speed of light).

DNA is not well suited for space travel. It can replicate effectively only within bodies. Humans require vast quantities of life-preserving supplies and besides, at the moment, we don’t live long enough to make the journey to other stars (a factor, as Pamela Sargent notes in the previous chapter, that is subject to change). On the other hand, by replicating our minds into BNA and storing them in a computer substrate, we can travel far longer and far faster, since we would be traveling with minimal mass in seeds or spores. Arriving at a promising planet, our BNA can be loaded into nanotech-built machine bodies to prepare a new home. Once that home flourishes, human (and other) DNA can be reconstructed from either stored samples or digital codes and basic chemicals, which can be nurtured into mature bodies free to develop their own minds or to receive a transfer of a BNA mind.

Alternatively, Rothblatt suggests that just by spacecasting your bemes, you can already achieve a level of immortality, and so can all of humanity. In March 2007, Rothblatt’s CyBeRev Project began experimentally spacecasting bemes in the form of digitized video, audio, text, personality tests, and other recordings of the attributes of a person’s being, such as memories, mannerisms, personality, feelings, recollections, beliefs, attitudes, and values.[25] These bemes are transmitted out into the universe via a microwave dish normally used to communicate with satellites. Any spacecast signal, she speculates, has a chance of being decoded from the background cosmic noise in the same way a cellphone’s CDMA (spread spectrum) encoded signal is decoded out of random electromagnetic noise. Your bemes could then be interpreted, and yourself recreated from the transmission. This requires interception by an advanced, intelligent civilization that would receive and decode the signals, then instantiate the bemes as either a regenerated traditional cellular body or a bionanotechnological one. (If this happened, we might find by the Year Million that the galaxy is swarming with other humans downloaded by far-flung extraterrestrials or their machines.)

Each spacecast of an individual’s bemes is accompanied by an informed-consent form authorizing that individual’s re-instantiation from the transmitted bemes. The CyBeRev project is based on the hypothesis that advanced intelligence will respect sentient autonomy and be capable of filling in the blanks of a person’s consciousness via interpolation of the spacecast bemes, using background cultural information transmitted from Earth. The project’s backers do not believe extraterrestrials will unethically revive persons such as television personalities whose images, behavior, and personal information have been telecast, but who have not authorized their re-instantiation. Still, such cultural transmissions will be useful in the aggregate, providing revived spacecasters with a familiar environment. “Given the vast amount of television and Internet information streaming into space, the revivers of our spacecasters will have abundant contextual information with which to work,” concluded Rothblatt.[26]

Programming the Universe

By converting matter into what some futurists call computronium (hypothetical material designed to be an optimized computational substrate), Year Million scientists could create the beginnings of an ultimately powerful computer.[27] Taking it to the extreme, MIT scientist Seth Lloyd has calculated that a computer made up of all the energy in the entire known universe (that is, within the visible “horizon” of forty-two billion light-years) could store about 10^92 bits and perform about 10^105 computations per second.[28] The universe itself is a quantum computer, he says, and it has made a mind-boggling 10^122 computations since the Big Bang (for that part of the universe within the “horizon”).[29] Compare that to the roughly 2×10^28 operations performed over the entire history of computation on Earth (“because of Moore’s law, half of this computation has taken place in the last year and a half,” he wrote in 2006). What’s more, the observable horizon of the universe, space itself, is expanding at three times the speed of light (in three dimensions), so the amount of computation performable within the horizon increases over time.
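
Lloyd’s operations-per-second figure follows from the Margolus-Levitin bound described in note [29]; here is that arithmetic as a few lines of Python (the 10^71-joule energy figure is Lloyd’s, the rest is just the bound 4E/h):

# Margolus-Levitin bound: a system with average energy E can perform
# at most 4E/h elementary operations per second.
h = 6.62607015e-34   # Planck's constant, joule-seconds
E = 1e71             # energy within the cosmic horizon (Lloyd's figure), joules
print(f"{4 * E / h:.1e} ops/s")   # ~6.0e104, i.e. roughly 10^105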

Lloyd has also proposed that a black hole could serve as a quantum computer and data storage bank. Hawking radiation, which escapes the black hole, inadvertently carries information about the material inside, he says: matter falling in becomes entangled with the radiation leaving the hole’s vicinity, and that radiation captures information on nearly all the matter that falls in. “We might be able to figure out a way to essentially program the black hole by putting in the right collection of matter,” he suggests.[30]

There is a supermassive black hole in the center of our galaxy, perhaps the remnant of an ancient quasar. Will this become the mainframe and central file sharing system for galaxy hackers of the Year Million? What’s more, a swarm of ten thousand or more smaller black holes may be orbiting it.[31] Might they be able to act as distributed computing nodes and a storage network? Toward the Year Million, an archival network between stars and between galaxies could develop an Encyclopedia Universica, storing critical information about the universe at multiple redundant locations in those and many other black holes.

Clash of the Titans

Far beyond the Year Million, our galaxy faces a crisis. The supermassive black holes in our galaxy and the Andromeda galaxy are headed for a cosmic collision in two billion years. Will they have incompatible operating systems-a sort of Mac-versus-PC confrontation? (Of course, they might just pass by each other-or be steered past by hyperintelligent operators.)

In The Intelligent Universe, James Gardner adapted a bold notion originally proposed by cosmologist Lee Smolin. For Smolin, Darwinian principles constrain the nature of any universe such that new baby universes produced via black holes will resemble their parent cosmos, and will be surprisingly life-friendly as well. Gardner extends this idea into a fundamentally radical (but falsifiable) hypothesis called the Selfish Biocosm-the cosmological equivalent of Richard Dawkins’s selfish gene. The idea is that eventually intelligent life must acquire the capacity to shape the entire cosmos. In addition, the universe has a Smolin-style “utility function”: propagation of baby universes exhibiting the same life-friendly physical qualities as their parent-universe, including a system of physical laws and constants that enables life and intelligence to emerge and eventually repeat the cycle.

Under this scenario, the mission of sufficiently evolved intelligent life in the universe is to serve as a cosmic reproductive organ-the equivalent of DNA in living creatures-spawning an endless succession of life-friendly offspring that are themselves endowed with the same reproductive capacities as their predecessors. (Rothblatt’s BNA might well be the fundamental mechanism for this evolutionary process-veteran physicist John Wheeler’s legendary “it from bit”: things arising from information rather than the other way round.)

Gardner believes that we’ve already received a message from ET: the laws and constants of our universe, including the inexplicable cosmological constant, which is currently accelerating cosmic expansion. His hypothesis makes sense of the observation that the constants seem rigged in favor of the emergence of life. For example, they are improbably hospitable to carbon-based intelligent life-an unlikely and as-yet unexplained anthropic oddity that some scientists have identified as the deepest mystery in all of science. As Gardner claims:
We are likely not alone in the universe, but are probably part of a vast-yet undiscovered-transterrestrial community of lives and intelligences spread across billions of galaxies and countless parsecs. . . . We share a possible common fate with that hypothesized community: to help shape the future of the universe and transform it from a collection of lifeless atoms into a vast, transcendent mind.
In the Year Million, such a cosmic community will be linked up by the Universenet.

Notes

[1] In the late 1990s, Dr. Vint Cerf, who co-designed the Internet’s TCP/IP protocol, designed the Interplanetary Internet (IPN, http://www.ipnsig.org) to link up the Earth with other planets and spaceships in transit over millions of miles. Cerf’s clever scheme solved a big problem. With interplanetary communication delays-the average two-way latency (delay time) between Earth and Mars, 228 million km apart, is 25 minutes 21 seconds-the Internet TCP/IP protocol we use today would simply time out. Who has half an hour to wait for a carriage return? So Cerf and his team came up with a store-and-forward architecture-a sort of relay race. Transmit messages to an Earth-orbiting satellite, let’s say, and store them there until the next local pass of the Moon, which then transmits them to Mars.
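The two-way latency quoted above is easy to check; a few lines of Python (the 228-million-km distance is the note’s figure, the rest is just distance over the speed of light):
c_km_s = 299_792.458     # speed of light, km/s
distance_km = 228e6      # Earth-Mars distance used in this note, km
one_way_min = distance_km / c_km_s / 60
print(f"one-way: {one_way_min:.1f} min, round trip: {2 * one_way_min:.2f} min")
# -> one-way ~12.7 min, round trip ~25.35 min, i.e., 25 minutes 21 seconds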
[2] Tomas Krag and Sebastian Büttrich, “Wireless Mesh Networking,” O’Reilly Network, Jan. 22, 2004: http://www.oreillynet.com/pub/a/wireless/2004/01/22/wirelessmesh.html
[3] Energy Information Administration, U.S. Department of Energy: http://www.eia.doe.gov/emeu/international/electricityconsumption.html. World electrical power generation is increasing by 2.4 percent per year (see http://www.eia.doe.gov/oiaf/ieo/electricity.html) and is expected to grow to thirty trillion kilowatt-hours by 2030.
[4] Martin I. Hoffert, et al, “Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet,” Science, Vol. 298 (2002): 981-987. As Dougal Dixon notes in Chapter 2, we are running out of oil, and what is worse, many countries, especially China, are burning huge amounts of coal, increasingly polluting the atmosphere with toxins and carbon dioxide and accelerating global warming. It can only get worse: 850 new coal-fired power plants are to be built by 2012 by the United States, China, and India. Terrestrial solar installations, biofuel, wind power, and geothermal power will help, but they all have limitations (ground-based solar panels don’t work at night, for example) and, says Hoffert, can’t economically provide the amount of power needed, especially in Africa and Asia.
[5] Martin Hoffert, “Energy from Space,” Marshall Institute, Aug. 7, 2007, http://www.marshall.org/article.php?id=550
[6] John G. Cramer, “Wormholes and Time Machines,” Analog Science Fiction and Fact, June 1989. On communication via parallel universes, or retrocausal and faster-than-light (FTL) signaling, see John G. Cramer, “EPR Communication: Signals from the Future?,” Analog Science Fiction and Fact, December 2006, http://www.analogsf.com/0612/altview.shtml; and Max Tegmark, “Parallel Universes,” Scientific American, May 2003.
[7] Seth Lloyd, Programming The Universe (New York: Knopf, 2006): 165.
[8] John G. Cramer, “An Experimental Test of Signaling using Quantum Nonlocality,” http://faculty.washington.edu/jcramer/NLS/NL_signal.htm.
[9] John G. Cramer, “Reverse Causation and the Transactional Interpretation of Quantum Mechanics,” in Frontiers of Time: Retrocausation-Experiment and Theory, AIP Conference Proceedings, Vol. 863, ed. Daniel P. Sheehan (Melville, NY: AIP, 2006): 20-26; John G. Cramer, “EPR Communication: Signals from the Future?,” Analog Science Fiction and Fact, December 2006, http://www.analogsf.com/0612/altview.shtml; B. Dopfer, PhD thesis, University of Innsbruck (1998); A. Zeilinger, Rev. Mod. Phys. 71 (1999): S288-S297.
[10] Jack Sarfatti, Super Cosmos (Author House, 2006): 20.
[11] Robert A. Freitas Jr. and Francisco Valdes, “The search for extraterrestrial artifacts (SETA),” Acta Astronautica, Vol. 12 (1985): 1027-1034.
[12] Peter Liljeroth, Jascha Repp, and Gerhard Meyer, “Current-Induced Hydrogen Tautomerization and Conductance Switching of Naphthalocyanine Molecules,” Science, Vol. 317, no. 5842 (2007): 1203-1206, http://www.sciencemag.org/cgi/content/abstract/317/5842/1203
[13] Christopher Rose and Gregory Wright, “Inscribed Matter as an Energy Efficient Means of Communication with an Extraterrestrial Civilization,” Nature, Vol. 431, September 2004. http://www.winlab.rutgers.edu/~crose/papers/nature.pdf
[14] Seth Shostak, “What Do You Say to an Extraterrestrial?” SETI Institute News, December 2, 2004, http://www.seti.org/news/features/what-do-you-say-to-et.php
[15] William E. Burrows, The Survival Imperative: Using Space to Protect Earth (New York: Forge, 2006).
[16] Personal communication, September 3, 2007.
[17] Stephen Wolfram, A New Kind of Science (Wolfram Media, 2002): 1188, http://www.wolframscience.com/nksonline/page-1188b-text
[18] Marcus Chown, “Looking for Alien Intelligence in the Computational Universe,” New Scientist, November 26, 2005, http://www.newscientist.com/channel/fundamentals/mg18825271.600
[19] Hazel Muir, “Did Life Begin on Comets?” NewScientist.com news service, http://space.newscientist.com/channel/astronomy/astrobiology/dn12506, August 17, 2007.
[20] Mark Peplow, “ET Write Home,” Nature News, http://www.nature.com/news/2004/040830/full/040830-4.html, September 1, 2004.
[21] Paul Davies, “Do We Have to Spell It Out?” New Scientist, August 7, 2004, http://www.newscientist.com/article/mg18324595.300.
[22] Ray Kurzweil, The Singularity Is Near (Viking 2005).
[23] Fred Hoyle and John Elliot, A For Andromeda (Harper & Row, 1962), adapted from the 1961 BBC TV serial, now lost: http://www.imdb.com/title/tt0054511/
[24] James Gardner, The Intelligent Universe (New Page Books, 2007).
[25] In Chapter 7 of this book, Wil McCarthy estimates that people could store most of their memories in about two terabytes, which could be transmitted via satellite in just a few hours.
[26] CyBeRev, Terasem Movement, Inc., http://www.cyberev.org
[27] http://en.wikipedia.org/wiki/Computronium
[28] Seth Lloyd, Programming The Universe (New York: Knopf, 2006): 165.
[29] Based on the Margolus-Levitin theorem: take the amount of energy within the horizon (10^71 joules), multiply by 4, and divide by Planck’s constant. What has the universe computed? Itself. Seth Lloyd, Programming The Universe (New York: Knopf, 2006): 165-167.
[30] Maggie McKee, “Black Holes: The Ultimate Quantum Computers?” NewScientist.com news service, March 13, 2006, http://space.newscientist.com/article.ns?id=dn8836&feedId=online-news_rss20%3E
[31] “Chandra Finds Evidence for Swarm of Black Holes Near the Galactic Center,” January 12, 2005, http://www.sciencedaily.com/releases/2005/01/050111114024.htm
* http://www.amazon.com/Year-Million-Science-Edge-Knowledge/dp/1934633054/

© 2008 Amara D. Angelica

Protectionism

From Wikipedia, the free encyclopedia

Political poster from the Liberal Party clearly displaying the differences between an economy based on Free Trade and Protectionism. The Free Trade shop is full to the brim of customers due to its low prices whilst the shop based upon Protectionism has suffered from high prices and a lack of customers.

Protectionism is the economic policy of restricting imports from other countries through methods such as tariffs on imported goods, import quotas, and a variety of other government regulations. Proponents claim that protectionist policies shield the producers, businesses, and workers of the import-competing sector in the country from foreign competitors. However, such policies also reduce trade, adversely affect consumers in general (by raising the cost of imported goods), and harm the producers and workers in export sectors, both in the country implementing the policies and in the countries against which the protections are directed.

There is a consensus among economists that protectionism has a negative effect on economic growth and economic welfare,[1][2][3][4] while free trade, deregulation, and the reduction of trade barriers have a positive effect on economic growth.[2][5][6][7][8][9] In fact, protectionism has been implicated by some scholars as the cause of some economic crises, in particular the Great Depression.[10] However, trade liberalization can sometimes result in large and unequally distributed losses and gains, and can, in the short run, cause significant economic dislocation of workers in import-competing sectors.[11]

Protectionist policies

Logo of Belgium's National League for the Franc's Defense, 1924

A variety of policies have been used to achieve protectionist goals. These include:
  • Protection of technologies, patents, and technical and scientific knowledge[12][13][14]
  • Preventing foreign investors from taking control of domestic firms[15][16]
  • Tariffs: Typically, tariffs (or taxes) are imposed on imported goods. Tariff rates usually vary according to the type of goods imported. Import tariffs increase the cost to importers and raise the price of imported goods in local markets, thus lowering the quantity imported and favouring local producers (a toy numerical illustration follows this list). Tariffs may also be imposed on exports, and in an economy with floating exchange rates, export tariffs have effects similar to those of import tariffs. However, since export tariffs are often perceived as "hurting" local industries, while import tariffs are perceived as "helping" local industries, export tariffs are seldom implemented.
  • Import quotas: To reduce the quantity and therefore increase the market price of imported goods. The economic effects of an import quota are similar to those of a tariff, except that the tax revenue from a tariff is instead distributed to those who receive import licenses. Economists often suggest that import licenses be auctioned to the highest bidder, or that import quotas be replaced by an equivalent tariff.
  • Administrative barriers: Countries are sometimes accused of using their various administrative rules (e.g. regarding food safety, environmental standards, electrical safety, etc.) as a way to introduce barriers to imports.
  • Anti-dumping legislation: "Dumping" is the practice of firms selling to export markets at lower prices than are charged in domestic markets. Supporters of anti-dumping laws argue that they prevent import of cheaper foreign goods that would cause local firms to close down. However, in practice, anti-dumping laws are usually used to impose trade tariffs on foreign exporters.
  • Direct subsidies: Government subsidies (in the form of lump-sum payments or cheap loans) are sometimes given to local firms that cannot compete well against imports. These subsidies are purported to "protect" local jobs, and to help local firms adjust to the world markets.
  • Export subsidies: Export subsidies are often used by governments to increase exports. Export subsidies have the opposite effect of export tariffs because exporters receive a payment that is a percentage or proportion of the value of the goods exported. Export subsidies increase the amount of trade and, in a country with floating exchange rates, have effects similar to import subsidies.
  • Exchange rate control: A government may intervene in the foreign exchange market to lower the value of its currency by selling its currency in the foreign exchange market. Doing so will raise the cost of imports and lower the cost of exports, leading to an improvement in its trade balance. However, such a policy is only effective in the short run, as it will lead to higher inflation in the country in the long run, which will in turn raise the real cost of exports, and reduce the relative price of imports.
  • International patent systems: There is an argument for viewing national patent systems as a cloak for protectionist trade policies at a national level. Two strands of this argument exist: one when patents held by one country form part of a system of exploitable relative advantage in trade negotiations against another, and a second where adhering to a worldwide system of patents confers "good citizenship" status despite 'de facto protectionism'. Peter Drahos explains that "States realized that patent systems could be used to cloak protectionist strategies. There were also reputational advantages for states to be seen to be sticking to intellectual property systems. One could attend the various revisions of the Paris and Berne conventions, participate in the cosmopolitan moral dialogue about the need to protect the fruits of authorial labor and inventive genius...knowing all the while that one's domestic intellectual property system was a handy protectionist weapon."[17]
  • Political campaigns advocating domestic consumption (e.g. the "Buy American" campaign in the United States, which could be seen as an extra-legal promotion of protectionism.)
  • Preferential governmental spending, such as the Buy American Act, federal legislation which called upon the United States government to prefer US-made products in its purchases.
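
To make the tariff and quota items above concrete, here is a toy linear supply-and-demand model in Python for a small open economy; the demand curve, supply curve, and world price are invented for illustration and are not drawn from any study cited here. A tariff raises the domestic price, which expands home production, shrinks consumption, and cuts imports:

# Toy small open economy: domestic demand Qd = 100 - P,
# domestic supply Qs = P - 20, world price taken as given.
def market(world_price, tariff):
    p = world_price + tariff            # tariff raises the domestic price
    qd = 100 - p                        # quantity demanded at that price
    qs = p - 20                         # quantity supplied by home producers
    return p, qd, qs, max(qd - qs, 0)   # imports fill the remaining gap

for t in (0, 10):
    p, qd, qs, imports = market(world_price=40, tariff=t)
    print(f"tariff={t}: price={p}, demand={qd}, home supply={qs}, imports={imports}")
# tariff=0:  price=40, demand=60, home supply=20, imports=40
# tariff=10: price=50, demand=50, home supply=30, imports=20

An import quota capping imports at 20 units would yield the same price and quantities in this model; the difference, as noted above, is who captures the revenue (the government under a tariff, licence holders under a quota).
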
In the modern trade arena, many other initiatives besides tariffs have been called protectionist. For example, some commentators, such as Jagdish Bhagwati, see developed countries' efforts to impose their own labor or environmental standards as protectionism. The imposition of restrictive certification procedures on imports is also seen in this light.

Further, others point out that free trade agreements often have protectionist provisions such as intellectual property, copyright, and patent restrictions that benefit large corporations. These provisions restrict trade in music, movies, pharmaceuticals, software, and other manufactured items to high-cost producers, with quotas for low-cost producers set to zero.[18]

History

Tariff Rates in Japan (1870–1960)
 
Tariff Rates in Spain and Italy (1860–1910)
 
Historically, protectionism was associated with economic theories such as mercantilism (which focused on achieving a positive trade balance and accumulating gold) and import substitution.

In the 18th century, Adam Smith famously warned against the "interested sophistry" of industry, seeking to gain advantage at the cost of the consumers.[19] Friedrich List saw Adam Smith's views on free trade as disingenuous, believing that Smith advocated for freer trade so that British industry could lock out underdeveloped foreign competition.[20]

Some have argued that no major country has ever successfully industrialized without some form of economic protection.[21][22] Economic historian Paul Bairoch wrote that "historically, free trade is the exception and protectionism the rule".[23]

According to economic historians Douglas Irwin and Kevin O'Rourke, "shocks that emanate from brief financial crises tend to be transitory and have little long-run effect on trade policy, whereas those that play out over longer periods (early 1890s, early 1930s) may give rise to protectionism that is difficult to reverse. Regional wars also produce transitory shocks that have little impact on long-run trade policy, while global wars give rise to extensive government trade restrictions that can be difficult to reverse."[24]

One paper notes that sudden shifts in comparative advantage for specific countries have led said countries to become protectionist: "The shift in comparative advantage associated with the opening up of New World frontiers, and the subsequent “grain invasion” of Europe, led to higher agricultural tariffs from the late 1870s onwards, which as we have seen reversed the move toward freer trade that had characterized mid-nineteenth-century Europe. In the decades after World War II, Japan’s rapid rise led to trade friction with other countries. Japan’s recovery was accompanied by a sharp increase in its exports of certain product categories: cotton textiles in the 1950s, steel in the 1960s, automobiles in the 1970s, and electronics in the 1980s. In each case, the rapid expansion in Japan’s exports created difficulties for its trading partners and the use of protectionism as a shock absorber."[24]

According to some political theorists, protectionism is advocated mainly by parties that hold far-left or left-wing economic positions, while economically right-wing political parties generally support free trade.

In the United States

Tariff Rates (France, UK, US)
 
Average Tariff Rates in US (1821–2016)
 
US Trade Balance (1895–2015)

According to economic historian Douglas Irwin, a common myth about US trade policy is that low tariffs harmed American manufacturers in the early 19th century and then that high tariffs made the United States into a great industrial power in the late 19th century.[30] A review by the Economist of Irwin's 2017 book Clashing over Commerce: A History of US Trade Policy notes:[30]
Political dynamics would lead people to see a link between tariffs and the economic cycle that was not there. A boom would generate enough revenue for tariffs to fall, and when the bust came pressure would build to raise them again. By the time that happened, the economy would be recovering, giving the impression that tariff cuts caused the crash and the reverse generated the recovery. Mr Irwin also attempts to debunk the idea that protectionism made America a great industrial power, a notion believed by some to offer lessons for developing countries today. As its share of global manufacturing powered from 23% in 1870 to 36% in 1913, the admittedly high tariffs of the time came with a cost, estimated at around 0.5% of GDP in the mid-1870s. In some industries, they might have sped up development by a few years. But American growth during its protectionist period was more to do with its abundant resources and openness to people and ideas.
According to Paul Bairoch, the United States was "the mother country and bastion of modern protectionism" since the end of the 18th century and until the post-World War II period.[31]

The Bush administration implemented tariffs on imported steel in 2002; according to a 2005 review of existing research on the tariff, all studies found that the tariffs caused more harm than gains to the US economy and employment.[32] The Obama administration implemented tariffs on Chinese tires between 2009 and 2012 as an anti-dumping measure; a 2016 study found that these tariffs had no impact on employment and wages in the US tire industry.[33]

In 2018, European Trade Commissioner Cecilia Malmström said that the US was playing a "dangerous game", describing Trump's tariff decision as both "pure protectionist" and "illegal".[34]

In Europe

Europe became increasingly protectionist during the eighteenth century.[35] Economic historians Findlay and O'Rourke write that in the immediate aftermath of the Napoleonic Wars, "European trade policies were almost universally protectionist," the exceptions being smaller countries such as the Netherlands and Denmark.[35]

Europe increasingly liberalized its trade during the 19th century.[36] Countries such as Britain, the Netherlands, Denmark, Portugal and Switzerland, and arguably Sweden and Belgium, had fully moved towards free trade prior to 1860.[36] Economic historians see the repeal of the Corn Laws in 1846 as the decisive shift toward free trade in Britain.[36][37] A 1990 study by the Harvard economic historian Jeffrey Williamson showed that the Corn Laws (which imposed restrictions and tariffs on imported grain) substantially increased the cost of living for unskilled and skilled British workers, and hampered the British manufacturing sector by reducing the disposable incomes that British workers could have spent on manufactured goods.[38] The shift towards liberalization in Britain occurred in part due to "the influence of economists like David Ricardo", but also due to "the growing power of urban interests".[36]

Findlay and O'Rourke characterize the 1860 Cobden-Chevalier treaty between France and the United Kingdom as "a decisive shift toward European free trade."[36] This treaty was followed by numerous free trade agreements: "France and Belgium signed a treaty in 1861; a Franco-Prussian treaty was signed in 1862; Italy entered the “network of Cobden-Chevalier treaties” in 1863 (Bairoch 1989, 40); Switzerland in 1864; Sweden, Norway, Spain, the Netherlands, and the Hanseatic towns in 1865; and Austria in 1866. By 1877, less than two decades after the Cobden-Chevalier treaty and three decades after British Repeal, Germany “had virtually become a free trade country” (Bairoch, 41). Average duties on manufactured products had declined to 9–12% on the Continent, a far cry from the 50% British tariffs, and numerous prohibitions elsewhere, of the immediate post-Waterloo era (Bairoch, table 3, p. 6, and table 5, p. 42)."[36]

Some European powers did not liberalize during the 19th century, such as the Russian Empire and Austro-Hungarian Empire, which remained highly protectionist. The Ottoman Empire also became increasingly protectionist.[39] In the Ottoman Empire's case, however, it had previously had liberal free trade policies during the 18th to early 19th centuries, which Benjamin Disraeli (later British prime minister) cited as "an instance of the injury done by unrestrained competition" in the 1846 Corn Laws debate, arguing that it destroyed what had been "some of the finest manufactures of the world" in 1812.[40]

The countries of Western Europe began to steadily liberalize their economies after World War II and the protectionism of the interwar period.[35]

In Canada

Since 1971 Canada has protected producers of eggs, milk, cheese, chickens, and turkeys with a system of supply management. Though prices for these foods in Canada exceed global prices, the farmers and processors have had the security of a stable market to finance their operations. Doubts about the safety of bovine growth hormone, sometimes used to boost dairy production, led to hearings before the Senate of Canada, resulting in a ban in Canada. Supply management of milk products thus also serves as consumer protection for Canadians.[41]

In Quebec, the Federation of Quebec Maple Syrup Producers manages the supply of maple syrup.

In Latin America

According to one assessment, tariffs were "far higher" in Latin America than the rest of the world in the century prior to the Great Depression.[42][43]

Impact

There is a broad consensus among economists that protectionism has a negative effect on economic growth and economic welfare, while free trade and the reduction of trade barriers has a positive effect on economic growth.

Protectionism is frequently criticized by economists as harming the people it is meant to help. Mainstream economists instead support free trade.[19][46] The principle of comparative advantage shows that the gains from free trade outweigh any losses, because free trade creates more jobs than it destroys by allowing countries to specialize in the production of goods and services in which they have a comparative advantage.[47] Protectionism results in deadweight loss; this loss to overall welfare benefits no one, unlike in a free market, where there is no such total loss. According to economist Stephen P. Magee, the benefits of free trade outweigh the losses by as much as 100 to 1.
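
The arithmetic behind comparative advantage is easy to verify directly. The sketch below uses stylized numbers of my own (not Magee's estimates): even though Home is more productive at both goods, world output of both goods rises when each country specializes in the good it produces at lower opportunity cost and trades for the other.

# Ricardo's comparative-advantage arithmetic with invented numbers:
# labor hours required per unit of each good.
hours = {"Home":    {"cloth": 1.0, "wine": 2.0},
         "Foreign": {"cloth": 6.0, "wine": 3.0}}
labor = {"Home": 100.0, "Foreign": 240.0}

# Autarky benchmark: each country splits its labor evenly across goods.
autarky = {c: {g: (labor[c] / 2) / hours[c][g] for g in hours[c]}
           for c in hours}

# Specialization: Home makes only cloth (its opportunity cost is 0.5 wine
# per cloth vs Foreign's 2 wine); Foreign makes only wine; they trade.
specialized = {"cloth": labor["Home"] / hours["Home"]["cloth"],
               "wine":  labor["Foreign"] / hours["Foreign"]["wine"]}

for g in ("cloth", "wine"):
    total = sum(a[g] for a in autarky.values())
    print(f"{g}: {total:.0f} units under autarky vs {specialized[g]:.0f} with specialization")
# cloth: 70 units under autarky vs 100 with specialization
# wine:  65 units under autarky vs 80 with specialization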

Living standards

A 2016 study found that "trade typically favors the poor", as they spend a greater share of their earnings on goods and free trade reduces the costs of those goods.[49] Other research found that China's entry to the WTO benefitted US consumers, as the prices of Chinese goods were substantially reduced.[50] Harvard economist Dani Rodrik argues that while globalization and free trade do contribute to social problems, "a serious retreat into protectionism would hurt the many groups that benefit from trade and would result in the same kind of social conflicts that globalization itself generates. We have to recognize that erecting trade barriers will help in only a limited set of circumstances and that trade policies will rarely be the best response to the problems [of globalization]".[51]

Growth

According to economic historians Findlay and O'Rourke, there is a consensus in the economics literature that protectionist policies in the interwar period "hurt the world economy overall, although there is a debate about whether the effect was large or small."[35]

Economic historian Paul Bairoch argued that economic protection was positively correlated with economic and industrial growth during the 19th century. For example, GNP growth during Europe's "liberal period" in the middle of the century (where tariffs were at their lowest), averaged 1.7% per year, while industrial growth averaged 1.8% per year. However, during the protectionist era of the 1870s and 1890s, GNP growth averaged 2.6% per year, while industrial output grew at 3.8% per year, roughly twice as fast as it had during the liberal era of low tariffs and free trade.[52] One study found that tariffs imposed on manufactured goods increase economic growth in developing countries, and this growth impact remains even after the tariffs are repealed.[53]

According to Dartmouth economist Douglas Irwin, "that there is a correlation between high tariffs and growth in the late nineteenth century cannot be denied. But correlation is not causation... there is no reason for necessarily thinking that import protection was a good policy just because the economic outcome was good: the outcome could have been driven by factors completely unrelated to the tariff, or perhaps could have been even better in the absence of protection."[54] Irwin furthermore writes that "few observers have argued outright that the high tariffs caused such growth."[54]

According to Oxford economic historian Kevin O'Rourke, "It seems clear that protection was important for the growth of US manufacturing in the first half of the 19th century; but this does not necessarily imply that the tariff was beneficial for GDP growth. Protectionists have often pointed to German and American industrialization during this period as evidence in favour of their position, but economic growth is influenced by many factors other than trade policy, and it is important to control for these when assessing the links between tariffs and growth."[55]

A prominent 1999 study by Jeffrey A. Frankel and David H. Romer found, contrary to free trade skeptics' claims, while controlling for relevant factors, that trade does indeed have a positive impact on growth and incomes.[56]

Developing world

There is broad consensus among economists that free trade helps workers in developing countries, even though they are not subject to the stringent health and labour standards of developed countries. This is because "the growth of manufacturing—and of the myriad other jobs that the new export sector creates—has a ripple effect throughout the economy" that creates competition among producers, lifting wages and living conditions.[57] Nobel laureates Milton Friedman and Paul Krugman have argued for free trade as a model for economic development.[5] Alan Greenspan, former chairman of the US Federal Reserve, has criticized protectionist proposals as leading "to an atrophy of our competitive ability. ... If the protectionist route is followed, newer, more efficient industries will have less scope to expand, and overall output and economic welfare will suffer."[58]

Protectionists postulate that new industries may require protection from entrenched foreign competition in order to develop. This was Alexander Hamilton's argument in his "Report on Manufactures",[citation needed] and the primary reason why George Washington signed the Tariff Act of 1789.[citation needed] Mainstream economists concede that tariffs can help domestic industries develop in the short term, but note that this depends on the protection being temporary and on the government's ability to pick winners.[59][60] The problems are that protective tariffs often are not reduced after the infant industry gains a foothold, and that governments often fail to pick industries that are likely to succeed.[60] Economists have identified a number of cases across different countries and industries where attempts to shelter infant industries failed.

Economists such as Paul Krugman have speculated that those who support protectionism ostensibly to further the interests of workers in least developed countries are in fact being disingenuous, seeking only to protect jobs in developed countries.[66] Additionally, workers in the least developed countries only accept jobs if they are the best on offer, as all mutually consensual exchanges must be of benefit to both sides, or else they wouldn't be entered into freely. That they accept low-paying jobs from companies in developed countries shows that their other employment prospects are worse. A letter reprinted in the May 2010 edition of Econ Journal Watch identifies a similar sentiment against protectionism from 16 British economists at the beginning of the 20th century.[67]

Conflict

Protectionism has also been accused of being one of the major causes of war. Proponents of this theory point to the constant warfare in the 17th and 18th centuries among European countries whose governments were predominantly mercantilist and protectionist, the American Revolution, which came about ostensibly due to British tariffs and taxes, as well as the protective policies preceding both World War I and World War II. According to a slogan of Frédéric Bastiat (1801–1850), "When goods cannot cross borders, armies will."[68]

Current world trends

Protectionist measures taken since 2008 according to Global Trade Alert.[69]

Since the end of World War II, it has been the stated policy of most First World countries to eliminate protectionism through free trade policies enforced by international treaties and organizations such as the World Trade Organization.[70] Certain policies of First World governments have been criticized as protectionist, however, such as the Common Agricultural Policy[71] in the European Union, longstanding agricultural subsidies, and proposed "Buy American" provisions[72] in economic recovery packages in the United States.

Heads of the G20, meeting in London on 2 April 2009, pledged: "We will not repeat the historic mistakes of protectionism of previous eras". Adherence to this pledge is monitored by Global Trade Alert,[73] which provides up-to-date information and informed commentary to help ensure that the G20 pledge is met by maintaining confidence in the world trading system, deterring beggar-thy-neighbor acts, and preserving the contribution that exports can make to the future recovery of the world economy.

Although they were reiterating what they had already committed to last November in Washington, 17 of these 20 countries were reported by the World Bank as having imposed trade-restrictive measures since then. In its report, the World Bank says most of the world's major economies are resorting to protectionist measures as the global economic slowdown begins to bite. Economists who examined the impact of new trade-restrictive measures, using detailed bilateral monthly trade statistics, estimated that new measures taken through late 2009 were distorting global merchandise trade by 0.25% to 0.5% (about $50 billion a year).[74]

Since then, however, President Donald Trump announced in January 2017 the U.S. was abandoning the TPP (Trans-Pacific Partnership) deal, saying, “We’re going to stop the ridiculous trade deals that have taken everybody out of our country and taken companies out of our country, and it’s going to be reversed.”

Reproductive rights

From Wikipedia, the free encyclo...