
Thursday, July 5, 2018

Dystopia

From Wikipedia, the free encyclopedia

A dystopia (from the Greek δυσ- "bad" and τόπος "place"; alternatively, cacotopia,[1] kakotopia, or simply anti-utopia) is a community or society that is undesirable or frightening.[2][3] It is translated as "not-good place" and is an antonym of utopia, a term that was coined by Sir Thomas More and figures as the title of his best known work, Utopia, published in 1516, a blueprint for an ideal society with minimal crime, violence and poverty.

Dystopian societies appear in many artistic works, particularly in stories set in the future. Some of the most famous examples are George Orwell's 1984 and Aldous Huxley's Brave New World. Dystopias are often characterized by dehumanization,[2] tyrannical governments, environmental disaster,[3] or other characteristics associated with a cataclysmic decline in society. Dystopian societies appear in many sub-genres of fiction and are often used to draw attention to real-world issues regarding society, environment, politics, economics, religion, psychology, ethics, science, or technology. However, some authors also use the term to refer to actually-existing societies, many of which are or have been totalitarian states, or societies in an advanced state of collapse and disintegration.

Etymology

Though several earlier usages are known, dystopia was deployed as an antonym for Utopia by J. S. Mill in one of his 1868 parliamentary speeches[4] (Hansard Commons), adding the prefix "dys" (Ancient Greek: δυσ- "bad") and reinterpreting the initial U as the prefix "eu" (Ancient Greek: ευ- "good") instead of "ou" (Ancient Greek: οὐ "not"). The term was used to denounce the government's Irish land policy: "It is, perhaps, too complimentary to call them Utopians, they ought rather to be called dys-topians, or caco-topians. What is commonly called Utopian is something too good to be practicable; but what they appear to favour is too bad to be practicable."

Decades before the first documented use of the word "dystopia" came "cacotopia" (from Ancient Greek: κακός, "bad, wicked"),[9] originally proposed in 1818 by Jeremy Bentham: "As a match for utopia (or the imagined seat of the best government) suppose a cacotopia (or the imagined seat of the worst government) discovered and described."[10] Though dystopia became the most popular term, cacotopia finds occasional use; Anthony Burgess, author of A Clockwork Orange, said it was a better fit for Orwell's Nineteen Eighty-Four because "it sounds worse than dystopia".[11] Some scholars, such as Gregory Claeys and Lyman Tower Sargent, make certain distinctions between typical synonyms of dystopias. For example, Claeys and Sargent define literary dystopias as societies imagined as substantially worse than the contemporaneous society in which the author writes, whereas anti-utopias function as criticisms of attempts to implement various concepts of utopia.[12]

Common themes

Politics

In When the Sleeper Wakes, H. G. Wells depicted the governing class as hedonistic and shallow.[13] George Orwell contrasted Wells's world to that depicted in Jack London's The Iron Heel, where the dystopian rulers are brutal and dedicated to the point of fanaticism, which Orwell considered more plausible.[14]

Whereas the political principles at the root of fictional utopias (or "perfect worlds") are idealistic in principle and successfully result in positive consequences for the inhabitants,[15] the political principles on which fictional dystopias are based, while often based on utopian ideals, result in negative consequences for inhabitants because of at least one fatal flaw.[16]

Dystopias are often filled with pessimistic views of the ruling class or a government that is brutal or uncaring, ruling with an "iron hand" or "iron fist".[citation needed] Dystopian governments are sometimes ruled by a fascist regime or dictator. These dystopian government establishments often have protagonists or groups that lead a "resistance" to enact change within their society, as is seen in Alan Moore's V for Vendetta.[17]

Dystopian political situations are depicted in novels such as We, Parable of the Sower, Darkness at Noon, Nineteen Eighty-Four, Brave New World, The Hunger Games and Fahrenheit 451; and in such films as Metropolis, Brazil, Battle Royale, FAQ: Frequently Asked Questions, Soylent Green, and The Running Man.

Economics

The economic structures of dystopian societies in literature and other media have many variations, as the economy often relates directly to the elements that the writer is depicting as the source of the oppression. However, there are several archetypes that such societies tend to follow.

A commonly occurring theme is the dichotomy of planned economies versus free market economies, a conflict which is found in such works as Ayn Rand's Anthem and Henry Kuttner's short story "The Iron Standard". Another example of this is reflected in Norman Jewison's 1975 film Rollerball.

Some dystopias, such as that of Nineteen Eighty-Four, feature black markets with goods that are dangerous and difficult to obtain, or the characters may be totally at the mercy of the state-controlled economy. Kurt Vonnegut's Player Piano depicts a dystopia in which the centrally controlled economic system has indeed made material abundance plentiful, but deprived the mass of humanity of meaningful labor; virtually all work is menial and unsatisfying, and only a small number of the small group that achieves education is admitted to the elite and its work.[18] In Tanith Lee's Don't Bite the Sun, there is no want of any kind – only unabashed consumption and hedonism, leading the protagonist to begin looking for a deeper meaning to existence.[19]

Even in dystopias where the economic system is not the source of the society's flaws, as in Brave New World, the state often controls the economy. In Brave New World, a character, reacting with horror to the suggestion of not being part of the social body, cites as a reason that everyone works for everyone else.[20]

Other works feature extensive privatization and corporatism, both byproducts of capitalism, in which privately owned and unaccountable large corporations have effectively replaced the government in setting policy and making decisions. They manipulate, infiltrate, control, bribe, are contracted by, or otherwise function as government. This is seen in the novels Jennifer Government and Oryx and Crake and the movies Alien, Avatar, RoboCop, Visioneers, Idiocracy, Soylent Green, THX 1138, WALL-E and Rollerball. Corporate republics are common in the cyberpunk genre, as in Neal Stephenson's Snow Crash and Philip K. Dick's Do Androids Dream of Electric Sheep? (as well as the film Blade Runner, loosely based on Dick's novel).

Social stratification

Dystopian fiction frequently draws stark contrasts between the privileges of the ruling class and the dreary existence of the working classes.[citation needed]

In the novel Brave New World, written in 1931 by Aldous Huxley, a class system is prenatally designated in terms of Alphas, Betas, Gammas, Deltas, and Epsilons, with the lower classes having reduced brain function and special conditioning to make them satisfied with their position in life.[21] Outside of this society there also exist several human settlements that live in the conventional way but which the class system describes as "savages".

In Ypsilon Minus by Herbert W. Franke, people are divided into numerous alphabetically ranked groups.

Family

Some fictional dystopias, such as Brave New World and Fahrenheit 451, have eradicated the family and deploy continuing efforts to keep it from re-establishing itself as a social institution. In Brave New World, where children are reproduced artificially, the concepts "mother" and "father" are considered obscene. In some novels, the State is hostile to motherhood: for example, in Nineteen Eighty-Four, children are organized to spy on their parents; and in We, by Yevgeny Zamyatin, the escape of a pregnant woman from One State is a revolt.[22]

Religion

Religious groups play the roles of both the oppressed and the oppressors. In Brave New World, for example, the establishment of the state included lopping off the tops of all crosses (as symbols of Christianity) to make them "T"s (as symbols of Henry Ford's Model T).[23] Margaret Atwood's novel The Handmaid's Tale, on the other hand, takes place in a future United States under a Christianity-based theocratic regime.[24] One of the earliest examples of this theme is Robert Hugh Benson's Lord of the World, set in a futuristic world where the Freemasons have taken over and the only other religion left is a Roman Catholic minority.[citation needed]

Identity

In the Russian novel We by Yevgeny Zamyatin, first published in 1921, people are permitted to live out of public view twice a week for one hour and are only referred to by numbers instead of names.
In some dystopian works, such as Kurt Vonnegut's Harrison Bergeron, society forces individuals to conform to radical egalitarian social norms that discourage or suppress accomplishment or even competence as forms of inequality.

Violence

Violence is prevalent in many dystopias, often in the form of war (e.g. Nineteen Eighty-Four), urban crime led by gangs (often of teenagers) (e.g. A Clockwork Orange), or rampant crime met by blood sports (e.g. Battle Royale, The Running Man, The Hunger Games and Divergent). Violence is also explored in Suzanne Berne's essay "Ground Zero", in which she describes her experience of the aftermath of 11 September 2001.[25]

Nature

Fictional dystopias are commonly urban and frequently isolate their characters from all contact with the natural world.[26] Sometimes they require their characters to avoid nature, as when walks are regarded as dangerously anti-social in Ray Bradbury's Fahrenheit 451, as well as within Bradbury's short story "The Pedestrian."[citation needed] In C. S. Lewis's That Hideous Strength, science coordinated by government is directed toward the control of nature and the elimination of natural human instincts. In Brave New World, the lower classes of society are conditioned to be afraid of nature, but also to visit the countryside and consume transportation and games to promote economic activity.[27] Lois Lowry's The Giver, in a manner similar to Brave New World, shows a society where technology and the desire to create a utopia have led humanity to enforce climate control on the environment, to eliminate many undomesticated species, and to provide psychological and pharmaceutical repellents against basic human instincts. E. M. Forster's "The Machine Stops" depicts a highly changed global environment which forces people to live underground due to atmospheric contamination.[28] As Angel Galdon-Rodriguez points out, this sort of isolation caused by external toxic hazard is later used by Hugh Howey in his Silo series of dystopias.[29]

Excessive pollution that destroys nature is common in many dystopian films, such as The Matrix, RoboCop, WALL-E, and Soylent Green. A few "green" fictional dystopias do exist, such as in Michael Carson's short story "The Punishment of Luxury" and Russell Hoban's Riddley Walker. The latter is set in the aftermath of nuclear war, "a post-nuclear holocaust Kent, where technology has been reduced to the level of the Iron Age".[30]

Science and technology

In contrast to technological utopianism, which views technology as a beneficial addition to all aspects of humanity, technological dystopianism concerns itself largely (but not exclusively) with the negative effects caused by new technology.[31]

Typical dystopian claims

1. Technologies reflect and encourage the worst aspects of human nature.[31] Jaron Lanier, a digital pioneer, has become a technological dystopian. “I think it’s a way of interpreting technology in which people forgot taking responsibility,” he says.

“‘Oh, it’s the computer that did it, not me.’ ‘There’s no more middle class? Oh, it’s not me. The computer did it’” (Lanier). This quote suggests that people begin not only to blame technology for changes in their lifestyle but also to believe that technology is omnipotent. It also points to a technological determinist perspective in terms of reification.[32]

2. Technologies harm our interpersonal communication, relationships, and communities.[33]
  • decreased communication among family members and friends due to increased time spent using technology
  • virtual space misleadingly heightens the impact of real presence; people now resort to technological media for communication
3. Technologies reinforce hierarchies: they concentrate knowledge and skills, increase surveillance and erode privacy, widen inequalities of power and wealth, and cede control to machines. Douglas Rushkoff, a technological utopian, states in his article that professional designers "re-mystified" the computer so it was no longer so readable; users had to depend on special programs built into the software that were incomprehensible to normal users.[31]

4. New technologies are sometimes regressive (worse than previous technologies).[31]

5. The unforeseen impacts of technology are negative.[31] “ ‘The most common way is that there’s some magic artificial intelligence in the sky or in the cloud or something that knows how to translate, and what a wonderful thing that this is available for free. But there’s another way to look at it, which is the technically true way: You gather a ton of information from real live translators who have translated phrases… It’s huge but very much like Facebook, it’s selling people back to themselves… [With translation] you’re producing this result that looks magical but in the meantime, the original translators aren’t paid for their work… You’re actually shrinking the economy.’”[33]

6. More efficiency and choices can harm our quality of life (by causing stress, destroying jobs, making us more materialistic).[34] In his article “Prest-o! Change-o!,” technological dystopian James Gleick cites the remote control as the classic example of technology that does not solve the problem “it is meant to solve.” Gleick quotes Edward Tenner, a historian of technology, who observes that the ability and ease of switching channels with the remote control serve to increase distraction for the viewer; it is then only to be expected that people will become more dissatisfied with the channel they are watching.[34]

7. New technologies cannot solve the problems of old technologies, or simply create new problems.[31] The remote control example illustrates this claim as well, for the increase in laziness and dissatisfaction was clearly not a problem before the remote control existed. Gleick also cites social psychologist Robert Levine’s example of Indonesians “‘whose main entertainment consists of watching the same few plays and dances, month after month, year after year,’ and with Nepalese Sherpas who eat the same meals of potatoes and tea through their entire lives. The Indonesians and Sherpas are perfectly satisfied.” The invention of the remote control merely created more problems.[34]

8. Technologies destroy nature (harming human health and the environment). The need for business replaced community and the “story online” replaced people as the “soul of the Net.” Because information was now able to be bought and sold, there was not as much communication taking place.[31]

In society

People Leaving the Cities, artwork by Zbigniew Libera picturing a dystopian future in which people leave dying metropolises

Dystopias typically reflect contemporary sociopolitical realities and extrapolate worst-case scenarios as warnings for necessary social change or caution.[35] Dystopian fictions invariably reflect the concerns and fears of their contemporaneous culture.[36] Because of this, they are a subject of social studies.[citation needed] Syreeta McFadden notes that contemporary dystopian literature and films increasingly draw their inspiration from the worst imaginings of ourselves and from present reality, often making it hard to distinguish between entertainment and reality.[35]

In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Christopher Schmidt notes that while the world goes to waste for future generations, we distract ourselves from disaster by passively watching it as entertainment.[37]

Recent years have seen a surge of popular dystopian young adult literature and blockbuster films.[38][37] Theo James, an actor in Divergent, notes that "young people in particular have such a fascination with this kind of story", saying "It's becoming part of the consciousness. You grow up in a world where it's part of the conversation all the time – the statistics of our planet warming up. The environment is changing. The weather is different. There are things that are very visceral and very obvious, and they make you question the future and how we will survive. It's so much a part of everyday life that young people inevitably — consciously or not — are questioning their futures and how the Earth will be. I certainly do. I wonder what kind of world my children's kids will live in."[38]

Some have commented on this trend, saying that "it is easier to imagine the end of the world than it is to imagine the end of capitalism".

Internet culture

From Wikipedia, the free encyclopedia

Internet culture, or cyberculture, is the culture that has emerged, or is emerging, from the use of computer networks for communication, entertainment, and business. Internet culture is also the study of various social phenomena associated with the Internet and other new forms of network communication, such as online communities, online multi-player gaming, wearable computing, social gaming, social media, mobile apps, augmented reality, and texting,[1] and includes issues related to identity, privacy, and network formation.

Overview

The internet is one gigantic well-stocked fridge ready for raiding; for some strange reason, people go up there and just give stuff away.
Mega 'Zines, Macworld (1995)[2]
Since the boundaries of cyberculture are difficult to define, the term is used flexibly, and its application to specific circumstances can be controversial. It generally refers at least to the cultures of virtual communities, but extends to a wide range of cultural issues relating to "cyber-topics", e.g. cybernetics, and the perceived or predicted cyborgization of the human body and human society itself. It can also embrace associated intellectual and cultural movements, such as cyborg theory and cyberpunk. The term often incorporates an implicit anticipation of the future.

The Oxford English Dictionary lists the earliest usage of the term "cyberculture" in 1963, when A.M. Hilton wrote the following, "In the era of cyberculture, all the plows pull themselves and the fried chickens fly right onto our plates."[3] This example, and all others, up through 1995 are used to support the definition of cyberculture as "the social conditions brought about by automation and computerization."[3] The American Heritage Dictionary broadens the sense in which "cyberculture" is used by defining it as, "The culture arising from the use of computer networks, as for communication, entertainment, work, and business".[4] However, what both the OED and the American Heritage Dictionary miss is that cyberculture is the culture within and among users of computer networks. This cyberculture may be purely an online culture or it may span both virtual and physical worlds. This is to say, that cyberculture is a culture endemic to online communities; it is not just the culture that results from computer use, but culture that is directly mediated by the computer. Another way to envision cyberculture is as the electronically enabled linkage of like-minded, but potentially geographically disparate (or physically disabled and hence less mobile) persons.

Cyberculture is a wide social and cultural movement closely linked to advanced information science and information technology, and to their emergence, development and rise to social and cultural prominence between the 1960s and the 1990s. Cyberculture was influenced at its genesis by those early users of the internet, frequently including the architects of the original project. These individuals were often guided in their actions by the hacker ethic. While early cyberculture was based on a small cultural sample and its ideals, modern cyberculture comprises a much more diverse group of users and the ideals that they espouse.

Numerous specific concepts of cyberculture have been formulated by such authors as Lev Manovich,[5][6] Arturo Escobar and Fred Forest.[7] However, most of these concepts concentrate only on certain aspects, and they do not cover these in great detail. Some authors aiming to achieve a more comprehensive understanding distinguish between early and contemporary cyberculture (Jakub Macek),[8] or between cyberculture as the cultural context of information technology and cyberculture studies as "a particular approach to the study of the 'culture + technology' complex" (David Lister et al.).[9]

Manifestations

Manifestations of cyberculture include various human interactions mediated by computer networks. They can be activities, pursuits, games, places and metaphors, and include a diverse base of applications. Some are supported by specialized software and others work on commonly accepted internet protocols. Examples include the online communities, multi-player gaming, social media, and other forms of network communication described in the overview above.

Qualities

First and foremost, cyberculture derives from traditional notions of culture, as the roots of the word imply. In non-cyberculture, it would be odd to speak of a single, monolithic culture. In cyberculture, by extension, searching for a single thing that is cyberculture would likewise be problematic. The notion that there is a single, definable cyberculture likely stems from the complete dominance of early cyber territory by affluent North Americans. Writing by early proponents of cyberspace tends to reflect this assumption (see Howard Rheingold).[10]

The ethnography of cyberspace is an important aspect of cyberculture that does not reflect a single unified culture. It "is not a monolithic or placeless 'cyberspace'; rather, it is numerous new technologies and capabilities, used by diverse people, in diverse real-world locations." It is malleable, perishable, and can be shaped by the vagaries of external forces on its users. For example, the laws of physical world governments, social norms, the architecture of cyberspace, and market forces shape the way cybercultures form and evolve. As with physical world cultures, cybercultures lend themselves to identification and study.

There are several qualities that cybercultures share that make them warrant the prefix "cyber-". Some of those qualities are that cyberculture:
  • Is a community mediated by ICTs.
  • Is culture "mediated by computer screens".[10]:63
  • Relies heavily on the notion of information and knowledge exchange.
  • Depends on the ability to manipulate tools to a degree not present in other forms of culture (even artisan culture, e.g., a glass-blowing culture).
  • Allows vastly expanded weak ties and has been criticized for overly emphasizing the same (see Bowling Alone and other works).
  • Multiplies the number of eyeballs on a given problem, beyond that which would be possible using traditional means, given physical, geographic, and temporal constraints.
  • Is a "cognitive and social culture, not a geographic one".[10]:61
  • Is "the product of like-minded people finding a common 'place' to interact."[11]:58
  • Is inherently more "fragile" than traditional forms of community and culture (John C. Dvorak).
Thus, cyberculture can be generally defined as the set of technologies (material and intellectual), practices, attitudes, modes of thought, and values that developed with cyberspace.[12]

Identity – "Architectures of credibility"

Cyberculture, like culture in general, relies on establishing identity and credibility. However, in the absence of direct physical interaction, it could be argued that the process for such establishment is more difficult.

How does cyberculture rely on and establish identity and credibility? This relationship is two-way, with identity and credibility being both used to define the community in cyberspace and to be created within and by online communities.

In some senses, online credibility is established in much the same way that it is established in the offline world; however, since these are two separate worlds, it is not surprising that there are differences in their mechanisms and in the interactions of the credibility markers found in each.

Following the model put forth by Lawrence Lessig in Code: Version 2.0,[13] the architecture of a given online community may be the single most important factor regulating the establishment of credibility within online communities. Some factors may be:
  • Anonymous versus Known
  • Linked to Physical Identity versus Internet-based Identity Only
  • Unrated Commentary System versus Rated Commentary System
  • Positive Feedback-oriented versus Mixed Feedback (positive and negative) oriented
  • Moderated versus Unmoderated
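
A minimal, purely illustrative sketch (in Python; the type and field names are hypothetical, not drawn from Lessig) of how these dimensions can be treated as a configuration that each community's architecture chooses:

    from dataclasses import dataclass
    from enum import Enum

    class IdentityLink(Enum):
        ANONYMOUS = "anonymous"
        INTERNET_ONLY = "internet-based identity only"
        PHYSICAL = "linked to physical identity"

    class Feedback(Enum):
        POSITIVE_ONLY = "positive feedback only"
        MIXED = "positive and negative feedback"

    class Moderation(Enum):
        UNMODERATED = "unmoderated"
        REACTIVE = "reactive: offending posts removed after publication"
        PROACTIVE = "proactive: every contribution reviewed before publication"

    @dataclass
    class CommunityArchitecture:
        # One point in the design space listed above (names are illustrative).
        identity: IdentityLink
        rated_commentary: bool   # can commenters themselves be rated?
        feedback: Feedback
        moderation: Moderation

    # Two hypothetical configurations: an auction-style marketplace
    # and an anonymous discussion forum.
    marketplace = CommunityArchitecture(IdentityLink.INTERNET_ONLY, True,
                                        Feedback.MIXED, Moderation.REACTIVE)
    forum = CommunityArchitecture(IdentityLink.ANONYMOUS, True,
                                  Feedback.MIXED, Moderation.UNMODERATED)
    print(marketplace.moderation.value)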

Anonymous versus known

Many sites allow anonymous commentary, where the user-id attached to the comment is something like "guest" or "anonymous user". In an architecture that allows anonymous posting about other works, the credibility being impacted is only that of the product for sale, the original opinion expressed, the code written, the video, or other entity about which comments are made (e.g., a Slashdot post). Sites that require "known" postings can vary widely, from simply requiring some kind of name to be associated with the comment to requiring registration, wherein the identity of the registrant is visible to other readers of the comment. These "known" identities allow and even require commentators to be aware of their own credibility, based on the fact that other users will associate particular content and styles with their identity. By definition, then, all blog postings are "known" in that the blog exists in a consistently defined virtual location, which helps to establish an identity, around which credibility can gather. Conversely, anonymous postings inherently lack credibility. Note that a "known" identity need have nothing to do with a given identity in the physical world.

Linked to physical identity versus internet-based identity only

Architectures can require that physical identity be associated with commentary, as in Lessig's example of Counsel Connect.[13]:94–97 However, to require linkage to physical identity, many more steps must be taken (collecting and storing sensitive information about a user), and safeguards for that collected information must be established; users must place more trust in the sites collecting the information (yet another form of credibility). Irrespective of safeguards, as with Counsel Connect,[13]:94–97 using physical identities links credibility across the frames of the internet and real space, influencing the behaviors of those who contribute in those spaces. However, even purely internet-based identities have credibility. Just as Lessig describes linkage to a character or a particular online gaming environment, nothing inherently links a person or group to their internet-based persona, but credibility (similar to "characters") is "earned rather than bought, and because this takes time and (credibility is) not fungible, it becomes increasingly hard" to create a new persona.[13]:113

Unrated commentary system versus rated commentary system

In some architectures those who review or offer comments can, in turn, be rated by other users. This technique offers the ability to regulate the credibility of given authors by subjecting their comments to direct "quantifiable" approval ratings.

Positive feedback-oriented versus mixed feedback (positive and negative) oriented

Architectures can be oriented around positive feedback or a mix of both positive and negative feedback. While a particular user may be able to equate fewer stars with a "negative" rating, the semantic difference is potentially important. The ability to actively rate an entity negatively may violate laws or norms that matter in the jurisdiction in which the internet property operates. The more public a site, the more important this concern may be, as noted by Goldsmith & Wu regarding eBay.[14]

Moderated versus unmoderated

Architectures can also be oriented to give editorial control to a group or individual. Many email lists work in this fashion (e.g., Freecycle). In these situations, the architecture usually allows, but does not require, that contributions be moderated. Further, moderation may take two different forms: reactive or proactive. In the reactive mode, an editor removes posts, reviews, or content that is deemed offensive after it has been placed on the site or list. In the proactive mode, an editor must review all contributions before they are made public.

In a moderated setting, credibility is often given to the moderator. However, that credibility can be damaged by appearing to edit in a heavy-handed way, whether reactively or proactively (as experienced by digg.com). In an unmoderated setting, credibility lies with the contributors alone. The very existence of an architecture allowing moderation may lend credibility to the forum being used (as in Howard Rheingold's examples from the WELL),[10] or it may take away credibility (as in corporate web sites that post feedback but edit it heavily).

Cyberculture studies

The field of cyberculture studies examines the topics explained above, including the communities emerging within the networked spaces sustained by the use of modern technology. Students of cyberculture engage with political, philosophical, sociological, and psychological issues that arise from the networked interactions of human beings acting in various relations to information science and technology.
Donna Haraway, Sadie Plant, Manuel De Landa, Bruce Sterling, Kevin Kelly, Wolfgang Schirmacher, Pierre Lévy, David Gunkel, Victor J. Vitanza, Gregory Ulmer, Charles D. Laughlin, and Jean Baudrillard are among the key theorists and critics who have produced relevant work that speaks to, or has influenced studies in, cyberculture. Following the lead of Rob Kitchin, in his work Cyberspace: The World in the Wires, we might view cyberculture from different critical perspectives. These perspectives include futurism or techno-utopianism, technological determinism, social constructionism, postmodernism, poststructuralism, and feminist theory.[11]:56–72

Technological utopianism

From Wikipedia, the free encyclopedia

Technological utopianism (often called techno-utopianism or technoutopianism) is any ideology based on the premise that advances in science and technology could and should bring about a utopia, or at least help to fulfill one or another utopian ideal. A techno-utopia is therefore an ideal society, in which laws, government, and social conditions operate solely for the benefit and well-being of all its citizens, set in the near or far future, as advanced science and technology will allow these ideal living standards to exist; for example, post-scarcity, transformations in human nature, the avoidance or prevention of suffering, and even the end of death. Technological utopianism is often connected with other discourses presenting technologies as agents of social and cultural change, such as technological determinism or media imaginaries.

Douglas Rushkoff, a leading theorist on technology and cyberculture, claims that technology gives everyone a chance to voice their own opinions, fosters individualistic thinking, and dilutes hierarchy and power structures by giving power to the people.[2] He says that the whole world is in the middle of a new Renaissance, one that is centered on technology and self-expression. However, Rushkoff makes it clear that “people don’t live their lives behind a desk with their hands on a keyboard.”[3]

A tech-utopia does not disregard the problems that technology may cause,[4] but strongly believes that technology allows mankind to make social, economic, political, and cultural advancements.[5] Overall, technological utopianism views technology’s impacts as extremely positive.

In the late 20th and early 21st centuries, several ideologies and movements, such as the cyberdelic counterculture, the Californian Ideology, transhumanism,[6] and singularitarianism, have emerged promoting a form of techno-utopia as a reachable goal. Cultural critic Imre Szeman argues technological utopianism is an irrational social narrative because there is no evidence to support it. He concludes that it shows the extent to which modern societies place faith in narratives of progress and technology overcoming things, despite all evidence to the contrary.[7]

History

From the 19th to mid-20th centuries

Karl Marx believed that science and democracy were the right and left hands of what he called the move from the realm of necessity to the realm of freedom. He argued that advances in science helped delegitimize the rule of kings and the power of the Christian Church.[8]

19th-century liberals, socialists, and republicans often embraced techno-utopianism. Radicals like Joseph Priestley pursued scientific investigation while advocating democracy. Robert Owen, Charles Fourier and Henri de Saint-Simon in the early 19th century inspired communalists with their visions of a future scientific and technological evolution of humanity using reason. Radicals seized on Darwinian evolution to validate the idea of social progress. Edward Bellamy’s socialist utopia in Looking Backward, which inspired hundreds of socialist clubs in the late 19th century United States and a national political party, was as highly technological as Bellamy’s imagination. For Bellamy and the Fabian Socialists, socialism was to be brought about as a painless corollary of industrial development.[8]

Marx and Engels saw more pain and conflict involved, but agreed about the inevitable end. Marxists argued that the advance of technology laid the groundwork not only for the creation of a new society, with different property relations, but also for the emergence of new human beings reconnected to nature and themselves. At the top of the agenda for empowered proletarians was "to increase the total productive forces as rapidly as possible". The 19th and early 20th century Left, from social democrats to communists, were focused on industrialization, economic development and the promotion of reason, science, and the idea of progress.[8]

Some technological utopians promoted eugenics. Holding that in studies of families, such as the Jukes and Kallikaks, science had proven that many traits such as criminality and alcoholism were hereditary, many advocated the sterilization of those displaying negative traits. Forcible sterilization programs were implemented in several states in the United States.[9]

H. G. Wells, in works such as The Shape of Things to Come, promoted technological utopianism.

The horrors of the 20th century – namely fascist dictatorships and the world wars – caused many to abandon optimism. The Holocaust, as Theodor Adorno underlined, seemed to shatter the ideal of Condorcet and other thinkers of the Enlightenment, which commonly equated scientific progress with social progress.[10]

From the late 20th to early 21st centuries

The Goliath of totalitarianism will be brought down by the David of the microchip.
— Ronald Reagan, The Guardian, 14 June 1989
A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly on the West Coast of the United States, especially around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the "bible" of its adherents.

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular – of which the Internet was but a modest harbinger – would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. "Self-empowered knowledge workers" would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an "obsolete remnant of the industrial age".

Similar forms of "digital utopianism" have often entered the political messages of parties and social movements that point to the Web, or more broadly to new media, as harbingers of political and social change.[14] Its adherents claim it transcended conventional "right/left" distinctions in politics by rendering politics obsolete. However, techno-utopianism has disproportionately attracted adherents from the libertarian right end of the political spectrum. Therefore, techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent "oracles" of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of "permanent prosperity" had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality.[12][13]

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism rose among advocates of technological change as critical alternatives to techno-utopianism.[15][16] However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society.[17] Cyber-utopianism, a term coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.

Principles

Bernard Gendron, a professor of philosophy at the University of Wisconsin–Milwaukee, defines the four principles of modern technological utopians in the late 20th and early 21st centuries as follows:[18]
  1. We are presently undergoing a (post-industrial) revolution in technology;
  2. In the post-industrial age, technological growth will be sustained (at least);
  3. In the post-industrial age, technological growth will lead to the end of economic scarcity;
  4. The elimination of economic scarcity will lead to the elimination of every major social evil.
Rushkoff presents us with multiple claims that surround the basic principles of Technological Utopianism:[19]
  1. Technology reflects and encourages the best aspects of human nature, fostering “communication, collaboration, sharing, helpfulness, and community.”[20]
  2. Technology improves our interpersonal communication, relationships, and communities. Early Internet users shared their knowledge of the Internet with others around them.
  3. Technology democratizes society. The expansion of access to knowledge and skills led to the connection of people and information. The broadening of freedom of expression created “the online world...in which we are allowed to voice our own opinions.”[21] The reduction of the inequalities of power and wealth meant that everyone has an equal status on the internet and is allowed to do as much as the next person.
  4. Technology inevitably progresses. The interactivity that came from the inventions of the TV remote control, video game joystick, computer mouse and computer keyboard allowed for much more progress.
  5. Unforeseen impacts of technology are positive. As more people discovered the Internet, they took advantage of being linked to millions of people, and turned the Internet into a social revolution. The government released it to the public, and its “social side effect… [became] its main feature.”[20]
  6. Technology increases efficiency and consumer choice. The creation of the TV remote, video game joystick, and computer mouse liberated these technologies and allowed users to manipulate and control them, giving them many more choices.
  7. New technology can solve the problems created by old technology. Social networks and blogs were created out of the collapse of dot.com bubble businesses’ attempts to run pyramid schemes on users.

Criticisms

Critics claim that techno-utopianism's identification of social progress with scientific progress is a form of positivism and scientism. Critics of modern libertarian techno-utopianism point out that it tends to focus on "government interference" while dismissing the positive effects of the regulation of business. They also point out that it has little to say about the environmental impact of technology[22] and that its ideas have little relevance for much of the rest of the world, which is still relatively poor (see global digital divide).[11][12][13]

In his 2010 study System Failure: Oil, Futurity, and the Anticipation of Disaster, Canada Research Chairholder in cultural studies Imre Szeman argues that technological utopianism is one of the social narratives that prevent people from acting on the knowledge they have concerning the effects of oil on the environment.[7]

In a controversial article, "Techno-Utopians are Mugged by Reality", the Wall Street Journal explores the concept of the violation of free speech by shutting down social media to stop violence. After consecutive nights of looting in British cities, British Prime Minister David Cameron argued that the government should have the ability to shut down social media during crime sprees so that the situation could be contained. A poll was conducted to see whether Twitter users would prefer to let the service be closed temporarily or keep it open so they could chat about the famous television show X-Factor; the end report showed that every tweet opted for X-Factor. The negative social effect of technological utopia is that society is so addicted to technology that it simply cannot be parted from it, even for the greater good. While many techno-utopians would like to believe that digital technology is for the greater good, it can also be used negatively to bring harm to the public.[23]

Other criticisms of a techno-utopia involve concern for the human element. Critics suggest that a techno-utopia may lessen human contact, leading to a distant society. Another concern is the degree of reliance society may place on its technologies in these techno-utopia settings.[24] These criticisms are sometimes referred to as a technological anti-utopian view or a techno-dystopia.

Even today, the negative social effects of a technological utopia can be seen. Mediated communication such as phone calls, instant messaging and text messaging are steps towards a utopian world in which one can easily contact another regardless of time or location. However, mediated communication removes many aspects that are helpful in transferring messages. As it stands today, most text, email, and instant messages offer fewer nonverbal cues about the speaker’s feelings than do face-to-face encounters.[25] As a result, mediated communication can easily be misconstrued and the intended message not properly conveyed. With the absence of tone, body language, and environmental context, the chance of a misunderstanding is much higher, rendering the communication ineffective. In fact, mediated technology can be seen from a dystopian view because it can be detrimental to effective interpersonal communication. These criticisms only apply to messages that are prone to misinterpretation, as not every text-based communication requires contextual cues. The limitations of lacking tone and body language in text-based communication are likely to be mitigated by video and augmented reality versions of digital communication technologies.[26]

Essay by Ray Kurzweil | A new era: medicine is an information technology

The impact on health care is bigger than genetics

Is it time to rethink the promise of genomics?

There has been recent disappointment expressed in the progress in the field of genomics. In my view, this results from an overly narrow view of the science of genes and biological information processing in general. It reminds me of the time when the field of “artificial intelligence” (AI) was equated with the methodology of “expert systems.” If someone referred to AI they were actually referring to expert systems and there were many articles on how limited this technique was and all of the things that it could not and would never be able to do.

At the time, I expressed my view that although expert systems were a useful approach for a certain limited class of problems, they did indeed have restrictions, and that the field of AI was far broader.

The human brain works primarily by recognizing patterns (we have about a billion pattern recognizers in the neocortex, for example), and there were at the time many emerging methods in the field of pattern recognition that were solving real-world problems and that should properly be considered part of the AI field. Today, no one talks much about expert systems; there is a thriving multi-hundred-billion-dollar AI industry and a consensus in the AI field that nonbiological intelligence will continue to grow in sophistication, flexibility, and diversity.

The same thing is happening here. The problem starts with the word “genomics.” The word sounds like it refers to “all things having to do with genes.” But as practiced, it deals almost exclusively with single genes and their ability to predict traits or conditions, which has always been a narrow concept. The idea of sequencing the genes of an individual is even narrower and typically involves individual single-nucleotide polymorphisms (SNPs), which are variations in a single nucleotide (A, T, C or G) within a gene, basically a two-bit alteration.

I have never been overly impressed with this approach and saw it as a first step based on the limitations of early technology. There are some useful SNPs such as Apo E4 but even here it only gives you statistical information on your likelihood of such conditions as Alzheimer’s Disease and macular degeneration based on population analyses. It is certainly not deterministic and has never been thought of that way. As Dr. Venter points out in his Der Spiegel interview, there are hundreds of diseases that can be traced to defects in individual genes, but most of these affect developmental processes. So if you provide a medication that reverses the effect of the faulty gene you still have the result of the developmental process (of, say, the nervous system) that has been going on for many years. You would need to detect and reverse the condition very early, which of course is possible and a line of current investigation.

To put this narrow concept of genomics into perspective, think of genes as analogous to lines of code in a software program. If you examine a software program, you generally cannot assign each line of code to a property of the program. The lines of code work together in a complex way to produce a result. Now it is possible that in some circumstances you may be able to find one line of code that is faulty and improve the program’s performance by fixing that one line or even by removing it. But such an approach would be incidental and accidental; it is not the way that one generally thinks about software. To understand the program you would need to understand the language it is written in and how the various lines interact with each other. In this analogy, a SNP would be comparable to a single letter within a single line (actually a quarter of one letter, to be precise, since a letter is usually represented by 8 bits and a nucleotide by 2 bits). You might be able to find a particularly critical letter in a software program, but again that is not a well-motivated approach.
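
A minimal sketch of the bit arithmetic behind this comparison (Python; it uses only the 2-bits-per-nucleotide and 8-bits-per-character figures mentioned above):

    import math

    # Each DNA position holds one of four bases (A, C, G, T),
    # so it carries log2(4) = 2 bits of information.
    bits_per_nucleotide = math.log2(4)   # 2.0
    bits_per_character = 8               # one ASCII byte of program text

    # A SNP alters a single nucleotide, i.e. a 2-bit change:
    # roughly a quarter of one "letter" of source code.
    fraction_of_a_letter = bits_per_nucleotide / bits_per_character
    print(f"{bits_per_nucleotide:.0f} bits, {fraction_of_a_letter:.2f} of a character")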

The collection of the human genome was indeed an exponential process, with the amount of genetic data doubling each year and the cost of sequencing coming down by half each year. But its completion around 2003 was just the beginning of another, even more daunting process, which is to understand it. The language is the three-dimensional properties and interactions of proteins. We started with individual genes as a reasonable place to start, but that was always going to be inherently limited if you consider my analogy above to the role of single lines in a software program.
The structure of DNA. (Image: The U.S. National Library of Medicine)
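
A small illustrative calculation of how the yearly halving of sequencing cost described above compounds (Python; the starting cost is an assumed placeholder, not a published price):

    # Illustrative only: if the cost of sequencing halves every year,
    # the reduction compounds to roughly a thousandfold per decade.
    cost = 100_000_000.0   # assumed cost at year 0, in dollars
    for year in range(1, 11):
        cost /= 2
        print(f"year {year:2d}: ~${cost:,.0f}")
    # After 10 halvings the cost is 1/1024 (about 0.1%) of the original.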

As we consider the genome, the first thing we notice is that only about 3 percent of the human genome codes for proteins. With about 23,000 genes, there are over 23,000 proteins (as some genes code for more than one protein), and, of course, these proteins interact with each other in complicated pathways.

A trait in a complex organism such as a human being is actually an emergent property of this complex and organized collection of proteins. The 97 percent of the genome that does not code for proteins was originally called “junk DNA.”

We now understand that this portion of the genome has an important role in controlling and influencing gene expression. It is the case that there is less information in these non-coding regions and they are replete with redundancies that we do not see in the coding regions.

For example, one lengthy sequence called ALU is repeated hundreds of thousands of times. Gene expression is a vital aspect of understanding these genetic processes. The noncoding DNA plays an important role in this, but so do environmental factors. Even ignoring the concept that genes work in networks not as individual entities, genes have never been thought of as deterministic.

The “nature versus nurture” discussion goes back eons. What our genetic heritage describes (and by genetic heritage I include the epigenetic information that influences gene expression) is an entity (a human being) that is capable of evolving in and adapting to a complex environment. Our brain, for example, only becomes capable of intelligent decision making through its constant adaptation to and learning from its environment.

To reverse-engineer biology we need to examine phenomena at different levels, especially looking at the role that proteins (which are coded for in the genome) play in biological processes. In understanding the brain, for example, there is indeed exponential progress being made in simulating neurons, neural clusters, and entire regions. This work includes understanding the “wiring” of the brain (which incidentally includes massive redundancy) and how the modules in the brain (which involve multiple neuron types) process information. Then we can link these processes to biochemical pathways, which ultimately links back to genetic information. But in the process of reverse-engineering the brain, genetic information is only one source and not the most important one at that.

So genes are one level of understanding biology as an information process, but there are other levels as well, and some of these other levels (such as actual biochemical pathways, or mechanisms in organs including the brain) are more accessible than genetic information. In any event, just examining individual genes, let alone SNPs, is like looking through a very tiny keyhole.

As another example of why the idea of examining individual genes is far from sufficient, I am currently involved with a cancer stem cell project with MIT scientists Dr. William Thilly and Dr. Elena Gostjeva. What we have found is that mutations in certain stem cells early in life will turn that stem cell into a cancer stem cell which in turn will reproduce and ultimately seed a cancer tumor. It can take years and often decades for the tumor to become clinically evident. But you won’t find these mutations in a blood test because they are mutations originally in a single cell (which then reproduces to create nearby cells), not in all of your cells. However, understanding the genetic mutations is helping us to understand the process of metastasis, which we hope will lead to treatments that can inhibit the formation of new tumors. This is properly part of gene science but is not considered part of the narrow concept of “genomics,” as that term is understood.

Indeed there is a burgeoning field of stem cell treatments using adult stem cells in the positive sense of regenerating needed tissues. This is certainly a positive and clinically relevant result of the overall science and technology of genes.

If we consider the science and technology of genes and information processing in biology in its proper broad context, there are many exciting developments that have current or near term clinical implications, and enormous promise going forward.

A few years ago, Joslin Diabetes Center researchers showed that inhibiting a particular gene (which they called the fat insulin receptor gene) in the fat cells (but not the muscle cells, as that would negatively affect muscle) enabled caloric restriction without the restriction. The test animals ate ravenously and remained slim. They did not get diabetes or heart disease and lived 20 percent longer, getting most of the benefit of caloric restriction. This research is now continuing with a focus on doing the same thing in humans, and the researchers, whom I spoke with recently, are optimistic.

We have a new technology that can turn genes off, one that has emerged since the completion of the human genome project (and that has already been recognized with the Nobel Prize): RNA interference (RNAi). There are hundreds of drugs and other processes in the development and testing pipeline using this methodology. As I said above, human characteristics, including disease, result from the interplay of multiple genes. There are often individual genes which, if inhibited, can have a significant therapeutic effect (just as we might disable a rogue software program by overwriting one line of code or one machine instruction).

There are also new methods of adding genes. I am an advisor (and board member) to United Therapeutics, which has developed a method to take lung cells out of the body, add a new gene in vitro (so that the immune system is not triggered — which was a downside of the old methods of gene therapy), inspect the new cell, and replicate it several million fold. You now have millions of cells with your DNA but with a new gene that was not there before. These are injected back into the body and end up lodged in the lungs. This has cured a fatal disease (pulmonary hypertension) in animal trials and is now undergoing human testing. There are also hundreds of such projects using this and other new forms of gene therapy.

As we understand the network of genes that are responsible for human conditions, especially reversible diseases, we will have the means of changing multiple genes, and turning some off or inhibiting them, turning others on or amplifying them. Some of these approaches are entering human trials. More complex approaches involving multiple genes will require greater understanding of gene networks but that is coming.

There is a new wave of drugs entering trials, some in late-stage trials, that are based on gene results. For example, an experimental drug from Roche, PLX4032, is designed to attack tumor cells with a mutation in a particular gene called BRAF. Among patients with advanced melanoma who carry this genetic variant, 81 percent had their tumors shrink (rather than grow), which is an impressive result for a form of cancer that is generally resistant to conventional treatment.

There is the whole area of regenerative medicine from stem cells. Some of this is now being done with adult autologous stem cells. Particularly exciting is the recent breakthrough in induced pluripotent stem cells (IPSCs). This involves using in-vitro genetic engineering to add genes to normal adult cells (such as skin cells) to convert them into the equivalent of embryonic stem cells, which can subsequently be converted into any type of cell (with your own DNA). IPSCs have been shown to be pluripotent, to have efficacy, and not to trigger the immune system because they are genetically identical to the patient's own cells. IPSCs offer the potential to repair essentially any organ, from the heart to the liver and pancreas. These methods are part of genetic engineering, which in turn is part of gene science and technology.

And then of course there is the entire new field of synthetic biology, which is based on synthetic genomes. A major enabling breakthrough was recently announced by Craig Venter’s company: the creation of an organism with a synthetic genome that previously existed only as a computer file. This field works with entire genomes, not just individual genes, and it is certainly part of the broad field of gene science and technology. The goal is to create organisms that can do useful work, such as producing vaccines and other medicines, biofuels, and other valuable industrial substances.

You could write a book (or many books) about all of the advances being made in which knowledge of genetic processes and other biological information processes plays a critical role. Health and medicine used to be entirely hit or miss, with no concept of how biology worked at the level of information. Our knowledge of these processes is still very incomplete, but it is growing exponentially, and it is feeding into medical research that is already bearing fruit. To focus only on the narrow concepts originally associated with “genomics” is as limited a view as the old idea that AI was just expert systems.

Cognitive liberty

From Wikipedia, the free encyclopedia
Cognitive liberty, or the "right to mental self-determination", is the freedom of an individual to control his or her own mental processes, cognition, and consciousness. It has been argued to be both an extension of, and the principle underlying, the right to freedom of thought.[1][2][3] Though a relatively recently defined concept, many theorists see cognitive liberty as being of increasing importance as technological advances in neuroscience allow for an ever-expanding ability to directly influence consciousness.[4] Cognitive liberty is not a recognized right in any international human rights treaties, but has gained a limited level of recognition in the United States, and is argued to be the principle underlying a number of recognized rights.[5]

Overview

The term "cognitive liberty" was coined by neuroethicist Dr. Wrye Sententia and legal theorist and lawyer Richard Glen Boire, the founders and directors of the non-profit Center for Cognitive Liberty and Ethics (CCLE).[6] Sententia and Boire define cognitive liberty as "the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought."[7]

Sententia and Boire conceived of the concept of cognitive liberty as a response to the increasing ability of technology to monitor and manipulate cognitive function, and the corresponding increase in the need to ensure individual cognitive autonomy and privacy.[8] Sententia divides the practical application of cognitive liberty into two principles:
  1. As long as their behavior does not endanger others, individuals should not be compelled against their will to use technologies that directly interact with the brain or be forced to take certain psychoactive drugs.
  2. As long as they do not subsequently engage in behavior that harms others, individuals should not be prohibited from, or criminalized for, using new mind-enhancing drugs and technologies.[9]
These two facets of cognitive liberty are reminiscent of Timothy Leary's "Two Commandments for the Molecular Age", from his 1968 book The Politics of Ecstasy:
  1. Thou shalt not alter the consciousness of thy fellow man.
  2. Thou shalt not prevent thy fellow man from altering his own consciousness.[10]
Supporters of cognitive liberty therefore seek to impose both a negative and a positive obligation on states: to refrain from non-consensually interfering with an individual's cognitive processes, and to allow individuals to self-determine their own "inner realm" and control their own mental functions.[11]

Freedom from interference

This first obligation, to refrain from non-consensually interfering with an individual's cognitive processes, seeks to protect individuals from having their mental processes altered or monitored without their consent or knowledge, "setting up a defensive wall against unwanted intrusions".[12] Ongoing improvements to neurotechnologies such as transcranial magnetic stimulation and electroencephalography (or "brain fingerprinting"), and to pharmacology in the form of selective serotonin reuptake inhibitors (SSRIs), nootropics, modafinil, and other psychoactive drugs, continue to increase the ability to both monitor and directly influence human cognition.[13][14][15] As a result, many theorists have emphasized the importance of recognizing cognitive liberty in order to protect individuals from the state using such technologies to alter those individuals’ mental processes: "states must be barred from invading the inner sphere of persons, from accessing their thoughts, modulating their emotions or manipulating their personal preferences."[16]

This element of cognitive liberty has been raised in relation to a number of state-sanctioned interventions in individual cognition, from the mandatory psychiatric 'treatment' of homosexuals in the US before the 1970s, to the non-consensual administration of psychoactive drugs to unwitting US citizens during the CIA's Project MKUltra, to the forcible administration of mind-altering drugs to individuals to make them competent to stand trial.[17][18] Futurist and bioethicist George Dvorsky, Chair of the Board of the Institute for Ethics and Emerging Technologies, has identified this element of cognitive liberty as relevant to the debate around the curing of autism spectrum conditions.[19] Duke University School of Law Professor Nita Farahany has also proposed legislative protection of cognitive liberty as a way of safeguarding the protection from self-incrimination found in the Fifth Amendment to the US Constitution, in light of the increasing ability to access human memory.[20]

Though this element of cognitive liberty is often defined as an individual’s freedom from state interference with human cognition, Jan Christoph Bublitz and Reinhard Merkel, among others, suggest that cognitive liberty should also prevent other, non-state entities from interfering with an individual’s mental "inner realm".[21][22] Bublitz and Merkel propose the introduction of a new criminal offense punishing "interventions severely interfering with another’s mental integrity by undermining mental control or exploiting pre-existing mental weakness."[23] Direct interventions that reduce or impair cognitive capacities such as memory, concentration, and willpower; alter preferences, beliefs, or behavioral dispositions; elicit inappropriate emotions; or inflict clinically identifiable mental injuries would all be prima facie impermissible and subject to criminal prosecution.[24] Sententia and Boire have also expressed concern that corporations and other non-state entities might utilize emerging neurotechnologies to alter individuals' mental processes without their consent.[25][26]

Freedom to self-determine

Where the first obligation seeks to protect individuals from interference with cognitive processes by the state, corporations, or other individuals, this second obligation seeks to ensure that individuals have the freedom to alter or enhance their own consciousness.[27] An individual who enjoys this aspect of cognitive liberty has the freedom to alter their mental processes in any way they wish, whether through indirect methods such as meditation, yoga, or prayer, or through direct cognitive intervention using psychoactive drugs or neurotechnology.

As psychotropic drugs are a powerful method of altering cognitive function, many advocates of cognitive liberty are also advocates of drug law reform, claiming that the "war on drugs" is in fact a "war on mental states".[28] The CCLE, as well as other cognitive liberty advocacy groups such as Cognitive Liberty UK, has lobbied for the re-examination and reform of prohibited drug law; one of the CCLE's key guiding principles is that "governments should not criminally prohibit cognitive enhancement or the experience of any mental state".[29] Calls for reform of restrictions on the use of prescription cognitive-enhancement drugs (also called smart drugs or nootropics) such as Prozac, Ritalin, and Adderall have also been made on the grounds of cognitive liberty.[30]

This element of cognitive liberty is also of great importance to proponents of the transhumanist movement, a key tenet of which is the enhancement of human mental function. Dr. Wrye Sententia has emphasized the importance of cognitive liberty in ensuring the freedom to pursue human mental enhancement, as well as the freedom to choose against enhancement.[31] Sententia argues that the recognition of a "right to (and not to) direct, modify, or enhance one's thought processes" is vital to the free application of emerging neurotechnology to enhance human cognition, and that something beyond the current conception of freedom of thought is needed.[32] Sententia claims that "cognitive liberty's strength is that it protects those who do want to alter their brains, but also those who do not".[33]

Relationship with recognized human rights

Cognitive liberty is not currently recognized as a human right by any international human rights treaty.[34] While freedom of thought is recognized by Article 18 of the Universal Declaration of Human Rights (UDHR), freedom of thought can be distinguished from cognitive liberty in that the former is concerned with protecting an individual’s freedom to think whatever they want, whereas cognitive liberty is concerned with protecting an individual’s freedom to think however they want.[35] Cognitive liberty seeks to protect an individual’s right to determine their own state of mind and to be free from external control over that state of mind, rather than just protecting the content of an individual’s thoughts.[36] It has been suggested that the lack of protection of cognitive liberty in earlier human rights instruments was due to the relative lack of technology capable of directly interfering with mental autonomy at the time the core human rights treaties were created.[37] As the human mind was considered invulnerable to direct manipulation, control, or alteration, it was deemed unnecessary to expressly protect individuals from unwanted mental interference.[38] With modern advances in neuroscience, however, and in anticipation of its future development, it is argued that such express protection is becoming increasingly necessary.[39]

Cognitive liberty, then, can be seen as an extension of, or an "update" to, the right to freedom of thought as it has been traditionally understood.[40] On this view, freedom of thought should be understood to include the right to determine one’s own mental state as well as the content of one’s thoughts. However, some have instead argued that cognitive liberty is already an inherent part of the international human rights framework as the principle underlying the rights to freedom of thought, expression, and religion.[41] The freedom to think in whatever manner one chooses is a "necessary precondition to those guaranteed freedoms."[42] Daniel Waterman and Casey William Hardison have argued that cognitive liberty is fundamental to freedom of thought because it encompasses the ability to have certain types of experiences, including the right to experience altered or non-ordinary states of consciousness.[43] It has also been suggested that cognitive liberty can be seen as part of the inherent dignity of human beings recognized by Article 1 of the UDHR.[44]

Most proponents of cognitive liberty agree, however, that cognitive liberty should be expressly recognized as a human right in order to properly protect individual cognitive autonomy.[45][46][47]

Legal recognition

In the United States

Richard Glen Boire of the Center for Cognitive Liberty and Ethics filed an amicus brief with the US Supreme Court in the case of Sell v. United States, in which the Supreme Court examined whether the court had the power to make an order to forcibly administer antipsychotic medication to an individual who had refused such treatment, for the sole purpose of making them competent to stand trial.[48][49]

In the United Kingdom

In the case of R v Hardison, the defendant, charged with eight counts under the Misuse of Drugs Act 1971 (MDA), including the production of DMT and LSD, claimed that cognitive liberty was safeguarded by Article 9 of the European Convention on Human Rights.[50] Hardison argued that "individual sovereignty over one's interior environment constitutes the very core of what it means to be free", and that, as psychotropic drugs are a potent method of altering an individual's mental processes, their prohibition under the MDA was in opposition to Article 9.[51] The court, however, disagreed, calling Hardison's arguments a "portmanteau defense" and relying upon the UN Drug Conventions and the earlier case of R v Taylor to deny Hardison's right to appeal to a superior court.[52] Hardison was convicted and given a 20-year prison sentence, though he was released on 29 May 2013 after nine years in prison.[53]

Criticism

While there has been little publicized criticism of the concept of cognitive liberty itself, drug policy reform and the concept of human enhancement, both closely linked to cognitive liberty, remain highly controversial issues. The risks inherent in removing restrictions on controlled cognitive-enhancing drugs, including the risk of widening the gap between those able to afford such treatments and those unable to do so, have caused many to remain skeptical about the wisdom of recognizing cognitive liberty as a right.[54] Political philosopher and Harvard University professor Michael J. Sandel, when examining the prospect of memory enhancement, wrote that "some who worry about the ethics of cognitive enhancement point to the danger of creating two classes of human beings – those with access to enhancement technologies, and those who must make do with an unaltered memory that fades with age."[55] Cognitive liberty thus faces oblique opposition in these interrelated debates.

Political psychology

From Wikipedia, the free encyclopedia ...