
Monday, August 13, 2018

Philosophy of artificial intelligence

From Wikipedia, the free encyclopedia

The philosophy of artificial intelligence attempts to answer questions such as the following:
  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same way that a human being can? Can it feel how things are?
These three questions reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definitions of "intelligence" and "consciousness" and on exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include:
  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[2]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[3]
  • Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[4]
  • Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[5]
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."[6]

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.[7]
The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:
  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[3]
Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers, or because there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

The first step to answering the question is to clearly define "intelligence".

Intelligence

The "standard interpretation" of the Turing test.[8]

Turing test

Alan Turing[9] reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[2] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks".[10] Turing's test extends this polite convention to machines:
  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".[11]
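As an illustration only, here is a minimal sketch of the chat-room version of the test described above. The judge prompts, the trivial placeholder bot, and all function names are invented for this example; a real test would involve many judges and many rounds.

    import random

    def chatbot_reply(message):
        # Placeholder candidate program; a serious entrant would be far more capable.
        return "That is an interesting question."

    def human_reply(message):
        # Stands in for the human participant typing at a terminal.
        return input("Human> ")

    def run_imitation_game(questions):
        # Randomly assign the human and the program to channels A and B
        # so the judge cannot rely on position.
        respondents = {"A": chatbot_reply, "B": human_reply}
        if random.random() < 0.5:
            respondents = {"A": human_reply, "B": chatbot_reply}
        for q in questions:
            print("Judge asks:", q)
            for channel in ("A", "B"):
                print(channel + ":", respondents[channel](q))
        guess = input("Judge, which channel is the machine (A/B)? ")
        machine = "A" if respondents["A"] is chatbot_reply else "B"
        # The program "passes" this round if the judge guesses wrong.
        return guess.strip().upper() != machine

The program would pass the test only if, over many such rounds, judges did no better than chance.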

Intelligent agent definition

Simple reflex agent

Recent AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[12]
  • If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent.[13]
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for human traits that we may not want to consider intelligent, like the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make the commonsense differentiation between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.[14]
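To make the agent definition concrete, here is a minimal sketch, with all names and the crude room dynamics invented for illustration: a thermostat as a simple reflex agent, whose action depends only on the current percept, scored by a performance measure that penalizes deviation from a setpoint.

    def thermostat_agent(percept, setpoint=20.0):
        # Simple reflex agent: the action depends only on the current percept.
        if percept < setpoint - 1.0:
            return "heat_on"
        elif percept > setpoint + 1.0:
            return "heat_off"
        return "no_op"

    def performance_measure(temperatures, setpoint=20.0):
        # Higher is better: penalize total deviation from the setpoint.
        return -sum(abs(t - setpoint) for t in temperatures)

    # Toy environment loop: the agent acts and the room temperature responds.
    temp, history, heating = 15.0, [], False
    for _ in range(20):
        action = thermostat_agent(temp)
        if action == "heat_on":
            heating = True
        elif action == "heat_off":
            heating = False
        temp += 0.5 if heating else -0.3   # crude room dynamics
        history.append(temp)
    print("score:", performance_measure(history))

On the agent definition, this trivial device already acts to maximize its performance measure, which is exactly the sense in which even a thermostat has a rudimentary intelligence.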

Arguments that a machine can display general intelligence

The brain can be simulated


An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device".[15] This argument, first introduced as early as 1943[16] and vividly described by Hans Moravec in 1988,[17] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.[18] A non-real-time simulation of a thalamocortical model the size of the human brain (10¹¹ neurons) was performed in 2005,[19] and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
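The figures quoted above imply an enormous real-time slowdown; the following back-of-the-envelope calculation, using only the numbers in the text, makes it explicit.

    # 50 days of computation were needed for 1 second of simulated brain dynamics.
    seconds_per_day = 86_400
    wall_clock_seconds = 50 * seconds_per_day      # 4,320,000 s of computing
    simulated_seconds = 1.0
    slowdown = wall_clock_seconds / simulated_seconds
    print(f"slowdown factor: {slowdown:.2e}")      # ~4.3e6 times slower than real time

The 2005 run was therefore roughly four million times slower than real time, which is why estimates like Kurzweil's concern future rather than present computer power.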

Few disagree that a brain simulation is possible in theory, even critics of AI such as Hubert Dreyfus and John Searle.[20] However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[21] Thus, merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:
  • A physical symbol system has the necessary and sufficient means of general intelligent action.[4]
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[22] Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":
  • The mind can be viewed as a device operating on bits of information according to formal rules.[23]
A distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world, such as a symbol for "dog" or "tail", and the more complex "symbols" that are present in a machine like a neural network. Early research into AI, called "good old fashioned artificial intelligence" (GOFAI) by John Haugeland, focused on this kind of high-level symbol.[24]

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.
Gödelian anti-mechanist arguments
In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) cannot prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism.[25] Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument.[26] Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is provably impossible for a Turing machine (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a machine.
However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate.[27][28][29] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."[30]

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent.[31]

Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying".[32] But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider:
  • Lucas can't assert the truth of this statement.[33]
This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.[34]
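Self-reference by itself is not beyond machines. A quine, a program that prints its own source code, is the standard demonstration; the sketch below is a conventional Python quine, included only to illustrate that a formal system can "talk about" itself, the same structural trick (arithmetized, in Gödel's case) that underlies Gödel statements.

    # A quine: running this two-line program prints exactly its own source code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)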
After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing-computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, existing quantum computers are not sufficient, so Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron.[35] However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.[36]
Dreyfus: the primacy of unconscious skills
Hubert Dreyfus argued that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation, and argued that these unconscious skills would never be captured in formal rules.[37]
Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he had classified this as the "argument from the informality of behavior."[38] Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[39]

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning.[40] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[41] Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high-level symbol manipulation or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[42]

Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":
  • A physical symbol system can have a mind and mental states.[5]
Searle distinguished this position from what he called "weak AI":
  • A physical symbol system can act intelligently.[5]
Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.[5]

Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]."[43] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[44]

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence." (See artificial consciousness.)

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind.  Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to describe this essential human property). For others[who?], the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something, or understand something. "It's not hard to give a commonsense definition of consciousness," observes philosopher John Searle.[45] What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem."[46] A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?[47]

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain.[48] The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates "general intelligent action." Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[49]
Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains."[50] He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words "brains cause minds."[51]

Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill.[52] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym".[53] Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.
  • The systems reply and the virtual mind reply:[54] This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor); see the sketch after this list.
  • Speed, power and complexity replies:[55] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply:[56] To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[57]
  • Brain simulator reply:[58] What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply:[59] Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
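The physical/virtual distinction invoked by the systems reply above can be made concrete with a toy interpreter: the Python process below is one "physical" machine, yet while it runs it also implements a second, "virtual" stack machine with its own states and operations. The opcodes are invented for this sketch.

    def run_vm(program):
        # A tiny virtual stack machine hosted by the physical Python process.
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
        return stack

    # One physical process, two describable machines: Python and the VM it hosts.
    run_vm([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])   # prints 5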
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules).[60] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.[61]
This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):
  • Reasoning is nothing but reckoning[6]
In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):
  • Mental states are just implementations of (the right) computer programs[62]
This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).[62]

Other related questions

Alan Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.[63]
Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression."[63] All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people".[64] Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[64] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[65]

However, emotions can also be defined in terms of their subjective quality, of what it feels like to have an emotion. The question of whether the machine actually feels an emotion, or whether it merely acts as if it is feeling an emotion is the philosophical question, "can a machine be conscious?" in another form.[43]

Can a machine be self-aware?

"Self awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, it is obvious that a program can be written that can report on its own internal states, such as a debugger.[63] Though arguably self-awareness often presumes a bit more capability; a machine that can ascribe meaning in some way to not only its own state but in general postulating questions without solid answers: the contextual nature of its existence now; how it compares to past states or plans for the future, the limits and value of its work product, how it perceives its performance to be valued-by or compared to others.

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.[66] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.[67] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.)

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings.[68] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
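Eureqa performs full symbolic regression, searching over the form of the formula itself. A much simpler version of the same idea, fitting the constants of an assumed pendulum law to synthetic data, can be sketched as follows (the data and candidate form are invented for illustration).

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic pendulum data: period T versus length L, with a little noise.
    g = 9.81
    lengths = np.linspace(0.1, 2.0, 25)
    periods = 2 * np.pi * np.sqrt(lengths / g) + np.random.normal(0, 0.01, lengths.size)

    # Candidate law T = a * L**b; a symbolic-regression system would also
    # search over the functional form, not just the constants.
    def law(L, a, b):
        return a * L**b

    (a, b), _ = curve_fit(law, lengths, periods)
    print(f"fitted: T = {a:.3f} * L^{b:.3f}")   # expect a ≈ 2π/√g ≈ 2.01 and b ≈ 0.5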

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.[43]
The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Singularity Institute). (The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind.)

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity."[69] He suggests that it may be somewhat or possibly very dangerous for humans.[70] This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is unlikely, but that there were other potential hazards and pitfalls.[69]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[71] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[72][73]

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[74] They point to programs like the Language Acquisition Device which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[75]

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:
In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

Ethics of technology

From Wikipedia, the free encyclopedia
Ethics in technology is a sub-field of ethics addressing the ethical questions specific to the Technology Age. Some prominent works of philosopher Hans Jonas are devoted to ethics of technology. The subject has also been explored, following the work of Mario Bunge, under the term technoethics.

Fundamental problems

It is often held that technology itself is incapable of possessing moral or ethical qualities, since "technology" is merely tool making. But many now believe that each piece of technology is endowed with, and radiates, ethical commitments all the time, given to it by those who made it and by those who decided how it must be made and used. Whether technology is a lifeless, amoral 'tool' or a solidified embodiment of human values, "ethics of technology" refers to two basic subdivisions:
  • The ethics involved in the development of new technology—whether it is always, never, or contextually right or wrong to invent and implement a technological innovation.
  • The ethical questions that are exacerbated by the ways in which technology extends or curtails the power of individuals—how standard ethical questions are changed by the new powers.
In the former case, ethics of such things as computer security and computer viruses asks whether the very act of innovation is an ethically right or wrong act. Similarly, does a scientist have an ethical obligation to produce or fail to produce a nuclear weapon? What are the ethical questions surrounding the production of technologies that waste or conserve energy and resources? What are the ethical questions surrounding the production of new manufacturing processes that might inhibit employment, or might inflict suffering in the third world?

In the latter case, the ethics of technology quickly break down into the ethics of various human endeavors as they are altered by new technologies. For example, bioethics is now largely consumed with questions that have been exacerbated by the new life-preserving technologies, new cloning technologies, and new technologies for implantation. In law, the right of privacy is being continually attenuated by the emergence of new forms of surveillance and anonymity. The old ethical questions of privacy and free speech are given new shape and urgency in an Internet age. Such tracing devices as RFID, biometric analysis and identification, genetic screening, all take old ethical questions and amplify their significance.

Technoethics

Technoethics (TE) is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on ethical dimensions of technological systems and practices for advancing a technological society.[1]

Technoethics views technology and ethics as socially embedded enterprises and focuses on discovering the ethical use of technology, protecting against the misuse of technology, and devising common principles to guide new advances in technological development and application to benefit society. Typically, scholars in technoethics have a tendency to conceptualize technology and ethics as interconnected and embedded in life and society. Technoethics denotes a broad range of ethical issues revolving around technology – from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life.[1]

Technoethical perspectives are constantly in transition as technology advances in areas unseen by its creators and as users change the intended uses of new technologies. Humans cannot be separated from these technologies because they are an inherent part of consciousness and of meaning in life, and they therefore require an ethical model. The short-term and longer-term ethical considerations for technologies engage not only the creator and producer but also make the user question their beliefs in relation to the technology, and how governments must allow, react to, change, and/or deny technologies.

Definitions

Using theories and methods from multiple domains, technoethics provides insights on ethical aspects of technological systems and practices, examines technology-related social policies and interventions, and provides guidelines for how to ethically use new advancements in technology.[1] Technoethics provides a systems theory and methodology to guide a variety of separate areas of inquiry into human-technological activity and ethics.[1] Moreover, the field unites both technocentric and bio-centric philosophies, providing "conceptual grounding to clarify the role of technology to those affected by it and to help guide ethical problem solving and decision making in areas of activity that rely on technology."[1] As a bio-techno-centric field, technoethics "has a relational orientation to both technology and human activity";[1] it provides "a system of ethical reference that justifies that profound dimension of technology as a central element in the attainment of a 'finalized' perfection of man."[2]
  • Ethics address the issues of what is 'right', what is 'just', and what is 'fair'.[3] Ethics describe moral principles influencing conduct; accordingly, the study of ethics focuses on the actions and values of people in society (what people do and how they believe they should act in the world).[1]
  • Technology is the branch of knowledge that deals with the creation and use of technical means and their interrelation with life, society, and the environment; it may draw upon a variety of fields, including industrial arts, engineering, applied science, and pure science.[4] Technology "is core to human development and a key focus for understanding human life, society and human consciousness."[1]

History of technoethics

Though the ethical consequences of new technologies have existed since Socrates' attack on writing in Plato's dialogue Phaedrus, the formal field of technoethics has existed only for a few decades. The first traces of TE can be seen in Dewey and Peirce's pragmatism. With the advent of the industrial revolution, it was easy to see that technological advances were going to influence human activity, which is why they put emphasis on the responsible use of technology.

The term "technoethics" was coined in 1977 by the philosopher Mario Bunge to describe the responsibilities of technologists and scientists to develop ethics as a branch of technology. Bunge argued that the current state of technological progress was guided by ungrounded practices based on limited empirical evidence and trial-and-error learning. He recognized that "the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term." He recognized a pressing need in society to create a new field called 'Technoethics' to discover rationally grounded rules for guiding science and technological progress.[5]

With the spurt in technological advances came technological inquiry. Societal views of technology were changing; people were becoming more critical of the developments that were occurring, and scholars were emphasizing the need to understand the innovations, take a deeper look, and study them. Associations were uniting scholars from different disciplines to study the various aspects of technology, the main disciplines being philosophy, the social sciences, and science and technology studies (STS). Though many disciplines were already focused on the ethics of their technologies, each was separated from the others, despite the potential for the information to intertwine and reinforce itself. As technologies became increasingly developed in each discipline, their ethical implications paralleled their development and became increasingly complex. Each branch eventually became united under the term technoethics, so that all areas of technology could be studied and researched based on existing, real-world examples and a variety of knowledge, rather than just discipline-specific knowledge.

Ethics theories

Technoethics involves the ethical aspects of technology within a society that is shaped by technology. This brings up a series of social and ethical questions regarding new technological advancements and new boundary-crossing opportunities. Before moving forward and attempting to address any ethical questions and concerns, it is important to review the major ethical theories in order to develop a foundational perspective:
  • Utilitarianism (Bentham) is an ethical theory which attempts to maximize happiness and reduce suffering for the greatest number of people. Utilitarianism focuses on results and consequences rather than rules.
  • Duty Ethics (Kant) notes the obligations that one has to society and follows society's universal rules. It focuses on the rightness of actions instead of the consequences, focusing on what an individual should do.[6]
  • Virtue Ethics is another main perspective in normative ethics. It evaluates ethical behaviour in terms of the role and virtues of an individual's character.
  • Relationship ethics states that care and consideration are both derived from human communication. Therefore, ethical communication is the core substance to maintain healthy relationships.[6]

Historical framing of technology – four main periods

  1. Greek civilization defined technology as techné. Techné is "the set of principles, or rational method, involved in the production of an object or the accomplishment of an end; the knowledge of such principles or method; art."[7] This conceptualization of technology was used during the early Greek and Roman period to denote the mechanical arts, construction, and other efforts to create, in Cicero's words, a "second nature" within the natural world.[6]
  2. The modern conceptualization of technology as invention materialized in the 17th century in Bacon's futuristic vision of a perfect society governed by engineers and scientists in Salomon's House, raising the importance of technology in society.[6]
  3. The German term "Technik" was used in the 19th and early 20th centuries. Technik is the totality of processes, machines, tools and systems employed in the practical arts and engineering. Weber popularized the term when it came to be used in broader fields. Mumford saw technics as underlying civilization, distinguishing three phases: the eotechnic (before 1750), the paleotechnic (1750-1890) and the neotechnic (after 1890). He placed technics at the center of social life, in close connection with social progress and societal change. Mumford says that a machine cannot be divorced from its larger social pattern, for it is the pattern that gives it meaning and purpose.
  4. Rapid advances in technology provoked a negative reaction from scholars who saw technology as a controlling force in society with the potential to destroy how people live (Technological Determinism). Heidegger warned people that technology was dangerous in that it exerted control over people through its mediating effects, thus limiting authenticity of experience in the world that defines life and gives life meaning.[6] It is an intimate part of the human condition, deeply entrenched in all human history, society and mind.[6]

Significant technoethical developments in society

Many advancements within the past decades have added to the field of technoethics. There are multiple concrete examples that have illustrated the need to consider ethical dilemmas in relation to technological innovations. Beginning in the 1940s, influenced by the British eugenics movement, the Nazis conducted "racial hygiene" experiments, causing widespread global anti-eugenic sentiment. In the 1950s the first satellite, Sputnik 1, orbited the earth, the Obninsk Nuclear Power Plant became the first nuclear power plant to open, and American nuclear tests took place. The 1960s brought the first manned moon landing, the creation of ARPANET (which led to the later creation of the Internet), the first heart transplant, and the launch of the Telstar communications satellite. The 1970s, 1980s, 1990s, 2000s and 2010s also brought multiple developments.

Technological consciousness

Technological consciousness is the relationship between humans and technology. Technology is seen as an integral component of human consciousness and development. Technology, consciousness and society are intertwined in a relational process of creation that is key to human evolution. Technology is rooted in the human mind, and is made manifest in the world in the form of new understandings and artifacts. The process of technological consciousness frames the inquiry into ethical responsibility concerning technology by grounding technology in human life.

The structure of technological consciousness is relational but also situational, organizational, aspectual and integrative. Technological consciousness situates new understandings by creating a context of time and space. As well, technological consciousness organizes disjointed sequences of experience under a sense of unity that allows for a continuity of experience. The aspectual component of technological consciousness recognizes that individuals can only be conscious of aspects of an experience, not the whole thing. For this reason, technology manifests itself in processes that can be shared with others. The integrative characteristics of technological consciousness are assimilation, substitution and conversation. Assimilation allows for unfamiliar experiences to be integrated with familiar ones. Substitution is a metaphorical process allowing for complex experiences to be codified and shared with others — for example, language. Conversation is the sense of an observer within an individual's consciousness, providing stability and a standpoint from which to interact with the process.[1]

Misunderstandings of consciousness and technology

The common misunderstandings about consciousness and technology are listed as follows. The first misunderstanding is that consciousness is only in the head. In fact, consciousness is not only in the head, meaning that "[c]onsciousness is responsible for the creation of new conscious relations wherever imagined, be it in the head, on the street or in the past."[1] The second misunderstanding is that technology is not a part of consciousness. In truth, technology is a part of consciousness, as "the conceptualization of technology has gone through drastic changes." The third misunderstanding is that technology controls society and consciousness. In reality, technology does not control society and consciousness: "technology is rooted in consciousness as an integral part of mental life for everyone. This understanding will most likely alter how both patients and psychologists deal with the trials and tribunes of living with technology."[1] The last misunderstanding is that society controls technology and consciousness; it does not: "…(other) accounts fail to acknowledge the complex relational nature of technology as an operation within mind and society. This realization shifts the focus on technology to its origins within the human mind as explained through the theory of technological consciousness."[1]
  • Consciousness (C) is only in the head: in fact, C is responsible for the creation of new conscious relations wherever imagined
  • Technology (T) is not part of C: in fact, humans cannot be separated from technology
  • T controls society and C: in fact, technology cannot control the mind
  • Society controls T and C: in fact, such accounts fail to take into account how society shapes which technologies get developed

Ethical challenges

Ethical challenges arise in many different situations:
  • Human knowledge processes
  • Workplace discrimination
  • Strained work life balance in technologically enhanced work environments
  • Digital divide: inequalities in information access for parts of the population
  • Unequal opportunities for scientific and technological development
  • Norris says access to information and knowledge resources within a knowledge society tends to favour the economically privileged, who have greater access to the technological tools needed to reach information and knowledge resources disseminated online, as well as the privatization of knowledge
  • Inequality in terms of how scientific and technological knowledge is developed around the globe. Developing countries do not have the same opportunities as developed countries to invest in costly large-scale research and expensive research facilities and instrumentation
  • Organizational responsibility and accountability issues
  • Intellectual property issues
  • Information overload: according to information processing theory, working memory has a limited capacity, and too much information can lead to cognitive overload, resulting in loss of information from short-term memory[6]
  • Limits on an organization's ability to innovate and respond to change
  • The knowledge society is intertwined with changing technology, requiring new skills of its workforce. Cutler says that there is a perception that older workers lack experience with new technology and that retraining programs may be less effective and more expensive for older workers. Cascio says that there is a growth of virtual organizations. Saetre & Sornes say that the blurring of traditional time and space boundaries has also led to many cases of blurring between work and personal life[6]
  • The negative impacts that many scientific and technological innovations have on humans and the environment have led to some skepticism about, and resistance to, increasing dependence on technology within the knowledge society. Doucet calls for city empowerment to have the courage and foresight to make decisions that are acceptable to its inhabitants, rather than succumb to global consumer capitalism and the forces of international corporations on national and local governments[6]
  • Scientific and technological innovations that have transformed organizational life within a global economy have also supplanted human autonomy and control in work within a technologically oriented workplace
  • The persuasive potential of technology raises the question of "how sensitive ... designers and programmers [should] be to the ethics of the persuasive technology they design."[8] Technoethics can be used to determine the level of ethical responsibility that should be associated with outcomes of the use of technology, whether intended or unintended
  • Rapidly changing organizational life and the history of unethical business practices have given rise to public debates concerning organizational responsibility and trust. The advent of virtual organizations and telework has compounded ethical problems by providing more opportunities for fraudulent behaviour and the production of misinformation. Concerted efforts are required to uphold ethical values in advancing new knowledge and tools within societal relations which do not exclude people or limit the liberties of some people at the expense of others[6]

Current issues

Copyright

Digital copyright is a heated issue because there are so many sides to the discussion. The ethical considerations surrounding the artist, producer, end user, and country are intertwined, to say nothing of relationships with other countries and the impact on the use (or non-use) of content housed in their countries. In Canada, national laws such as the Copyright Act and the history behind Bill C-32 are just the beginning of the government's attempt to shape the "wild west" of Canadian Internet activities.[9] The ethical considerations behind Internet activities such as peer-to-peer file sharing involve every layer of the discussion – the consumer, artist, producer, music/movie/software industry, national government, and international relations. Overall, technoethics forces a "big picture" approach to all discussions of technology in society. Although time consuming, this "big picture" approach offers some level of reassurance when one considers that any law put in place could drastically alter the way we interact with our technology, and thus the direction of work and innovation in the country.

The use of copyrighted material to create new content is a hotly debated topic. The emergence of the musical "mashup" genre has compounded the issue of creative licensing. A moral conflict is created between those who believe that copyright protects against any unauthorized use of content, and those who maintain that sampling and mash-ups are acceptable musical styles and that, though they use portions of copyrighted material, the end result is a new creative piece which is the property of the creator, and not of the original copyright holder. Whether or not the mashup genre should be allowed to use portions of copyrighted material to create new content is a question currently under debate.[10]

Cybercriminality

For many years, new technologies have taken an important place in social, cultural, political, and economic life. Thanks to the democratization of access to informatics and the globalization of networks, the number of exchanges and transactions is constantly growing.
Many people exploit the facilities and anonymity that modern technologies offer in order to commit criminal activities. Cybercrime is one of the fastest growing areas of crime. The problem is that some laws that profess to protect people from those who would do wrong things via digital means also threaten to take away people's freedom.[11]

Privacy vs. security: Full-body airport scanners

Since the introduction of full-body X-ray scanners to airports in 2007, many concerns over traveler privacy have arisen. Individuals are asked to step inside a rectangular machine that takes an alternate-wavelength image of the person's naked body for the purpose of detecting metal and non-metal objects being carried under the clothes of the traveler. This screening technology comes in two forms, millimeter wave technology (MM-wave technology) or backscatter X-rays (similar to X-rays used by dentists). Full-body scanners were introduced into airports to increase security and improve the quality of screening for objects such as weapons or explosives, following an increase in terrorist attacks involving airplanes in the early 2000s.

Ethical concerns of both travelers and academic groups include fear of humiliation due to the disclosure of anatomic or medical details, exposure to a low level of radiation (in the case of backscatter X-ray technology), violation of modesty and personal privacy, clarity of operating procedures, the use of this technology to discriminate against groups, and potential misuse of this technology for reasons other than detecting concealed objects. People with religious beliefs that require them to remain physically covered (arms, legs, face, etc.) at all times will also be morally opposed to, and unable to comply with, stepping inside this virtually intrusive scanning technology. The Centre for Society, Science and Citizenship has discussed these ethical concerns and suggests recommendations for the use of this technology in its report "Whole Body Imaging at airport checkpoints: the ethical and policy context" (2010).[12]

Privacy and GPS technologies

The discourse around GPS tracking devices and geolocation technologies, and their ethical ramifications for privacy, is growing[citation needed] as the technology becomes more prevalent in society. An editorial in The New York Times's Sunday Review of September 22, 2012, for example, focused on the case of a drug offender who was imprisoned after the GPS technology in his cellphone was used to locate his position. Now that most people carry a cellphone on their person, the authorities have the ability to know the location of a large majority of citizens at virtually all times. The ethical discussion can now be framed from a legal perspective: as the editorial noted, such geolocation devices may starkly infringe citizens' Fourth Amendment protection against unreasonable searches. The reach of this issue is not limited to the United States; it affects other democratic states that uphold similar rights and freedoms against unreasonable searches.[13]
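To make concrete how little machinery such tracking requires, consider the following minimal sketch in Python, using entirely hypothetical location fixes: a handful of logged cellphone coordinates and the standard haversine great-circle formula are enough to reconstruct a person's movement trail.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two lat/lon points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

    # Hypothetical log of (time, latitude, longitude) fixes from one phone.
    fixes = [
        ("09:00", 45.4215, -75.6972),
        ("09:30", 45.4286, -75.6892),
        ("10:15", 45.3876, -75.6960),
    ]

    # A few lines suffice to turn raw fixes into a movement profile.
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        print(f"{t1} -> {t2}: travelled {haversine_km(la1, lo1, la2, lo2):.2f} km")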

These geolocation technologies affect not only how citizens interact with their state but also how employees interact with their workplaces. As discussed in the Canadian Broadcasting Corporation article "GPS and privacy", a growing number of employers are installing geolocation technologies in "company vehicles, equipment and cellphones" (Hein, 2007). Both academics and unions find these new employer powers to be in direct contradiction with civil liberties. This changing relationship between employee and employer, driven by the integration of GPS technology into everyday life, points to a larger ethical discussion about appropriate levels of privacy. This discussion will only become more prevalent as the technology becomes more popular.[14]

Genetically modified organisms

Genetically modified foods have become quite common in developed countries around the world, boasting greater yields, higher nutritional value, and greater resistance to pests, but many ethical concerns regarding their use remain. Even commonplace genetically modified crops like corn raise questions about the ecological consequences of unintended cross-pollination, potential horizontal gene transfer, and other unforeseen health concerns for humans and animals.[15]

Trademarked organisms like the "Glofish" are a relatively new occurrence. These zebrafish, genetically modified to appear in several fluorescent colours and sold as pets in the United States, could have unforeseen effects on freshwater environments were they ever to breed in the wild.[citation needed]

Provided it receives approval from the U.S. Food and Drug Administration (FDA), another new type of fish may arrive soon. The "AquAdvantage salmon", engineered to reach maturity within roughly 18 months (as opposed to three years in the wild), could help meet growing global demand. There are health and environmental concerns associated with the introduction of any new GMO, but more importantly, this scenario highlights the potential economic impact of a new product. The FDA performs an economic impact analysis to weigh, for example, the consequences these genetically modified fish may have on the traditional salmon fishing industry against the long-term gain of a cheaper, more plentiful source of salmon. Such technoethical assessments, which regulatory organizations like the FDA increasingly face worldwide, are vitally important in determining how GMOs, with all of their potential beneficial and harmful effects, will be handled moving forward.

Pregnancy screening technology

For over 40 years, newborn screening has been a triumph of the 20th-century public health system.[citation needed] Through this technology, millions of parents are given the opportunity to screen and test for a number of disorders, sparing their children death or complications such as intellectual disability. However, the technology is growing at a pace that outstrips researchers' and practitioners' ability to fully understand how to treat the detected diseases and to provide families in need with the resources to cope.

One newborn-screening technique, tandem mass spectrometry, is a procedure that "measures levels and patterns of numerous metabolites in a single drop of blood, which are then used to identify potential diseases. Using this same drop of blood, tandem mass spectrometry enables the detection of at least four times the number of disorders than was possible with previous technologies." This allows for a cost-effective and fast method of screening.[16]
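As a rough illustration of the flagging logic that follows the measurement step, the sketch below compares analyte levels from a single hypothetical sample against reference ranges. The metabolite names are real, but the cutoff values are invented for the example; actual screening panels use clinically validated ranges across many more analytes.

    # Illustrative reference ranges (µmol/L); the cutoffs here are invented.
    REFERENCE_RANGES = {
        "phenylalanine": (35.0, 120.0),   # sustained elevation can indicate PKU
        "leucine": (50.0, 160.0),
        "octanoylcarnitine": (0.0, 0.3),
    }

    def flag_out_of_range(sample):
        """Return analytes whose measured level falls outside its reference range."""
        flags = []
        for analyte, level in sample.items():
            low, high = REFERENCE_RANGES[analyte]
            if not low <= level <= high:
                flags.append((analyte, level))
        return flags

    # One dried-blood-spot measurement yields many analyte levels at once.
    sample = {"phenylalanine": 240.0, "leucine": 90.0, "octanoylcarnitine": 0.1}
    print(flag_out_of_range(sample))  # -> [('phenylalanine', 240.0)]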

However, critics of tandem mass spectrometry and technologies like it are concerned about the adverse consequences of expanding newborn screening technology and about the lack of the research and infrastructure needed to provide optimal medical services to patients. Further concerns include "diagnostic odysseys", situations in which a patient aimlessly continues to search for a diagnosis where none exists.

Among other consequences, this technology raises the issue of whether individuals other than newborns will benefit from newborn screening practices. A reconceptualization of the purpose of this screening will have far-reaching economic, health, and legal impacts. This discussion is only just beginning and requires an informed citizenry to reach legal, if not moral, consensus on how far we as a society are comfortable taking this technology.

Citizen journalism

Citizen journalism is a concept describing citizens who wish to act as professional journalists or media persons by "collecting, reporting, analyzing, and disseminating news and information".[17] According to Jay Rosen, citizen journalists are "the people formerly known as the audience," who "were on the receiving end of a media system that ran one way, in a broadcasting pattern, with high entry fees and a few firms competing to speak very loudly while the rest of the population listened in isolation from one another— and who today are not in a situation like that at all. ... The people formerly known as the audience are simply the public made realer, less fictional, more able, less predictable".[18]
The internet has provided society with a modern and accessible public space. Due to the openness of the internet, there are discernible effects on the traditional profession of journalism. Although the concept of citizen journalism is an established one, "the presence of online citizen journalism content in the marketplace may add to the diversity of information that citizens have access to when making decisions related to the betterment of their community or their life".[19] The emergence of online citizen journalism is fueled by the growing use of social media websites to share information about current events and issues locally, nationally, and internationally.

The open and instantaneous nature of the internet affects the criteria of information quality on the web. Practitioners of citizen journalism are not bound by a journalistic code of ethics. Journalists, whether professional or citizen, have needed to adapt to the new priorities of current audiences: accessibility, quantity of information, quick delivery, and aesthetic appeal.[20] Thus, the free and instant sharing qualities of the internet have affected the ethical code of the profession of journalism. Professional journalists have had to adapt to these new practices to ensure that truthful and quality reporting is being distributed. The concept can be seen as a great advancement in how society communicates freely and openly, or as contributing to the decay of traditional journalistic practices and codes of ethics.

Other issues to consider:
  • Privacy concerns: location services on cellular devices that reveal a person's whereabouts when the feature is enabled, social media, online banking, new capabilities of cellular devices, Wi-Fi, etc.
  • New music technology: electronic music is increasingly common as the technology to create it, along with more advanced recording technology, becomes widely available[21]

Recent developments

Despite the growing body of scholarly work related to technoethics beginning in the 1970s, only recently has it become institutionalized and recognized as an important interdisciplinary research area and field of study. In 1998, the Epson Foundation founded the Instituto de Tecnoética in Spain under the direction of Josep Esquirol. This institute has actively promoted technoethical scholarship through awards, conferences, and publications,[22][23] encouraging scholarly work for a largely European audience. A major driver of the emergence of technoethics was the publication of major reference works available in English and circulated globally. The "Encyclopedia of Science, Technology, and Ethics" included a section on technoethics, which helped bring the field into mainstream philosophy.[24] This raised further interest, leading to the publication of the first reference volume in the English language dedicated to the emerging field of technoethics: the two-volume Handbook of Research on Technoethics, which explores the complex connections between ethics and the rise of new technologies (e.g., life-preserving technologies, stem cell research, cloning technologies, new forms of surveillance and anonymity, computer networks, Internet advancement, etc.). This major collection provides the first comprehensive examination of technoethics and its various branches from over 50 scholars around the globe. The emergence of technoethics can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship that have surfaced in recent years, such as technoscience and technocriticism.[25]

Developments in technology have brought the music industry a great deal of change, both positive and negative. A main concern is piracy and illegal downloading: through the internet, a great deal of music (as well as TV shows and movies) has become easily accessible to download and upload for free. This creates new challenges for artists, producers, and copyright law. On the positive side, the advances have enabled a whole new genre of music: computers and synthesizers (computerized/electronic pianos) are being used to create electronic music.[21] This type of music is rapidly becoming more common and more widely listened to. These advances have allowed the industry to try new things and make new explorations.

Future developments

The future of technoethics is promising yet evolving. The study of e-technology in workplace environments is an evolving trend in technoethics. With the constant evolution of technology, and innovations emerging daily, technoethics promises to be a useful guiding framework for the ethical assessment of new technologies. Many questions regarding technoethics and the workplace environment have yet to be examined and treated.

UNESCO

UNESCO is a specialized intergovernmental agency of the United Nations, focused on promoting education, culture, the social and natural sciences, and communication and information. In the future, the principles expressed in the UNESCO Universal Declaration on Bioethics and Human Rights (2005) will also be analyzed to broaden the description of bioethical reasoning (Adell & Luppicini, 2009).

Areas of technoethical inquiry

Biotech ethics

Biotech ethics is concerned with ethical dilemmas surrounding the use of biotechnologies in fields including medical research, health care, and industrial applications. Topics such as cloning ethics, e-health ethics, telemedicine ethics, genetics ethics, neuroethics, and sport and nutrition ethics fall into this category; examples of specific issues include the debates surrounding euthanasia and reproductive rights.[1]

Technoethics and cognition

This area of technoethical inquiry is concerned with technology's relation to the human mind, artificial agents, and society. Topics of study that fit into this category include artificial morality and moral agents, technoethical systems, and techno-addiction.[1]
  • An artificial agent describes any type of technology that is created to act as an agent, either under its own power or on behalf of another agent. An artificial agent may try to advance its own goals or those of another agent.[26]

Technoethics and society

This field is concerned with the uses of technology to ethically regulate aspects of a society. For example: digital property ethics, social theory, law, science, organizational ethics and global ethics.[1]

Technofeminism

Technoethics has traditionally concerned itself with society as a general group, making no distinction between the genders; technofeminism, by contrast, considers technological effects and influences on each gender individually. This is an important consideration, as some technologies are created for use by a specific gender, including birth control, abortion, fertility treatments, and Viagra. Feminists have had a significant influence on the prominence and development of reproductive technologies.[27] Technoethical inquiry must examine these technologies' effects on the intended gender while also considering their influence on the other gender. Another dimension of technofeminism concerns female involvement in technological development: women's participation in the field of technology has broadened society's understanding of how technology affects the female experience in society.

Information and communication technoethics

Information and communication technoethics is "concerned with ethical issues and responsibilities arising when dealing with information and communication technology in the realm of communication."[1] This field is related to internet ethics, rational and ethical decision-making models, and information ethics. A major area of interest is the convergence of technologies: as technologies become more interdependent and provide people with multiple ways of accessing the same information, they transform society and create new ethical dilemmas. This is particularly evident in the realm of the internet. In recent years, users have held an unprecedented position of power in creating and disseminating news and other information globally via social networking; the concept of "citizen journalism" primarily relates to this. Developments in the media have, as Ward writes, led to open media ethics and, in turn, to citizen journalism.[28]

In cases such as the 2004 Indian Ocean tsunami or the 2011 Arab Spring movements, citizen journalists were significant sources of facts and information about the events. Their reports were re-broadcast by news outlets and, more importantly, re-circulated by and to other internet users. As Jay David Bolter and Richard Grusin state in their book Remediation: Understanding New Media (1999): "The liveness of the Web is a refashioned version of the liveness of broadcast television".[29] However, it is commonly political events (such as the 'Occupy' movements or the Iranian elections of 2009) that raise ethical questions and concerns. In the latter example, the Iranian government made efforts to censor and prohibit its citizen journalists from spreading news of internal events to the outside world. This raised questions about the importance of spreading crucial information and about the sources from which it comes (citizen journalists, government authorities, etc.). It demonstrates how the internet "enables new forms of human action and expression [but] at the same time it disables [it]".[30] Information and communication technoethics also identifies ways to develop ethical frameworks of research structures in order to capture the essence of new technologies.

Educational and professional technoethics

Technoethical inquiry in the field of education examines how technology impacts the roles and values of education in society. This field considers changes in student values and behavior related to technology, including access to inappropriate material in schools, online plagiarism using material copied directly from the internet, or purchasing papers from online resources and passing them off as the student's own work.[1][31] Educational technoethics also examines the digital divide that exists between educational institutions in developed and developing countries or between unequally-funded institutions within the same country: for instance, some schools offer students access to online material, while others do not. Professional technoethics focuses on the issue of ethical responsibility for those who work with technology within a professional setting, including engineers, medical professionals, and so on.[6] Efforts have been made to delineate ethical principles in professions such as computer programming (see programming ethics).

Environmental and engineering technoethics

Environmental technoethics originates from the 1960s and 1970s' interest in environment and nature. The field focuses on the human use of technologies that may impact the environment;[32] areas of concern include transport, mining, and sanitation. Engineering technoethics emerged in the late 19th century: as the Industrial Revolution triggered a demand for engineering expertise and a need to improve engineering standards, societies began to develop codes of professional ethics and associations to enforce them.[1] Ethical inquiry into engineering examines the "responsibilities of engineers combining insights from both philosophy and the social sciences."[33]

Technoethical assessment and design

A technoethical assessment (TEA) is an interdisciplinary, systems-based approach to assessing ethical dilemmas related to technology. TEAs aim to guide actions related to technology in an ethical direction by advancing knowledge of technologies and their effects; successful TEAs thus produce a shared understanding of knowledge, values, priorities, and other ethical aspects associated with technology.[1] TEAs involve five key steps (a schematic sketch follows the list):
  1. Evaluate the intended ends and possible side effects of the technology in order to discern its overall value (interest).
  2. Compare the means and intended ends in terms of technical and non-technical (moral and social) aspects.
  3. Reject those actions where the output (overall value) does not balance the input in terms of efficiency and fairness.
  4. Consider perspectives from all stakeholder groups.
  5. Examine technological relations at a variety of levels (e.g. biological, physical, psychological, social, and environmental).[1]
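As a purely schematic sketch, the five steps can be read as fields and checks on an assessment record. The Python below is illustrative only: the class, the field names, and the simple balancing rule are invented for this example and are not part of the published TEA methodology.

    from dataclasses import dataclass, field

    @dataclass
    class TechnoethicalAssessment:
        technology: str
        intended_ends: list                    # step 1: intended ends...
        side_effects: list                     # ...and possible side effects
        means_ends_notes: str                  # step 2: technical vs. moral/social comparison
        stakeholder_views: dict = field(default_factory=dict)  # step 4: all stakeholder groups
        levels_examined: list = field(default_factory=list)    # step 5: biological ... environmental

        def balances(self, overall_value, input_cost):
            # Step 3: reject actions where the overall value does not
            # balance the input in terms of efficiency and fairness.
            return overall_value >= input_cost

    tea = TechnoethicalAssessment(
        technology="full-body airport scanner",
        intended_ends=["detect concealed weapons and explosives"],
        side_effects=["privacy intrusion", "low-level radiation exposure"],
        means_ends_notes="security gain weighed against bodily privacy",
        stakeholder_views={"travellers": "modesty and privacy", "regulators": "screening quality"},
        levels_examined=["physical", "psychological", "social"],
    )
    print(tea.balances(overall_value=0.7, input_cost=0.4))  # True under these toy numbers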
Technoethical design (TED) refers to the process of designing technologies in an ethical manner, involving stakeholders in participatory design efforts, revealing hidden or tacit technological relations, and investigating what technologies make possible and how people will use them.[1] TED involves the following four steps (a schematic checklist follows the list):
  1. Ensure that the components and relations within the technological system are explicitly understood by those in the design context.
  2. Perform a TEA to identify relevant technical knowledge.
  3. Optimize the technological system in order to meet stakeholders' and affected individuals' needs and interests.
  4. Consult with representatives of stakeholder and affected groups in order to establish consensus on key design issues.[1]
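Read the same way, the four TED steps form a design-review checklist. The record below is again hypothetical; it simply tracks which steps a design team has completed.

    # Hypothetical record of TED progress for one design project.
    ted_steps = {
        "components and relations explicitly understood": True,   # step 1
        "TEA performed to identify technical knowledge": True,    # step 2
        "system optimized for stakeholder needs": False,          # step 3
        "consensus established with stakeholder groups": False,   # step 4
    }

    def outstanding(steps):
        """Return the TED steps not yet completed."""
        return [name for name, done in steps.items() if not done]

    print(outstanding(ted_steps))  # -> the two unfinished steps (3 and 4)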
Both TEA and TED rely on systems theory, a perspective that conceptualizes society in terms of events and occurrences resulting from investigating system operations. Systems theory assumes that complex ideas can be studied as systems with common designs and properties which can be further explained using systems methodology. The field of technoethics regards technologies as self-producing systems that draw upon external resources and maintain themselves through knowledge creation; these systems, of which humans are a part, are constantly in flux as relations between technology, nature, and society change. TEA attempts to elicit the knowledge, goals, inputs, and outputs that comprise technological systems. Similarly, TED enables designers to recognize technology's complexity and power, to include facts and values in their designs, and to contextualize technology in terms of what it makes possible and what makes it possible.[1]

Organizational technoethics

Recent advances in technology, with their ability to transmit vast amounts of information in a short amount of time, have changed the way information is shared amongst co-workers and managers throughout organizations across the globe. Starting in the 1980s with information and communications technologies (ICTs), organizations have seen an increase in the amount of technology they rely on to communicate within and outside of the workplace. However, these implementations of technology in the workplace create various ethical concerns and, in turn, a need for further analysis of technology in organizations. As a result of this growing trend, a subsection of technoethics known as organizational technoethics has emerged to address these issues.

Key scholarly contributions

Key scholarly contributions linking ethics, technology, and society can be found in a number of seminal works. The resulting scholarly attention to ethical issues arising from technological transformations of work and life has helped give rise to a number of key areas (or branches) of technoethical inquiry under various research programs (e.g., computer ethics, engineering ethics, environmental technoethics, biotech ethics, nanoethics, educational technoethics, information and communication ethics, media ethics, and internet ethics).
