Saturday, March 21, 2015

Cyborg


From Wikipedia, the free encyclopedia

A cyborg (short for "cybernetic organism") is a theoretical or fictional being with both organic and biomechatronic parts. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.[1] D. S. Halacy's Cyborg: Evolution of the Superman in 1965 featured an introduction which spoke of a "new frontier" that was "not merely space, but more profoundly the relationship between 'inner space' to 'outer space' – a bridge...between mind and matter."[2]
The term cyborg is not the same as bionic; it is commonly applied to an organism that has restored function or enhanced abilities due to the integration of some artificial component or technology that relies on some sort of feedback.[3][4] While cyborgs are commonly thought of as mammals, they might conceivably be any kind of organism, and the term "cybernetic organism" has also been applied to networks such as road systems, corporations and governments. The term can also apply to micro-organisms which are modified to perform at higher levels than their unmodified counterparts. It is hypothesized that cyborg technology will form a part of future human evolution.

In popular culture, some cyborgs may be represented as visibly mechanical (e.g. the Cybermen in the Doctor Who franchise, the Borg from Star Trek, or Darth Vader from Star Wars) or as almost indistinguishable from humans (e.g. the Terminators from the Terminator films, or the "Human" Cylons from the re-imagining of Battlestar Galactica). The 1970s television series The Six Million Dollar Man featured one of the most famous fictional cyborgs, referred to as a bionic man; the series was based upon a novel by Martin Caidin titled Cyborg. Cyborgs in fiction often play up a human contempt for over-dependence on technology, particularly when used for war and when used in ways that seem to threaten free will. Cyborgs are also often portrayed with physical or mental abilities far exceeding a human counterpart (military forms may have inbuilt weapons, among other things).

Overview

According to some definitions of the term, the physical attachments humanity has with even the most basic technologies have already made them cyborgs.[5] In a typical example, a human with an artificial cardiac pacemaker or implantable cardioverter-defibrillator would be considered a cyborg, since these devices measure voltage potentials in the body, perform signal processing, and can deliver electrical stimuli, using this synthetic feedback mechanism to keep that person alive. Implants, especially cochlear implants, that combine mechanical modification with any kind of feedback response are also cyborg enhancements. Some theorists cite such modifications as contact lenses, hearing aids, or intraocular lenses as examples of fitting humans with technology to enhance their biological capabilities; however, these modifications are as cybernetic as a pen or a wooden leg. As cyborgs are currently on the rise, some theorists argue that there is a need to develop new definitions of aging; for instance, a bio-techno-social definition of aging has been suggested.[6]

The term is also used to address human-technology mixtures in the abstract. This includes not only commonly used pieces of technology such as phones, computers, the Internet, etc. but also artifacts that may not popularly be considered technology; for example, pen and paper, and speech and language. When augmented with these technologies and connected in communication with people in other times and places, a person becomes capable of much more than they were before. This is like a computer, which gains power by using Internet protocols to connect with other computers. Cybernetic technologies include highways, pipes, electrical wiring, buildings, electrical plants, libraries, and other infrastructure that we hardly notice, but which are critical parts of the cybernetics that we work within.

In his Shaper/Mechanist universe, Bruce Sterling suggested an alternative kind of cyborg called the Lobster, which is made not by using internal implants but by using an external shell (e.g. a powered exoskeleton).[7] Unlike human cyborgs that appear human externally while being synthetic internally, a Lobster looks inhuman externally but contains a human internally. The computer game Deus Ex: Invisible War prominently featured cyborgs called Omar; "Omar" is Russian for "lobster" (the Omar are of Russian origin in the game).

Origins

The concept of a man-machine mixture was widespread in science fiction before World War II. As early as 1843, Edgar Allan Poe described a man with extensive prostheses in the short story "The Man That Was Used Up". In 1908, Jean de la Hire introduced the Nyctalope (perhaps the first literary cyborg, and arguably the first true superhero) in the novel L'Homme Qui Peut Vivre Dans L'eau (The Man Who Can Live in the Water). Edmond Hamilton presented space explorers with a mixture of organic and machine parts in his novel The Comet Doom in 1928. He later featured the talking, living brain of an old scientist, Simon Wright, floating in a transparent case, in all the adventures of his famous hero, Captain Future. Hamilton used the term explicitly in the 1962 short story "After a Judgment Day" to describe the "mechanical analogs" called "Charlies," explaining that "[c]yborgs, they had been called from the first one in the 1960s...cybernetic organisms." In the 1944 short story "No Woman Born," C. L. Moore wrote of Deirdre, a dancer whose body was burned completely and whose brain was placed in a faceless but beautiful and supple mechanical body.

The term was coined by Manfred E. Clynes and Nathan S. Kline in 1960 to refer to their conception of an enhanced human being who could survive in extraterrestrial environments:

"For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term 'Cyborg'."

Their concept was the outcome of thinking about the need for an intimate relationship between human and machine as the new frontier of space exploration was beginning to open up. A designer of physiological instrumentation and electronic data-processing systems, Clynes was the chief research scientist in the Dynamic Simulation Laboratory at Rockland State Hospital in New York.

The term had first appeared in print five months earlier, when The New York Times reported on the Psychophysiological Aspects of Space Flight Symposium where Clynes and Kline first presented their paper.


A book titled Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer was published by Doubleday in 2001.[10] Some of the ideas in the book were incorporated into the 35mm motion picture film Cyberman.

Cyborg tissues in engineering

Cyborg tissues structured with carbon nanotubes and plant or fungal cells have been used in artificial tissue engineering to produce new materials for mechanical and electrical uses. The work was presented by Di Giacomo and Maresca at the MRS 2013 Spring Meeting on April 3 (talk SS4.04).[11] The cyborg tissue obtained is inexpensive, light, and has unique mechanical properties; it can also be shaped into desired forms. Cells combined with MWCNTs co-precipitated as a specific aggregate of cells and nanotubes that formed a viscous material.
Likewise, dried cells still acted as a stable matrix for the MWCNT network. When observed by optical microscopy the material resembled an artificial "tissue" composed of highly packed cells. The effect of cell drying is manifested by their "ghost cell" appearance. A rather specific physical interaction between MWCNTs and cells was observed by electron microscopy, suggesting that the cell wall (the outermost part of fungal and plant cells) may play a major active role in establishing the CNT network and stabilizing it. This novel material can be used in a wide range of electronic applications from heating to sensing, and has the potential to open important new avenues in electromagnetic shielding for radio-frequency electronics and aerospace technology. In particular, cyborg tissue materials with temperature-sensing properties have been reported using Candida albicans cells.[12]

Individual cyborgs


Neil Harbisson, cyborg activist and president of the Cyborg Foundation.[13]

Jens Naumann, a man with acquired blindness, being interviewed about his vision BCI on CBS's The Early Show

Generally, the term "cyborg" is used to refer to a human with bionic, or robotic, implants.

In current prosthetic applications, the C-Leg system developed by Otto Bock HealthCare is used to replace a human leg that has been amputated because of injury or illness. The use of sensors in the artificial C-Leg aids walking significantly by attempting to replicate the user's natural gait as it would have been prior to amputation.[14] Prostheses like the C-Leg and the more advanced iLimb are considered by some to be the first real steps towards the next generation of real-world cyborg applications. Additionally, cochlear implants and magnetic implants, which provide people with a sense they would not otherwise have had, can be thought of as creating cyborgs.

In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to come up with a working brain interface to restore sight was private researcher William Dobelle. Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame-rate. It also required him to be hooked up to a two-ton mainframe, but shrinking electronics and faster computers later made his artificial eye more portable and enabled him to perform simple tasks unassisted.[15]

In 1997, Philip Kennedy, a scientist and physician, created the world's first human cyborg, Johnny Ray. Ray was a Vietnam veteran in Georgia who had suffered a stroke that left him, as doctors called it, "locked in". Ray wanted his old life back, so he agreed to Kennedy's experiment. Kennedy implanted a neurotrophic electrode near the part of Ray's brain that controls movement, so that Ray would be able to regain some movement in his body. The surgery was successful, but in 2002 Johnny Ray died.[16]

In 2002, Canadian Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second generation implant, marking one of the earliest commercial uses of BCIs. The second generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call the starry-night effect. Immediately after his implant, Jens was able to use his imperfectly restored vision to drive slowly around the parking area of the research institute.[17]

In 2002, under the heading Project Cyborg, British scientist Kevin Warwick had an array of 100 electrodes fired into his nervous system in order to link his nervous system to the Internet. With this in place he successfully carried out a series of experiments, including extending his nervous system over the Internet to control a robotic hand, a loudspeaker and an amplifier. This was a form of extended sensory input, and the project also produced the first direct electronic communication between the nervous systems of two humans.[18][19]

In 2004, under the heading Bridging the Island of the Colourblind Project, Neil Harbisson, a British artist with complete color blindness, started wearing an eyeborg on his head in order to perceive colors through hearing.[20] His prosthetic device was included in his 2004 passport photograph, which has been claimed to confirm his cyborg status.[21] In 2012 at TEDGlobal,[22] Harbisson explained that he did not feel like a cyborg when he started to use the eyeborg; he started to feel like a cyborg when he noticed that the software and his brain had united and given him an extra sense.[22]

Animal cyborgs

The US-based company Backyard Brains released what they refer to as "the world's first commercially available cyborg", called the RoboRoach. The project started as a University of Michigan biomedical engineering senior design project in 2010[23] and was launched as an available beta product on 25 February 2011.[24] The RoboRoach was officially released into production via a TED talk at the TED Global conference[25] and via the crowdfunding website Kickstarter in 2013.[26] The kit allows students to use microstimulation to momentarily control the movements of a walking cockroach (left and right) using a Bluetooth-enabled smartphone as the controller. Other groups have developed cyborg insects, including researchers at North Carolina State University[27] and UC Berkeley,[28] but the RoboRoach was the first kit available to the general public, and it was funded by the National Institute of Mental Health as a teaching aid to promote interest in neuroscience.[25] Several animal welfare organizations, including the RSPCA[29] and PETA,[30] have expressed concerns about the ethics and welfare of animals in this project.

Social cyborgs

More broadly, the full term "cybernetic organism" is used to describe larger networks of communication and control: for example, cities, networks of roads, networks of software, corporations, markets, governments, and the collection of these things together. A corporation can be considered an artificial intelligence that makes use of replaceable human components to function. People at all ranks can be considered replaceable agents of their functionally intelligent government institutions, whether such a view is desirable or not. This view is reminiscent of the "organic paradigm" popular in the late 19th century due to that era's breakthroughs in the understanding of cellular biology.

Jaap van Till tries to quantify this effect with his Synthecracy Network Law: V ~ N!, where V is value and N is the number of connected people. This factorial growth, he claims, is what leads to herd-like or hive-like thinking among large, electronically connected groups.
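
For a sense of how fast factorial network value grows compared with more familiar network laws, here is a minimal Python sketch; treating V as a dimensionless quantity and comparing van Till's N! against Metcalfe's N² and Reed's 2^N is an illustrative assumption, not part of van Till's formulation.

```python
import math

# Toy comparison of network-value laws for N connected people:
# Metcalfe's law (V ~ N^2), Reed's law (V ~ 2^N), and
# van Till's Synthecracy Network Law (V ~ N!).
def metcalfe(n: int) -> int:
    return n * n

def reed(n: int) -> int:
    return 2 ** n

def van_till(n: int) -> int:
    return math.factorial(n)

for n in (2, 5, 10, 20):
    print(f"N={n:>2}  N^2={metcalfe(n):<6}  2^N={reed(n):<10}  N!={van_till(n)}")
```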

Cyborg proliferation in society

In finance

Due to advances in computer technology, investors are able to employ supercomputers to engage in financial activities such as trading, banking, brokering, and money management. Because of the increased reliance on artificial intelligence and advanced computerization, modern finance is becoming "cyborg finance": the key players are part human and part machine.[31] One key characteristic of cyborg finance is the use of extremely powerful and fast computers to analyze and execute trading opportunities based on complex mathematical models.
The software employing these algorithms is often proprietary and non-transparent, and is therefore sometimes referred to as "black-box trading."

In medicine

In medicine, there are two important and different types of cyborgs: the restorative and the enhanced. Restorative technologies "restore lost function, organs, and limbs".[32] The key aspect of restorative cyborgization is the repair of broken or missing processes to revert to a healthy or average level of function. There is no enhancement to the original faculties and processes that were lost.

By contrast, the enhanced cyborg "follows a principle, and it is the principle of optimal performance: maximising output (the information or modifications obtained) and minimising input (the energy expended in the process)".[33] Thus, the enhanced cyborg intends to exceed normal processes or even gain new functions that were not originally present.

Although prostheses in general supplement lost or damaged body parts with the integration of a mechanical artifice, bionic implants in medicine allow model organs or body parts to mimic the original function more closely. Michael Chorost wrote a memoir of his experience with cochlear implants, or bionic ears, titled Rebuilt: How Becoming Part Computer Made Me More Human.[34] Jesse Sullivan became one of the first people to operate a fully robotic limb through a nerve-muscle graft, giving him a complex range of motion beyond that of previous prosthetics.[35] By 2004, a fully functioning artificial heart had been developed.[36] The continued development of bionic and nanotechnologies raises the question of enhancement, and of future possibilities for cyborgs which surpass the original functionality of the biological model. The ethics and desirability of "enhancement prosthetics" have been debated. Their proponents include the transhumanist movement, with its belief that new technologies can assist the human race in developing beyond its present, normative limitations such as aging and disease, as well as other, more general incapacities, such as limitations on speed, strength, endurance, and intelligence. Opponents describe what they believe to be biases which propel the development and acceptance of such technologies: namely, a bias towards functionality and efficiency that may compel assent to a view of human beings which de-emphasizes actual manifestations of humanity and personhood as defining characteristics, in favor of definitions in terms of upgrades, versions, and utility.[37]

A brain-computer interface, or BCI, provides a direct path of communication from the brain to an external device, effectively creating a cyborg. Research on invasive BCIs, which utilize electrodes implanted directly into the grey matter of the brain, has focused on restoring damaged eyesight in the blind and providing functionality to paralyzed people, most notably those with severe cases such as locked-in syndrome. This technology could give people who are missing a limb or are in a wheelchair the power to control the devices that aid them through neural signals sent from the brain implants directly to computers or to the devices themselves. It is possible that this technology will also eventually be used with healthy people.[38]

Deep brain stimulation is a neurological surgical procedure used for therapeutic purposes. It has aided in treating patients diagnosed with Parkinson's disease, Alzheimer's disease, Tourette syndrome, epilepsy, chronic headaches, and mental disorders. After the patient is rendered unconscious through anesthesia, brain pacemakers, or electrodes, are implanted into the region of the brain where the cause of the disease is present. The region is then stimulated by bursts of electrical current to disrupt oncoming seizures. Like all invasive procedures, deep brain stimulation may put the patient at higher risk; however, there have been more improvements in recent years with deep brain stimulation than with any available drug treatment.[39]

Retinal implants are another form of cyborgization in medicine. The theory behind retinal stimulation to restore vision to people suffering from retinitis pigmentosa and vision loss due to aging (conditions in which people have an abnormally low number of ganglion cells) is that the retinal implant and electrical stimulation would act as a substitute for the missing ganglion cells (the cells which connect the eye to the brain).

While work to perfect this technology is still being done, there have already been major advances in the use of electronic stimulation of the retina to allow the eye to sense patterns of light. A specialized camera worn by the subject, for example on the frames of their glasses, converts the image into a pattern of electrical stimulation. A chip located in the user's eye then electrically stimulates the retina with this pattern, exciting certain nerve endings which transmit the image to the optic centers of the brain, where the image appears to the user. If technological advances proceed as planned, this technology may be used by thousands of blind people and restore vision to most of them.
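
As a rough illustration of the camera-to-electrode pipeline described above, the following minimal Python sketch downsamples a grayscale image to a coarse grid of on/off electrode stimuli; the grid size, threshold, and encoding are hypothetical, not taken from any real retinal implant.

```python
# Hypothetical sketch: reduce a camera image to a coarse grid of
# electrode activations, as in the pipeline described above.

def image_to_stimulation(image, grid=4, threshold=128):
    """Downsample a grayscale image (rows of 0-255 ints) to a
    grid x grid pattern of on/off electrode stimuli."""
    rows, cols = len(image), len(image[0])
    rh, cw = rows // grid, cols // grid
    pattern = []
    for gr in range(grid):
        row = []
        for gc in range(grid):
            block = [image[r][c]
                     for r in range(gr * rh, (gr + 1) * rh)
                     for c in range(gc * cw, (gc + 1) * cw)]
            mean = sum(block) / len(block)
            row.append(1 if mean >= threshold else 0)  # 1 = stimulate electrode
        pattern.append(row)
    return pattern

# Example: a bright vertical bar on a dark background.
img = [[255 if 2 <= c < 6 else 0 for c in range(8)] for r in range(8)]
for row in image_to_stimulation(img):
    print(row)   # each row prints [0, 1, 1, 0]
```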

A similar process has been created to aid people who have lost their vocal cords. This experimental device would do away with the robotic-sounding voice simulators previously used. The transmission of sound would start with a surgery to redirect the nerve that controls voice and sound production to a muscle in the neck, where a nearby sensor would pick up its electrical signals. The signals would then move to a processor which would control the timing and pitch of a voice simulator. That simulator would then vibrate, producing a multitonal sound which could be shaped into words by the mouth.[40]

An August 26, 2012 article by Peter Reuell of the Harvard Gazette discusses three-dimensional cyborg tissue research with possible medical implications, published in the journal Nature Materials and carried out by Charles M. Lieber, the Mark Hyman Jr. Professor of Chemistry at Harvard, and Daniel Kohane, a Harvard Medical School anesthesiology professor at Boston Children's Hospital.[41]

In the military

Military organizations' research has recently focused on the use of cyborg animals for supposed tactical advantage. DARPA has announced its interest in developing "cyborg insects" to transmit data from sensors implanted into the insect during the pupal stage. The insect's motion would be controlled by a Micro-Electro-Mechanical System (MEMS), and it could conceivably survey an environment or detect explosives and gas.[42] Similarly, DARPA is developing a neural implant to remotely control the movement of sharks. The shark's unique senses would then be exploited to provide data feedback in relation to enemy ship movement or underwater explosives.[43]

In 2006, researchers at Cornell University invented[44] a new surgical procedure to implant artificial structures into insects during their metamorphic development.[45][46] The first insect cyborgs, moths with integrated electronics in their thorax, were demonstrated by the same researchers.[47][48] The initial success of the techniques has resulted in increased research and the creation of a program called Hybrid-Insect-MEMS, HI-MEMS. Its goal, according to DARPA's Microsystems Technology Office, is to develop "tightly coupled machine-insect interfaces by placing micro-mechanical systems inside the insects during the early stages of metamorphosis".[49]

The use of neural implants has recently been attempted, with success, on cockroaches. Surgically applied electrodes were placed on the insect and controlled remotely by a human. The results, although sometimes inconsistent, showed that the roach could be controlled by the impulses it received through the electrodes. DARPA is now funding this research because of its obvious applications to the military and other fields.[50]

In 2009, at the Institute of Electrical and Electronics Engineers (IEEE) Micro-Electro-Mechanical Systems (MEMS) conference in Italy, researchers demonstrated the first "wireless" flying-beetle cyborg.[51] Engineers at the University of California at Berkeley pioneered the design of a "remote controlled beetle", funded by the DARPA HI-MEMS program, and filmed evidence of it has been published.[52] This was followed later that year by the demonstration of wireless control of a "lift-assisted" moth cyborg.[53]

Eventually researchers plan to develop HI-MEMS for dragonflies, bees, rats and pigeons.[54][55] For the HI-MEMS cybernetic bug to be considered a success, it must fly 100 metres (330 ft) from a starting point, guided via computer into a controlled landing within 5 metres (16 ft) of a specific end point. Once landed, the cybernetic bug must remain in place.[54]

In art

The concept of the cyborg is often associated with science fiction. However, many artists have tried to create public awareness of cybernetic organisms; these can range from paintings to installations. Some artists who create such works are Neil Harbisson, Moon Ribas, Patricia Piccinini, Steve Mann, Orlan, H. R. Giger, Lee Bul, Wafaa Bilal, Tim Hawkinson and Stelarc.

Stelarc is a performance artist who has visually probed and acoustically amplified his body. He uses medical instruments, prosthetics, robotics, virtual reality systems, the Internet and biotechnology to explore alternate, intimate and involuntary interfaces with the body. He has made three films of the inside of his body and has performed with a third hand and a virtual arm. Between 1976 and 1988 he completed 25 body suspension performances with hooks into the skin. For Third Ear, he surgically constructed an extra ear within his arm that was Internet-enabled, making it a publicly accessible acoustical organ for people in other places.[56] He presently performs as his avatar on his Second Life site.[57]

Tim Hawkinson promotes the idea that bodies and machines are coming together as one, with human features combined with technology to create the cyborg. Hawkinson's piece Emoter presented how society has become dependent on technology.[58]

Wafaa Bilal is an Iraqi-American performance artist who had a small 10-megapixel digital camera surgically implanted into the back of his head as part of a project entitled 3rd I.[59] For one year, beginning 15 December 2010, an image was captured once per minute, 24 hours a day, and streamed live to www.3rdi.me and the Mathaf: Arab Museum of Modern Art. The site also displayed Bilal's location via GPS. Bilal says that he placed the camera in the back of his head to make an "allegorical statement about the things we don't see and leave behind."[60] Because Bilal is a professor at NYU, the project raised privacy issues, and he was asked to ensure that his camera does not take photographs in NYU buildings.[60]

Machines are becoming more ubiquitous in the artistic process itself, with computerized drawing pads replacing pen and paper, and drum machines becoming nearly as popular as human drummers. This is perhaps most notable in generative art and music. Composers such as Brian Eno have developed and utilized software which can build entire musical scores from a few basic mathematical parameters.[61]

Scott Draves is a generative artist whose work is explicitly described as a "cyborg mind". His Electric Sheep project generates abstract art by combining the work of many computers and people over the internet.[62]

Artists as cyborgs

Artists have explored the term cyborg from a perspective involving imagination. Some work to make the abstract idea of a technological and human-bodily union apparent to reality in an art form utilizing varying mediums, from sculptures and drawings to digital renderings. Artists who seek to make cyborg-based fantasies a reality often call themselves cyborg artists, or may consider their artwork "cyborg". How an artist or their work may be considered cyborg will vary depending upon the interpreter's flexibility with the term. Scholars who rely upon a strict, technical description of cyborg, often going by Norbert Wiener's cybernetic theory and Manfred E. Clynes and Nathan S. Kline's first use of the term, would likely argue that most cyborg artists do not qualify as cyborgs.[63] Scholars considering a more flexible description of cyborgs may argue that it incorporates more than cybernetics.[64]
Others may speak of defining subcategories, or specialized cyborg types, that qualify different levels of cyborg at which technology influences an individual. These may range from technological instruments being external, temporary, and removable to being fully integrated and permanent.[65] Nonetheless, cyborg artists are artists, and as such they can be expected to incorporate the cyborg idea rather than a strict, technical representation of the term,[66] seeing as their work will sometimes revolve around purposes other than cyborgism.[63]

In body modification

As medical technology becomes more advanced, some techniques and innovations are adopted by the body modification community. While not yet cyborgs in the strict definition of Manfred Clynes and Nathan Kline, technological developments like implantable silicon silk electronics,[67] augmented reality[68] and QR codes[69] are bridging the disconnect between technology and the body. Hypothetical technologies such as digital tattoo interfaces[70][71] would blend body modification aesthetics with interactivity and functionality, bringing a transhumanist way of life into present day reality.

In addition, anxiety may plausibly arise. Individuals may experience pre-implantation feelings of fear and nervousness. They may also feel uneasy, particularly in social settings, because of their post-operative, technologically augmented bodies and others' unfamiliarity with the mechanical insertion. These anxieties may be linked to notions of otherness or of a cyborged identity.[72]

In popular culture

Cyborgs have become a well-known part of science fiction literature and other media. Although many of these characters may be technically androids, they are often referred to as cyborgs. Well-known examples from film and television include RoboCop, Terminators, Evangelion, The Six Million Dollar Man, Replicants from Blade Runner, Daleks and Cybermen from Doctor Who, the Borg from Star Trek, Darth Vader and General Grievous from Star Wars, Inspector Gadget, and Cylons from the 2004 Battlestar Galactica series. From manga and anime are characters such as 8 Man (the inspiration for RoboCop), Kamen Rider, Ghost in the Shell's Motoko Kusanagi, as well as characters from western comic books like Tony Stark (after his Extremis and Bleeding Edge armor) and Victor "Cyborg" Stone. The Deus Ex videogame series deals extensively with the near-future rise of cyborgs and their corporate ownership, as does the Syndicate series.

Cyborgization in critical deaf studies

Joseph Michael Valente describes "cyborgization" as an attempt to codify "normalization" through cochlear implantation in young deaf children. Drawing from Paddy Ladd's work on Deaf epistemology and Donna Haraway's cyborg ontology, Valente "use[s] the concept of the cyborg as a way of agitating constructions of cyborg perfection (for the deaf child that would be to become fully hearing)". He claims that cochlear implant manufacturers advertise and sell cochlear implants as a mechanical device as well as an uncomplicated medical "miracle cure". Valente criticizes cochlear implant researchers whose studies to date largely do not include cochlear implant recipients, despite cochlear implants having been approved by the United States Food and Drug Administration (FDA) since 1984.[73] Pamela J. Kincheloe discusses the representation of the cochlear implant in media and popular culture as a case study for present and future responses to human alteration and enhancement.[74]

Cyborg Foundation

In 2010, the Cyborg Foundation became the world's first international organization dedicated to helping humans become cyborgs.[75] The foundation was created by cyborg Neil Harbisson and Moon Ribas in response to the growing number of letters and emails received from people around the world interested in becoming cyborgs.[76]
The foundation's main aims are to extend human senses and abilities by creating and applying cybernetic extensions to the body,[77] to promote the use of cybernetics in cultural events and to defend cyborg rights.[78] In 2010, the foundation, based in Mataró (Barcelona), was the overall winner of the Cre@tic Awards, organized by Tecnocampus Mataró.[79]

In 2012, Spanish film director Rafel Duran Torrent created a short film about the Cyborg Foundation. In 2013, the film won the Grand Jury Prize at the Sundance Film Festival's Focus Forward Filmmakers Competition and was awarded $100,000.[80]

Automation


From Wikipedia, the free encyclopedia

Automation, or automatic control, is the use of various control systems for operating equipment such as machinery, processes in factories, boilers and heat-treating ovens, switching in telephone networks, and steering and stabilization of ships, aircraft and other applications, with minimal or reduced human intervention. Some processes have been completely automated.

The biggest benefit of automation is that it saves labor; however, it is also used to save energy and materials and to improve quality, accuracy and precision.

The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when General Motors established the automation department.[1] It was during this time that industry was rapidly adopting feedback controllers, which were introduced in the 1930s.[2]

Automation has been achieved by various means, including mechanical, hydraulic, pneumatic, electrical, electronic and computer-based systems, usually in combination. Complicated systems, such as modern factories, airplanes and ships, typically use all these combined techniques.

Types of automation

One of the simplest types of control is on-off control. An example is the thermostats used on household appliances. Electromechanical thermostats used in HVAC may only have had provision for on/off control of heating or cooling systems. Electronic controllers may add multiple stages of heating and variable fan speed control.
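As a sketch of the idea, the following minimal Python thermostat applies on-off control with a small hysteresis band (the deadband) so the heater does not rapidly cycle around the set point; the set point and deadband values are illustrative assumptions.

```python
# Minimal sketch of on-off ("bang-bang") control with hysteresis,
# as used by a household thermostat.

def thermostat_step(temp, heater_on, setpoint=20.0, deadband=1.0):
    """Return the new heater state for the measured temperature."""
    if temp < setpoint - deadband:
        return True           # too cold: switch heating on
    if temp > setpoint + deadband:
        return False          # too warm: switch heating off
    return heater_on          # inside the deadband: keep current state

state = False
for t in (18.0, 19.5, 20.5, 21.5, 20.0):
    state = thermostat_step(t, state)
    print(f"temp={t}  heater_on={state}")
```
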
Another simple type is sequence control, in which a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control.

The advanced type of automation that revolutionized manufacturing, aircraft, communications and other industries, is feedback control, which is usually continuous and involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range.

Open and closed loop

All the elements constituting the measurement and control of a single variable are called a control loop. Control that uses a measured signal, feeds the signal back and compares it to a set point, and calculates and sends a return signal to make a correction is called closed-loop control. If the controller does not incorporate feedback to make a correction, then it is open-loop control.

Loop control is normally accomplished with a controller. The theoretical basis of open and closed loop automation is control theory.
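
As a rough illustration of the difference, here is a minimal Python sketch: the open-loop controller computes its output once from a model of the process and never measures the result, while the closed-loop controller repeatedly measures, compares against the set point, and corrects. The toy "heated tank" process and the gain values are assumptions for illustration only.

```python
# Toy process: steady-state temperature rises with heater power.
def plant(power, ambient=15.0):
    return ambient + 0.5 * power

def open_loop(setpoint):
    """Open loop: compute power once from a model, never measure."""
    return (setpoint - 15.0) / 0.5

def closed_loop(setpoint, steps=20, gain=0.8):
    """Closed loop: repeatedly measure, compare to set point, correct."""
    power = 0.0
    for _ in range(steps):
        measured = plant(power)
        error = setpoint - measured      # feedback comparison
        power += gain * error            # corrective return signal
    return power, plant(power)

print("open-loop power:", open_loop(40.0))
print("closed-loop power and temperature:", closed_loop(40.0))
```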

Sequential control and logical sequence or system state control

Sequential control may be either to a fixed sequence or to a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler.

States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or if the door is open or closed, and other conditions.
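
A minimal sketch of such state-based logic in Python might look like the following; the state names and dispatch rules are illustrative assumptions, not drawn from any real elevator controller.

```python
# Minimal state-based sequence logic for the elevator example above:
# the response to a floor button depends on the current system state.

def on_floor_button(state, current_floor, requested_floor):
    """Decide an action from the system state and operator input."""
    if state == "door_open":
        return "close_door_then_move"
    if state == "moving":
        return "queue_request"           # finish current travel first
    if state == "stopped":
        if requested_floor == current_floor:
            return "open_door"
        return "move_up" if requested_floor > current_floor else "move_down"
    raise ValueError(f"unknown state: {state}")

print(on_floor_button("stopped", current_floor=2, requested_floor=5))  # move_up
print(on_floor_button("moving", current_floor=2, requested_floor=1))   # queue_request
```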

An early development of sequential control was relay logic, by which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as when starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence, in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed.

The total number of relays, cam timers and drum sequencers can number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit.

In a typical hard-wired motor start and stop circuit (called a control circuit), a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact; this can be dangerous for personnel and property with manual switches. All contacts are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock-in relay. See diagram: Motor Starters Hand-Off-Auto With Start-Stop (the description above is the "Auto" position case in this diagram).

Commonly, interlocks are added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches and electric eyes are other common elements in control circuits.
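
The start/stop seal-in behavior and the oil-pump interlock described above can be sketched as boolean relay logic. The following minimal Python model is illustrative only; the signal names are assumptions.

```python
# One "scan" of the control circuit: the seal-in ("lock-in") contact
# keeps the circuit energized after Start is released, and an oil-pump
# interlock must be satisfied before the motor may run.

def motor_control(start_pressed, stop_pressed, sealed_in, oil_pump_running):
    """Return the new seal-in state. Start is a normally open contact;
    Stop is a normally closed contact."""
    stop_closed = not stop_pressed      # NC contact opens when pressed
    run = (start_pressed or sealed_in) and stop_closed and oil_pump_running
    return run                          # energizes the motor starter

state = False
state = motor_control(True, False, state, True)
print("after start:", state)            # True: motor runs
state = motor_control(False, False, state, True)
print("start released:", state)         # True: seal-in holds the circuit
state = motor_control(False, True, state, True)
print("after stop:", state)             # False: NC stop contact drops out
```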

Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closing valves, raising heavy press rolls, applying pressure to presses.

Computer control

Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application. Programmable logic controllers (PLCs) are a type of special purpose microprocessor that replaced many hardware components such as timers and drum sequencers used in relay logic type systems.
General purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments and controllers in order to implement typical control (such as PID) of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data, create real-time graphical displays for operators, and run reports for operators, engineers and management.
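
As a sketch of the PID control mentioned above, here is a minimal discrete PID loop in Python; the gains, timestep, and toy process response are illustrative assumptions, not a tuned industrial controller.

```python
# Minimal discrete PID controller of the kind a process control
# computer might run once per control cycle.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
temp = 15.0
for _ in range(5):
    output = pid.update(setpoint=40.0, measurement=temp)
    temp += 0.05 * output          # toy first-order process response
    print(f"output={output:7.2f}  temp={temp:6.2f}")
```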

Control of an automated teller machine (ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code.
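
A minimal sketch of this scenario-driven logic might look like the following Python fragment, where the account table stands in for the networked database; all names and rules are illustrative assumptions.

```python
# Stand-in for the networked database the ATM would query.
ACCOUNTS = {"1234": {"pin": "0000", "balance": 250.0}}

def atm_request(card, pin, action, amount=0.0):
    """Derive a response ("scenario") from user selection plus
    retrieved account data."""
    account = ACCOUNTS.get(card)
    if account is None or account["pin"] != pin:
        return "card declined"                   # scenario: failed authentication
    if action == "balance":
        return f"balance: {account['balance']:.2f}"
    if action == "withdraw":
        if amount > account["balance"]:
            return "insufficient funds"          # scenario: overdraft refused
        account["balance"] -= amount
        return f"dispense {amount:.2f}"
    return "unknown selection"

print(atm_request("1234", "0000", "withdraw", 100.0))
print(atm_request("1234", "0000", "balance"))
```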

History

The earliest feedback control mechanism was used to turn the sails of windmills to face the wind. It was patented by Edmund Lee in 1745.[3]

The centrifugal governor, which dates to the last quarter of the 18th century, was used to adjust the gap between millstones.[4] The centrifugal governor was also used in the automatic flour mill developed by Oliver Evans in 1785, making it the first completely automated industrial process. The governor was adopted by James Watt for use on a steam engine in 1788 after Watt’s partner Boulton saw one at a flour mill Boulton & Watt were building.[3]

The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning.[3]

Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory.[3]

The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory. Development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative-feedback noise cancellation. This and other telephony applications contributed to control theory. Military applications during the Second World War that contributed to and benefited from control theory were fire-control systems and aircraft controls. The word "automation" itself came into wide use during the 1940s.[5] The so-called classical theoretical treatment of control theory dates to the 1940s and 1950s.[6]

Relay logic was introduced with factory electrification, which underwent rapid adoption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth, and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls.

Central control rooms became common in the 1920s, but as late as the early 1930s, most process control was on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color coded lights to send signals to workers in the plant to manually make certain changes.[7]

Controllers, which were able to make calculated changes in response to deviations from a set point rather than simple on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification.[8]

In 1959 Texaco’s Port Arthur refinery became the first chemical plant to use digital control.[9] Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell.

Significant applications

The automatic telephone switchboard was introduced in 1892 along with dial telephones.[10] By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.[11]

The logic performed by telephone switching relays was the inspiration for the digital computer.

The first commercially successful glass bottle blowing machine was an automatic model introduced in 1905.[12] The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross, compared to $1.80 per gross by the manual glassblowers and helpers.

Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the paper sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919.[13] One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928.[14]

Before automation many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production.[15]

Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed by James Nasmyth in the 1840s.[16] Machine tools were automated with Numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC).

Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines.

Advantages and disadvantages

The main advantages of automation are:
  • Increased throughput or productivity.
  • Improved quality or increased predictability of quality.
  • Improved robustness (consistency) of processes or product.
  • Increased consistency of output.
  • Reduced direct human labor costs and expenses.
The following methods are often employed to improve productivity, quality, or robustness.
  • Installing automation in operations to reduce cycle time.
  • Installing automation where a high degree of accuracy is required.
  • Replacing human operators in tasks that involve hard physical or monotonous work.[17]
  • Replacing humans in tasks done in dangerous environments (e.g. fire, space, volcanoes, nuclear facilities, underwater).
  • Performing tasks that are beyond human capabilities of size, weight, speed, endurance, etc.
  • Economic improvement: automation may improve the economy of enterprises, society or most of humanity. For example, when an enterprise invests in automation, the technology recovers its investment; and a state or country may increase its income due to automation, as Germany and Japan did in the 20th century.
  • Reducing operation time and work handling time significantly.
  • Freeing up workers to take on other roles.
  • Providing higher-level jobs in the development, deployment, maintenance and running of the automated processes.
The main disadvantages of automation are:
  • Security Threats/Vulnerability: An automated system may have a limited level of intelligence, and is therefore more susceptible to committing errors outside of its immediate scope of knowledge (e.g., it is typically unable to apply the rules of simple logic to general propositions).
  • Unpredictable/excessive development costs: The research and development cost of automating a process may exceed the cost saved by the automation itself.
  • High initial cost: The automation of a new product or plant typically requires a very large initial investment in comparison with the unit cost of the product, although the cost of automation may be spread among many products and over time.
In manufacturing, the purpose of automation has shifted to issues broader than productivity, cost, and time.

Lights out manufacturing

Lights out manufacturing is when a production system is 100% or nearly 100% automated, employing no workers, in order to eliminate labor costs altogether.

Health and environment

The costs of automation to the environment differ depending on the technology, product or engine automated: some automated engines consume more energy resources from the Earth than the engines they replace, while others consume less. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation.

Convertibility and turnaround time

Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation.

Digital electronics helped too. Former analogue-based instrumentation was replaced by digital equivalents which can be more accurate and flexible, and offer greater scope for more sophisticated configuration, parametrization and operation. This was accompanied by the fieldbus revolution which provided a networked (i.e. a single cable) means of communicating between control systems and field level instrumentation, eliminating hard-wiring.

Discrete manufacturing plants adopted these technologies quickly. The more conservative process industries, with their longer plant life cycles, have been slower to adopt, and analogue-based measurement and control still dominates there. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the Internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems.

Automation tools

Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include Computer-aided design (CAD software) and Computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry.[18]

Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized hardened computers which are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events.[19]

An automated online assistant on a website, with an avatar for enhanced human–computer interaction.

Human-machine interfaces (HMI) or computer human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names. In industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utilities departments they are called stationary engineers.[20]

Different types of automation tools exist. In factory automation, Host Simulation Software (HSS) is a commonly used testing tool for equipment software: HSS is used to test equipment performance with respect to factory automation standards (timeouts, response time, processing time).[21]
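
As a rough illustration of the kind of check an HSS tool performs, the following minimal Python sketch times a stubbed equipment handler against a response-time limit; the handler, limits, and protocol are illustrative assumptions.

```python
import time

def equipment_handler():
    """Stand-in for the equipment software under test."""
    time.sleep(0.05)      # simulated processing time
    return "ACK"

def check_response_time(handler, timeout_s=0.1):
    """Send a request and verify the reply arrives within the limit."""
    start = time.perf_counter()
    reply = handler()
    elapsed = time.perf_counter() - start
    passed = reply == "ACK" and elapsed <= timeout_s
    return passed, elapsed

ok, elapsed = check_response_time(equipment_handler)
print(f"passed={ok}  response_time={elapsed * 1000:.1f} ms")
```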

Limitations to automation

  • Current technology is unable to automate all the desired tasks.
  • Many operations using automation have large amounts of invested capital and produce high volumes of product, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel are needed to ensure that the entire system functions properly and that safety and product quality are maintained.
  • As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function.
  • As more and more processes become automated, there are fewer remaining non-automated processes. This is an example of exhaustion of opportunities. New technological paradigms may however set new limits that surpass the previous limits.

Current limitations

Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems. Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise.
In many cases, the use of humans is more cost-effective than mechanical approaches even where automation of industrial tasks is possible. Overcoming these obstacles is a theorized path to post-scarcity economics.

Recent and emerging applications


KUKA industrial robots being used at a bakery for food production

Automated retail

Food and drink

The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch-screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees.[22] The University of Texas at Austin has introduced fully automated cafe retail locations.[23] Some cafes and restaurants have utilized mobile and tablet apps to make the ordering process more efficient, with customers ordering and paying on their own devices.[24][25] Some restaurants have automated food delivery to customers' tables using a conveyor belt system, and robots are sometimes employed to replace waiting staff.[26]

Stores

Many supermarkets, and even smaller stores, are rapidly introducing self-checkout systems, reducing the need to employ checkout workers.

Online shopping could be considered a form of automated retail, as the payment and checkout are handled through an automated online transaction processing system. Other forms of automation can also be an integral part of online shopping, for example the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems.

Automated mining

Automated mining involves the removal of human labor from the mining process.[27] The mining industry is currently in transition towards automation. Mining can still require a large amount of human capital, particularly in the third world, where labor costs are low and there is therefore less incentive to increase efficiency through automation.

Automated video surveillance

The Defense Advanced Research Projects Agency (DARPA) started the research and development of the automated visual surveillance and monitoring (VSAM) program between 1997 and 1999, and the airborne video surveillance (AVS) program from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real time within a busy environment. Existing automated surveillance systems are classified by the environment they are primarily designed to observe (indoor, outdoor or airborne), the number of sensors that the automated system can handle, and the mobility of the sensors (stationary camera vs. mobile camera). The purpose of a surveillance system is to record the properties and trajectories of objects in a given area and to generate warnings or notify a designated authority when particular events occur.[28]

Automated highway systems

As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the United States Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). 
Congress legislated in ISTEA that "the Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate installation of equipment in new and existing motor vehicles." [ISTEA 1991, part B, Section 6054(b)]

Full automation is commonly defined as requiring no control, or very limited control, by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls might also enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems.[29]
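
The capacity claim can be made concrete with a back-of-the-envelope calculation. The numbers below (speed, headways, and vehicle length) are illustrative assumptions, not figures from the ISTEA program.

    # Back-of-the-envelope lane capacity under different following
    # headways. All numbers are illustrative assumptions.

    def lane_capacity(speed_kmh, headway_s, car_length_m=4.5):
        # Vehicles per hour through one lane at steady speed.
        speed_ms = speed_kmh / 3.6
        spacing_m = speed_ms * headway_s + car_length_m  # nose-to-nose distance
        return speed_ms * 3600 / spacing_m

    print(f"Human driver, 2.0 s headway: {lane_capacity(100, 2.0):.0f} veh/h")
    print(f"Automated,    0.5 s headway: {lane_capacity(100, 0.5):.0f} veh/h")
    # Roughly 1700 vs 5400 vehicles per hour: the closer spacing that
    # automation permits is what raises capacity without new road building.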

Automated waste management

Automated waste collection trucks prevent the need for as many workers as well as easing the level of labor required to provide the service.[30]

Home automation

Main article: Home automation
Home automation (also called domotics) is the emerging practice of increasingly automating household appliances and features in residential dwellings, particularly through electronic means that make possible things that were impracticable, overly expensive, or simply impossible in past decades.
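
A simple thermostat rule illustrates the kind of closed-loop control a home automation system runs continuously. In this sketch the sensor and actuator functions are hypothetical stand-ins for real device APIs.

    # Minimal sketch of a domotics control rule: a thermostat loop of
    # the kind a home automation controller runs continuously.

    import time

    TARGET_C = 21.0
    DEADBAND = 0.5   # hysteresis, to avoid rapid on/off cycling

    def read_temperature():
        # Stand-in: would query a temperature sensor on the home network.
        return 20.0

    def set_heating(on):
        # Stand-in: would switch a relay or smart plug.
        print("heating", "on" if on else "off")

    def control_loop():
        while True:
            t = read_temperature()
            if t < TARGET_C - DEADBAND:
                set_heating(True)    # too cold: heat on
            elif t > TARGET_C + DEADBAND:
                set_heating(False)   # warm enough: heat off
            time.sleep(60)           # re-evaluate once a minute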

Industrial automation

Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers (PLCs) and computers. One trend is the increased use of machine vision to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots.
Energy efficiency in industrial processes has become a higher priority. Semiconductor companies like Infineon Technologies are offering 8-bit microcontroller applications, found for example in motor controls, general-purpose pumps, fans, and e-bikes, to reduce energy consumption and thus increase efficiency.
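
The control pattern at the heart of a programmable logic controller, the endlessly repeated scan cycle of read inputs, evaluate logic, write outputs, can be sketched in a few lines. The I/O names below are hypothetical.

    # Minimal sketch of a PLC-style scan cycle. A real controller
    # repeats this every few milliseconds against physical terminals;
    # the input and output names here are hypothetical.

    def read_inputs():
        # Stand-in: a real PLC samples its physical input terminals here.
        return {"start_button": True, "part_present": True, "jam_sensor": False}

    def write_outputs(out):
        # Stand-in: a real PLC drives its physical output terminals here.
        print(out)

    def scan_cycle():
        inputs = read_inputs()
        # Ladder-logic equivalent: run the conveyor only when started,
        # a part is present, and no jam is detected.
        run_conveyor = (inputs["start_button"]
                        and inputs["part_present"]
                        and not inputs["jam_sensor"])
        write_outputs({"conveyor_motor": run_conveyor})

    scan_cycle()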

Low-cost automation

Low-cost automation (popularly known as LCA) is the introduction of simple pneumatic, hydraulic, mechanical, and electrical devices into existing production machinery with a view to improving productivity. Such devices also enable semi-skilled and unskilled labor to operate the equipment with a little training. LCA involves the use of standardized parts and devices to mechanize or automate machines, processes, and systems. Using a human being as a source of energy is inefficient, in addition to being boring and monotonous for the worker; it is estimated to cost approximately 400 times as much for a person to supply 1 kWh of energy as to obtain it from electrical power. Similarly, using an operator as a sensing device is not only uneconomical but also results in excessive fatigue.
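
The 400:1 figure can be sanity-checked with rough numbers. The human power output, wage, and electricity tariff used below are illustrative assumptions only.

    # Rough sanity check of the ~400:1 cost ratio quoted above.
    # All figures are illustrative assumptions.

    human_power_kw = 0.075        # ~75 W of sustained manual output
    wage_per_hour = 7.50          # assumed labor cost
    grid_price_per_kwh = 0.25     # assumed electricity tariff

    hours_for_1kwh = 1 / human_power_kw                    # ~13.3 hours of labor
    human_cost_per_kwh = hours_for_1kwh * wage_per_hour    # ~$100
    print(f"Human:      ${human_cost_per_kwh:.2f} per kWh")
    print(f"Electrical: ${grid_price_per_kwh:.2f} per kWh")
    print(f"Ratio:      {human_cost_per_kwh / grid_price_per_kwh:.0f}x")  # ~400x
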
Automation is sometimes regarded as a new tool for management to dispense with workers; it is feared that its introduction would lead to large-scale unemployment, and hence it is considered an enemy of the working class. It is worth examining how far these fears are justified.

First, the concept of automation is not new; only the word is comparatively new. Its main characteristic is said to be the "feedback" or "closed-loop" system, but there is nothing new about feedback: long ago, James Watt invented the governor, essentially a feedback mechanism that gave the steam engine smooth, constant speed control under changing load conditions. Automation is also described as numerical control, or punched or magnetic tape control. Again, there is nothing new in this: in the early 18th century, Basile Bouchon designed punched-card control for looms so that the desired pattern could be woven correctly, without faults due to human error.

As for the fear of increased unemployment, it is true that indiscriminate application of automation on a large scale would worsen unemployment; but it is the application, not automation itself, that is to blame. Low-cost automation, which has unfortunately not received the attention it deserves, perhaps for lack of publicity, knowledge, and understanding, does not lead to retrenchment as many fear. It improves production processes and systems, and any improvement reduces the time a given piece of work takes; if the quantity of work remains the same, labor requirements fall. But in a growing economy, demand for commodities is rarely met. If the time taken to do a job is reduced, more jobs can be done in a day, a month, or a year, which means increased productivity with an attendant reduction in the unit cost of production. There may be some minor displacement of workers, but the increased output and market demand would not only absorb whatever workers were found surplus, but also provide employment opportunities for more. Shunning improvement for fear of displacing some workers from their existing positions would, in the long run, harm both the company and the economy as a whole; the long-term benefits of increased productivity, which is essential for achieving prosperity, should not be lost sight of.

Another common argument is: "our labour is cheap, so why go in for LCA?" This is a clear case of confusing "cheap labour" with "low labour costs". The point to remember is not how much a person is paid, but how much output is obtained for each rupee paid.

Agriculture

With automated orange sorting[31] and autonomous tractors[32] now emerging, the next step in automated agriculture is robotic strawberry pickers.[33]

Agent-assisted automation

Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. There are two basic types: desktop automation and automated voice solutions. Desktop automation refers to software that makes it easier for the call center agent to work across multiple desktop tools; for example, the automation takes the information entered into one tool and populates it across the others, so it does not have to be entered more than once. Automated voice solutions allow agents to remain on the line while disclosures and other important information are provided to customers in the form of pre-recorded audio files. Specialized applications of these automated voice solutions enable agents to process credit cards without ever seeing or hearing the credit card numbers or CVV codes.[34]

The key benefits of agent-assisted automation are compliance and error-proofing. Agents are sometimes not fully trained, or they forget or ignore key steps in the process; automation ensures that what is supposed to happen on the call actually does, every time.
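
The desktop side of this is essentially a fan-out of data entered once. The sketch below uses hypothetical stand-ins for the agent's desktop tools.

    # Minimal sketch of desktop automation for call center agents:
    # data entered once is propagated to every desktop tool, so it
    # never has to be re-keyed. Tool interfaces are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CustomerRecord:
        name: str
        account_id: str
        phone: str

    def update_crm(rec):
        print(f"CRM updated for {rec.account_id}")

    def update_billing(rec):
        print(f"Billing updated for {rec.account_id}")

    def update_ticketing(rec):
        print(f"Ticket opened for {rec.account_id}")

    # Register every tool once; a single entry fans out to all of them.
    DESKTOP_TOOLS = [update_crm, update_billing, update_ticketing]

    def on_agent_entry(rec):
        for tool in DESKTOP_TOOLS:
            tool(rec)   # same data, no re-entry, no transcription errors

    on_agent_entry(CustomerRecord("A. Customer", "AC-4481", "555-0100"))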

Relationship to unemployment

According to a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases.[35] In the long run and for society as a whole, automation has led to cheaper products, lower average working hours, and the formation of new industries (e.g., the robotics, computer, and design industries), which provide many high-salary, skill-based jobs to the economy.
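
As a purely illustrative calculation (not Saint-Paul's actual formula), the asymmetry between those two rates implies that total labor demand can grow even while unskilled demand shrinks.

    # Illustrative numbers only, not Saint-Paul's formula: if demand
    # for unskilled labor declines 1% per year while demand for
    # skilled labor grows 3% per year, total demand can still rise.

    unskilled, skilled = 100.0, 50.0   # assumed starting demand (index units)
    for year in range(10):
        unskilled *= 0.99   # slow decline
        skilled *= 1.03     # faster growth
    print(f"After 10 years: unskilled {unskilled:.1f}, skilled {skilled:.1f}, "
          f"total {unskilled + skilled:.1f}")   # total exceeds the initial 150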
