
Monday, December 25, 2023

Robonaut

From Wikipedia, the free encyclopedia
Two Robonaut 2 robots

A robonaut is a humanoid robot, part of a development project conducted by the Dexterous Robotics Laboratory at NASA's Lyndon B. Johnson Space Center (JSC) in Houston, Texas. Robonaut differs from other current space-faring robots in that, while most current space robotic systems (such as robotic arms, cranes and exploration rovers) are designed to move large objects, Robonaut's tasks require more dexterity.

The core idea behind the Robonaut series is to have a humanoid machine work alongside astronauts. Its form factor and dexterity are designed such that Robonaut "is capable of performing all of the tasks required of an EVA-suited crewmember."

NASA states "Robonauts are essential to NASA's future as we go beyond low Earth orbit", and R2 will provide performance data about how a robot may work side-by-side with astronauts.

The latest Robonaut version, R2, was delivered to the International Space Station (ISS) by STS-133 in February 2011. The first US-built robot on the ISS, R2 is a robotic torso designed to assist with crew EVAs and can hold tools used by the crew. However, Robonaut 2 lacks the protection needed to operate outside the space station, and enhancements and modifications would be required before it could move around the station's interior.

As of 2018 NASA planned to return R2 for repairs and then relaunch.

Robonaut 1

Robonaut 1 (R1) was the first model. The two Robonaut versions (R1A and R1B) had many partners, including DARPA. Neither was flown to space. Other designs for Robonaut propose uses for teleoperation on planetary surfaces, where Robonaut could explore a planetary surface while receiving instructions from astronauts orbiting above. Robonaut B (R1B), a portable version of R1, was introduced in 2002. R1 had several lower bodies. One of these was the Zero-G Leg: while working on the space station, Robonaut would climb along the external handrails and then use the zero-g leg to latch onto the station through a WIF socket. Another was the Robotic Mobility Platform (RMP), developed in 2003, a two-wheeled base built on a Segway PT. A third was the four-wheeled Centaur 1, developed in 2006. Robonaut has participated in NASA's Desert Research and Technology Studies field trials in the Arizona desert.

In 2006, the automotive company General Motors expressed interest in the project and proposed to team up with NASA. In 2007 a Space Act Agreement was signed that allowed GM and NASA to work together on the next generation of Robonaut.

Robonaut 2

R2 moves for the first time aboard the ISS.

In February 2010, Robonaut 2 (R2) was revealed to the public. R2 is capable of speeds more than four times faster than R1, is more compact, more dexterous, and includes a deeper and wider range of sensing. It can move its arms up to 2 m/s, has a 40 lb payload capacity and its hands have a grasping force of roughly 5 lbs. per finger. There are over 350 sensors and 38 PowerPC processors in the robot.

Station crew members will be able to operate R2, as will controllers on the ground; both will do so using telepresence. One of the improvements over the previous Robonaut generation is that R2 does not need constant supervision. In anticipation of a future destination in which distance and time delays would make continuous management problematic, R2 was designed to be set to tasks and then carry them through autonomously with periodic status checks. While not all human range of motion and sensitivity has been duplicated, the robot's hand has 12 degrees of freedom as well as 2 degrees of freedom in its wrist. The R2 model also uses touch sensors at the tips of its fingers.
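The supervision pattern described above can be sketched in a few lines of Python. This is only an illustration of "set a task, check in periodically"; the task names, the Task class and the report_status function are invented for this sketch and are not part of NASA's R2 software.

    import time

    class Task:
        """A hypothetical unit of work the robot can carry out on its own."""
        def __init__(self, name, steps):
            self.name = name
            self.steps = steps            # list of callables, one per work step

    def report_status(task, step_index):
        # Stand-in for a periodic status downlink to the crew or ground.
        print(f"[status] {task.name}: step {step_index + 1}/{len(task.steps)} done")

    def run_autonomously(tasks, status_every=2):
        """Work through queued tasks without continuous supervision,
        sending a status report only every `status_every` steps."""
        for task in tasks:
            for i, step in enumerate(task.steps):
                step()                                   # do the work locally
                if (i + 1) % status_every == 0:
                    report_status(task, i)               # periodic check-in
            report_status(task, len(task.steps) - 1)     # final confirmation

    # Example: two made-up maintenance tasks with placeholder steps
    tasks = [Task("clean filter", [lambda: time.sleep(0.01)] * 4),
             Task("hold tool",    [lambda: time.sleep(0.01)] * 2)]
    run_autonomously(tasks)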

R2 was designed as a prototype to be used on Earth but mission managers were impressed by R2 and chose to send it to the ISS. Various upgrades were made to qualify it for use inside the station. The outer skin materials were exchanged to meet the station's flammability requirements, shielding was added to reduce electromagnetic interference, processors were upgraded to increase the robot's radiation tolerance, the original fans were replaced with quieter ones to accommodate the station's noise requirements, and the power system was rewired to run on the station's direct current system rather than the alternating current used on the ground.

Robonaut being upgraded on-orbit

In the design of the R2 robot, a 3D time-of-flight imager is used in conjunction with a stereo camera pair to provide depth information and visible stereo images to the system. This allows R2 to "see", which is one of the basic preconditions for fulfilling its tasks. To integrate the various sensor data types in a single development environment, the image processing software Halcon 9.0 from MVTec Software (Munich, Germany) is used.
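As a rough illustration of the idea of combining a time-of-flight depth map with stereo-derived depth, the sketch below fuses two depth estimates by confidence weighting. It is not Halcon's API and not R2's actual pipeline; the array shapes and confidence maps are assumptions.

    import numpy as np

    def fuse_depth(tof_depth, stereo_depth, tof_conf, stereo_conf):
        """Confidence-weighted fusion of two depth maps (all arrays HxW, metres).
        Pixels where both confidences are zero come back as NaN."""
        w_sum = tof_conf + stereo_conf
        safe = np.where(w_sum > 0, w_sum, 1.0)            # avoid divide-by-zero
        fused = (tof_depth * tof_conf + stereo_depth * stereo_conf) / safe
        return np.where(w_sum > 0, fused, np.nan)

    # Toy 2x2 example: trust ToF up close, stereo where texture is good
    tof    = np.array([[0.8, 0.9], [1.2, 1.1]])
    stereo = np.array([[0.7, 1.0], [1.3, 1.0]])
    print(fuse_depth(tof, stereo,
                     tof_conf=np.array([[1.0, 1.0], [0.5, 0.0]]),
                     stereo_conf=np.array([[0.5, 0.0], [1.0, 1.0]])))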

2011 Testing at the ISS

Robonaut 2 was launched on STS-133 on February 24, 2011, and delivered to the ISS. On August 22, R2 was powered up for the first time while in low Earth orbit. This "power soak" was a test of the power system only, with no movement. On October 13, R2 moved for the first time while in space. The conditions aboard the space station provide a proving ground for robots to work shoulder to shoulder with people in microgravity. Once this has been demonstrated inside the station, software upgrades and lower bodies may be added, allowing R2 to move around the interior of the station and perform maintenance tasks, such as vacuuming or cleaning filters. A pair of legs was delivered to the ISS on SpX-3 in April 2014. The battery backpack was planned to be launched on a later flight in Summer/Fall 2014.

Further upgrades could be added to allow R2 to work outside in the vacuum of space, where R2 could help spacewalkers perform repairs, make additions to the station or conduct scientific experiments. There were initially no plans to return the launched R2 to Earth.

2018 Repair and possible relaunch

NASA announced on 1 April 2018 that R2 would return to Earth in May 2018 with CRS-14 Dragon for repair and eventual relaunch in about a year's time.

NASA's experience with R2 on the station will help them understand its capabilities for possible deep space missions.

Project M (R2 on the moon)

In late 2009, a proposed mission called Project M was announced by Johnson Space Center that, if it had been approved, would have had the objective of landing an R2 robot on the Moon within 1,000 days.

Mechatronics

From Wikipedia, the free encyclopedia
Mechatronics
Occupation
Names: Mechatronics Engineer
Occupation type: Engineering
Activity sectors: Electrical and mechanical industry, engineering industry
Specialty: Mechanical engineering, electrical/electronics engineering, computer engineering, software programming, system engineering, control system, smart and intelligent system, automation and robotics
Description
Competencies: Multidisciplinary technical knowledge, electro-mechanical system design, system integration and maintenance
Fields of employment: Science, technology, engineering, industry, computer, exploration

Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, and product engineering.

As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics, electrical and electronics, hence the name being a portmanteau of the words "mechanics" and "electronics"; however, as the complexity of technical systems continued to evolve, the definition had been broadened to include more technical areas.

The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word was registered as a trademark by the company in Japan under registration number "46-32714" in 1971. The company later released the right to use the word to the public, and it began being used globally. Currently the word is translated into many languages and is considered an essential term for advanced automated industry.

Many people treat mechatronics as a modern buzzword synonymous with automation, robotics and electromechanical engineering.

French standard NF E 01-010 gives the following definition: "approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality".

History


With the advent of information technology in the 1980s, microprocessors were introduced into mechanical systems, improving performance significantly. By the 1990s, advances in computational intelligence were applied to mechatronics in ways that revolutionized the field.

Description

An Euler diagram from RPI's website describes the fields that make up mechatronics.

A mechatronics engineer unites the principles of mechanics, electrical, electronics, and computing to generate a simpler, more economical and reliable system.

Engineering cybernetics deals with the question of control engineering of mechatronic systems. It is used to control or regulate such a system (see control theory). Through collaboration, the mechatronic modules perform the production goals and give the production scheme flexible and agile manufacturing properties. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The best-known architectures involve hierarchy, polyarchy, heterarchy, and hybrid. The methods for achieving a technical effect are described by control algorithms, which might or might not utilize formal methods in their design. Hybrid systems important to mechatronics include production systems, synergy drives, exploration rovers, automotive subsystems such as anti-lock braking systems and spin-assist, and everyday equipment such as autofocus cameras, video, hard disks, CD players and phones.
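As a loose illustration of the hierarchical case only, the sketch below shows a supervisory layer sequencing production steps and delegating the actual motion to mechatronic modules beneath it. The module names, interfaces and the 80%-per-cycle "control law" are invented for this sketch, not taken from any particular standard.

    class MechatronicModule:
        """A module that accepts a set-point and reports whether it was reached."""
        def __init__(self, name):
            self.name = name
            self.state = 0.0

        def command(self, setpoint):
            # Placeholder local control law: move 80% of the way each cycle.
            self.state += 0.8 * (setpoint - self.state)
            return abs(setpoint - self.state) < 0.05     # "goal reached" flag

    class Supervisor:
        """Top layer of a hierarchical architecture: sequences production steps
        and leaves the low-level control to the modules underneath."""
        def __init__(self, modules):
            self.modules = modules

        def run(self, plan):
            for module_name, setpoint in plan:
                module = self.modules[module_name]
                while not module.command(setpoint):      # poll until done
                    pass
                print(f"{module_name} reached set-point {setpoint}")

    cell = Supervisor({"conveyor": MechatronicModule("conveyor"),
                       "gripper":  MechatronicModule("gripper")})
    cell.run([("conveyor", 1.0), ("gripper", 0.3)])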

Subdisciplines

Mechanical

View of the Volkswagen dual clutch direct shift gearbox transmission

Mechanical engineering is an important part of mechatronics engineering. It includes the study of the mechanical nature of how an object works. Mechanical elements refer to the mechanical structure, mechanism, thermo-fluid, and hydraulic aspects of a mechatronics system, and the field covers the study of thermodynamics, dynamics, fluid mechanics, pneumatics and hydraulics. A mechatronics engineer working as a mechanical engineer can specialize in hydraulic and pneumatic systems, for example in the automobile industry. A mechatronics engineer can also design a vehicle, since they have a strong mechanical and electronics background. Knowledge of software applications such as computer-aided design and computer-aided manufacturing is essential for designing products. Mechatronics covers a part of the mechanical syllabus that is widely applied in the automobile industry.

Mechatronic systems represent a large part of the functions of an automobile. The control loop formed by sensor—information processing—actuator—mechanical (physical) change is found in many systems. The system size can be very different. The Anti-lock braking system (ABS) is a mechatronic system. The brake itself is also one. And the control loop formed by driving control (for example cruise control), engine, vehicle driving speed in the real world and speed measurement is a mechatronic system, too. The great importance of mechatronics for automotive engineering is also evident from the fact that vehicle manufacturers often have development departments with "Mechatronics" in their names.
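The sensor–information processing–actuator loop mentioned above can be made concrete with a toy cruise-control example. The vehicle model and the controller gains below are made up for illustration and are not any manufacturer's algorithm.

    def cruise_control(target_kmh, steps=50, dt=0.1):
        """Toy closed loop: measure speed, compute a throttle command, apply it,
        let the (crude) vehicle model respond, and repeat."""
        speed, integral = 0.0, 0.0
        kp, ki = 0.5, 0.1                          # assumed controller gains
        for _ in range(steps):
            error = target_kmh - speed             # sensor + information processing
            integral += error * dt
            throttle = kp * error + ki * integral  # actuator command
            # Crude vehicle model: acceleration proportional to throttle,
            # minus a drag term proportional to speed.
            speed += (throttle - 0.05 * speed) * dt
        return speed

    print(f"speed after {50 * 0.1:.0f} s: {cruise_control(100):.1f} km/h")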

Electronics and Electricals

Electronics and telecommunication engineering specializes in the electronic and telecom devices of a mechatronics system. A mechatronics engineer specialized in electronics and telecommunications has knowledge of computer hardware devices. Signal transmission is the main application of this subfield of mechatronics, and digital and analog systems also form an important part of mechatronics systems. Telecommunications engineering deals with the transmission of information across a medium.

Electronics engineering is related to computer engineering and electrical engineering. Control engineering has a wide range of electronic applications, from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. VLSI design is important for creating integrated circuits. Mechatronics engineers have deep knowledge of microprocessors, microcontrollers, microchips and semiconductors. Mechatronics engineers in the electronics manufacturing industry conduct research and development on consumer electronic devices such as mobile phones, computers and cameras. It is also necessary for mechatronics engineers to learn to operate computer applications such as MATLAB and Simulink for designing and developing electronic products.

Mechatronics engineering is an interdisciplinary course that includes concepts of both electrical and mechanical systems. A mechatronics engineer may engage in designing high-power transformers or radio-frequency transmitter modules.

Avionics

An avionics technician uses an oscilloscope to verify signals on aircraft avionics equipment.

Avionics is also considered a variant of mechatronics, as it combines several fields such as electronics and telecom with aerospace engineering. It is the subdiscipline of mechatronics engineering and aerospace engineering that focuses on the electronic systems of aircraft. The word avionics is a blend of aviation and electronics. The electronics systems of an aircraft include the aircraft communication addressing and reporting system, air navigation, the aircraft flight control system, aircraft collision avoidance systems, the flight recorder, weather radar and lightning detector. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.

Advanced Mechatronics

Another variant is motion control for advanced mechatronics, presently recognized as a key technology in the field. The robustness of motion control can be represented as a function of stiffness and serves as a basis for practical realization. The target motion is parameterized by a control stiffness that can be varied according to the task reference, and robust motion generally requires very high stiffness in the controller.
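The role of stiffness can be seen in a toy position controller: the proportional gain acts like a spring constant, and a stiffer controller holds position better against a constant disturbance. All numbers below are assumed for illustration.

    def settle(stiffness, disturbance=2.0, steps=200, dt=0.01):
        """Final position error of x'' = -k*x - c*x' + disturbance, i.e. a PD
        controller whose proportional gain k plays the role of stiffness."""
        x, v = 0.0, 0.0
        damping = 2.0 * stiffness ** 0.5           # near-critical damping
        for _ in range(steps):
            a = -stiffness * x - damping * v + disturbance
            v += a * dt
            x += v * dt
        return x

    for k in (10, 100, 1000):
        print(f"stiffness {k:>4}: residual error ~ {settle(k):.4f}")

The residual error scales as disturbance/stiffness, which is why robust motion calls for a very stiff controller.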

Industrial

Industrial engineers at work

The industrial branch includes the design of machinery and of the assembly and process lines of various manufacturing industries. This branch can be said to be somewhat similar to automation and robotics. Mechatronics engineers who work as industrial engineers design and develop the infrastructure of a manufacturing plant; they can be described as architects of machines. One can work as an industrial designer, planning the layout and setup of a manufacturing plant, or as an industrial technician, looking after the technical requirements and repairs of a particular factory.

Robotics

An industrial robot manufactured by ABB

Robotics is one of the newest emerging subfields of mechatronics. It is the study of how robots are manufactured and operated. Since 2000, this branch of mechatronics has been attracting a growing number of aspirants. Robotics is interrelated with automation because it likewise requires little human intervention. In a large number of factories, especially automobile factories, robots are found on assembly lines, where they perform jobs such as drilling, installation and fitting. Programming skills are necessary for specialization in robotics; knowledge of a programming language such as ROBOTC is important for making robots function. An industrial robot is a prime example of a mechatronics system; it includes aspects of electronics, mechanics, and computing to do its day-to-day jobs.
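ROBOTC itself is a C-based environment for educational robots; as a language-neutral sketch of the kind of repetitive assembly-line job described above, the loop below cycles through a fixed sequence of operations with no human intervention. The station names and timings are invented.

    import time

    STATIONS = ["pick part", "drill hole", "install fastener", "fit panel"]

    def run_shift(cycles):
        """Repeat the same fixed sequence of operations, as an assembly-line
        robot would, reporting once per completed cycle."""
        for cycle in range(1, cycles + 1):
            for station in STATIONS:
                time.sleep(0.01)          # stand-in for the physical operation
            print(f"cycle {cycle}: {len(STATIONS)} operations completed")

    run_shift(3)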

Computer

Telescope automatic control system and a space object observation system

The Internet of things (IoT) is the inter-networking of physical devices, embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data. IoT and mechatronics are complementary. Many of the smart components associated with the Internet of Things will be essentially mechatronic. The development of the IoT is forcing mechatronics engineers, designers, practitioners and educators to research the ways in which mechatronic systems and components are perceived, designed and manufactured. This allows them to face up to new issues such as data security, machine ethics and the human-machine interface.
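A bare-bones sketch of how a mechatronic component might act as an IoT node follows, using only the Python standard library. The sensor, topic name and JSON fields are assumptions, and the publish step is a stand-in for a real network protocol such as MQTT or HTTP.

    import json, random, time

    def read_temperature_c():
        # Placeholder for a real sensor driver (e.g. an I2C thermometer).
        return 20.0 + random.uniform(-0.5, 0.5)

    def publish(topic, payload):
        # Stand-in for a network publish; here we just print the message.
        print(f"{topic} -> {json.dumps(payload)}")

    for _ in range(3):
        publish("factory/cell1/temperature",
                {"celsius": round(read_temperature_c(), 2),
                 "timestamp": time.time()})
        time.sleep(0.1)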

Knowledge of programming is very important. A mechatronics engineer has to program at different levels, for example PLC programming, drone programming, hardware programming and CNC programming. Because the field incorporates electronics engineering, software skills from the computing side are important. Important programming languages for a mechatronics engineer to learn include Java, Python, C++ and C.

Superintelligence

From Wikipedia, the free encyclopedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Feasibility of artificial superintelligence

Progress in machine classification of images
The error rate of AI by year. The red line represents the error rate of a trained human.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

An AI system capable of self-improvement could enhance its own intelligence, thereby becoming more efficient at improving itself. This cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
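The "seven orders of magnitude" figure follows directly from the two clock rates quoted:

    \frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7}

that is, a factor of ten million.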

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. This notion, Iterated Embryo Selection, has received wide treatment from other authors. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
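The gains quoted above are order statistics of a selection process, and can be roughly reproduced with a small Monte Carlo sketch. The key assumption here (not Bostrom's published model) is that the selectable genetic component of IQ is normally distributed across embryos with a standard deviation of about 7.5 points, a value chosen because it approximately matches the cited figures.

    import numpy as np

    rng = np.random.default_rng(0)
    SD_IQ = 7.5      # assumed SD of the embryo-level genetic component (IQ points)
    TRIALS = 5_000   # Monte Carlo repetitions

    def expected_gain(n_embryos):
        """Mean IQ gain from implanting the best of n_embryos random draws."""
        draws = rng.normal(0.0, SD_IQ, size=(TRIALS, n_embryos))
        return draws.max(axis=1).mean()

    for n in (2, 10, 100, 1000):
        print(f"select 1 of {n:>4}: ~{expected_gain(n):4.1f} IQ points")

Under these assumptions the best-of-2 case comes out near 4 IQ points and best-of-1,000 near 24, in line with the figures quoted above.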

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.

In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ... MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.

Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity. Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down, in order to accomplish its goals. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Posthumanism

From Wikipedia, the free encyclopedia

Posthumanism encompasses a wide variety of branches, including:

  • Antihumanism: a branch of theory that is critical of traditional humanism and traditional ideas about the human condition, vitality and agency.
  • Cultural posthumanism: A branch of cultural theory critical of the foundational assumptions of humanism and its legacy that examines and questions the historical notions of "human" and "human nature", often challenging typical notions of human subjectivity and embodiment and strives to move beyond "archaic" concepts of "human nature" to develop ones which constantly adapt to contemporary technoscientific knowledge.
  • Philosophical posthumanism: A philosophical direction that draws on cultural posthumanism, the philosophical strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
  • Posthuman condition: The deconstruction of the human condition by critical theorists.
  • Posthuman transhumanism: A transhuman ideology and movement which, drawing from posthumanist philosophy, seeks to develop and make available technologies that enable immortality and greatly enhance human intellectual, physical, and psychological capacities in order to achieve a "posthuman future".
  • AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers and theorists, including Nick Land, promote the view that humans should embrace and accept their eventual demise as a consequence of a technological singularity. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
  • Voluntary human extinction: Seeks a "posthuman future" that in this case is a future without humans.

Philosophical posthumanism

Philosopher Theodore Schatzki suggests there are two varieties of posthumanism of the philosophical kind:

One, which he calls "objectivism", tries to counter the overemphasis of the subjective, or intersubjective, that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things, because "Humans and nonhumans, it [objectivism] proclaims, codetermine one another", and also claims "independence of (some) objects from human activity and conceptualization".

A second posthumanist agenda is "the prioritization of practices over individuals (or individual subjects)", which, he says, constitute the individual.

There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it "posthumanism", he made an immanent critique of humanism, and then constructed a philosophy that presupposed neither humanist, nor scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created", Dooyeweerd wrote, "and the nature even of our selfhood". Both human and nonhuman alike function subject to a common law-side, which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.

Emergence of philosophical posthumanism

Ihab Hassan, theorist in the academic study of literature, once stated: "Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism." This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.

Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term posthumanism.

Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent right to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.

Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with the philosophy of the Enlightenment period. Posthumanistic views have also been found in the works of Shakespeare. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding the modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish "anthropological universals" that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.

Although Nietzsche's philosophy has been characterized as posthumanist, Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological and technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.

Contemporary posthuman discourse

Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend their subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as "technological posthumanism", visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in contemporary society is thought to complicate this relationship. Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).

While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently human or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regard to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.

Technological versus non-technological

Posthumanism can be divided into non-technological and technological forms.

Non-technological posthumanism

While posthumanization has links with the scholarly methodologies of posthumanism, it is a distinct phenomenon. The rise of explicit posthumanism as a scholarly approach is relatively recent, occurring since the late 1970s; however, some of the processes of posthumanization that it studies are ancient. For example, the dynamics of non-technological posthumanization have existed historically in all societies in which animals were incorporated into families as household pets or in which ghosts, monsters, angels, or semidivine heroes were considered to play some role in the world.

Such non-technological posthumanization has been manifested not only in mythological and literary works but also in the construction of temples, cemeteries, zoos, or other physical structures that were considered to be inhabited or used by quasi- or para-human beings who were not natural, living, biological human beings but who nevertheless played some role within a given society, to the extent that, according to philosopher Francesca Ferrando: "the notion of spirituality dramatically broadens our understanding of the posthuman, allowing us to investigate not only technical technologies (robotics, cybernetics, biotechnology, nanotechnology, among others), but also, technologies of existence."

Technological posthumanism

Some forms of technological posthumanization involve efforts to directly alter the social, psychological, or physical structures and behaviors of the human being through the development and application of technologies relating to genetic engineering or neurocybernetic augmentation; such forms of posthumanization are studied, e.g., by cyborg theory. Other forms of technological posthumanization indirectly "posthumanize" human society through the deployment of social robots or attempts to develop artificial general intelligences, sentient networks, or other entities that can collaborate and interact with human beings as members of posthumanized societies.

The dynamics of technological posthumanization have long been an important element of science fiction; genres such as cyberpunk take them as a central focus. In recent decades, technological posthumanization has also become the subject of increasing attention by scholars and policymakers. The expanding and accelerating forces of technological posthumanization have generated diverse and conflicting responses, with some researchers viewing the processes of posthumanization as opening the door to a more meaningful and advanced transhumanist future for humanity, while other bioconservative critiques warn that such processes may lead to a fragmentation of human society, loss of meaning, and subjugation to the forces of technology.

Common features

Processes of technological and non-technological posthumanization both tend to result in a partial "de-anthropocentrization" of human society, as its circle of membership is expanded to include other types of entities and the position of human beings is decentered. A common theme of posthumanist study is the way in which processes of posthumanization challenge or blur simple binaries, such as those of "human versus non-human", "natural versus artificial", "alive versus non-alive", and "biological versus mechanical".

Relationship with transhumanism

Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.

Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism". Transhumanist thought suggests that humans are not yet posthuman, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)". These contrasting views on the importance of human beings are the main distinctions between the two subjects.

Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture".

Criticism

Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that 'the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history':

This is ontologically critical. Unlike the naming of 'postmodernism' where the 'post' does not infer the end of what it previously meant to be human (just the passing of the dominance of the modern) the posthumanists are playing a serious game where the human, in all its ontological variability, disappears in the name of saving something unspecified about us as merely a motley co-location of individuals and communities.

However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of James's criticism), in part, because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell:

Altruism, mutualism, humanism are the soft and slimy virtues that underpin liberal capitalism. Humanism has always been integrated into discourses of exploitation: colonialism, imperialism, neoimperialism, democracy, and of course, American democratization. One of the serious flaws in transhumanism is the importation of liberal-human values to the biotechno enhancement of the human. Posthumanism has a much stronger critical edge attempting to develop through enactment new understandings of the self and others, essence, consciousness, intelligence, reason, agency, intimacy, life, embodiment, identity and the body.

While many modern leaders of thought accept the nature of ideologies described by posthumanism, some are more skeptical of the term. Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.

Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon, Aime Cesaire, Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or disruption" which posthumanists invite. In other words, given that race in general and blackness in particular constitute the very terms through which human-nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.

Polarization

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Polarization_(waves) Circular...