
Saturday, June 22, 2019

NASA Deep Space Network

From Wikipedia, the free encyclopedia

Deep Space Network
Insignia for the Deep Space Network's 40th anniversary celebrations, 1998.
Alternative names: NASA Deep Space Network
Organization: Interplanetary Network Directorate (NASA / JPL)
Location: United States of America, Spain, Australia
Coordinates: 34°12′3″N 118°10′18″W
Established: October 1, 1958
Website: https://deepspace.jpl.nasa.gov/
Telescopes:
  • Goldstone Deep Space Communications Complex, Barstow, California, United States
  • Madrid Deep Space Communications Complex, Robledo de Chavela, Community of Madrid, Spain
  • Canberra Deep Space Communication Complex, Canberra, Australia

The NASA Deep Space Network (DSN) is a worldwide network of U.S. spacecraft communication facilities, located in the United States (California), Spain (Madrid), and Australia (Canberra), that supports NASA's interplanetary spacecraft missions. It also performs radio and radar astronomy observations for the exploration of the Solar System and the universe, and supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL). Similar networks are run by Russia, China, India, Japan and the European Space Agency.

General information

Deep Space Network Operations Center at JPL, Pasadena (California) in 1993.
 
DSN currently consists of three deep-space communications facilities placed approximately 120 degrees apart around the Earth. They are:
  • the Goldstone Deep Space Communications Complex near Barstow, California;
  • the Madrid Deep Space Communications Complex at Robledo de Chavela, near Madrid, Spain; and
  • the Canberra Deep Space Communication Complex in Canberra, Australia.
Each facility is situated in semi-mountainous, bowl-shaped terrain to help shield against radio frequency interference. The strategic placement with nearly 120-degree separation permits constant observation of spacecraft as the Earth rotates, which helps to make the DSN the largest and most sensitive scientific telecommunications system in the world.

The DSN supports NASA's contribution to the scientific investigation of the Solar System: It provides a two-way communications link that guides and controls various NASA unmanned interplanetary space probes, and brings back the images and new scientific information these probes collect. All DSN antennas are steerable, high-gain, parabolic reflector antennas. The antennas and data delivery systems make it possible to:
  • acquire telemetry data from spacecraft.
  • transmit commands to spacecraft.
  • upload software modifications to spacecraft.
  • track spacecraft position and velocity.
  • perform Very Long Baseline Interferometry observations.
  • measure variations in radio waves for radio science experiments.
  • gather science data.
  • monitor and control the performance of the network.

Operations control center

The antennas at all three DSN complexes communicate directly with the Deep Space Operations Center (also known as the Deep Space Network operations control center) located at the JPL facilities in Pasadena, California.

In the early years, the operations control center did not have a permanent facility. It was a provisional setup with numerous desks and phones installed in a large room near the computers used to calculate orbits. In July 1961, NASA started the construction of the permanent facility, Space Flight Operations Facility (SFOF). The facility was completed in October 1963 and dedicated on May 14, 1964. In the initial setup of the SFOF, there were 31 consoles, 100 closed-circuit television cameras, and more than 200 television displays to support Ranger 6 to Ranger 9 and Mariner 4.

Currently, the operations center personnel at SFOF monitor and direct operations, and oversee the quality of spacecraft telemetry and navigation data delivered to network users. In addition to the DSN complexes and the operations center, a ground communications facility provides communications that link the three complexes to the operations center at JPL, to space flight control centers in the United States and overseas, and to scientists around the world.

Deep space

View from the Earth's north pole, showing the field of view of the main DSN antenna locations. Once a mission gets more than 30,000 km from Earth, it is always in view of at least one of the stations.
 
Tracking vehicles in deep space is quite different from tracking missions in low Earth orbit (LEO). Deep space missions are visible for long periods of time from a large portion of the Earth's surface, and so require few stations (the DSN has only three main sites). These few stations, however, require huge antennas, ultra-sensitive receivers, and powerful transmitters in order to transmit and receive over the vast distances involved. 
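
To make the scale concrete, the sketch below applies the Friis transmission equation to an illustrative outer-planet link. The transmitter power, antenna sizes, frequency, and distance are assumed round numbers, not official DSN figures.

```python
import math

def dish_gain(diameter_m, wavelength_m, efficiency=0.6):
    """Approximate gain of a parabolic dish: G = eff * (pi * D / lambda)**2."""
    return efficiency * (math.pi * diameter_m / wavelength_m) ** 2

def received_power_w(p_tx_w, gain_tx, gain_rx, wavelength_m, distance_m):
    """Friis transmission equation: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))**2."""
    return p_tx_w * gain_tx * gain_rx * (wavelength_m / (4 * math.pi * distance_m)) ** 2

wavelength = 0.036       # ~8.4 GHz X-band downlink
distance = 4.5e12        # roughly 30 AU, an outer-planet range, in metres

p_rx = received_power_w(
    p_tx_w=20.0,                          # ~20 W spacecraft transmitter (assumed)
    gain_tx=dish_gain(3.7, wavelength),   # 3.7 m spacecraft high-gain antenna (assumed)
    gain_rx=dish_gain(70.0, wavelength),  # DSN 70 m ground antenna
    wavelength_m=wavelength,
    distance_m=distance,
)
print(f"received power ~ {p_rx:.1e} W")   # on the order of 1e-17 W
```

Even with a 70 m dish, the received power is in the tens of attowatts, which is why the DSN depends on very low-noise receivers and large apertures.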

Deep space is defined in several different ways. According to a 1975 NASA report, the DSN was designed to communicate with "spacecraft traveling approximately 16,000 km (10,000 miles) from Earth to the farthest planets of the solar system." JPL diagrams state that at an altitude of 30,000 km, a spacecraft is always in the field of view of one of the tracking stations. 

The International Telecommunication Union, which sets aside various frequency bands for deep space and near Earth use, defines "deep space" to start at a distance of 2 million km from the Earth's surface.

This definition means that missions to the Moon, and the Earth–Sun Lagrangian points L1 and L2, are considered near space and cannot use the ITU's deep space bands. Other Lagrangian points may or may not be subject to this rule due to distance.
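
As a minimal illustration of that boundary, the snippet below classifies a few approximate link distances against the ITU's 2 million km threshold quoted above.

```python
ITU_DEEP_SPACE_KM = 2_000_000   # threshold quoted above

def itu_category(distance_km):
    return "deep space" if distance_km >= ITU_DEEP_SPACE_KM else "near Earth"

print(itu_category(384_400))      # Moon -> near Earth
print(itu_category(1_500_000))    # Sun-Earth L2 region -> near Earth
print(itu_category(78_000_000))   # Mars at a close approach -> deep space
```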

History

The forerunner of the DSN was established in January 1958, when JPL, then under contract to the U.S. Army, deployed portable radio tracking stations in Nigeria, Singapore, and California to receive telemetry and plot the orbit of the Army-launched Explorer 1, the first successful U.S. satellite. NASA was officially established on October 1, 1958, to consolidate the separately developing space-exploration programs of the US Army, US Navy, and US Air Force into one civilian organization.

On December 3, 1958, JPL was transferred from the US Army to NASA and given responsibility for the design and execution of lunar and planetary exploration programs using remotely controlled spacecraft. Shortly after the transfer, NASA established the concept of the Deep Space Network as a separately managed and operated communications system that would accommodate all deep space missions, thereby avoiding the need for each flight project to acquire and operate its own specialized space communications network. The DSN was given responsibility for its own research, development, and operation in support of all of its users. Under this concept, it has become a world leader in the development of low-noise receivers; large parabolic-dish antennas; tracking, telemetry, and command systems; digital signal processing; and deep space navigation. The Deep Space Network formally announced its intention to send missions into deep space on Christmas Eve 1963; it has remained in continuous operation in one capacity or another ever since.

The largest antennas of the DSN are often called on during spacecraft emergencies. Almost all spacecraft are designed so normal operation can be conducted on the smaller (and more economical) antennas of the DSN, but during an emergency the use of the largest antennas is crucial. This is because a troubled spacecraft may be forced to use less than its normal transmitter power, attitude control problems may preclude the use of high-gain antennas, and recovering every bit of telemetry is critical to assessing the health of the spacecraft and planning the recovery. The most famous example is the Apollo 13 mission, where limited battery power and inability to use the spacecraft's high-gain antennas reduced signal levels below the capability of the Manned Space Flight Network, and the use of the biggest DSN antennas (and the Australian Parkes Observatory radio telescope) was critical to saving the lives of the astronauts. While Apollo was also a US mission, DSN provides this emergency service to other space agencies as well, in a spirit of inter-agency and international cooperation. For example, the recovery of the Solar and Heliospheric Observatory (SOHO) mission of the European Space Agency (ESA) would not have been possible without the use of the largest DSN facilities.

DSN and the Apollo program

Although normally tasked with tracking unmanned spacecraft, the Deep Space Network (DSN) also contributed to the communication and tracking of Apollo missions to the Moon, though primary responsibility was held by the Manned Space Flight Network (MSFN). The DSN designed the MSFN stations for lunar communication and provided a second antenna at each MSFN site (the MSFN sites were near the DSN sites for just this reason). Two antennas at each site were needed both for redundancy and because the beam widths of the large antennas needed were too small to encompass both the lunar orbiter and the lander at the same time. The DSN also supplied some larger antennas as needed, in particular for television broadcasts from the Moon and for emergency communications such as Apollo 13.

Excerpt from a NASA report describing how the DSN and MSFN cooperated for Apollo:
Another critical step in the evolution of the Apollo Network came in 1965 with the advent of the DSN Wing concept. Originally, the participation of DSN 26-m antennas during an Apollo Mission was to be limited to a backup role. This was one reason why the MSFN 26-m sites were collocated with the DSN sites at Goldstone, Madrid, and Canberra. However, the presence of two, well-separated spacecraft during lunar operations stimulated the rethinking of the tracking and communication problem. One thought was to add a dual S-band RF system to each of the three 26-m MSFN antennas, leaving the nearby DSN 26-m antennas still in a backup role. Calculations showed, though, that a 26-m antenna pattern centered on the landed Lunar Module would suffer a 9-to-12 db loss at the lunar horizon, making tracking and data acquisition of the orbiting Command Service Module difficult, perhaps impossible. It made sense to use both the MSFN and DSN antennas simultaneously during the all-important lunar operations. JPL was naturally reluctant to compromise the objectives of its many unmanned spacecraft by turning three of its DSN stations over to the MSFN for long periods. How could the goals of both Apollo and deep space exploration be achieved without building a third 26-m antenna at each of the three sites or undercutting planetary science missions?
The solution came in early 1965 at a meeting at NASA Headquarters, when Eberhardt Rechtin suggested what is now known as the "wing concept". The wing approach involves constructing a new section or "wing" to the main building at each of the three involved DSN sites. The wing would include a MSFN control room and the necessary interface equipment to accomplish the following:
  1. Permit tracking and two-way data transfer with either spacecraft during lunar operations.
  2. Permit tracking and two-way data transfer with the combined spacecraft during the flight to the Moon.
  3. Provide backup for the collocated MSFN site passive track (spacecraft to ground RF links) of the Apollo spacecraft during trans-lunar and trans-earth phases.
With this arrangement, the DSN station could be quickly switched from a deep-space mission to Apollo and back again. GSFC personnel would operate the MSFN equipment completely independently of DSN personnel. Deep space missions would not be compromised nearly as much as if the entire station's equipment and personnel were turned over to Apollo for several weeks.
The details of this cooperation and operation are available in a two-volume technical report from JPL.
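
A back-of-the-envelope sketch of the beamwidth problem described in the excerpt above is given below. The frequency, beamwidth formula, and off-axis loss approximation are standard rules of thumb, not figures from the report.

```python
import math

# Why one 26 m dish could not serve both the landed LM and the orbiting CSM.
freq_hz = 2.2875e9                      # Apollo unified S-band downlink (approx.)
wavelength_m = 3e8 / freq_hz
dish_diameter_m = 26.0

# Half-power beamwidth of a parabolic dish, in degrees: ~70 * lambda / D
hpbw_deg = 70.0 * wavelength_m / dish_diameter_m

# Angular radius of the Moon as seen from Earth (~0.26 deg): a CSM near the
# lunar limb sits roughly this far off a beam centred on the landing site.
moon_angular_radius_deg = 0.26

# Common approximation for off-boresight loss: ~12 * (theta / HPBW)**2 dB
off_axis_loss_db = 12.0 * (moon_angular_radius_deg / hpbw_deg) ** 2

print(f"half-power beamwidth ~ {hpbw_deg:.2f} deg")
print(f"loss toward the lunar limb ~ {off_axis_loss_db:.1f} dB")
# The beam is about as wide as the lunar disc itself, so pointing at one
# spacecraft costs several dB toward the other -- the effect the report
# quantifies as a 9-to-12 dB loss at the lunar horizon.
```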

Management

The network is a NASA facility and is managed and operated for NASA by JPL, which is part of the California Institute of Technology (Caltech). The Interplanetary Network Directorate (IND) manages the program within JPL and is charged with the development and operation of it. The IND is considered to be JPL's focal point for all matters relating to telecommunications, interplanetary navigation, information systems, information technology, computing, software engineering, and other relevant technologies. While the IND is best known for its duties relating to the Deep Space Network, the organization also maintains the JPL Advanced Multi-Mission Operations System (AMMOS) and JPL's Institutional Computing and Information Services (ICIS).

Harris Corporation is under a 5-year contract to JPL for the DSN's operations and maintenance. Harris has responsibility for managing the Goldstone complex, operating the DSOC, and for DSN operations, mission planning, operations engineering, and logistics.

Antennas

70 m antenna at Goldstone, California.
 
Each complex consists of at least four deep space terminals equipped with ultra-sensitive receiving systems and large parabolic-dish antennas. These are:
  • One 34-meter (112 ft) diameter High Efficiency antenna (HEF).
  • Two or more 34-meter (112 ft) Beam waveguide antennas (BWG) (three operational at the Goldstone Complex, two at the Robledo de Chavela complex (near Madrid), and two at the Canberra Complex).
  • One 26-meter (85 ft) antenna.
  • One 70-meter (230 ft) antenna.
Five of the 34-meter (112 ft) beam waveguide antennas were added to the system in the late 1990s. Three were located at Goldstone, and one each at Canberra and Madrid. A second 34-meter (112 ft) beam waveguide antenna (the network's sixth) was completed at the Madrid complex in 2004.

In order to meet the current and future needs of deep space communication services, a number of new Deep Space Station antennas need to be built at the existing Deep Space Network sites. At the Canberra Deep Space Communication Complex the first of these was completed October 2014 (DSS35), with a second becoming operational in October 2016 (DSS36). Construction has also begun on an additional antenna at the Madrid Deep Space Communications Complex. By 2025, the 70 meter antennas at all three locations will be decommissioned and replaced with 34 meter BWG antennas that will be arrayed. All systems will be upgraded to have X-band uplink capabilities and both X and Ka-band downlink capabilities.

Current signal processing capabilities

The general capabilities of the DSN have not substantially changed since the beginning of the Voyager Interstellar Mission in the early 1990s. However, many advancements in digital signal processing, arraying and error correction have been adopted by the DSN. 

The ability to array several antennas was incorporated to improve the data returned from the Voyager 2 Neptune encounter, and was used extensively for the Galileo spacecraft when its high-gain antenna did not deploy correctly.

Since the Galileo mission, the DSN has been able to array the 70-meter (230 ft) dish antenna at the Deep Space Network complex in Goldstone, California, with an identical antenna located in Australia, in addition to two 34-meter (112 ft) antennas at the Canberra complex. The California and Australia sites were used concurrently to pick up communications with Galileo.

Arraying of antennas within the three DSN locations is also used. For example, a 70-meter (230 ft) dish antenna can be arrayed with a 34-meter dish. For especially vital missions, like Voyager 2, non-DSN facilities normally used for radio astronomy can be added to the array. In particular, the Canberra 70-meter (230 ft) dish can be arrayed with the Parkes Radio Telescope in Australia; and the Goldstone 70-meter dish can be arrayed with the Very Large Array of antennas in New Mexico. Also, two or more 34-meter (112 ft) dishes at one DSN location are commonly arrayed together. 
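
As a rough illustration of what arraying buys, the sketch below compares a 70 m dish arrayed with two 34 m dishes against the 70 m dish alone, assuming equal aperture efficiency and system noise temperature for every antenna.

```python
import math

def relative_snr_db(dish_diameters_m, reference_diameter_m=70.0):
    """Downlink SNR scales roughly with total collecting area (proportional to D**2)."""
    total_area = sum(d ** 2 for d in dish_diameters_m)
    return 10 * math.log10(total_area / reference_diameter_m ** 2)

# Galileo-era style array: a 70 m dish plus two 34 m dishes, vs a lone 70 m dish.
print(f"{relative_snr_db([70.0, 34.0, 34.0]):.1f} dB over a single 70 m dish")  # ~1.7 dB
```

A gain of a couple of decibels may sound small, but at deep-space signal levels it translates directly into higher usable data rates.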

All the stations are remotely operated from a centralized Signal Processing Center at each complex. These Centers house the electronic subsystems that point and control the antennas, receive and process the telemetry data, transmit commands, and generate the spacecraft navigation data. Once the data is processed at the complexes, it is transmitted to JPL for further processing and for distribution to science teams over a modern communications network. 

Especially at Mars, there are often many spacecraft within the beam width of an antenna. For operational efficiency, a single antenna can receive signals from multiple spacecraft at the same time. This capability is called Multiple Spacecraft Per Aperture, or MSPA. Currently the DSN can receive up to 4 spacecraft signals at the same time, or MSPA-4. However, apertures cannot currently be shared for uplink. When two or more high power carriers are used simultaneously, very high order intermodulation products fall in the receiver bands, causing interference to the much (25 orders of magnitude) weaker received signals. Therefore only one spacecraft at a time can get an uplink, though up to 4 can be received.
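
The arithmetic behind that power disparity is sketched below with assumed, order-of-magnitude values for the uplink and downlink powers.

```python
import math

uplink_power_w = 20e3        # high-power DSN uplink carrier (assumed ~20 kW)
downlink_power_w = 1e-21     # very weak received carrier (assumed)

ratio = uplink_power_w / downlink_power_w
print(f"uplink/downlink power ratio ~ 10^{math.log10(ratio):.0f}")   # ~10^25
print(f"that is ~{10 * math.log10(ratio):.0f} dB of separation")     # ~250 dB
# Even a minuscule fraction of the uplink power leaking into the receive band
# as intermodulation products can bury a signal this weak.
```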

Network limitations and challenges

70m antenna in Robledo de Chavela, Community of Madrid, Spain

There are a number of limitations to the current DSN, and a number of challenges going forward.
  • The name Deep Space Network is something of a misnomer, as there are no current or future plans for dedicated communication satellites anywhere in space to handle multiparty, multi-mission use. All of the transmitting and receiving equipment is Earth-based, so data transmission rates to and from all spacecraft and space probes are severely constrained by their distances from Earth.
  • The need to support "legacy" missions that have remained operational beyond their original lifetimes and are still returning scientific data. Programs such as Voyager have been operating long past their original mission termination date. They also need some of the largest antennas.
  • Replacing major components can cause problems as it can leave an antenna out of service for months at a time.
  • The older 70 m and HEF antennas are reaching the end of their lives. At some point these will need to be replaced. The leading candidate for replacing the 70 m antennas had been an array of smaller dishes, but more recently the decision was taken to expand the provision of 34-meter (112 ft) BWG antennas at each complex to a total of four.
  • New spacecraft intended for missions beyond geocentric orbits are being equipped to use the beacon mode service, which allows such missions to operate without the DSN most of the time.

DSN and radio science

Illustration of Juno and Jupiter. Juno is in a polar orbit that takes it close to Jupiter as it passes from north to south, getting a view of both poles. During the gravity science (GS) experiment it must point its antenna at the Deep Space Network on Earth to pick up a special signal sent from the DSN.
 
The DSN forms one portion of the radio sciences experiment included on most deep space missions, where radio links between spacecraft and Earth are used to investigate planetary science, space physics and fundamental physics. The experiments include radio occultations, gravity field determination and celestial mechanics, bistatic scattering, doppler wind experiments, solar corona characterization, and tests of fundamental physics.

For example, the Deep Space Network forms one component of the gravity science experiment on Juno. This relies on special communication hardware on Juno and uses its communication system. The DSN radiates a Ka-band uplink, which is picked up by Juno's Ka-band communication system, processed by a special communication box called KaTS, and then sent back to the DSN. This allows the velocity of the spacecraft over time to be determined with a level of precision that permits a more accurate determination of Jupiter's gravity field.
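
A minimal sketch of the underlying Doppler relationship is shown below; the carrier frequency and frequency shift are illustrative values, not Juno measurements.

```python
# A tiny shift in the received Ka-band frequency maps to a change in
# line-of-sight velocity via the non-relativistic Doppler relation.
C = 299_792_458.0            # speed of light, m/s
F_KA_HZ = 32.0e9             # Ka-band carrier, roughly 32 GHz (assumed)

def line_of_sight_velocity(freq_shift_hz, carrier_hz=F_KA_HZ):
    """Non-relativistic Doppler: v = c * df / f."""
    return C * freq_shift_hz / carrier_hz

# A 1 mHz shift on a 32 GHz carrier corresponds to roughly 10 micrometres per
# second, the kind of sensitivity needed to map a planetary gravity field.
print(f"{line_of_sight_velocity(1e-3) * 1e6:.1f} um/s")
```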

Another radio science experiment is REX on the New Horizons spacecraft to Pluto-Charon. REX received a signal from Earth as it was occulted by Pluto to take various measurements of that system of bodies.

Friday, June 21, 2019

Telerobotics

From Wikipedia, the free encyclopedia

Justus security robot patrolling in Kraków
 
Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using wireless networks (such as Wi-Fi, Bluetooth, or the Deep Space Network) or tethered connections. It is a combination of two major subfields, teleoperation and telepresence.

Teleoperation

Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.

Early Telerobotics (Rosenberg, 1992) US Air Force - Virtual Fixtures system
 
Teleoperation is the most standard term, used both in research and technical communities, for referring to operation at a distance. This is opposed to "telepresence", which refers to the subset of telerobotic systems configured with an immersive interface such that the operator feels present in the remote environment, projecting his or her presence through the remote robot. One of the first telepresence systems that enabled operators to feel present in a remote environment through all of the primary senses (sight, sound, and touch) was the Virtual Fixtures system developed at US Air Force Research Laboratories in the early 1990s. The system enabled operators to perform dexterous tasks (inserting pegs into holes) remotely such that the operator would feel as if he or she was inserting the pegs when in fact it was a robot remotely performing the task.

A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. In simple cases the controlling operator's command actions correspond directly to actions in the device controlled, as for example in a radio controlled model aircraft or a tethered deep submergence vehicle. Where communications delays make direct control impractical (such as a remote planetary rover), or it is desired to reduce operator workload (as in a remotely controlled spy or attack aircraft), the device will not be controlled directly, instead being commanded to follow a specified path. At increasing levels of sophistication the device may operate somewhat independently in matters such as obstacle avoidance, also commonly employed in planetary rovers. 
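
The sketch below, using approximate distances, shows why the round-trip light time forces this shift from direct control to commanded paths.

```python
# Round-trip light time sets a hard floor on teleoperation latency.
C_KM_S = 299_792.458

def round_trip_delay_s(distance_km):
    return 2 * distance_km / C_KM_S

print(f"Moon: {round_trip_delay_s(384_400):.1f} s")      # ~2.6 s: real-time driving is workable
print(f"Mars: {round_trip_delay_s(78_000_000):.0f} s")   # ~520 s at a close approach: it is not
```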

Devices designed to allow the operator to control a robot at a distance are sometimes called telecheric robotics. 

Two major components of telerobotics and telepresence are the visual and control applications. A remote camera provides a visual representation of the view from the robot. Placing the robotic camera in a perspective that allows intuitive control is a long-standing idea, anticipated in science fiction (Robert A. Heinlein's Waldo, 1942), but it has borne fruit only recently, as speed, resolution, and bandwidth have only recently become adequate to the task of controlling the robot camera in a meaningful way. Using a head-mounted display, control of the camera can be facilitated by tracking the operator's head movements.

This only works if the user is comfortable with the latency of the system, that is, the lag between movements and the corresponding visual response. Issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of the movement and response, and optical distortion due to the camera lens and head-mounted display lenses can cause 'simulator sickness', which is exacerbated when visual motion is presented without matching vestibular stimulation.

Mismatches between the user's motions and the system's response, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed, can contribute to these problems.

The same technology can control the robot, but then the eye–hand coordination issues become even more pervasive through the system, and user tension or frustration can make the system difficult to use.

The tendency in building robots has been to minimize the degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.

Interfaces

A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobotics driven by internet connections is often of this type. A valuable modification to MMK is a joystick, which provides a more intuitive navigation scheme for planar robot movement.
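
A minimal sketch of such a joystick mapping is shown below; the function, axis conventions, and speed limits are hypothetical, chosen only to illustrate the idea.

```python
def joystick_to_velocity(axis_y, axis_x, max_speed_m_s=0.5, max_turn_rad_s=1.0,
                         deadzone=0.1):
    """Map joystick axes in [-1, 1] to (linear, angular) velocity commands."""
    def scaled(value, limit):
        # Ignore tiny deflections (deadzone), otherwise scale to the limit.
        return 0.0 if abs(value) < deadzone else value * limit
    return scaled(axis_y, max_speed_m_s), scaled(axis_x, max_turn_rad_s)

print(joystick_to_velocity(0.8, -0.2))   # drive forward while turning left
```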

Dedicated telepresence setups utilize a head mounted display with either single or dual eye display, and an ergonomically matched interface with joystick and related button, slider, trigger controls.

Other interfaces merge fully immersive virtual reality interfaces and real-time video instead of computer-generated images. Another example would be to use an omnidirectional treadmill with an immersive display system so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as Infrared thermal imaging, real-time threat assessment, or device schematics.

Applications

Space

NASA HERRO (Human Exploration using Real-time Robotic Operations) telerobotic exploration concept
 
With the exception of the Apollo program, most space exploration has been conducted with telerobotic space probes. Most space-based astronomy, for example, has been conducted with telerobotic telescopes. The Russian Lunokhod-1 mission put a remotely driven rover on the Moon, which was driven in real time (with a 2.5-second lightspeed time delay) by human operators on the ground. Robotic planetary exploration programs use spacecraft that are programmed by humans at ground stations, essentially achieving a long-time-delay form of telerobotic operation. Recent noteworthy examples include the Mars Exploration Rovers (MER) and the Curiosity rover. In the case of the MER mission, the spacecraft and the rover operated on stored programs, with the rover drivers on the ground programming each day's operation. The International Space Station (ISS) uses a two-armed telemanipulator called Dextre. More recently, the humanoid robot Robonaut has been added to the space station for telerobotic experiments.

NASA has proposed use of highly capable telerobotic systems for future planetary exploration using human exploration from orbit. In a concept for Mars Exploration proposed by Landis, a precursor mission to Mars could be done in which the human vehicle brings a crew to Mars, but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface. Such a system would go beyond the simple long time delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations.

Telepresence and videoconferencing

iRobot Ava 500, an autonomous roaming telepresence robot.
 
The prevalence of high quality video conferencing using mobile devices, tablets and portable computers has enabled a drastic growth in telepresence robots to help give a better sense of remote physical presence for communication and collaboration in the office, home, school, etc. when one cannot be there in person. The robot avatar can move or look around at the command of the remote person.

There have been two primary approaches, both of which utilize videoconferencing on a display:
  1. Desktop telepresence robots, which typically mount a phone or tablet on a motorized desktop stand so the remote person can look around a remote environment by panning and tilting the display.
  2. Drivable telepresence robots, which typically contain a display (integrated or a separate phone or tablet) mounted on a roaming base.
Some examples of desktop telepresence robots include Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot and InTouch Health, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive. More modern roaming telepresence robots may include the ability to operate autonomously: the robots can map out the space and avoid obstacles while driving themselves between rooms and their docking stations.

Traditional videoconferencing systems and telepresence rooms generally offer pan/tilt/zoom cameras with far-end control. The ability for the remote user to turn the device's head and look around naturally during a meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created a new category of desktop telepresence robots that concentrate on this feature to deliver a much lower-cost robot. These desktop telepresence robots, also called head-and-neck robots, allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.

Marine applications

Marine remotely operated vehicles (ROVs) are widely used to work in water too deep or too dangerous for divers. They repair offshore oil platforms and attach cables to sunken ships to hoist them. They are usually attached by a tether to a control center on a surface ship. The wreck of the Titanic was explored by an ROV, as well as by a crew-operated vessel.

Telemedicine

Additionally, a lot of telerobotic research is being done in the field of medical devices and minimally invasive surgical systems. With a robotic surgery system, a surgeon can work inside the body through tiny holes just big enough for the manipulator, with no need to open up the chest cavity to allow hands inside.

Other applications

Remote manipulators are used to handle radioactive materials. 

Telerobotics has been used in installation art pieces; Telegarden is an example of a project where a robot was operated by users through the Web.

Experiment

From Wikipedia, the free encyclopedia

Even very young children perform rudimentary experiments to learn about the world and how things work.
 
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments vary greatly in goal and scale, but always rely on repeatable procedure and logical analysis of the results. Natural experimental studies also exist.

A child may carry out basic experiments to understand gravity, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists who hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.

Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled (accounted for by the control measurements) and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variable.

Overview

In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.

An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity. An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment.

In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.

In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition and one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.

There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.

History

One of the first methodical approaches to experiments in the modern sense is visible in the works of the Arab mathematician and scholar Ibn al-Haytham. He conducted his experiments in the field of optics, going back to optical and mathematical problems in the works of Ptolemy, and controlled them through self-criticism, reliance on the visible results of the experiments, and a critical attitude toward earlier results. He counts as one of the first scholars to use an inductive-experimental method for achieving results. In his book "Optics" he describes the fundamentally new approach to knowledge and research in an experimental sense:
"We should, that is, recommence the inquiry into its principles and premisses, beginning our investigation with an inspection of the things that exist and a survey of the conditions of visible objects. We should distinguish the properties of particulars, and gather by induction what pertains to the eye when vision takes place and what is found in the manner of sensation to be uniform, unchanging, manifest and not subject to doubt. After which we should ascend in our inquiry and reasonings, gradually and orderly, criticizing premisses and exercising caution in regard to conclusions – our aim in all that we make subject to inspection and review being to employ justice, not to follow prejudice, and to take care in all that we judge and criticize that we seek the truth and not to be swayed by opinion. We may in this way eventually come to the truth that gratifies the heart and gradually and carefully reach the end at which certainty appears; while through criticism and caution we may seize the truth that dispels disagreement and resolves doubtful matters. For all that, we are not free from that human turbidity which is in the nature of man; but we must do our best with what we possess of human power. From God we derive support in all things."
According to his explanation, a strictly controlled test execution with a sensibility for the subjectivity and susceptibility of outcomes due to the nature of man is necessary. Furthermore, a critical view on the results and outcomes of earlier scholars is necessary:
"It is thus the duty of the man who studies the writings of scientists, if learning the truth is his goal, to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency."
Thus, a comparison of earlier results with the experimental results is necessary for an objective experiment, with the visible results being more important. In the end, this may mean that an experimental researcher must find enough courage to discard traditional opinions or results, especially if these results are not experimental but derive from a logical or mental derivation. In this process of critical consideration, the researcher should not forget that he tends toward subjective opinions, through "prejudices" and "leniency", and thus has to be critical about his own way of building hypotheses.

Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an influential supporter of experimental science in the English renaissance. He disagreed with the method of answering scientific questions by deduction, similar to Ibn al-Haytham, and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession." Bacon wanted a method that relied on repeatable observations, or experiments. Notably, he was the first to set out the scientific method as we understand it today.
There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments.
In the centuries that followed, people who applied the scientific method in different areas made important advances and discoveries. For example, Galileo Galilei (1564–1642) measured time accurately and experimented to make precise measurements and conclusions about the speed of a falling body. Antoine Lavoisier (1743–1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry, and to develop the theory of conservation of mass (matter). Louis Pasteur (1822–1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible.

A considerable amount of progress on the design and analysis of experiments occurred in the early 20th century, with contributions from statisticians such as Ronald Fisher (1890-1962), Jerzy Neyman (1894-1981), Oscar Kempthorne (1919-2000), Gertrude Mary Cox (1900-1978), and William Gemmell Cochran (1909-1980), among others.

Types of experiment

Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study. In some disciplines (e.g., psychology or political science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects to neutralize experimenter bias, and ensures, over a large number of iterations of the experiment, that it controls for all confounding factors.

Controlled experiments

A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group); the one receiving the placebo or regular treatment would be the control group. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and to have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure similar to the actual experimental test but is known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to subtract from the test sample results. Sometimes the positive control takes the form of a standard curve.

An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. The results for the diluted test samples can then be compared to a standard curve built from the positive controls to estimate the amount of protein in the unknown sample.
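
A small sketch of that standard-curve calculation is given below, with invented absorbance readings: a least-squares line is fitted to the positive-control dilutions and then inverted to estimate the unknown concentration.

```python
# Positive-control dilutions of the protein standard (mg/mL) and their
# measured absorbances, after subtracting the negative-control "background".
known_conc = [0.25, 0.5, 1.0, 1.5, 2.0]
absorbance = [0.12, 0.23, 0.48, 0.70, 0.95]

# Ordinary least-squares fit of absorbance = slope * concentration + intercept.
n = len(known_conc)
mean_x = sum(known_conc) / n
mean_y = sum(absorbance) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(known_conc, absorbance))
         / sum((x - mean_x) ** 2 for x in known_conc))
intercept = mean_y - slope * mean_x

unknown_absorbance = 0.55                      # duplicate readings, averaged
estimated_conc = (unknown_absorbance - intercept) / slope
print(f"estimated protein concentration ~ {estimated_conc:.2f} mg/mL")
```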

Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups. 

Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he is being treated. 
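
The sketch below, using simulated data, illustrates why random assignment tends to produce the probabilistically equivalent groups described above: a baseline covariate such as age ends up with similar means in both arms.

```python
import random
import statistics

random.seed(0)
ages = [random.gauss(40, 12) for _ in range(200)]   # simulated volunteer ages

# Randomly split the volunteers into two equally sized arms.
random.shuffle(ages)
treatment, control = ages[:100], ages[100:]

print(f"treatment mean age: {statistics.mean(treatment):.1f}")
print(f"control mean age:   {statistics.mean(control):.1f}")
# With 100 subjects per arm the two means typically differ by only a year or two.
```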

In human experiments, researchers may give a subject (person) a stimulus that the subject responds to. The goal of the experiment is to measure the response to the stimulus by a test method

Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
 
In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations – or hypotheses. These hypotheses suggest reasons to explain a phenomenon, or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out and the results analysed to confirm, refute, or define the accuracy of the hypotheses. 
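
For the bread-baking example, a toy analysis along these lines might look like the sketch below; the rise heights are invented, and the effect is summarized as a difference in means with a rough standard error.

```python
import statistics

# Invented rise heights (cm) for loaves baked with two strains of yeast.
strain_a = [4.1, 4.4, 3.9, 4.6, 4.2, 4.3]
strain_b = [3.6, 3.8, 3.5, 3.9, 3.7, 3.4]

diff = statistics.mean(strain_a) - statistics.mean(strain_b)
se = (statistics.variance(strain_a) / len(strain_a)
      + statistics.variance(strain_b) / len(strain_b)) ** 0.5

print(f"estimated treatment effect: {diff:.2f} cm (SE ~ {se:.2f})")
# If the effect is several standard errors from zero, the null hypothesis of
# "no difference between strains" is hard to sustain.
```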

Experiments can be also designed to estimate spillover effects onto nearby untreated units.

Natural experiments

The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult or impossible. In this case researchers resort to natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study. 

Much research in several science disciplines, including economics, political science, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "Stars are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a star. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. Observations of the appearance of the moons of Jupiter were slightly delayed when Jupiter was farther from Earth, as opposed to when Jupiter was closer to Earth; and this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed.

Field experiments

Field experiments are so named to distinguish them from laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.

Contrast with observational study

The black box model for observation (input and output are observables). When there is feedback under some observer's control, as illustrated, the observation is also an experiment.
 
An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). For an observational science to be valid, the experimenter must know and account for confounding factors. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data. 

Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.

A particular problem with observational studies involving human subjects is the great difficulty attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and results are not meaningful if a covariate is neither randomized nor included in the model.

To avoid conditions that render an experiment far less useful, physicians conducting medical trials – say for U.S. Food and Drug Administration approval – quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with complicated statistical methods such as propensity score matching methods, which require large populations of subjects and extensive information on covariates. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing.

Ethics

By placing the distribution of the independent variable(s) under the control of the researcher, an experiment – particularly when it involves human subjects – introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example, in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice. It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies to understand the effects of those factors. 

Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects.

Experimental method in law

The experimental method can be useful in solving juridical problems.

Computer-assisted surgery

From Wikipedia, the free encyclopedia

Computer-assisted surgery (CAS) represents a surgical concept and set of methods that use computer technology for surgical planning, and for guiding or performing surgical interventions. CAS is also known as computer-aided surgery, computer-assisted intervention, image-guided surgery, and surgical navigation; these terms are more or less synonymous with CAS. CAS has been a leading factor in the development of robotic surgery.

General principles

Image gathering ("segmentation") on the LUCAS workstation

Creating a virtual image of the patient

The most important component of CAS is the development of an accurate model of the patient. This can be done with a number of medical imaging technologies, including CT, MRI, X-rays, and ultrasound, among others. To generate this model, the anatomical region to be operated on has to be scanned and uploaded into the computer system. Several scanning methods can be employed, with the datasets combined through data fusion techniques. The final objective is the creation of a 3D dataset that reproduces the exact geometrical situation of the normal and pathological tissues and structures of that region. Of the available scanning methods, CT is preferred, because MRI data sets are known to have volumetric deformations that may lead to inaccuracies. An example dataset might consist of 180 CT slices, 1 mm apart, each of 512 by 512 pixels. The contrasts within the 3D dataset (with its tens of millions of voxels) reveal the detail of soft versus hard tissue structures, and thus allow a computer to differentiate, and visually separate for a human, the different tissues and structures. The image data taken from a patient will often include intentional landmark features, so that the virtual dataset can later be realigned against the actual patient during surgery.
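
The "tens of millions" figure follows directly from the example dimensions, and the volume itself is commonly held as a 3D array together with its physical spacing (the array layout, data type, and in-plane pixel size below are illustrative assumptions):

```python
import numpy as np

# Example dimensions from the text: 180 axial CT slices, 1 mm apart,
# each 512 x 512 pixels.
slices, rows, cols = 180, 512, 512
print(slices * rows * cols)   # 47,185,920 voxels -> "tens of millions"

# One common in-memory representation: a 3D array of intensity values
# (e.g. Hounsfield units for CT) plus the physical spacing per axis.
volume = np.zeros((slices, rows, cols), dtype=np.int16)
spacing_mm = (1.0, 0.5, 0.5)  # slice spacing and an assumed in-plane pixel size

def voxel_to_physical(k, i, j):
    """Map a voxel index to a position (in mm) in the scanner frame;
    registration and navigation work with such physical coordinates."""
    return (k * spacing_mm[0], i * spacing_mm[1], j * spacing_mm[2])
```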

Image analysis and processing

Image analysis involves the manipulation of the patient's 3D model to extract relevant information from the data. Using the differing contrast levels of the different tissues within the imagery, a model can, for example, be changed to show only hard structures such as bone, or to view the flow of arteries and veins through the brain.
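
For CT data, where intensities are calibrated in Hounsfield units, isolating hard structures can be as simple as thresholding the volume (a minimal sketch; the threshold value and the randomly generated volume are placeholders for real scanner data):

```python
import numpy as np

# Placeholder CT volume in Hounsfield units (HU); in practice this is
# loaded from the scanner rather than generated.
rng = np.random.default_rng(2)
volume_hu = rng.integers(-1000, 2000, size=(64, 128, 128)).astype(np.int16)

# Cortical bone lies roughly above ~300 HU, while soft tissue sits much
# lower, so a simple threshold already separates "hard" from "soft".
BONE_THRESHOLD_HU = 300
bone_mask = volume_hu >= BONE_THRESHOLD_HU

# Keep bone intensities and set everything else to the value of air,
# so a renderer shows only the bony anatomy.
bone_only = np.where(bone_mask, volume_hu, np.int16(-1000))
```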

Diagnostic, preoperative planning, surgical simulation

Using specialized software, the gathered dataset can be rendered as a virtual 3D model of the patient; this model can be easily manipulated by a surgeon to provide views from any angle and at any depth within the volume. Thus the surgeon can better assess the case and establish a more accurate diagnosis. Furthermore, the surgical intervention can be planned and simulated virtually before the actual surgery takes place (computer-aided surgical simulation, CASS). Using dedicated software, the surgical robot is then programmed to carry out the planned actions during the actual surgical intervention.

Surgical navigation

In computer-assisted surgery, the actual intervention is referred to as surgical navigation. The surgeon uses special instruments that are tracked by the navigation system. The position of a tracked instrument in relation to the patient's anatomy is shown on images of the patient as the surgeon moves the instrument. The surgeon thus uses the system to 'navigate' the location of an instrument. The feedback the system provides on the instrument's location is particularly useful in situations where the surgeon cannot actually see the tip of the instrument, such as in minimally invasive surgeries.
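
Displaying a tracked instrument on the patient's images requires a registration step that maps the tracker's coordinate frame into the image coordinate frame, typically by pairing landmark points touched on the patient with the same landmarks in the scan. Below is a minimal sketch of the standard SVD-based rigid (Kabsch) solution, with made-up landmark coordinates:

```python
import numpy as np

def rigid_registration(tracker_pts, image_pts):
    """Least-squares rigid transform (R, t) such that R @ tracker + t ≈ image."""
    ct, ci = tracker_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (tracker_pts - ct).T @ (image_pts - ci)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = ci - R @ ct
    return R, t

# Hypothetical fiducial landmarks touched with a tracked pointer (tracker
# frame) and the same landmarks identified in the CT volume (image frame).
tracker_pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
image_pts = np.array([[5.0, 2, 1], [5, 12, 1], [-5, 2, 1], [5, 2, 11]])

R, t = rigid_registration(tracker_pts, image_pts)

# Once registered, any tracked instrument tip can be overlaid on the images.
tip_tracker = np.array([2.0, 3.0, 4.0])
tip_image = R @ tip_tracker + t
```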

Robotic surgery

Robotic surgery is a term used for correlated actions of a surgeon and a surgical robot (that has been programmed to carry out certain actions during the preoperative planning procedure). A surgical robot is a mechanical device (generally looking like a robotic arm) that is computer-controlled. Robotic surgery can be divided into three types, depending on the degree of surgeon interaction during the procedure: supervisory-controlled, telesurgical, and shared-control. In a supervisory-controlled system, the procedure is executed solely by the robot, which will perform the pre-programmed actions. A telesurgical system, also known as remote surgery, requires the surgeon to manipulate the robotic arms during the procedure rather than allowing the robotic arms to work from a predetermined program. With shared-control systems, the surgeon carries out the procedure with the use of a robot that offers steady-hand manipulations of the instrument. In most robots, the working mode can be chosen for each separate intervention, depending on the surgical complexity and the particularities of the case.

Applications

Computer-assisted surgery is the beginning of a revolution in surgery. It already makes a great difference in high-precision surgical domains, but it is also used in standard surgical procedures.

Computer-assisted neurosurgery

Telemanipulators were first used in neurosurgery in the 1980s. This allowed greater development of brain microsurgery (compensating for the surgeon's physiological tremor by a factor of ten) and increased the accuracy and precision of the intervention. It also opened the door to minimally invasive brain surgery, further reducing the risk of post-surgical morbidity by avoiding accidental damage to adjacent centers.
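
The tremor compensation mentioned above is usually described as a combination of motion scaling and filtering of the high-frequency tremor component; a simplified, hypothetical sketch (the scale factor, filter constant, and trajectory are illustrative, not taken from any specific system):

```python
import numpy as np

def telemanipulator_filter(hand_positions, scale=0.1, alpha=0.2):
    """Smooth the surgeon's hand trajectory with an exponential moving
    average (suppressing high-frequency tremor) and scale it down 10:1."""
    smoothed = hand_positions[0]
    tool_positions = []
    for p in hand_positions:
        smoothed = alpha * p + (1 - alpha) * smoothed  # simple low-pass filter
        tool_positions.append(scale * smoothed)        # motion scaling
    return np.array(tool_positions)

# Hypothetical 1-D hand trajectory: slow intended motion plus ~10 Hz tremor,
# sampled at 200 Hz for 2 seconds.
t = np.linspace(0, 2, 400)
hand = 5.0 * t + 0.5 * np.sin(2 * np.pi * 10 * t)
tool = telemanipulator_filter(hand)
```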

Computer-assisted oral and maxillofacial surgery

Bone segment navigation is the modern surgical approach in orthognathic surgery (correction of the anomalies of the jaws and skull), in temporo-mandibular joint (TMJ) surgery, or in the reconstruction of the mid-face and orbit.

It is also used in implantology, where the available bone can be seen and the position, angulation and depth of the implants can be simulated before the surgery. During the operation, the surgeon is guided visually and by sound alerts. IGI (Image Guided Implantology) is one of the navigation systems which uses this technology.

Guided Implantology

New therapeutic concepts such as guided surgery are being developed and applied in the placement of dental implants. The prosthetic rehabilitation is also planned and performed in parallel with the surgical procedures. The planning steps are in the foreground and carried out in cooperation between the surgeon, the dentist and the dental technician. Edentulous patients, in either one or both jaws, benefit as the time of treatment is reduced.

In edentulous patients, conventional denture support is often compromised by moderate bone atrophy, even if the dentures are constructed based on correct anatomic morphology.

Using cone beam computed tomography, the patient and the existing prosthesis are scanned. Furthermore, the prosthesis alone is also scanned. Glass pearls of defined diameter are placed in the prosthesis and used as reference points for the upcoming planning. The resulting data is processed and the position of the implants determined. The surgeon, using specially developed software, plans the implants based on prosthetic concepts, taking the anatomic morphology into consideration. After the planning of the surgical part is completed, a CAD/CAM surgical guide for implant placement is constructed. The mucosa-supported surgical splint ensures the exact placement of the implants in the patient. In parallel with this step, the new implant-supported prosthesis is constructed.

The dental technician, using the data resulting from the previous scans, manufactures a model representing the situation after the implant placement. The prosthetic components, the abutments, are already prefabricated. Their length and inclination can be chosen. The abutments are connected to the model at a position chosen in consideration of the prosthetic situation. The exact position of the abutments is registered. The dental technician can now manufacture the prosthesis.

The fit of the surgical splint is verified clinically. After that, the splint is attached using a three-point support pin system. Prior to the attachment, irrigation with a chemical disinfectant is advised. The pins are driven through defined sheaths from the vestibular to the oral side of the jaw. The ligament anatomy should be considered, and if necessary decompensation can be achieved with minimal surgical interventions. The proper fit of the template is crucial and should be maintained throughout the whole treatment. Regardless of the mucosal resilience, a correct and stable attachment is achieved through the bone fixation. Access to the jaw can now only be gained through the sleeves embedded in the surgical template. Using specific burs passed through the sleeves, the mucosa is removed. Every bur used carries a sleeve compatible with the sleeves in the template, which ensures that the final position is reached but no further advancement into the alveolar ridge can take place. The further procedure is very similar to traditional implant placement. The pilot hole is drilled and then expanded. With the aid of the splint, the implants are finally placed. After that, the splint can be removed.

With the aid of a registration template, the abutments can be attached and connected to the implants at the defined position. No fewer than a pair of abutments should be connected simultaneously, to avoid any discrepancy. An important advantage of this technique is the parallel positioning of the abutments. A radiological control is necessary to verify the correct placement and connection of implant and abutment.

In a further step, abutments are covered by gold cone caps, which represent the secondary crowns. Where necessary, the transition of the gold cone caps to the mucosa can be isolated with rubber dam rings. 

The new prosthesis corresponds to a conventional total prosthesis, but its base contains cavities so that the secondary crowns can be incorporated. The prosthesis is checked at the terminal position and corrected if needed. The cavities are filled with a self-curing cement and the prosthesis is placed in the terminal position. After the self-curing process, the gold caps are definitively cemented in the prosthesis cavities and the prosthesis can be detached. Excess cement may be removed, and some corrections such as polishing or underfilling around the secondary crowns may be necessary. The new prosthesis is fitted using a construction of telescopic double cone crowns. At the end position, the prosthesis seats onto the abutments to ensure an adequate hold.

At the same sitting, the patient receives the implants and the prosthesis. An interim prosthesis is not necessary. The extent of the surgery is kept to a minimum. Due to the application of the splint, reflection of the soft tissues is not needed. The patient experiences less bleeding, swelling and discomfort. Complications such as injury to neighbouring structures are also avoided. Using 3D imaging during the planning phase, communication between the surgeon, dentist and dental technician is strongly supported, and any problems can be easily detected and eliminated. Each specialist accompanies the whole treatment and can interact at every stage. As the end result is already planned and all surgical intervention is carried out according to the initial plan, the possibility of any deviation is kept to a minimum. Given the effectiveness of the initial planning, the whole treatment duration is shorter than with other treatment procedures.

Computer-assisted ENT surgery

Image-guided surgery and CAS in ENT commonly consist of navigating preoperative image data such as CT or cone beam CT to assist with locating or avoiding anatomically important regions such as the optic nerve or the opening to the frontal sinuses. In middle-ear surgery there has been some application of robotic surgery, due to the requirement for high-precision actions.

Computer-assisted orthopedic surgery (CAOS)

The application of robotic surgery is widespread in orthopedics, especially in routine interventions, like total hip replacement or pedicle screw insertion. It is also useful in pre-planning and guiding the correct anatomical position of displaced bone fragments in fractures, allowing a good fixation by osteosynthesis, especially for malrotated bones. Early CAOS systems include the HipNav, OrthoPilot, and Praxim.

Computer-assisted visceral surgery

With the advent of computer-assisted surgery, great progress has been made in general surgery towards minimally invasive approaches. Laparoscopy in abdominal and gynecologic surgery is one of the beneficiaries, allowing surgical robots to perform routine operations such as cholecystectomies or even hysterectomies. In cardiac surgery, shared-control systems can perform mitral valve replacement or ventricular pacing through small thoracotomies. In urology, surgical robots have contributed to laparoscopic approaches for pyeloplasty, nephrectomy, and prostatic interventions.

Computer-assisted cardiac interventions

Applications include the treatment of atrial fibrillation and cardiac resynchronization therapy. Pre-operative MRI or CT is used to plan the procedure. Pre-operative images, models or planning information can be registered to intra-operative fluoroscopic images to guide procedures.

Computer-assisted radiosurgery

Radiosurgery is also incorporating advanced robotic systems. CyberKnife is one such system, with a lightweight linear accelerator mounted on a robotic arm. It is guided towards tumor processes using the skeletal structures as a reference system (Stereotactic Radiosurgery System). During the procedure, real-time X-ray imaging is used to accurately position the device before delivering the radiation beam. The robot can compensate for respiratory motion of the tumor in real time.
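
Real-time respiratory compensation of this kind is often described in terms of a correlation model: an external breathing signal is sampled continuously, the tumor position is measured only intermittently on X-ray images, and a fitted model bridges the gap between imaging updates. Below is a simplified, hypothetical sketch (the linear model, signal, and numbers are illustrative, not the vendor's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Calibration samples: external breathing signal (e.g. chest marker height,
# mm) recorded together with tumor positions measured on stereo X-rays (mm).
breath = rng.uniform(0, 10, size=15)
slope, offset = np.array([0.8, 0.2, 1.1]), np.array([3.0, -1.0, 7.0])
tumor_xyz = np.outer(breath, slope) + offset + rng.normal(0, 0.1, size=(15, 3))

# Fit a per-axis linear correlation model: tumor ≈ a * breath + b.
A = np.column_stack([breath, np.ones_like(breath)])
coef, *_ = np.linalg.lstsq(A, tumor_xyz, rcond=None)

def predict_tumor(breath_value):
    """Predicted tumor position from the continuously measured breathing
    signal, used to steer the beam between X-ray updates."""
    return coef[0] * breath_value + coef[1]

print(predict_tumor(5.0))
```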

Advantages

CAS starts with the premise of a much better visualization of the operative field, allowing a more accurate preoperative diagnosis and well-defined surgical planning in a preoperative virtual environment. In this way, the surgeon can easily assess most of the surgical difficulties and risks and have a clear idea about how to optimize the surgical approach and decrease surgical morbidity. During the operation, the computer guidance improves the geometrical accuracy of the surgical gestures and also reduces the redundancy of the surgeon's acts. This significantly improves ergonomics in the operating theatre, decreases the risk of surgical errors and reduces the operating time.

Disadvantages

There are several disadvantages of computer-assisted surgery. Many systems have costs in the millions of dollars, making them a large investment even for big hospitals. Some people believe that improvements in technology, such as haptic feedback, increased processor speeds, and more complex and capable software, will increase the cost of these systems. Another disadvantage is the size of the systems, which have relatively large footprints. This is an important disadvantage in today's already crowded operating rooms. It may be difficult for both the surgical team and the robot to fit into the operating room.
