
Friday, June 21, 2019

Telerobotics

From Wikipedia, the free encyclopedia

Justus security robot patrolling in Kraków
 
Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using wireless networks (such as Wi-Fi, Bluetooth, and the Deep Space Network) or tethered connections. It is a combination of two major subfields, teleoperation and telepresence.

Teleoperation

Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.

Early Telerobotics (Rosenberg, 1992) US Air Force - Virtual Fixtures system
 
Teleoperation is the standard term, used in both research and technical communities, for operation at a distance. This is opposed to "telepresence", which refers to the subset of telerobotic systems configured with an immersive interface such that the operator feels present in the remote environment, projecting his or her presence through the remote robot. One of the first telepresence systems that enabled operators to feel present in a remote environment through all of the primary senses (sight, sound, and touch) was the Virtual Fixtures system developed at US Air Force Research Laboratories in the early 1990s. The system enabled operators to perform dexterous tasks (inserting pegs into holes) remotely, such that the operator would feel as if he or she were inserting the pegs when in fact a robot was remotely performing the task.

A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. In simple cases the controlling operator's command actions correspond directly to actions in the device controlled, as for example in a radio controlled model aircraft or a tethered deep submergence vehicle. Where communications delays make direct control impractical (such as a remote planetary rover), or it is desired to reduce operator workload (as in a remotely controlled spy or attack aircraft), the device will not be controlled directly, instead being commanded to follow a specified path. At increasing levels of sophistication the device may operate somewhat independently in matters such as obstacle avoidance, also commonly employed in planetary rovers. 
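To make the commanded-path idea concrete, here is a minimal, purely illustrative Python sketch of supervisory control under time delay: the operator uploads a waypoint list once, and the vehicle follows it on its own, pausing when an obstacle is detected. The callbacks get_pose, obstacle_ahead and drive are hypothetical placeholders for a robot's state estimator, perception and base controller, not any particular system's API.

import math

def follow_path(waypoints, get_pose, obstacle_ahead, drive, tolerance=0.2):
    # waypoints: list of (x, y) targets in the rover's map frame
    for wx, wy in waypoints:
        while True:
            x, y, heading = get_pose()                 # current state estimate
            if math.hypot(wx - x, wy - y) < tolerance:
                break                                  # waypoint reached
            if obstacle_ahead():
                drive(0.0, 0.0)                        # stop; wait for a clear path (a real system would replan)
                continue
            bearing = math.atan2(wy - y, wx - x)
            error = math.atan2(math.sin(bearing - heading),
                               math.cos(bearing - heading))
            drive(0.5, error)                          # forward speed, steering correction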

Devices designed to allow the operator to control a robot at a distance are sometimes called telecheric robotics. 

Two major components of telerobotics and telepresence are the visual and control applications. A remote camera provides a visual representation of the view from the robot. Placing the robotic camera in a perspective that allows intuitive control is a recent technique; although anticipated in science fiction (Robert A. Heinlein's Waldo, 1942), it only became practical once speed, resolution and bandwidth were adequate for controlling the robot camera in a meaningful way. Using a head-mounted display, control of the camera can be facilitated by tracking the operator's head.
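As a rough sketch of how head tracking can drive the remote camera, the following Python fragment maps a head-mounted display's yaw and pitch onto pan/tilt commands, clamped to the gimbal's range. The names read_head_orientation and camera.move_to are assumptions for illustration, not a real product interface.

def update_camera(read_head_orientation, camera, pan_limit=170.0, tilt_limit=60.0):
    yaw_deg, pitch_deg = read_head_orientation()       # HMD head-tracker reading
    pan = max(-pan_limit, min(pan_limit, yaw_deg))     # clamp to the gimbal's range
    tilt = max(-tilt_limit, min(tilt_limit, pitch_deg))
    camera.move_to(pan, tilt)                          # command the remote pan/tilt unit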

This only works if the user is comfortable with the latency of the system: the lag between movements and the corresponding visual response. Issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of movement and response, and optical distortion from the camera lens and head-mounted display lenses can cause 'simulator sickness', which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion.

Mismatches between the user's motions and the system's response, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed, can contribute to these problems.

The same technology can control the robot, but then the eye–hand coordination issues become even more pervasive through the system, and user tension or frustration can make the system difficult to use.

The tendency in building robots has been to minimize the degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.

Interfaces

A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobotic systems driven by internet connections are often of this type. A valuable modification to MMK is a joystick, which provides a more intuitive navigation scheme for planar robot movement.
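A minimal sketch of the joystick scheme, assuming axis values normalized to [-1, 1] and a differential-drive base: the two axes are scaled into a forward speed and a turning rate after a small deadband. The speed limits here are arbitrary example values.

MAX_LINEAR = 0.6    # m/s, assumed speed limit for the example
MAX_ANGULAR = 1.2   # rad/s

def joystick_to_velocity(axis_forward, axis_turn, deadband=0.05):
    # axis values are assumed to be normalized to [-1, 1]
    if abs(axis_forward) < deadband:
        axis_forward = 0.0
    if abs(axis_turn) < deadband:
        axis_turn = 0.0
    linear = axis_forward * MAX_LINEAR      # forward/backward speed
    angular = axis_turn * MAX_ANGULAR       # turning rate
    return linear, angular                  # handed to the robot's base controller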

Dedicated telepresence setups utilize a head-mounted display with either a single- or dual-eye display, and an ergonomically matched interface with joystick and related button, slider, and trigger controls.

Other interfaces merge fully immersive virtual reality interfaces and real-time video instead of computer-generated images. Another example would be to use an omnidirectional treadmill with an immersive display system so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as Infrared thermal imaging, real-time threat assessment, or device schematics.

Applications

Space

NASA HERRO (Human Exploration using Real-time Robotic Operations) telerobotic exploration concept
 
With the exception of the Apollo program, most space exploration has been conducted with telerobotic space probes. Most space-based astronomy, for example, has been conducted with telerobotic telescopes. The Russian Lunokhod 1 mission, for example, put a remotely driven rover on the Moon, which was driven in real time (with a roughly 2.5-second round-trip lightspeed delay) by human operators on the ground. Robotic planetary exploration programs use spacecraft that are programmed by humans at ground stations, essentially achieving a long-time-delay form of telerobotic operation. Recent noteworthy examples include the Mars Exploration Rovers (MER) and the Curiosity rover. In the case of the MER mission, the spacecraft and the rover operated on stored programs, with the rover drivers on the ground programming each day's operation. The International Space Station (ISS) uses a two-armed telemanipulator called Dextre. More recently, the humanoid robot Robonaut has been added to the space station for telerobotic experiments.
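The quoted lunar control delay can be sanity-checked with simple arithmetic: the average Earth-Moon distance divided by the speed of light gives the one-way light time, and the operator experiences roughly the round trip (real operations add processing and actuation delays). A quick check in Python:

MOON_DISTANCE_KM = 384_400          # average Earth-Moon distance
LIGHT_SPEED_KM_S = 299_792.458

one_way = MOON_DISTANCE_KM / LIGHT_SPEED_KM_S    # about 1.28 s
round_trip = 2 * one_way                         # about 2.56 s
print(f"one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")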

NASA has proposed use of highly capable telerobotic systems for future planetary exploration using human exploration from orbit. In a concept for Mars Exploration proposed by Landis, a precursor mission to Mars could be done in which the human vehicle brings a crew to Mars, but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface. Such a system would go beyond the simple long time delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations.

Telepresence and videoconferencing

iRobot Ava 500, an autonomous roaming telepresence robot.
 
The prevalence of high quality video conferencing using mobile devices, tablets and portable computers has enabled a drastic growth in telepresence robots to help give a better sense of remote physical presence for communication and collaboration in the office, home, school, etc. when one cannot be there in person. The robot avatar can move or look around at the command of the remote person.

There have been two primary approaches, both of which utilize videoconferencing on a display: 1) desktop telepresence robots, which typically mount a phone or tablet on a motorized desktop stand so that the remote person can look around a remote environment by panning and tilting the display; and 2) drivable telepresence robots, which typically contain a display (integrated or a separate phone or tablet) mounted on a roaming base. Some examples of desktop telepresence robots include Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot and InTouch Health, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive. More modern roaming telepresence robots may include the ability to operate autonomously. Such robots can map out the space and avoid obstacles while driving themselves between rooms and their docking stations.

Traditional videoconferencing systems and telepresence rooms generally offer pan/tilt/zoom cameras with far-end control. The ability of the remote user to turn the device's head and look around naturally during a meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created the new category of desktop telepresence robots, which concentrate on this strongest feature to deliver a much lower-cost robot. These desktop telepresence robots, also called head-and-neck robots, allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.

Marine applications

Marine remotely operated vehicles (ROVs) are widely used to work in water too deep or too dangerous for divers. They repair offshore oil platforms and attach cables to sunken ships to hoist them. They are usually attached by a tether to a control center on a surface ship. The wreck of the Titanic was explored by an ROV, as well as by a crew-operated vessel.

Telemedicine

Additionally, a great deal of telerobotic research is being done in the field of medical devices and minimally invasive surgical systems. With a robotic surgery system, a surgeon can work inside the body through tiny holes just big enough for the manipulator, with no need to open up the chest cavity to allow hands inside.

Other applications

Remote manipulators are used to handle radioactive materials. 

Telerobotics has been used in installation art pieces; the Telegarden is an example of a project in which a robot was operated by users through the Web.

Experiment

From Wikipedia, the free encyclopedia

Even very young children perform rudimentary experiments to learn about the world and how things work.
 
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments vary greatly in goal and scale, but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.

A child may carry out basic experiments to understand gravity, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite) to highly controlled ones (e.g. tests requiring complex apparatus overseen by many scientists who hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.

Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled (accounted for by the control measurements) and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variable.

Overview

In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.

An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity. An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment.

In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.

In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition in which one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.

There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.

History

One of the first methodical approaches to experiments in the modern sense is visible in the works of the Arab mathematician and scholar Ibn al-Haytham. He conducted his experiments in the field of optics, reaching back to the optical and mathematical problems in the works of Ptolemy, and controlled them through factors such as self-criticism, reliance on the visible results of the experiments, and a critical attitude toward earlier results. He counts as one of the first scholars to use an inductive-experimental method for achieving results. In his book "Optics" he describes the fundamentally new approach to knowledge and research in an experimental sense:
"We should, that is, recommence the inquiry into its principles and premisses, beginning our investigation with an inspection of the things that exist and a survey of the conditions of visible objects. We should distinguish the properties of particulars, and gather by induction what pertains to the eye when vision takes place and what is found in the manner of sensation to be uniform, unchanging, manifest and not subject to doubt. After which we should ascend in our inquiry and reasonings, gradually and orderly, criticizing premisses and exercising caution in regard to conclusions – our aim in all that we make subject to inspection and review being to employ justice, not to follow prejudice, and to take care in all that we judge and criticize that we seek the truth and not to be swayed by opinion. We may in this way eventually come to the truth that gratifies the heart and gradually and carefully reach the end at which certainty appears; while through criticism and caution we may seize the truth that dispels disagreement and resolves doubtful matters. For all that, we are not free from that human turbidity which is in the nature of man; but we must do our best with what we possess of human power. From God we derive support in all things."
According to his explanation, a strictly controlled test execution is necessary, with sensitivity to the subjectivity and susceptibility of outcomes owing to the nature of man. Furthermore, a critical view of the results and outcomes of earlier scholars is necessary:
"It is thus the duty of the man who studies the writings of scientists, if learning the truth is his goal, to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency."
Thus, a comparison of earlier results with the experimental results is necessary for an objective experiment, the visible results being more important. In the end, this may mean that an experimental researcher must find enough courage to discard traditional opinions or results, especially if these results are not experimental but derive from logical or mental derivation. In this process of critical consideration, the researcher should not forget that he tends toward subjective opinions, through "prejudices" and "leniency", and thus has to be critical about his own way of building hypotheses.

Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an influential supporter of experimental science in the English Renaissance. He disagreed with the method of answering scientific questions by deduction, much like Ibn al-Haytham, and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession." Bacon wanted a method that relied on repeatable observations, or experiments. Notably, he was the first to order the scientific method as we understand it today.
There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments.
In the centuries that followed, people who applied the scientific method in different areas made important advances and discoveries. For example, Galileo Galilei (1564–1642) measured time accurately and experimented in order to make accurate measurements and draw conclusions about the speed of a falling body. Antoine Lavoisier (1743–1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry, and to develop the theory of conservation of mass (matter). Louis Pasteur (1822–1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible.

A considerable amount of progress on the design and analysis of experiments occurred in the early 20th century, with contributions from statisticians such as Ronald Fisher (1890-1962), Jerzy Neyman (1894-1981), Oscar Kempthorne (1919-2000), Gertrude Mary Cox (1900-1978), and William Gemmell Cochran (1909-1980), among others.

Types of experiment

Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study. In some disciplines (e.g., psychology or political science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects to neutralize experimenter bias, and ensures, over a large number of iterations of the experiment, that it controls for all confounding factors.

Controlled experiments

A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group), and the one receiving the placebo or regular treatment would be the control group. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and to have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure similar to the actual experimental test but known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to subtract from the test sample results. Sometimes the positive control takes the form of a standard curve.
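A minimal sketch of the control arithmetic described above, with invented numbers: replicate readings are averaged, the negative control supplies the "background" to subtract, and the positive control confirms the assay produced a signal at all.

def mean(values):
    return sum(values) / len(values)

negative_control = [0.05, 0.06]      # duplicate readings with no analyte
positive_control = [0.92, 0.95]      # known from experience to give a signal
test_sample      = [0.44, 0.47]      # duplicate readings of the experimental sample

background = mean(negative_control)
corrected_signal = mean(test_sample) - background

assert mean(positive_control) - background > 0   # the basic conditions of the assay worked
print(f"background-corrected test signal: {corrected_signal:.3f}")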

An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are run in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. The results for the diluted test samples can then be compared to the standard curve obtained from the positive controls to estimate the amount of protein in the unknown sample.
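The standard-curve step can be sketched in a few lines, assuming a linear relationship between absorbance and concentration over the range used; the numbers are invented, and a real assay would follow its specific protocol. The fit is inverted to estimate the unknown sample's concentration.

import numpy as np

std_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])      # mg/mL, known standard dilutions
std_abs  = np.array([0.02, 0.15, 0.28, 0.55, 1.05])  # measured absorbances (invented)

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear standard curve

unknown_abs = np.mean([0.40, 0.42])                  # duplicate readings of the unknown
unknown_conc = (unknown_abs - intercept) / slope
print(f"estimated protein concentration: {unknown_conc:.2f} mg/mL")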

Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups. 

Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that they are being treated.

In human experiments, researchers may give a subject (person) a stimulus that the subject responds to. The goal of the experiment is to measure the response to the stimulus by a test method.

Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
 
In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations – or hypotheses. These hypotheses suggest reasons to explain a phenomenon, or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out and the results analysed to confirm, refute, or define the accuracy of the hypotheses. 
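As a toy illustration of comparing mean responses between two treatments (say, two strains of yeast in the bread example), the following Python sketch runs a two-sample t-test against the null hypothesis of no difference in means; the measurements are fabricated for illustration only.

from scipy import stats

loaf_volume_strain_a = [410, 395, 402, 418, 407]   # invented measurements, strain A
loaf_volume_strain_b = [432, 425, 440, 428, 436]   # invented measurements, strain B

t_stat, p_value = stats.ttest_ind(loaf_volume_strain_a, loaf_volume_strain_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# a small p-value argues against the null hypothesis of equal mean responses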

Experiments can be also designed to estimate spillover effects onto nearby untreated units.

Natural experiments

The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult or impossible. In this case researchers resort to natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study. 

Much research in several science disciplines, including economics, political science, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "stars are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen and then perform the experiment of waiting a few billion years for it to form a star. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect the data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. Observations of the appearance of the moons of Jupiter were slightly delayed when Jupiter was farther from Earth, compared to when Jupiter was closer to Earth; this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed.

Field experiments

Field experiments are so named to distinguish them from laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.

Contrast with observational study

The black box model for observation (input and output are observables). When there is feedback under some observer's control, as illustrated, the observation is also an experiment.
 
An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). For an observational science to be valid, the experimenter must know and account for confounding factors. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data. 

Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.

A particular problem with observational studies involving human subjects is the great difficulty of attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and the results are not meaningful if a covariate is neither randomized nor included in the model.
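The balancing effect of randomization can be illustrated with a small simulation: a covariate (here, age) is generated for a synthetic population, subjects are randomly split into two groups, and the group means come out close, as the passage describes. This is only a demonstration with made-up numbers.

import numpy as np

rng = np.random.default_rng(0)
age = rng.normal(50, 12, size=1000)                  # one covariate in a synthetic population
assignment = rng.permutation([True] * 500 + [False] * 500)

print(f"treated mean age: {age[assignment].mean():.1f}")
print(f"control mean age: {age[~assignment].mean():.1f}")
# the two means are close for large samples; with few subjects the gap can be large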

To avoid conditions that render an experiment far less useful, physicians conducting medical trials – say for U.S. Food and Drug Administration approval – quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with complicated statistical methods such as propensity score matching methods, which require large populations of subjects and extensive information on covariates. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing.
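A heavily simplified sketch of propensity score matching, using synthetic data: a logistic regression estimates each subject's probability of treatment from covariates, and each treated subject is paired with the untreated subject whose score is nearest. Real analyses require much more care (overlap, balance diagnostics, standard errors).

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                            # synthetic covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # treatment depends on the first covariate

scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = {i: control_idx[np.argmin(np.abs(scores[control_idx] - scores[i]))]
           for i in treated_idx}
print(f"matched {len(matches)} treated subjects to nearest-score controls")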

Ethics

By placing the distribution of the independent variable(s) under the control of the researcher, an experiment – particularly when it involves human subjects – introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example, in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice. It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies to understand the effects of those factors. 

Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects.

Experimental method in law

The experimental method can be useful in solving juridical problems.

Computer-assisted surgery

From Wikipedia, the free encyclopedia

Computer-assisted surgery (CAS) represents a surgical concept and set of methods that use computer technology for surgical planning and for guiding or performing surgical interventions. CAS is also known as computer-aided surgery, computer-assisted intervention, image-guided surgery, and surgical navigation; these terms are more or less synonymous. CAS has been a leading factor in the development of robotic surgery.

General principles

Image gathering ("segmentation") on the LUCAS workstation

Creating a virtual image of the patient

The most important component for CAS is the development of an accurate model of the patient. This can be conducted through a number of medical imaging technologies, including CT, MRI, X-rays, ultrasound, and many more. For the generation of this model, the anatomical region to be operated on has to be scanned and uploaded into the computer system. It is possible to employ a number of scanning methods, with the datasets combined through data fusion techniques. The final objective is the creation of a 3D dataset that reproduces the exact geometrical situation of the normal and pathological tissues and structures of that region. Of the available scanning methods, CT is preferred, because MRI data sets are known to have volumetric deformations that may lead to inaccuracies. An example dataset might comprise 180 CT slices spaced 1 mm apart, each having 512 by 512 pixels. The contrasts of the 3D dataset (with its tens of millions of voxels) provide the detail of soft vs. hard tissue structures, and thus allow a computer to differentiate, and visually separate for a human, the different tissues and structures. The image data taken from a patient will often include intentional landmark features, in order to be able to later realign the virtual dataset against the actual patient during surgery.
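The size of the example dataset follows from simple arithmetic: 180 slices of 512 by 512 pixels is about 47 million voxels, on the order of 90 MB if one assumes (hypothetically) 16-bit values.

slices, rows, cols = 180, 512, 512
voxels = slices * rows * cols                  # 47,185,920 voxels
size_mb = voxels * 2 / (1024 ** 2)             # assuming 2 bytes per voxel -> ~90 MB
print(f"{voxels:,} voxels, roughly {size_mb:.0f} MB of raw data")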

Image analysis and processing

Image analysis involves the manipulation of the patient's 3D model to extract relevant information from the data. Using the differing contrast levels of the different tissues within the imagery, a model can, for example, be changed to show just hard structures such as bone, or to view the flow of arteries and veins through the brain.
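A sketch of the contrast-based separation described above: thresholding a CT volume expressed in Hounsfield units to keep only dense, bone-like voxels. The volume here is random placeholder data and the threshold is an assumed value; real thresholds depend on the scanner and protocol.

import numpy as np

rng = np.random.default_rng(2)
volume_hu = rng.integers(-1000, 2000, size=(180, 512, 512), dtype=np.int16)  # placeholder data

BONE_THRESHOLD_HU = 300                       # assumed cutoff for dense bone
bone_mask = volume_hu >= BONE_THRESHOLD_HU    # boolean mask of "hard" structures
print(f"bone-like voxels: {bone_mask.sum():,} of {bone_mask.size:,}")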

Diagnostic, preoperative planning, surgical simulation

Using specialized software, the gathered dataset can be rendered as a virtual 3D model of the patient. This model can be easily manipulated by a surgeon to provide views from any angle and at any depth within the volume. Thus the surgeon can better assess the case and establish a more accurate diagnosis. Furthermore, the surgical intervention can be planned and simulated virtually before the actual surgery takes place (computer-aided surgical simulation, CASS). Using dedicated software, the surgical robot is programmed to carry out the planned actions during the actual surgical intervention.

Surgical navigation

In computer-assisted surgery, the actual intervention is defined as surgical navigation. Using the surgical navigation system, the surgeon operates with special instruments that are tracked by the navigation system. The position of a tracked instrument in relation to the patient's anatomy is shown on images of the patient as the surgeon moves the instrument. The surgeon thus uses the system to 'navigate' the location of an instrument. The feedback the system provides on the instrument location is particularly useful in situations where the surgeon cannot actually see the tip of the instrument, such as in minimally invasive surgeries.
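Conceptually, navigation reduces to applying a registration transform so that a point measured in the tracker's coordinate frame can be drawn on the patient's images. The rotation and translation below are placeholders; real systems compute them from landmarks or surface matching.

import numpy as np

R = np.eye(3)                       # hypothetical registration rotation (tracker -> image)
t = np.array([12.0, -4.5, 30.0])    # hypothetical translation in millimetres

def tip_in_image_coords(tip_in_tracker_coords):
    return R @ np.asarray(tip_in_tracker_coords) + t

print(tip_in_image_coords([100.0, 50.0, 25.0]))   # position to overlay on the patient images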

Robotic surgery

Robotic surgery is a term used for correlated actions of a surgeon and a surgical robot (that has been programmed to carry out certain actions during the preoperative planning procedure). A surgical robot is a mechanical device (generally looking like a robotic arm) that is computer-controlled. Robotic surgery can be divided into three types, depending on the degree of surgeon interaction during the procedure: supervisory-controlled, telesurgical, and shared-control. In a supervisory-controlled system, the procedure is executed solely by the robot, which will perform the pre-programmed actions. A telesurgical system, also known as remote surgery, requires the surgeon to manipulate the robotic arms during the procedure rather than allowing the robotic arms to work from a predetermined program. With shared-control systems, the surgeon carries out the procedure with the use of a robot that offers steady-hand manipulations of the instrument. In most robots, the working mode can be chosen for each separate intervention, depending on the surgical complexity and the particularities of the case.

Applications

Computer-assisted surgery is the beginning of a revolution in surgery. It already makes a great difference in high-precision surgical domains, but it is also used in standard surgical procedures.

Computer-assisted neurosurgery

Telemanipulators were used for the first time in neurosurgery in the 1980s. This allowed greater development in brain microsurgery (compensating for the surgeon's physiological tremor by 10-fold) and increased the accuracy and precision of the intervention. It also opened a new gate to minimally invasive brain surgery, further reducing the risk of post-surgical morbidity by avoiding accidental damage to adjacent centers.

Computer-assisted oral and maxillofacial surgery

Bone segment navigation is the modern surgical approach in orthognathic surgery (correction of the anomalies of the jaws and skull), in temporo-mandibular joint (TMJ) surgery, or in the reconstruction of the mid-face and orbit.

It is also used in implantology, where the available bone can be seen and the position, angulation and depth of the implants can be simulated before the surgery. During the operation, the surgeon is guided visually and by sound alerts. IGI (Image Guided Implantology) is one of the navigation systems that uses this technology.

Guided Implantology

New therapeutic concepts such as guided surgery are being developed and applied in the placement of dental implants. The prosthetic rehabilitation is also planned and performed in parallel with the surgical procedures. The planning steps are in the foreground and are carried out in cooperation between the surgeon, the dentist and the dental technician. Edentulous patients, in either one or both jaws, benefit as the time of treatment is reduced.

In edentulous patients, conventional denture support is often compromised by moderate bone atrophy, even if the dentures are constructed based on correct anatomic morphology.

Using cone beam computed tomography, the patient and the existing prosthesis are scanned. Furthermore, the prosthesis alone is also scanned. Glass pearls of defined diameter are placed in the prosthesis and used as reference points for the upcoming planning. The resulting data are processed and the position of the implants determined. The surgeon, using specially developed software, plans the implants based on prosthetic concepts, taking the anatomic morphology into account. After the planning of the surgical part is completed, a CAD/CAM surgical guide for dental placement is constructed. The mucosa-supported surgical splint ensures the exact placement of the implants in the patient. In parallel with this step, the new implant-supported prosthesis is constructed.

The dental technician, using the data resulting from the previous scans, manufactures a model representing the situation after the implant placement. The prosthetic compounds, abutments, are already prefabricated. The length and the inclination can be chosen. The abutments are connected to the model at a position in consideration of the prosthetic situation. The exact position of the abutments is registered. The dental technician can now manufacture the prosthesis. 

The fit of the surgical splint is clinically verified. After that, the splint is attached using a three-point support pin system. Prior to the attachment, irrigation with a chemical disinfectant is advised. The pins are driven through defined sheaths from the vestibular to the oral side of the jaw. Ligament anatomy should be considered, and if necessary decompensation can be achieved with minimal surgical interventions. The proper fit of the template is crucial and should be maintained throughout the whole treatment. Regardless of the mucosal resilience, a correct and stable attachment is achieved through the bone fixation. The access to the jaw can now only be achieved through the sleeves embedded in the surgical template. Using specific burs through the sleeves, the mucosa is removed. Every bur used carries a sleeve compatible with the sleeves in the template, which ensures that the final position is reached but no further advancement into the alveolar ridge can take place. The further procedure is very similar to traditional implant placement. The pilot hole is drilled and then expanded. With the aid of the splint, the implants are finally placed. After that, the splint can be removed.

With the aid of a registration template, the abutments can be attached and connected to the implants at the defined position. At least a pair of abutments should be connected simultaneously to avoid any discrepancy. An important advantage of this technique is the parallel positioning of the abutments. A radiological control is necessary to verify the correct placement and connection of implant and abutment.

In a further step, abutments are covered by gold cone caps, which represent the secondary crowns. Where necessary, the transition of the gold cone caps to the mucosa can be isolated with rubber dam rings. 

The new prosthesis corresponds to a conventional total prosthesis, but its base contains cavities so that the secondary crowns can be incorporated. The prosthesis is checked at the terminal position and corrected if needed. The cavities are filled with a self-curing cement and the prosthesis is placed in the terminal position. After the self-curing process, the gold caps are definitively cemented in the prosthesis cavities and the prosthesis can be detached. Excess cement may be removed, and some corrections such as polishing or underfilling around the secondary crowns may be necessary. The new prosthesis is fitted using a construction of telescopic double-cone crowns. At the end position, the prosthesis buttons down onto the abutments to ensure an adequate hold.

At the same sitting, the patient receives the implants and the prosthesis. An interim prosthesis is not necessary. The extent of the surgery is kept to a minimum. Due to the application of the splint, reflection of the soft tissues is not needed. The patient experiences less bleeding, swelling and discomfort. Complications such as injury to neighbouring structures are also avoided. Using 3D imaging during the planning phase, communication between the surgeon, dentist and dental technician is strongly supported, and any problems can easily be detected and eliminated. Each specialist accompanies the whole treatment and can interact at every stage. As the end result is already planned and all surgical intervention is carried out according to the initial plan, the possibility of any deviation is kept to a minimum. Given the effectiveness of the initial planning, the whole treatment duration is shorter than with other treatment procedures.

Computer-assisted ENT surgery

Image-guided surgery and CAS in ENT commonly consist of navigating preoperative image data such as CT or cone beam CT to assist with locating or avoiding anatomically important regions such as the optic nerve or the opening to the frontal sinuses. For use in middle-ear surgery there has been some application of robotic surgery due to the requirement for high-precision actions.

Computer-assisted orthopedic surgery (CAOS)

The application of robotic surgery is widespread in orthopedics, especially in routine interventions, like total hip replacement or pedicle screw insertion. It is also useful in pre-planning and guiding the correct anatomical position of displaced bone fragments in fractures, allowing a good fixation by osteosynthesis, especially for malrotated bones. Early CAOS systems include the HipNav, OrthoPilot, and Praxim.

Computer-assisted visceral surgery

With the advent of computer-assisted surgery, great progress has been made in general surgery towards minimally invasive approaches. Laparoscopy in abdominal and gynecologic surgery is one of the beneficiaries, allowing surgical robots to perform routine operations such as cholecystectomies or even hysterectomies. In cardiac surgery, shared-control systems can perform mitral valve replacement or ventricular pacing through small thoracotomies. In urology, surgical robots have contributed to laparoscopic approaches for pyeloplasty, nephrectomy and prostatic interventions.

Computer-assisted cardiac interventions

Applications include atrial fibrillation and cardiac resynchronization therapy. Pre-operative MRI or CT is used to plan the procedure. Pre-operative images, models or planning information can be registered to intra-operative fluoroscopic images to guide procedures.

Computer-assisted radiosurgery

Radiosurgery is also incorporating advanced robotic systems. CyberKnife is one such system, with a lightweight linear accelerator mounted on a robotic arm. It is guided towards tumor processes using the skeletal structures as a reference system (Stereotactic Radiosurgery System). During the procedure, real-time X-ray imaging is used to accurately position the device before delivering the radiation beam. The robot can compensate for respiratory motion of the tumor in real time.

Advantages

CAS starts with the premise of a much better visualization of the operative field, thus allowing a more accurate preoperative diagnosis and well-defined surgical planning, carried out in a preoperative virtual environment. This way, the surgeon can easily assess most of the surgical difficulties and risks and have a clear idea about how to optimize the surgical approach and decrease surgical morbidity. During the operation, the computer guidance improves the geometrical accuracy of the surgical gestures and also reduces the redundancy of the surgeon's acts. This significantly improves ergonomics in the operating theatre, decreases the risk of surgical errors and reduces the operating time.

Disadvantages

There are several disadvantages of computer-assisted surgery. Many systems cost millions of dollars, making them a large investment for even big hospitals. Some people believe that improvements in technology, such as haptic feedback, increased processor speeds, and more complex and capable software, will increase the cost of these systems. Another disadvantage is the size of the systems, which have relatively large footprints. This is an important drawback in today's already crowded operating rooms, where it may be difficult for both the surgical team and the robot to fit.

Robot-assisted surgery

From Wikipedia, the free encyclopedia

Robot-assisted surgery
A robotically assisted surgical system used for prostatectomies, cardiac valve repair and gynecologic surgical procedures
Other names: Robotically-assisted surgery

Robotic surgery encompasses types of surgical procedures that are done using robotic systems. Robotically-assisted surgery was developed to try to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.

In the case of robotically-assisted minimally-invasive surgery, instead of directly moving the instruments, the surgeon uses one of two methods to administer the instruments. These include using a direct telemanipulator or through computer control. A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery. The robotic arms carry out those movements using end-effectors and manipulators to perform the actual surgery on the patient. In computer-controlled systems, the surgeon uses a computer to control the robotic arms and its end-effectors, though these systems can also still use telemanipulators for their input. One advantage of using the computerized method is that the surgeon does not have to be present, but can be anywhere in the world, leading to the possibility for remote surgery.

Laparoscopic procedures are considered a form of minimally-invasive surgery. Several small incisions, called keyhole incisions, are made. These types of surgeries are associated with shorter hospital stays than open surgery, as well as less postoperative pain and scarring and lower risks of infection and need for blood transfusion.

In the case of enhanced open surgery, autonomous instruments (in familiar configurations) replace traditional steel tools, performing certain actions (such as rib spreading) with much smoother, feedback-controlled motions than could be achieved by a human hand. The main objective of such smart instruments is to reduce or eliminate the tissue trauma traditionally associated with open surgery without requiring more than a few minutes' training on the part of surgeons. This approach seeks to improve open surgeries, particularly cardio-thoracic, that have so far not benefited from minimally-invasive techniques.

Robotic surgery has been criticized for its expense, with average costs in 2007 ranging from $5,607 to $45,914 per patient. As of 2019, this technique had not been approved for cancer surgery, amid concerns that it may worsen rather than improve outcomes.

Comparison to traditional methods

Major advances aided by surgical robots have been remote surgery, minimally invasive surgery and unmanned surgery. Due to robotic use, the surgery is done with precision and miniaturization, smaller incisions, decreased blood loss, less pain, and quicker healing time. Articulation beyond normal manipulation and three-dimensional magnification help to result in improved ergonomics. These techniques reduce the duration of hospital stays, blood loss, transfusions, and the use of pain medication. The existing open surgery technique has many flaws, such as limited access to the surgical area, long recovery time, long hours of operation, blood loss, and surgical scars and marks.

The robots cost between $1 million and $2.5 million per unit, and while the disposable supply cost is normally $1,500 per procedure, the overall cost of the procedure is higher. Additional surgical training is needed to operate the system. Numerous feasibility studies have been done to determine whether the purchase of such systems is worthwhile. As it stands, opinions differ dramatically. Surgeons report that, although the manufacturers of such systems provide training on this new technology, the learning phase is intensive and surgeons must perform 150 to 250 procedures to become adept in their use. During the training phase, minimally invasive operations can take up to twice as long as traditional surgery, leading to operating room tie-ups and surgical staffs keeping patients under anesthesia for longer periods. Patient surveys indicate they chose the procedure based on expectations of decreased morbidity, improved outcomes, reduced blood loss and less pain. Higher expectations may explain higher rates of dissatisfaction and regret.

Compared with other minimally invasive surgery approaches, robot-assisted surgery gives the surgeon better control over the surgical instruments and a better view of the surgical site. In addition, surgeons no longer have to stand throughout the surgery and do not tire as quickly. Naturally occurring hand tremors are filtered out by the robot's computer software. Finally, the surgical robot can be used continuously by rotating surgery teams. Laparoscopic camera positioning is also significantly steadier, with fewer inadvertent movements under robotic control than with human assistance.
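Tremor filtering can be pictured as low-pass filtering of the surgeon's hand positions. The sketch below uses a simple exponential smoother with an assumed smoothing factor; commercial systems use more sophisticated motion filtering, so this is only a conceptual illustration.

class TremorFilter:
    def __init__(self, alpha=0.15):
        self.alpha = alpha            # smaller alpha -> stronger smoothing (assumed value)
        self.state = None

    def update(self, raw_position):
        if self.state is None:
            self.state = list(raw_position)
        else:
            self.state = [s + self.alpha * (r - s)
                          for s, r in zip(self.state, raw_position)]
        return self.state             # smoothed position passed on to the robot arm

f = TremorFilter()
print(f.update([10.0, 5.0, 2.0]))
print(f.update([10.2, 4.9, 2.1]))     # small jitter is attenuated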

There are some issues with regard to current robotic surgery usage in clinical applications. Some robotic systems currently in clinical use lack haptics, meaning there is no force or touch feedback; surgeons are thus not able to feel the interaction of the instrument with the patient. Other systems already provide this haptic feedback in order to improve the interaction between the surgeon and the tissue.

The robots can also be very large, have instrumentation limitations, and there may be issues with multi-quadrant surgery as current devices are solely used for single-quadrant application.

Critics of the system, including the American Congress of Obstetricians and Gynecologists, say there is a steep learning curve for surgeons who adopt the system and that there is a lack of studies indicating that long-term results are superior to results following traditional laparoscopic surgery. Articles in the newly created Journal of Robotic Surgery tend to report on one surgeon's experience.

A Medicare study found that some procedures that have traditionally been performed with large incisions can be converted to "minimally invasive" endoscopic procedures with the use of the Da Vinci Surgical System, shortening length-of-stay in the hospital and reducing recovery times. But because of the hefty cost of the robotic system, it is not clear that it is cost-effective for hospitals and physicians despite any benefits to patients since there is no additional reimbursement paid by the government or insurance companies when the system is used.

Complications related to robotic surgery range from conversion of the surgery to open surgery, re-operation and permanent injury to visceral damage and nerve damage. From 2000 to 2011, out of 75 hysterectomies done with robotic surgery, 34 involved permanent injury and 49 involved damage to the viscera. Prostatectomies were more prone to permanent injury, nerve damage and visceral damage as well. Very few surgeries in a variety of specialties actually had to be converted to open surgery or re-operated on, but most involved some kind of damage or injury; for example, out of seven coronary artery bypass graftings, one patient had to undergo re-operation. It is important that complications are captured, reported and evaluated to ensure the medical community is better educated on the safety of this new technology.

Robotic surgery is also currently being marketed and advertised online. Removal of a cancerous prostate has been a popular treatment promoted through internet marketing. Internet marketing of medical devices is more loosely regulated than pharmaceutical promotion. Many sites that claim the benefits of this type of procedure fail to mention risks and provide unsupported evidence, and there is an issue with government and medical societies promoting the production of balanced educational material. In the US alone, many websites promoting robotic surgery fail to mention any risks associated with these types of procedures, and hospital-provided materials largely ignore risks, overestimate benefits and are strongly influenced by the manufacturer.

Uses

Heart

As of 2004, three types of heart surgery were being performed on a routine basis using robotic surgery systems:
  • Atrial septal defect repair – the repair of a hole between the two upper chambers of the heart,
  • Mitral valve repair – the repair of the valve that prevents blood from regurgitating back into the upper heart chambers during contractions of the heart,
  • Coronary artery bypass – rerouting of blood supply by bypassing blocked arteries that provide blood to the heart.
There is also a system for robotic heart surgery that learns to tie knots using recurrent neural networks. The EndoPAR system is an experimental robotic surgical platform developed at the University of Munich. It has four robotic arms with force-feedback instruments; the fourth arm holds a 3-D endoscopic stereo camera. The robot is controlled by a PHANToM Premium 1.5 device, which allows the surgeon to finely control knot tying, with stabilization filters and with forces displayed in all translational directions.
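
The knot-tying work mentioned above is only summarized here, so the following Python sketch is a loose illustration of the general idea rather than the EndoPAR system's actual software: a small recurrent network (an LSTM in PyTorch) is trained to predict the next instrument waypoint of a recorded demonstration trajectory. The synthetic trajectory, network size, and training settings are all hypothetical.

import math
import torch
import torch.nn as nn

# Hedged sketch: learn to reproduce a demonstrated tool trajectory with a
# recurrent network. This is NOT the EndoPAR system's actual method; it only
# illustrates trajectory learning with an RNN.

class TrajectoryRNN(nn.Module):
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        h, _ = self.rnn(x)      # (batch, time, hidden)
        return self.out(h)      # predicted next waypoint at each time step

# Synthetic "demonstration": a looping 3-D curve standing in for a tying motion.
t = torch.linspace(0, 4 * math.pi, 400)
demo = torch.stack([torch.cos(t), torch.sin(t), 0.1 * t], dim=-1).unsqueeze(0)

model = TrajectoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):
    pred = model(demo[:, :-1])          # predict the next point from the current one
    loss = loss_fn(pred, demo[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.6f}")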

Thoracic

Robotic surgery has become more widespread in thoracic surgery for mediastinal pathologies, pulmonary pathologies and, more recently, complex esophageal surgery.

Gastrointestinal

Multiple types of procedures have been performed with either the 'Zeus' or da Vinci robot systems, including bariatric surgery and gastrectomy for cancer. Surgeons at various universities initially published case series demonstrating different techniques and the feasibility of GI surgery using the robotic devices. Specific procedures have been more fully evaluated, notably esophageal fundoplication for the treatment of gastroesophageal reflux and Heller myotomy for the treatment of achalasia.

Robot-assisted pancreatectomies have been found to be associated with "longer operating time, lower estimated blood loss, a higher spleen-preservation rate, and shorter hospital stay[s]" than laparoscopic pancreatectomies; there was "no significant difference in transfusion, conversion to open surgery, overall complications, severe complications, pancreatic fistula, severe pancreatic fistula, ICU stay, total cost, and 30-day mortality between the two groups."

Gynecology

The benefit of robotic surgery in gynecology is uncertain, and it is unclear whether it affects rates of complications. Gynecologic procedures may take longer with robot-assisted surgery, but it may be associated with a shorter hospital stay following hysterectomy. In the United States, robotic-assisted hysterectomy for benign conditions has been shown to be more expensive than conventional laparoscopic hysterectomy, with no difference in overall rates of complications.

This includes the use of the da Vinci surgical system in benign gynecology and gynecologic oncology. Robotic surgery can be used to treat fibroids, abnormal periods, endometriosis, ovarian tumors, uterine prolapse, and female cancers. Using the robotic system, gynecologists can perform hysterectomies, myomectomies, and lymph node biopsies.

A 2017 review found that, for surgical removal of the uterus and cervix for early cervical cancer, robotic and laparoscopic surgery resulted in similar outcomes with respect to the cancer.

Bone

Robots are used in orthopedic surgery.

Spine

Robotic devices started to be used in minimally invasive spine surgery starting in the mid-2000s. As of 2014, there were too few randomized clinical trials to allow judgement as to whether robotic spine surgery is more or less safe than other approaches.

Transplant surgery

Transplant surgery (organ transplantation) has been considered highly technically demanding and virtually unachievable by means of conventional laparoscopy. For many years, transplant patients were therefore unable to benefit from the advantages of minimally invasive surgery. The development of robotic technology, with its high-resolution three-dimensional visual system, wristed motion and fine instruments, provided an opportunity for highly complex procedures to be completed in a minimally invasive fashion. The first fully robotic kidney transplantations were subsequently performed in the late 2000s. After the procedure was shown to be feasible and safe, the main emerging challenge was to determine which patients would benefit most from the robotic technique. The increasing prevalence of obesity among patients with kidney failure on hemodialysis posed a significant problem: because of the substantially higher risk of complications after traditional open kidney transplantation, obese patients were frequently denied access to transplantation, the optimal treatment for end-stage kidney disease.

General surgery

General surgeons operate on the abdominal contents. With regard to robotic surgery, general surgery is currently best suited to single-quadrant procedures, in which the operation can be performed within any one of the four quadrants of the abdomen.

Procedures such as cholecystectomy and fundoplication carry cost disadvantages when performed robotically, but they are suitable opportunities for surgeons to advance their robotic surgery skills.

Urology

Robotic surgery in the field of urology has become very popular, especially in the United States. It has been most extensively applied to excision of prostate cancer because of the difficult anatomical access. It is also used for kidney cancer surgeries and, to a lesser extent, surgeries of the bladder.

As of 2014, there is little evidence that increased benefits compared to standard surgery justify the increased costs. Some studies have found tentative evidence of more complete removal of cancer and fewer side effects from surgery for prostatectomy.

In 2000, the first robot-assisted laparoscopic radical prostatectomy was performed.

History

The first robot to assist in surgery was the Arthrobot, which was developed and used for the first time in Vancouver in 1983. The robot assisted in manipulating and positioning the patient's leg on voice command. Intimately involved were biomedical engineer Dr. James McEwen, Geof Auchinleck, a UBC engineering physics graduate, and Dr. Brian Day, as well as a team of engineering students. The robot was used in an orthopaedic surgical procedure on 12 March 1984 at the UBC Hospital in Vancouver. Over 60 arthroscopic surgical procedures were performed in the first 12 months, and a 1985 National Geographic video on industrial robots, The Robotics Revolution, featured the device. Other related robotic devices developed at the same time included a surgical scrub nurse robot, which handed operative instruments on voice command, and a medical laboratory robotic arm. A YouTube video entitled Arthrobot - the world's first surgical robot illustrates some of these in operation.

In 1985 a robot, the Unimation Puma 200, was used to orient a needle for a brain biopsy under CT guidance during a neurological procedure. In the late 1980s, Imperial College London developed PROBOT, which was then used to perform prostatic surgery. The advantages of this robot were its small size, its accuracy and the lack of fatigue for the surgeon. In 1992, ROBODOC was introduced and revolutionized orthopedic surgery by being able to assist with hip replacement surgeries; it became the first surgical robot approved by the FDA, in 2008. The ROBODOC, from Integrated Surgical Systems (working closely with IBM), could mill out precise fittings in the femur for hip replacement, replacing the previous method of carving out a femur for an implant with a mallet and broach/rasp.

Further development of robotic systems was carried out by SRI International and Intuitive Surgical, with the introduction of the da Vinci Surgical System, and by Computer Motion, with the AESOP and ZEUS robotic surgical systems. The first robotic surgery took place at The Ohio State University Medical Center in Columbus, Ohio, under the direction of Robert E. Michler.

AESOP was a breakthrough in robotic surgery when introduced in 1994, as it was the first laparoscopic camera holder to be approved by the FDA. NASA initially funded Computer Motion, the company that produced AESOP, with the goal of creating a robotic arm that could be used in space, but the arm ended up being used as a camera holder in laparoscopic procedures. Voice control was added in 1996 with the AESOP 2000, and seven degrees of freedom, to mimic a human hand, were added in 1998 with the AESOP 3000.

ZEUS was introduced commercially in 1998 and started the idea of telerobotic or telepresence surgery, in which the surgeon operates on the patient from a console at a distance from the robot. Examples of the use of ZEUS include a fallopian tube reconnection in July 1998, a beating-heart coronary artery bypass graft in October 1999, and the Lindbergh Operation, a cholecystectomy performed remotely in September 2001. In 2003, ZEUS made its most prominent mark in cardiac surgery after successfully harvesting the left internal mammary arteries in 19 patients, all of whom had very successful clinical outcomes.

The original telesurgery robotic system that the da Vinci was based on was developed at SRI International (formerly the Stanford Research Institute) in Menlo Park with grant support from DARPA and NASA. A demonstration of an open bowel anastomosis was given to the Association of Military Surgeons of the US. Although the telesurgical robot was originally intended to facilitate remotely performed surgery on the battlefield and in other remote environments, it turned out to be more useful for minimally invasive on-site surgery. The patents for the early prototype were sold to Intuitive Surgical in Mountain View, California. The da Vinci senses the surgeon's hand movements and translates them electronically into scaled-down micro-movements to manipulate the tiny proprietary instruments. It also detects and filters out any tremors in the surgeon's hand movements, so that they are not duplicated robotically. The camera used in the system provides a true stereoscopic picture transmitted to the surgeon's console. Compared to the ZEUS, the da Vinci robot is attached to the surgical table via trocars and can imitate the human wrist. In 2000, the da Vinci obtained FDA approval for general laparoscopic procedures and became the first operative surgical robot in the US. Examples of using the da Vinci system include the first robotically assisted heart bypass, performed in Germany in May 1998, and the first performed in the United States in September 1999, as well as the first all-robotic-assisted kidney transplant, performed in January 2009. The da Vinci Si was released in April 2009 and initially sold for $1.75 million.
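
The motion scaling and tremor filtering described above can be illustrated with a minimal sketch, assuming a simple exponential moving-average low-pass filter to suppress high-frequency hand tremor and a constant scale factor to map the filtered master motion onto smaller slave motions; the filter choice and parameter values are illustrative assumptions, not Intuitive Surgical's actual algorithms.

import numpy as np

# Hedged sketch of master-to-slave motion mapping: low-pass filtering to
# suppress tremor, then downscaling to micro-movements. Parameters are
# illustrative only, not the da Vinci's actual implementation.

def filter_and_scale(master_positions, alpha=0.1, scale=0.2):
    """Exponential moving-average filter followed by motion scaling."""
    filtered = np.empty_like(master_positions)
    filtered[0] = master_positions[0]
    for i in range(1, len(master_positions)):
        filtered[i] = alpha * master_positions[i] + (1 - alpha) * filtered[i - 1]
    return scale * filtered            # smoothed, scaled-down slave command

# Simulated 1-D hand motion: slow intended movement plus 8 Hz tremor.
t = np.linspace(0, 2, 1000)
hand = 0.05 * t + 0.002 * np.sin(2 * np.pi * 8 * t)
slave = filter_and_scale(hand)
print("peak-to-peak tremor at the master:", np.ptp(hand - 0.05 * t))
print("slave command range:", slave.min(), slave.max())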

In 2005, a surgical technique called transoral robotic surgery (TORS) was documented in canine and cadaveric models for the da Vinci robotic surgical system, as it was the only FDA-approved robot able to perform head and neck surgery. In 2006, three patients underwent resection of the tongue using this technique. The results were clearer visualization of the cranial nerves, lingual nerves, and lingual artery, and the patients had a faster recovery of normal swallowing. In May 2006, the first unassisted robotic surgery conducted by an artificial-intelligence "doctor" was performed on a 34-year-old male to correct a heart arrhythmia. The results were rated as better than those of an above-average human surgeon. The machine had a database of 10,000 similar operations, and so, in the words of its designers, was "more than qualified to operate on any patient". In August 2007, Dr. Sijo Parekattil of the Robotics Institute and Center for Urology (Winter Haven Hospital and University of Florida) performed the first robotic-assisted microsurgical procedure, denervation of the spermatic cord, for chronic testicular pain. In February 2008, Dr. Mohan S. Gundeti of the University of Chicago Comer Children's Hospital performed the first robotic pediatric neurogenic bladder reconstruction.

On 12 May 2008, the first image-guided, MR-compatible robotic neurosurgical procedure was performed at the University of Calgary by Dr. Garnette Sutherland using the NeuroArm. In June 2008, the German Aerospace Centre (DLR) presented a robotic system for minimally invasive surgery, the MiroSurge. In September 2010, the Eindhoven University of Technology announced the development of the Sofie surgical system, the first surgical robot to employ force feedback. In September 2010, the first robotic operation on the femoral vasculature was performed at the University Medical Centre Ljubljana by a team led by Borut Geršak.
