Tuesday, June 5, 2018

Human subject research

From Wikipedia, the free encyclopedia

1946 military human subject research on the effects of wind on humans

Human subject research is systematic, scientific investigation that can be either interventional (a "trial") or observational (no "test article") and involves human beings as research subjects. Human subject research can be either medical (clinical) research or non-medical (e.g., social science) research.[1] Systematic investigation incorporates both the collection and analysis of data in order to answer a specific question. Medical human subject research often involves analysis of biological specimens, epidemiological and behavioral studies and medical chart review studies.[1] (A specific, and especially heavily regulated, type of medical human subject research is the "clinical trial", in which drugs, vaccines and medical devices are evaluated.) On the other hand, human subject research in the social sciences often involves surveys which consist of questions to a particular group of people. Survey methodology includes questionnaires, interviews, and focus groups.

Human subject research is used in various fields, including research into basic biology, clinical medicine, nursing, psychology, sociology, political science, and anthropology. As research has become formalized, the academic community has developed formal definitions of "human subject research", largely in response to abuses of human subjects.

Human subjects

The United States Department of Health and Human Services (HHS) defines a human research subject as a living individual about whom a research investigator (whether a professional or a student) obtains data through 1) intervention or interaction with the individual, or 2) identifiable private information (32 C.F.R. 219.102(f)). (Lim, 1990)[2]

As defined by HHS regulations:

"Intervention"- physical procedures by which data is gathered and the manipulation of the subject and/or their environment for research purposes [45 C.F.R. 46.102(f)][2]

"Interaction"- communication or interpersonal contact between investigator and subject [45 C.F.R. 46.102(f)])[2]

"Private Information"- information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public [45 C.F.R. 46.102(f)] )][2]

"Identifiable information"- specific information that can be used to identify an individual.[2]

Human subject rights

In 2010, the National Institute of Justice in the United States published recommended rights of human subjects:
  • Voluntary, informed consent
  • Respect for persons: treated as autonomous agents
  • The right to end participation in research at any time[3]
  • Right to safeguard integrity[3]
  • Benefits should outweigh cost
  • Protection from physical, mental and emotional harm
  • Access to information regarding research[3]
  • Protection of privacy and well-being[4]

Ethical guidelines

Ethical guidelines governing the use of human subjects in research are a relatively recent construct. The first United States regulations protecting subjects from abuse followed the passage of the Pure Food and Drug Act in 1906; regulatory bodies such as the Food and Drug Administration (FDA) and institutional review boards (IRBs) were gradually introduced thereafter. The policies these institutions implemented served to minimize harm to participants' mental and physical well-being.

Nuremberg Code

In 1947, German physicians who conducted deadly or debilitating experiments on concentration camp prisoners were prosecuted as war criminals in the Nuremberg Trials. That same year, the Allies established the Nuremberg Code, the first international document to support the concept that "the voluntary consent of the human subject is absolutely essential". Individual consent was emphasized in the Nuremberg Code to prevent prisoners of war, patients, prisoners, and soldiers from being coerced into becoming human subjects, and to ensure that participants were informed of the risk-benefit outcomes of experiments.

Declaration of Helsinki

The Declaration of Helsinki was established in 1964 to regulate international research involving human subjects. Established by the World Medical Association, the declaration recommended guidelines for medical doctors conducting biomedical research that involves human subjects. Some of these guidelines included the principles that "research protocols should be reviewed by an independent committee prior to initiation" and that "research on human subjects should be based on results from laboratory and animal experimentation".
The Declaration of Helsinki is widely regarded as the cornerstone document on human research ethics.[5][6][7]

The Belmont Report

The Belmont Report was created by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to describe the ethical principles that govern research on human subjects. Focusing primarily on biomedical and behavioral research, the report was written to ensure that ethical standards are followed in research involving human subjects.[8] Three principles serve as the baseline of the report: beneficence, justice, and respect for persons. Beneficence means protecting the well-being of participants, respecting their decisions, and shielding them from harm; its two rules are to maximize the benefits of research and to minimize any possible risks.[9] It is the researcher's job to inform participants of the benefits as well as the risks of human subject research. Justice requires researchers to be fair in their research and to share what they have found, whether the findings are favorable or not.[9] The selection of subjects is to be fair and must not discriminate on the basis of race, sexual orientation, or ethnic group.[10] Lastly, respect for persons means that at any point a person involved in a study can decide to participate, not to participate, or to withdraw from the study altogether. Its two rules are that persons are to be treated as autonomous agents and that persons with diminished autonomy are entitled to protection.[8] The purpose of these guidelines is to ensure autonomy and to protect those who are less able to remain autonomous because of circumstances beyond their control.[8]

Clinical trials

Clinical trials are experiments done in clinical research. Such prospective biomedical or behavioral research studies on human participants are designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel vaccines, drugs, dietary choices, dietary supplements, and medical devices) and known interventions that warrant further study and comparison. Clinical trials generate data on safety and efficacy.[11] They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial; their approval does not mean that the therapy is 'safe' or effective, only that the trial may be conducted.

Depending on product type and development stage, investigators initially enroll volunteers and/or patients into small pilot studies, and subsequently conduct progressively larger scale comparative studies. Clinical trials can vary in size and cost, and they can involve a single research center or multiple centers, in one country or in multiple countries. Clinical study design aims to ensure the scientific validity and reproducibility of the results.

Trials can be quite costly, depending on a number of factors. The sponsor may be a governmental organization or a pharmaceutical, biotechnology or medical device company. Certain functions necessary to the trial, such as monitoring and lab work, may be managed by an outsourced partner, such as a contract research organization or a central laboratory.

Human subjects in psychology and sociology

Stanford prison experiment

A study conducted by Philip Zimbardo in 1971 examined the effect of social roles on college students at Stanford University. Twenty-four male students were randomly assigned the role of prisoner or guard in a mock prison set up in one of Stanford's basements. After only six days, the abusive behavior of the guards and the psychological suffering of the prisoners proved significant enough to halt the two-week experiment.[12] The study was designed to test whether conflict between prisoners and guards is inevitable, and whether such conflict stems from the possibly sadistic character of the guards (a dispositional explanation) or from the hostile environment of the prison itself (a situational explanation): prisoners might lack respect for the law, and guards might behave in a hostile manner, because of the power structure of the social environment within prisons. If prisoners and guards behaved non-aggressively, this would support the dispositional hypothesis; if they behaved as people do in real prisons, this would support the situational hypothesis. Human subjects were essential to this experiment because the results depend on distinctively human reactions and behavior. The results showed that people readily conform to the social roles they are expected to play. The prison environment played a part in making the guards' behavior more brutal, as none of the participants had shown this type of behavior beforehand, and most of the guards later had a hard time believing they had acted in such ways. This evidence points to a situational explanation: the behavior was due to the hostile environment of the prison.[13]

Milgram experiment

In 1961, Yale University psychologist Stanley Milgram led a series of experiments to determine to what extent an individual would obey instructions given by an experimenter. Placed in a room with the experimenter, subjects played the role of a "teacher" to a "learner" situated in a separate room. The subjects were instructed to administer an electric shock to the learner whenever the learner answered a question incorrectly, with the intensity of the shock increasing for every incorrect answer. The learner was a confederate (i.e., an actor), and the shocks were faked, but the subjects were led to believe otherwise. Both prerecorded sounds of electric shocks and the confederate's pleas for the punishment to stop were audible to the "teacher" throughout the experiment. When the subject raised questions or paused, the experimenter insisted that the experiment continue. Despite widespread speculation that most participants would not continue to "shock" the learner, 65 percent of participants in Milgram's initial trial complied until the end of the experiment, continuing to administer shocks to the confederate at purported intensities of up to "450 volts",[14][15] even though many questioned the experimenter and displayed various signs of discomfort; the same 65 percent obedience rate was observed when the experiment was repeated.[16]

Asch conformity experiments

Psychologist Solomon Asch's classic conformity experiment in 1951 involved one subject participant and multiple confederates, who were asked to answer a variety of low-difficulty questions.[17] In every scenario the confederates gave their answers in turn, and the subject participant was allowed to answer last. In a control group of participants, the error rate was less than one percent. However, when the confederates unanimously chose an incorrect answer, 75 percent of the subject participants agreed with the majority at least once. The study has been regarded as significant evidence for the power of social influence and conformity.[18]

Robber's Cave study

Muzafer Sherif's Robber's Cave experiment, a classic study in support of realistic conflict theory, shed light on how group competition can foster hostility and prejudice.[19] In the study, reported in 1961, two groups of ten boys each, who were not "naturally" hostile, were brought to Robber's Cave State Park, Oklahoma, without knowledge of one another.[20] The twelve-year-old boys bonded within their own groups for a week before the groups were set in competition with each other in games such as tug-of-war and football. In light of this competition, the groups resorted to name-calling and other displays of resentment, such as burning the other group's team flag. The hostility continued and worsened until the end of the three-week study, when the groups were forced to work together to solve problems.[20]

Bystander effect

The bystander effect was demonstrated in a series of famous experiments by Bibb Latané and John Darley.[20] In each of these experiments, participants were confronted with a type of emergency, such as witnessing a seizure or smoke entering through air vents. A common phenomenon was observed: as the number of witnesses or "bystanders" increases, so does the time it takes individuals to respond to the emergency. This effect is attributed to the diffusion of responsibility: when surrounded by others, the individual expects someone else to take action.[20]

Cognitive dissonance

Human subjects have been commonly used in experiments testing the theory of cognitive dissonance after the landmark study by Leon Festinger and Merrill Carlsmith.[21] In 1959, Festinger and Carlsmith devised a situation in which participants would undergo excessively tedious and monotonous tasks. After the completion of these tasks, the subjects were instructed to help the experiment continue in exchange for a variable amount of money. All the subjects had to do was simply inform the next "student" waiting outside the testing area (who was secretly a confederate) that the tasks involved in the experiment were interesting and enjoyable. It was expected that the participants wouldn't fully agree with the information they were imparting to the student, and after complying, half of the participants were awarded $1, and the others were awarded $20. A subsequent survey showed that, by a large margin, those who received less money for essentially "lying" to the student came to believe that the tasks were far more enjoyable than their highly paid counterparts.[21]

Vehicle safety

Human subject research is used across many industries, including the automotive industry. Civilian volunteers have participated in vehicle safety research to help automobile designers create more effective and durable safety restraints for vehicles. This research gives designers data on the tolerance of the human body in an automobile accident, which is used to improve vehicle safety features. Tests have ranged from sled runs evaluating head-neck injuries to airbag tests and even tests involving military vehicles and their restraint systems. Notably, across thousands of tests involving human subjects, results indicate no serious lasting injuries, largely because of the preparation efforts of the researchers to follow ethical guidelines and to ensure the safety and well-being of their subjects. Although this research has made positive contributions, there are drawbacks and resistance to human subject research for crash testing, owing to the liability for injury and the scarcity of facilities with the appropriate machinery for such experiments. Overall, the experiments have contributed to the knowledge of human injury tolerance in crash impacts, providing data that could not be obtained from testing with cadavers or crash test dummies alone. Cadavers and crash test dummies remain valuable for testing tolerances beyond human limits.[22]

Social media

The increased use of social media as a data source for researchers has led to new uncertainties regarding the definition of human subject research. Privacy, confidentiality, and informed consent are key concerns, yet it is unclear when social media users qualify as human subjects. Moreno et al. conclude that if access to the social media content is public, information is identifiable but not private, and information gathering requires no interaction with the person who posted it online, then the research is unlikely to qualify as human subjects research.[23] Defining features of human subject research, according to federal regulations, are that the researchers interact directly with the subject or obtain identifiable private information about the subject.[2] Social media research may or may not meet this definition. A research institution’s institutional review board (IRB) is often responsible for reviewing potential research on human subjects, but IRB protocols regarding social media research may be vague or outdated.[23]

Concerns regarding privacy and informed consent have surfaced regarding multiple social media studies. A research project by Harvard sociologists, known as "Tastes, Ties, and Time," utilized data from Facebook profiles of students at an “anonymous, northeastern American university” that was quickly identified as Harvard, potentially placing the privacy of the human subjects at risk.[24] The data set was removed from public access shortly after the issue was identified.[25] The issue was complicated by the fact that the research project was partially funded by the National Science Foundation, which mandates the projects it funds to engage in data sharing.[25]

A study by Facebook and researchers at Cornell University, published in the Proceedings of the National Academy of Sciences in 2014, collected data from hundreds of thousands of Facebook users after temporarily removing certain types of emotional content from their News Feed.[26] Many considered this a violation of the requirement for informed consent in human subjects research.[27][28] Because the data was collected by Facebook, a private company, in a manner that was consistent with its Data Use Policy and user terms and agreements, the Cornell IRB determined that the study did not fall under its jurisdiction.[26] It has been argued that this study nonetheless broke the law by violating state laws regarding informed consent.[28] Others have noted that speaking out against these research methods may be counterproductive, as private companies will likely continue to experiment on users but will be dis-incentivized from sharing their methods or findings with scientists or the public.[29] In an “Editorial Expression of Concern” that was added to the online version of the research paper, PNAS states that while they “deemed it appropriate to publish the paper
 It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.”[26]

Moreno et al.’s recommended considerations for social media research are: 1) determine if the study qualifies as human subject research, 2) consider the risk level of the content, 3) present research and motives accurately when engaging on social media, 4) provide contact information throughout the consent process, 5) make sure data is not identifiable or searchable (avoid direct quotes that may be identifiable with an online search), 6) consider developing project privacy policies in advance, and 7) be aware that each state has its own laws regarding informed consent.[23] Social media sites offer great potential as a data source by providing access to hard-to-reach research subjects and groups, capturing the natural, “real-world” responses of subjects, and providing affordable and efficient data collection methods.[23][30]

Unethical human experimentation

Unethical human experimentation violates the principles of medical ethics. It has been performed by countries including Nazi Germany, Imperial Japan, North Korea, the United States, and the Soviet Union. Examples include Project MKUltra, Unit 731, Totskoye nuclear exercise,[31] the experiments of Josef Mengele, and the human experimentation conducted by Chester M. Southam.
Nazi Germany performed human experimentation on large numbers of prisoners (including children), largely Jews from across Europe but also Romani, Sinti, ethnic Poles, Soviet POWs, and disabled Germans, in its concentration camps, mainly in the early 1940s during World War II and the Holocaust. Prisoners were forced to participate; they did not willingly volunteer, and no consent was given for the procedures. Typically, the experiments resulted in death, trauma, disfigurement, or permanent disability, and as such they are considered examples of medical torture. After the war, these crimes were tried at what became known as the Doctors' Trial, and the abuses perpetrated led to the development of the Nuremberg Code.[32] During the trial, 23 Nazi doctors and scientists were prosecuted for the unethical treatment of concentration camp inmates, who were often used as research subjects with fatal consequences. Of those 23, 16 were convicted: 7 were condemned to death and 9 received prison sentences from 10 years to life; the remaining 7 were acquitted.[33]
Unit 731, a department of the Imperial Japanese Army located near Harbin (then in the puppet state of Manchukuo, in northeast China), experimented on prisoners by conducting vivisections, dismemberments, and bacterial inoculations. It induced epidemics on a very large scale from 1932 onward through the Second Sino-Japanese War.[34] It also conducted biological and chemical weapons tests on prisoners and captured POWs. With the expansion of the empire during World War II, similar units were set up in conquered cities such as Nanking (Unit 1644), Beijing (Unit 1855), Guangzhou (Unit 8604) and Singapore (Unit 9420). After the war, the Supreme Commander of the Occupation, Douglas MacArthur, gave immunity in the name of the United States to Shirƍ Ishii and all members of the units in exchange for all of the results of their experiments.[34]

During World War II, Fort Detrick in Maryland was the headquarters of US biological warfare experiments. Operation Whitecoat involved injecting infectious agents into military personnel to observe their effects on human subjects.[35] Subsequent human experiments in the United States have also been characterized as unethical. They were often performed illegally, without the knowledge or informed consent of the test subjects. Public outcry over the discovery of government experiments on human subjects led to numerous congressional investigations and hearings, including the Church Committee, the Rockefeller Commission, and the Advisory Committee on Human Radiation Experiments, amongst others. The Tuskegee syphilis experiment, widely regarded as the "most infamous biomedical research study in U.S. history,"[36] was performed from 1932 to 1972 by the Tuskegee Institute under contract with the United States Public Health Service. The study followed more than 600 African-American men who were not told they had syphilis and were denied access to the known treatment of penicillin.[37] This led to the 1974 National Research Act, which provided for the protection of human subjects in experiments. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was established and tasked with defining the boundary between research and routine practice, the role of risk-benefit analysis, guidelines for participation, and the definition of informed consent. Its Belmont Report established three tenets of ethical research: respect for persons, beneficence, and justice.[38]

From the 1950s-60s, Chester M. Southam, an important virologist and cancer researcher, injected HeLa cells into cancer patients, healthy individuals, and prison inmates from the Ohio Penitentiary. He wanted to observe if cancer could be transmitted as well as if people could become immune to cancer by developing an acquired immune response. Many believe that this experiment violated the bioethical principles of informed consent, non-maleficence, and beneficence.[39]

Nuclear medicine

Nuclear medicine
  • ICD-10-PCS: C
  • ICD-9: 92
  • MeSH: D009683
  • OPS-301 code: 3-70-3-72, 8-53

Nuclear medicine is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease. Nuclear medicine, in a sense, is "radiology done inside out" or "endoradiology" because it records radiation emitted from within the body rather than radiation generated by external sources like X-rays. In addition, nuclear medicine scans differ from radiology in that the emphasis is not on imaging anatomy but on function; for this reason, it is called a physiological imaging modality. Single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans are the two most common imaging modalities in nuclear medicine.[1]

Diagnostic medical imaging

Diagnostic

In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example, intravenously or orally. Then, external detectors (gamma cameras) capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image.

There are several techniques of diagnostic nuclear medicine.
  • 2D: Scintigraphy ("scint") is the use of internal radionuclides to create two-dimensional images.[2]
  • 3D: SPECT is a 3D tomographic technique that uses gamma camera data from many projections and can be reconstructed in different planes. Positron emission tomography (PET) uses coincidence detection to image functional processes.
Nuclear medicine tests differ from most other imaging modalities in that diagnostic tests primarily show the physiological function of the system being investigated, as opposed to traditional anatomical imaging such as CT or MRI. Nuclear medicine imaging studies are generally more organ-, tissue- or disease-specific (e.g. lung scan, heart scan, bone scan, brain scan, tumor, infection, Parkinson's disease) than those in conventional radiology imaging, which focus on a particular section of the body (e.g. chest X-ray, abdomen/pelvis CT scan, head CT scan). In addition, there are nuclear medicine studies that allow imaging of the whole body based on certain cellular receptors or functions. Examples are whole-body PET or PET/CT scans, gallium scans, indium white blood cell scans, MIBG scans, and octreotide scans.


Iodine-123 whole-body scan for thyroid cancer evaluation. The study above was performed after total thyroidectomy and TSH stimulation with thyroid hormone medication withdrawal. The study shows small residual thyroid tissue in the neck and a mediastinal lesion, consistent with metastatic thyroid cancer. The observable uptake in the stomach and bladder represents normal physiologic findings.

While the ability of nuclear medicine imaging to depict disease processes through differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process.

Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene-diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as due to a fracture in the bone, will usually mean increased concentration of the tracer. This often results in the appearance of a "hot spot", which is a focal increase in radio accumulation or a general increase in radio accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes.
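The "hot spot" idea above can be sketched numerically. The following is a minimal, hypothetical illustration rather than clinical software: a synthetic planar image with Poisson-distributed background counts and one region of elevated focal uptake, where a pixel is flagged as part of a hot spot if its counts exceed the global mean by three standard deviations. All specifics (image size, count rates, the 3-sigma threshold) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic planar image: Poisson background counts plus one focal
# region of elevated tracer uptake (the "hot spot").
img = rng.poisson(10.0, size=(128, 128)).astype(float)
img[60:68, 60:68] += 80.0  # simulated focal uptake

# Flag pixels well above the global count level.
thr = img.mean() + 3.0 * img.std()
hot = img > thr
```

A "cold spot" search would simply invert the comparison, looking for regions falling well below the expected count level. Real software would work on attenuation-corrected counts and use region statistics rather than a single global threshold.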

Hybrid scanning techniques

In some centers, the nuclear medicine scans can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight the part of the body in which the radiopharmaceutical is concentrated. This practice is often referred to as image fusion or co-registration, for example SPECT/CT and PET/CT. The fusion imaging technique in nuclear medicine provides information about the anatomy and function, which would otherwise be unavailable or would require a more invasive procedure or surgery.

Practical concerns in nuclear imaging

Although the risks of low-level radiation exposure are not well understood, a cautious approach has been universally adopted: all human radiation exposure should be kept As Low As Reasonably Practicable ("ALARP"). (Originally, this was known as "As Low As Reasonably Achievable" (ALARA), but the wording has changed in modern drafts of the legislation to put more emphasis on "Reasonably" and less on "Achievable".)

Working with the ALARP principle, before a patient is exposed for a nuclear medicine examination, the benefit of the examination must be identified. This needs to take into account the particular circumstances of the patient in question, where appropriate. For instance, if a patient is unlikely to be able to tolerate a sufficient amount of the procedure to achieve a diagnosis, then it would be inappropriate to proceed with injecting the patient with the radioactive tracer.

When the benefit does justify the procedure, then the radiation exposure (the amount of radiation given to the patient) should also be kept as low as reasonably practicable. This means that the images produced in nuclear medicine should never be better than required for confident diagnosis. Giving larger radiation exposures can reduce the noise in an image and make it more photographically appealing, but if the clinical question can be answered without this level of detail, then this is inappropriate.

As a result, the radiation dose from nuclear medicine imaging varies greatly depending on the type of study. The effective radiation dose can be lower than, comparable to, or far in excess of the general day-to-day environmental annual background radiation dose. Likewise, it can be less than, in the range of, or higher than the radiation dose from an abdomen/pelvis CT scan.

Some nuclear medicine procedures require special patient preparation before the study to obtain the most accurate result. Pre-imaging preparations may include dietary preparation or the withholding of certain medications. Patients are encouraged to consult with the nuclear medicine department prior to a scan.

Analysis

The end result of the nuclear medicine imaging process is a "dataset" comprising one or more images. In multi-image datasets the array of images may represent a time sequence (i.e. cine or movie), often called a "dynamic" dataset, a cardiac-gated time sequence, or a spatial sequence where the gamma camera is moved relative to the patient. SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma camera are reconstructed to produce an image of a "slice" through the patient at a particular position. A collection of parallel slices forms a slice-stack, a three-dimensional representation of the distribution of radionuclide in the patient.
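The reconstruction step can be illustrated with a naive, unfiltered back-projection: each one-dimensional projection is smeared back across the image plane at its acquisition angle and the results are summed. The sketch below is a toy version only (the function name and data layout are illustrative, and it is restricted to 90° multiples so plain NumPy suffices); clinical SPECT reconstruction uses many angles together with filtered back-projection or iterative methods such as OSEM.

```python
import numpy as np

def back_project(sinogram, angles_deg):
    """Naive unfiltered back-projection over axis-aligned angles.

    sinogram   : (num_angles, n) array, one 1-D projection per row
    angles_deg : camera angles, restricted here to multiples of 90
    """
    n = sinogram.shape[1]
    image = np.zeros((n, n))
    for proj, angle in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))        # replicate the profile across the plane
        image += np.rot90(smear, k=angle // 90)
    return image / len(angles_deg)
```

For a point source at the center, the back-projected image peaks at the center but shows the characteristic star (here, cross) artifact that filtering is designed to suppress.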

The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine.[citation needed]

Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot.
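The Patlak plot, for instance, reduces an irreversible-uptake kinetic model to a straight-line fit: plotting tissue/plasma activity against the running integral of plasma activity divided by plasma activity yields a slope equal to the influx rate constant Ki and an intercept equal to the initial distribution volume. A minimal sketch (the function name and trapezoidal integration are illustrative choices, not from a specific package):

```python
import numpy as np

def patlak_fit(t, plasma, tissue):
    """Patlak graphical analysis: slope = influx constant Ki,
    intercept = initial distribution volume V0."""
    # Running integral of the plasma curve (trapezoidal rule)
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (plasma[1:] + plasma[:-1]) * np.diff(t))))
    x = integral / plasma           # "normalized time"
    y = tissue / plasma
    Ki, V0 = np.polyfit(x, y, 1)    # linear least-squares fit
    return Ki, V0
```

On synthetic data with a constant plasma input and linearly accumulating tissue activity the fit recovers the generating Ki and V0; on real data the linear fit is applied only after the reversible compartments have equilibrated.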

Interventional nuclear medicine

Radionuclide therapy can be used to treat conditions such as hyperthyroidism, thyroid cancer, and blood disorders.
In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g. intravenous or oral routes) rather than from an external radiation source.

The radiopharmaceuticals used in nuclear medicine therapy emit ionizing radiation that travels only a short distance, thereby minimizing unwanted side effects and damage to noninvolved organs or nearby structures. Most nuclear medicine therapies can be performed as outpatient procedures since there are few side effects from the treatment and the radiation exposure to the general public can be kept within a safe limit.

Common nuclear medicine (unsealed source) therapies

Substance: Condition treated
Iodine-131 sodium iodide: hyperthyroidism and thyroid cancer
Yttrium-90 ibritumomab tiuxetan (Zevalin) and iodine-131 tositumomab (Bexxar): refractory lymphoma
131I-MIBG (metaiodobenzylguanidine): neuroendocrine tumors
Samarium-153 or strontium-89: palliative bone pain treatment

In some centers the nuclear medicine department may also use implanted capsules of isotopes (brachytherapy) to treat cancer.

Commonly used radiation sources (radionuclides) for brachytherapy[3]

Radionuclide Type Half-life Energy
Caesium-137 (137Cs) Îł-ray 30.17 years 0.662 MeV
Cobalt-60 (60Co) Îł-ray 5.26 years 1.17, 1.33 MeV
Iridium-192 (192Ir) ÎČ-particles 73.8 days 0.38 MeV (mean)
Iodine-125 (125I) Îł-rays 59.6 days 27.4, 31.4 and 35.5 keV
Palladium-103 (103Pd) Îł-ray 17.0 days 21 keV (mean)
Ruthenium-106 (106Ru) ÎČ-particles 1.02 years 3.54 MeV

History

The history of nuclear medicine contains contributions from scientists across different disciplines in physics, chemistry, engineering, and medicine. The multidisciplinary nature of nuclear medicine makes it difficult for medical historians to determine its birthdate. This can probably best be placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides for medical use by Oak Ridge National Laboratory in 1946.[4]

The origins of this medical idea date back as far as the mid-1920s in Freiburg, Germany, when George de Hevesy performed experiments in which radionuclides were administered to rats, thereby tracing the metabolic pathways of these substances and establishing the tracer principle. Possibly, the genesis of this medical field took place in 1936, when John Lawrence, known as "the father of nuclear medicine", took a leave of absence from his faculty position at Yale Medical School to visit his brother Ernest Lawrence at his new radiation laboratory (now known as the Lawrence Berkeley National Laboratory) in Berkeley, California. Later on, John Lawrence made the first application of an artificial radionuclide in patients when he used phosphorus-32 to treat leukemia.[5][6]

Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and IrÚne Joliot-Curie in 1934 the most significant milestone in nuclear medicine.[4] In February 1934, they reported the first artificial production of radioactive material in the journal Nature, after discovering radioactivity in aluminum foil that had been irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Conrad Röntgen (X-rays), Henri Becquerel (radioactivity of uranium salts), and Marie Curie (mother of IrÚne Joliot-Curie), who discovered radioactive thorium and polonium and coined the term "radioactivity". Taro Takemi studied the application of nuclear physics to medicine in the 1930s. No history of nuclear medicine would be complete without these early pioneers.

Nuclear medicine gained public recognition as a potential specialty on December 7, 1946, when Sam Seidlin published an article in the Journal of the American Medical Association.[7] The article described the successful treatment of a patient with thyroid cancer metastases using radioiodine (I-131), and is considered by many historians the most important article ever published in nuclear medicine.[8] Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism. Among the many radionuclides discovered for medical use, none has been as important as technetium-99m. It was first discovered in 1937 by C. Perrier and E. SegrÚ as an artificial element filling space number 43 in the periodic table. The development of a generator system to produce technetium-99m in the 1960s made it practical for medical use. Today, technetium-99m is the most utilized radionuclide in nuclear medicine and is employed in a wide variety of imaging studies.

Widespread clinical use of nuclear medicine began in the early 1950s, as knowledge expanded about radionuclides, detection of radioactivity, and using certain radionuclides to trace biochemical processes. Pioneering works by Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger's scintillation camera (Anger camera) broadened the young discipline of nuclear medicine into a full-fledged medical imaging specialty.

By the early 1960s, in southern Scandinavia, Niels A. Lassen, David H. Ingvar, and Erik SkinhĂžj developed techniques that provided the first blood flow maps of the brain, initially involving xenon-133 inhalation;[9] an intra-arterial equivalent was developed soon after, enabling measurement of the local distribution of cerebral activity in patients with neuropsychiatric disorders such as schizophrenia.[10] Later versions would have 254 scintillators, so that a two-dimensional image could be produced on a color monitor, allowing the construction of images reflecting brain activation during speaking, reading, visual or auditory perception, and voluntary movement.[11] The technique was also used to investigate, e.g., imagined sequential movements, mental calculation, and mental spatial navigation.[12][13]

By the 1970s most organs of the body could be visualized using nuclear medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty.[14] In 1972, the American Board of Nuclear Medicine was established, and in 1974, the American Osteopathic Board of Nuclear Medicine followed, cementing nuclear medicine as a stand-alone medical specialty.

In the 1980s, radiopharmaceuticals were designed for use in diagnosis of heart disease. The development of single photon emission computed tomography (SPECT), around the same time, led to three-dimensional reconstruction of the heart and establishment of the field of nuclear cardiology.

More recent developments in nuclear medicine include the invention of the first positron emission tomography (PET) scanner. The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s.[citation needed] Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania. Tomographic imaging techniques were further developed at the Washington University School of Medicine. These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa of the University of California, San Francisco (UCSF), and the first PET/CT prototype by D. W. Townsend of the University of Pittsburgh in 1998.[citation needed]

PET and PET/CT imaging experienced slower growth in its early years owing to the cost of the modality and the requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical reimbursement of limited PET and PET/CT applications in oncology has led to phenomenal growth and widespread acceptance, facilitated by the establishment of 18F-labelled tracers for standard procedures, which allow sites without their own cyclotron to operate. PET/CT imaging is now an integral part of oncology for diagnosis, staging and treatment monitoring. A fully integrated MRI/PET scanner has been on the market since early 2011.[citation needed]

Source of radionuclides, with notes on a few radiopharmaceuticals

About a third of the world's supply of medical isotopes, and most of Europe's, is produced at the Petten nuclear reactor in the Netherlands. Another third, and most of North America's supply, is produced at the National Research Universal (NRU) reactor at Chalk River Laboratories in Chalk River, Ontario, Canada, which started operating in 1957. The Canadian Nuclear Safety Commission ordered the NRU shut down on November 18, 2007 for regularly scheduled maintenance and an upgrade of the safety systems to modern standards. The upgrade took longer than expected, and in December 2007 a critical shortage of medical isotopes occurred. The Canadian government passed emergency legislation allowing the reactor to restart on December 16, 2007, so that production of medical isotopes could continue. In mid-February 2009, the reactor was shut down again because of a problem with the mechanism that extracts the isotope-containing rods, and again in mid-May of the same year because of a heavy-water leak. The reactor was restarted in the first quarter of 2010. The NRU will cease routine production in the fall of 2016; however, the reactor will be available for backup production until March 2018, at which point it will be shut down.[15]

The Chalk River reactor irradiates materials with neutrons, which are produced in great quantity during the fission of U-235. These neutrons transform the nucleus of the irradiated material by neutron capture, or split it by nuclear fission. One of the fission products of uranium is molybdenum-99, which is extracted and shipped to radiopharmaceutical houses across North America. Mo-99 beta-decays with a half-life of 66 hours (about 2.7 days) into Tc-99m, which is then extracted ("milked") from the "moly cow" (see technetium-99m generator). The Tc-99m later decays inside the patient, releasing the gamma photon that the gamma camera detects, and settles into its ground state, Tc-99, which is only weakly radioactive compared with Tc-99m.
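The generator logistics are governed by the exponential decay law A(t) = A0 · 2^(−t/T½); a short sketch (function and variable names are illustrative):

```python
def activity(a0_mbq, t_hours, half_life_hours):
    """Remaining activity after exponential decay: A = A0 * 2**(-t/T_half)."""
    return a0_mbq * 2.0 ** (-t_hours / half_life_hours)

# A Mo-99/Tc-99m generator is typically eluted ("milked") about once a day;
# with a 66-hour half-life, roughly 78% of the Mo-99 parent survives each 24 h:
remaining = activity(1000.0, 24.0, 66.0)   # ~777 MBq of an initial 1000 MBq
```

This is why a generator remains useful for about a week before the Mo-99 parent has decayed too far to yield clinically useful amounts of Tc-99m.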

The most commonly used radioisotope in PET, F-18, is not produced in a nuclear reactor but in a cyclotron, a circular particle accelerator. The cyclotron accelerates protons to bombard the stable heavy isotope of oxygen, O-18, which constitutes about 0.20% of ordinary oxygen (mostly O-16) and is extracted from it. The F-18 is then typically used to make fluorodeoxyglucose (FDG).

Common isotopes used in nuclear medicine [16][17]
isotope symbol Z T1/2 decay gamma (keV) positron (keV)
Imaging:
fluorine-18 18F 9 109.77 m ÎČ+ 511 (193%) 249.8 (97%)[18]
gallium-67 67Ga 31 3.26 d ec 93 (39%), 185 (21%), 300 (17%) -
krypton-81m 81mKr 36 13.1 s IT 190 (68%) -
rubidium-82 82Rb 37 1.27 m ÎČ+ 511 (191%) 3.379 (95%)
nitrogen-13 13N 7 9.97 m ÎČ+ 511 (200%) 1190 (100%)[19]
technetium-99m 99mTc 43 6.01 h IT 140 (89%) -
indium-111 111In 49 2.80 d ec 171 (90%), 245 (94%) -
iodine-123 123I 53 13.3 h ec 159 (83%) -
xenon-133 133Xe 54 5.24 d ÎČ 81 (31%) 0.364 (99%)
thallium-201 201Tl 81 3.04 d ec 69–83* (94%), 167 (10%) -
Therapy:
yttrium-90 90Y 39 2.67 d ÎČ - 2.280 (100%)
iodine-131 131I 53 8.02 d ÎČ 364 (81%) 0.807 (100%)
Z = atomic number, the number of protons; T1/2 = half-life; decay = mode of decay
gamma = principal photon energies in kilo-electron-volts, keV (abundance/decay)
ÎČ = beta maximum energy in mega-electron-volts, MeV (abundance/decay)
ÎČ+ = positron emission; ÎČ = beta emission; IT = isomeric transition; ec = electron capture
* X-rays from progeny, mercury (Hg)
A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion combined with food, inhalation as a gas or aerosol, or, rarely, injection of a micro-encapsulated radionuclide. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or neutron activation in nuclear reactors (which tend to produce longer-lived radionuclides), from cyclotrons (which produce shorter-lived radionuclides), or from natural decay processes in dedicated generators, e.g. molybdenum/technetium or strontium/rubidium.

The most commonly used intravenous radionuclides are technetium-99m, iodine-123 and iodine-131, thallium-201, gallium-67, fluorine-18 fluorodeoxyglucose, and indium-111 labeled leukocytes.

The most commonly used gaseous/aerosol radionuclides are xenon-133, krypton-81m, technetium-99m,[20] and technetium-99m DTPA.

Radiation dose

A patient undergoing a nuclear medicine procedure will receive a radiation dose. Under present international guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation dose delivered to a patient in a nuclear medicine investigation, though unproven, is generally accepted to present a very small risk of inducing cancer. In this respect it is similar to the risk from X-ray investigations except that the dose is delivered internally rather than from an external source such as an X-ray machine, and dosage amounts are typically significantly higher than those of X-rays.

The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body.

Effective doses can range from 6 ÎŒSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 37 mSv (37,000 ÎŒSv) for a 150 MBq thallium-201 non-specific tumour imaging procedure. The common bone scan with 600 MBq of technetium-99m MDP has an effective dose of approximately 3.5 mSv (3,500 ÎŒSv).
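These figures follow the linear relationship E [mSv] = administered activity [MBq] × dose coefficient [mSv/MBq]. In the sketch below the coefficients are simply back-calculated from the numbers quoted above, purely for illustration; they are not authoritative ICRP reference values.

```python
# Illustrative dose coefficients back-calculated from the examples above
# (NOT reference values), in mSv per MBq administered.
DOSE_COEFF_MSV_PER_MBQ = {
    "Cr-51 EDTA": 0.002,    # 0.006 mSv / 3 MBq
    "Tc-99m MDP": 0.0058,   # ~3.5 mSv / 600 MBq
    "Tl-201":     0.247,    # ~37 mSv / 150 MBq
}

def effective_dose_msv(agent, activity_mbq):
    """Effective dose scales linearly with the administered activity."""
    return DOSE_COEFF_MSV_PER_MBQ[agent] * activity_mbq

bone_scan = effective_dose_msv("Tc-99m MDP", 600)   # ~3.5 mSv
```

The spread of coefficients (two orders of magnitude between Cr-51 EDTA and Tl-201) is what makes the overall dose so strongly study-dependent.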

Formerly, the units of measurement were the curie (Ci), equal to 3.7×10^10 Bq and approximately the activity of 1 gram of radium-226; the rad (radiation absorbed dose), now replaced by the gray (Gy); and the rem (Röntgen equivalent man), now replaced by the sievert (Sv). The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation produces a higher rem or sievert value, owing to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of reactor- and accelerator-produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of health physics; the development and practice of safe and effective nuclear medicine techniques is a key focus of medical physics.
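The legacy-to-SI relationships above are fixed definitions, so conversion is a matter of constant factors (a trivial sketch; the helper names are my own):

```python
# Exact conversion factors between the legacy and SI units above.
BQ_PER_CURIE = 3.7e10   # 1 Ci = 3.7 x 10^10 disintegrations per second
GY_PER_RAD   = 0.01     # 1 rad = 0.01 gray    (absorbed dose)
SV_PER_REM   = 0.01     # 1 rem = 0.01 sievert (dose equivalent)

def ci_to_bq(curies):
    return curies * BQ_PER_CURIE

def rad_to_gy(rads):
    return rads * GY_PER_RAD

def rem_to_sv(rems):
    return rems * SV_PER_REM
```

Because rad/gray and rem/sievert share the same factor of 100, the near-equivalence of rad and rem for gamma and beta radiation carries over unchanged to gray and sievert.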
