
Wednesday, July 28, 2021

Medical imaging

From Wikipedia, the free encyclopedia
 
Medical imaging
A CT scan image showing a ruptured abdominal aortic aneurysm
ICD-10-PCS: B
ICD-9: 87–88
MeSH: D003952
OPS-301 code: 3
MedlinePlus: 007451

Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging.

As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology, which uses the imaging technologies of X-ray radiography, magnetic resonance imaging, ultrasound, endoscopy, elastography, tactile imaging, thermography, medical photography, and nuclear medicine functional imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT).

Measurement and recording techniques that are not primarily designed to produce images, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and others, represent other technologies that produce data susceptible to representation as a parameter graph vs. time or maps that contain data about the measurement locations. In a limited comparison, these technologies can be considered forms of medical imaging in another discipline.

As of 2010, 5 billion medical imaging studies had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Medical imaging equipment is manufactured using technology from the semiconductor industry, including CMOS integrated circuit chips, power semiconductor devices, sensors such as image sensors (particularly CMOS sensors) and biosensors, and processors such as microcontrollers, microprocessors, digital signal processors, media processors and system-on-chip devices. As of 2015, annual shipments of medical imaging chips amounted to 46 million units and $1.1 billion.

Medical imaging is often perceived to designate the set of techniques that noninvasively produce images of the internal aspect of the body. In this restricted sense, medical imaging can be seen as the solution of mathematical inverse problems. This means that cause (the properties of living tissue) is inferred from effect (the observed signal). In the case of medical ultrasound, the probe emits ultrasonic pressure waves into the tissue, and the returning echoes reveal the internal structure. In the case of projectional radiography, the probe uses X-ray radiation, which is absorbed at different rates by different tissue types such as bone, muscle, and fat.
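The X-ray case can be made concrete with the Beer–Lambert attenuation law, I = I₀·exp(−Σ μᵢxᵢ): each tissue segment along the beam contributes its attenuation coefficient times its thickness. A minimal sketch (the coefficient values below are illustrative placeholders, not tabulated physical data):

```python
import math

def transmitted_fraction(segments):
    """Fraction of X-ray intensity transmitted along a beam path,
    per the Beer-Lambert law: I/I0 = exp(-sum(mu_i * x_i)).
    segments: list of (mu_per_cm, thickness_cm) pairs."""
    return math.exp(-sum(mu * x for mu, x in segments))

# Illustrative (not tabulated) attenuation coefficients: bone attenuates far
# more strongly than soft tissue, so it casts a darker shadow on the receptor.
soft_tissue_path = [(0.2, 10.0)]          # 10 cm of soft tissue
bone_path = [(0.2, 8.0), (0.5, 2.0)]      # 8 cm soft tissue + 2 cm bone
print(transmitted_fraction(soft_tissue_path))  # higher transmission
print(transmitted_fraction(bone_path))         # lower transmission
```

The inverse problem of imaging is then to recover the unknown μ-map of the body from many such measured transmission values.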

The term "noninvasive" is used to denote a procedure where no instrument is introduced into a patient's body, which is the case for most imaging techniques used.

Types

(a) The results of a CT scan of the head are shown as successive transverse sections. (b) An MRI machine generates a magnetic field around a patient. (c) PET scans use radiopharmaceuticals to create images of active blood flow and physiologic activity of the organ or organs being targeted. (d) Ultrasound technology is used to monitor pregnancies because it is the least invasive of imaging techniques and uses no electromagnetic radiation.

In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical imaging" and the medical practitioner responsible for interpreting (and sometimes acquiring) the images is a radiologist. "Visible light" medical imaging involves digital video or still pictures that can be seen without special equipment. Dermatology and wound care are two modalities that use visible light imagery. Diagnostic radiography designates the technical aspects of medical imaging and in particular the acquisition of medical images. The radiographer or radiologic technologist is usually responsible for acquiring medical images of diagnostic quality, although some radiological interventions are performed by radiologists.

As a field of scientific investigation, medical imaging constitutes a sub-discipline of biomedical engineering, medical physics or medicine depending on the context: Research and development in the area of instrumentation, image acquisition (e.g., radiography), modeling and quantification are usually the preserve of biomedical engineering, medical physics, and computer science; Research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques developed for medical imaging also have scientific and industrial applications.

Radiography

Two forms of radiographic images are in use in medical imaging: projection radiography and fluoroscopy, the latter being useful for catheter guidance. These 2D techniques are still in wide use despite the advance of 3D tomography because of their low cost, high resolution, and, depending on the application, lower radiation dosages. This imaging modality uses a wide beam of X-rays for image acquisition and was the first imaging technique available in modern medicine.

  • Fluoroscopy produces real-time images of internal structures of the body in a similar fashion to radiography, but employs a constant input of X-rays at a lower dose rate. Contrast media such as barium, iodine, and air are used to visualize internal organs as they work. Fluoroscopy is also used in image-guided procedures when constant feedback is required. An image receptor is required to convert the radiation into an image after it has passed through the area of interest. Early on, this was a fluorescing screen, which gave way to the image amplifier (IA), a large vacuum tube with its receiving end coated with cesium iodide and a mirror at the opposite end. Eventually the mirror was replaced with a TV camera.
  • Projectional radiographs, more commonly known as x-rays, are often used to determine the type and extent of a fracture as well as for detecting pathological changes in the lungs. With the use of radio-opaque contrast media, such as barium, they can also be used to visualize the structure of the stomach and intestines – this can help diagnose ulcers or certain types of colon cancer.

Magnetic resonance imaging

A brain MRI representation

A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to polarize and excite hydrogen nuclei (i.e., single protons) of water molecules in human tissue, producing a detectable signal which is spatially encoded, resulting in images of the body. The MRI machine emits a radio frequency (RF) pulse at the resonant frequency of the hydrogen atoms in water molecules. Radio frequency antennas ("RF coils") send the pulse to the area of the body to be examined. The RF pulse is absorbed by protons, causing their direction with respect to the primary magnetic field to change. When the RF pulse is turned off, the protons "relax" back into alignment with the primary magnet and emit radio waves in the process. This radio-frequency emission from the hydrogen atoms in water is what is detected and reconstructed into an image. The resonant frequency of a spinning magnetic dipole (of which protons are one example) is called the Larmor frequency and is determined by the strength of the main magnetic field and the chemical environment of the nuclei of interest. MRI uses three electromagnetic fields: a very strong (typically 1.5 to 3 teslas) static magnetic field to polarize the hydrogen nuclei, called the primary field; gradient fields that can be modified to vary in space and time (on the order of 1 kHz) for spatial encoding, often simply called gradients; and a spatially homogeneous radio-frequency (RF) field for manipulation of the hydrogen nuclei to produce measurable signals, collected through an RF antenna.
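The Larmor relationship described above is linear in the field strength: f = (γ/2π)·B₀, where γ/2π ≈ 42.58 MHz/T for the hydrogen-1 proton. A small sketch:

```python
GYROMAGNETIC_RATIO_H1_MHZ_PER_T = 42.577  # gamma/2pi for the hydrogen-1 proton

def larmor_frequency_mhz(b0_tesla):
    """Resonant (Larmor) frequency of hydrogen protons in a static field B0,
    ignoring the small chemical-shift correction mentioned in the text."""
    return GYROMAGNETIC_RATIO_H1_MHZ_PER_T * b0_tesla

# The typical clinical field strengths cited above:
print(larmor_frequency_mhz(1.5))  # ~63.9 MHz
print(larmor_frequency_mhz(3.0))  # ~127.7 MHz
```

This is why the RF coils of a 3 T scanner operate at roughly twice the frequency of those in a 1.5 T scanner.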

Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. Modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalization of the single-slice, tomographic, concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. For example, because MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety' in MRI) and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used.

Because CT and MRI are sensitive to different tissue properties, the appearances of the images obtained with the two techniques differ markedly. In CT, X-rays must be blocked by some form of dense tissue to create an image, so the image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue contrast achievable with MRI.

A number of different pulse sequences can be used for specific MRI diagnostic imaging (multiparametric MRI or mpMRI). It is possible to differentiate tissue characteristics by combining two or more of the following imaging sequences, depending on the information being sought: T1-weighted (T1-MRI), T2-weighted (T2-MRI), diffusion weighted imaging (DWI-MRI), dynamic contrast enhancement (DCE-MRI), and spectroscopy (MRI-S). For example, imaging of prostate tumors is better accomplished using T2-MRI and DWI-MRI than T2-weighted imaging alone. The number of applications of mpMRI for detecting disease in various organs continues to expand, including liver studies, breast tumors, pancreatic tumors, and assessing the effects of vascular disruption agents on cancer tumors.

Nuclear medicine

Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging and therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. Different from the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras and PET scanners are used in, e.g., scintigraphy, SPECT, and PET to detect regions of biologic activity that may be associated with a disease. A relatively short-lived isotope, such as 99mTc, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.

  • Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken internally, for example, intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the radiation emitted by the radiopharmaceuticals.
  • SPECT is a 3D tomographic technique that uses gamma camera data from many projections, which can be reconstructed in different planes. A dual-detector-head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT-CT camera and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly thallium-201 (201Tl), technetium-99m (99mTc), iodine-123 (123I), or gallium-67 (67Ga). Gamma rays are emitted from the body as these isotopes naturally decay, and the emissions are captured by detectors that surround the body. This essentially means that the patient, rather than the imaging device (as with X-ray or CT), is the source of the radioactivity.
  • Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as fluorine-18, is incorporated into an organic substance such as glucose, creating 18F-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, such as tumors, metastases, or infections. PET images can be viewed in comparison to computed tomography scans to determine an anatomic correlate. Modern scanners may integrate PET with CT (PET-CT) or MRI (PET-MRI) to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
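The reliance on short-lived isotopes in these techniques rests on simple exponential decay: A(t) = A₀·2^(−t/T½). A minimal sketch, using a 6-hour half-life as an approximation of technetium-99m:

```python
def remaining_activity(a0, t_hours, half_life_hours=6.0):
    """Activity after t hours of exponential decay: A = A0 * 2**(-t/T_half).
    The default ~6-hour half-life approximates technetium-99m."""
    return a0 * 2 ** (-t_hours / half_life_hours)

print(remaining_activity(100.0, 6.0))   # 50.0 after one half-life
print(remaining_activity(100.0, 24.0))  # 6.25 after four half-lives
```

The rapid decay is what keeps the patient's radiation dose low, but it also means tracers must be produced close to the time and place of the scan.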

Fiducial markers are used in a wide range of medical imaging applications. Images of the same subject produced with two different imaging systems may be correlated (called image registration) by placing a fiducial marker in the area imaged by both systems. In this case, a marker that is visible in the images produced by both imaging modalities must be used. By this method, functional information from SPECT or positron emission tomography can be related to anatomical information provided by magnetic resonance imaging (MRI). Similarly, fiducial points established during MRI can be correlated with brain images generated by magnetoencephalography to localize the source of brain activity.
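Registration from corresponding fiducial points is commonly solved as a least-squares rigid-fit problem; one standard method is the Kabsch algorithm, sketched below on synthetic marker coordinates. (The specific algorithm is an assumption for illustration; the text only states that markers visible in both modalities are correlated.)

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (rotation R, translation t) mapping
    'moving' fiducial points onto 'fixed' ones (Kabsch algorithm).
    fixed, moving: (N, 3) arrays of corresponding marker coordinates."""
    fc, mc = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

# Three markers seen in both modalities; 'moving' is rotated 90 deg about z.
fixed = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
moving = fixed @ Rz.T
R, t = rigid_register(fixed, moving)
print(np.allclose(moving @ R.T + t, fixed))  # True: the pose is recovered
```

Once R and t are estimated from the markers, the same transform is applied to the whole functional image to align it with the anatomical one.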

Ultrasound

Ultrasound representation of Urinary bladder (black butterfly-like shape) and hyperplastic prostate

Medical ultrasound uses high-frequency broadband sound waves in the megahertz range that are reflected by tissue to varying degrees to produce (up to 3D) images. This is commonly associated with imaging the fetus in pregnant women, but the uses of ultrasound are much broader. Other important uses include imaging the abdominal organs, heart, breast, muscles, tendons, arteries, and veins. While it may provide less anatomical detail than techniques such as CT or MRI, it has several advantages which make it ideal in numerous situations, in particular that it studies the function of moving structures in real time, emits no ionizing radiation, and contains speckle that can be used in elastography. Ultrasound is also a popular research tool for capturing raw data, which can be made available through an ultrasound research interface, for the purpose of tissue characterization and implementation of new image processing techniques. Ultrasound differs from other medical imaging modalities in that it operates by the transmission and receipt of sound waves. The high-frequency sound waves are sent into the tissue and, depending on the composition of the different tissues, the signal will be attenuated and returned at separate intervals. The path of reflected sound waves in a multilayered structure can be defined by the input acoustic impedance (of the ultrasound wave) and the reflection and transmission coefficients of the relative structures. Ultrasound is very safe to use and does not appear to cause any adverse effects. It is also relatively inexpensive and quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care units, avoiding the danger incurred while moving the patient to the radiology department. The real-time moving image obtained can be used to guide drainage and biopsy procedures. Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be assessed.
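The reflection coefficient mentioned above follows, at normal incidence, from the acoustic impedances of the two media: R = ((Z₂ − Z₁)/(Z₂ + Z₁))². A small sketch (the impedance values are rough illustrative approximations, not reference data):

```python
def intensity_reflection(z1, z2):
    """Fraction of incident ultrasound intensity reflected at a boundary
    between media of acoustic impedance z1 and z2 (normal incidence)."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Illustrative impedances in MRayl (approximate, for the sketch only):
SOFT_TISSUE, AIR, BONE = 1.63, 0.0004, 7.8
print(intensity_reflection(SOFT_TISSUE, BONE))  # strong echo at a bone interface
print(intensity_reflection(SOFT_TISSUE, AIR))   # near-total reflection
```

The near-total reflection at a tissue–air boundary is why coupling gel is applied between the probe and the skin: without it, almost no sound energy would enter the body.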

Elastography

3D tactile image (C) is composed from 2D pressure maps (B) recorded in the process of tissue phantom examination (A).

Elastography is a relatively new imaging modality that maps the elastic properties of soft tissue. It emerged in the last two decades. Elastography is useful in medical diagnosis, as elasticity can discern healthy from unhealthy tissue for specific organs and growths. For example, cancerous tumours will often be harder than the surrounding tissue, and diseased livers are stiffer than healthy ones. There are several elastographic techniques based on the use of ultrasound, magnetic resonance imaging, and tactile imaging. The wide clinical use of ultrasound elastography is a result of the implementation of the technology in clinical ultrasound machines. The main branches of ultrasound elastography include quasistatic elastography/strain imaging, shear wave elasticity imaging (SWEI), acoustic radiation force impulse imaging (ARFI), supersonic shear imaging (SSI), and transient elastography. Over the last decade, a steady increase in activity in the field has demonstrated successful application of the technology in various areas of medical diagnostics and treatment monitoring.

Photoacoustic imaging

Photoacoustic imaging is a recently developed hybrid biomedical imaging modality based on the photoacoustic effect. It combines the advantages of optical absorption contrast with ultrasonic spatial resolution for deep imaging in the (optical) diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, and skin melanoma detection.

Tomography

Basic principle of tomography: superposition free tomographic cross sections S1 and S2 compared with the (not tomographic) projected image P

Tomography is imaging by sections or sectioning. The main such methods in medical imaging are:

  • X-ray computed tomography (CT), or computed axial tomography (CAT) scan, is a helical tomography technique (latest generation), which traditionally produces a 2D image of the structures in a thin section of the body. In CT, a beam of X-rays spins around the object being examined and is picked up by sensitive radiation detectors after having penetrated the object from multiple angles. A computer then analyses the information received from the scanner's detectors and constructs a detailed image of the object and its contents using the mathematical principles laid out in the Radon transform. CT carries a greater ionizing radiation dose burden than projection radiography; repeated scans must be limited to avoid health effects. CT is based on the same principles as X-ray projections, but in this case the patient is enclosed in a surrounding ring of 500–1000 scintillation detectors (fourth-generation X-ray CT scanner geometry). In older-generation scanners, the X-ray beam was traced by a translating source and detector pair. Computed tomography has almost completely replaced focal plane tomography in X-ray tomography imaging.
  • Positron emission tomography (PET) is also used in conjunction with computed tomography (PET-CT) and magnetic resonance imaging (PET-MRI).
  • Magnetic resonance imaging (MRI) commonly produces tomographic images of cross-sections of the body. (See separate MRI section in this article.)
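The line-integral principle behind the Radon transform can be sketched for the axis-aligned case, where each detector reading reduces to a row or column sum of the density map (a toy illustration of projections, not a reconstruction algorithm):

```python
import numpy as np

# A tiny 2D "phantom": a dense square inside a zero background.
phantom = np.zeros((6, 6))
phantom[2:4, 2:4] = 1.0

# In CT, each detector reading is a line integral of density along the beam.
# For axis-aligned beams this reduces to summing rows or columns:
projection_0deg = phantom.sum(axis=0)   # beams travelling vertically
projection_90deg = phantom.sum(axis=1)  # beams travelling horizontally

print(projection_0deg)   # [0. 0. 2. 2. 0. 0.]
print(projection_90deg)  # [0. 0. 2. 2. 0. 0.]
```

Reconstruction methods such as filtered backprojection invert many such projections taken over a full range of angles to recover the 2D section.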

Echocardiography

When ultrasound is used to image the heart it is referred to as an echocardiogram. Echocardiography allows detailed structures of the heart, including chamber size, heart function, the valves of the heart, as well as the pericardium (the sac around the heart) to be seen. Echocardiography uses 2D, 3D, and Doppler imaging to create pictures of the heart and visualize the blood flowing through each of the four heart valves. Echocardiography is widely used in an array of patients ranging from those experiencing symptoms, such as shortness of breath or chest pain, to those undergoing cancer treatments. Transthoracic ultrasound has been proven to be safe for patients of all ages, from infants to the elderly, without risk of harmful side effects or radiation, differentiating it from other imaging modalities. Echocardiography is one of the most commonly used imaging modalities in the world due to its portability and use in a variety of applications. In emergency situations, echocardiography is quick, easily accessible, and able to be performed at the bedside, making it the modality of choice for many physicians.

Functional near-infrared spectroscopy

Functional near-infrared spectroscopy (fNIRS) is a relatively new non-invasive imaging technique. NIRS (near-infrared spectroscopy) is used for the purpose of functional neuroimaging and has been widely accepted as a brain imaging technique.

Magnetic particle imaging

Magnetic particle imaging (MPI) is a developing diagnostic imaging technique for tracking superparamagnetic iron oxide nanoparticles. Its primary advantages are high sensitivity and specificity, along with the lack of signal decrease with tissue depth. MPI has been used in medical research to image cardiovascular performance, neuroperfusion, and cell tracking.

In pregnancy

CT scanning (volume rendered in this case) confers a radiation dose to the developing fetus.

Medical imaging may be indicated in pregnancy because of pregnancy complications, a pre-existing disease or a disease acquired in pregnancy, or routine prenatal care. Magnetic resonance imaging (MRI) without MRI contrast agents as well as obstetric ultrasonography are not associated with any risk for the mother or the fetus, and are the imaging techniques of choice for pregnant women. Projectional radiography, CT scans, and nuclear medicine imaging involve some degree of ionizing radiation exposure, but with a few exceptions the absorbed doses are much lower than those associated with fetal harm. At higher dosages, effects can include miscarriage, birth defects, and intellectual disability.

Maximizing imaging procedure use

The amount of data obtained in a single MR or CT scan is very extensive. Some of the data that radiologists discard could save patients time and money, while reducing their exposure to radiation and risk of complications from invasive procedures. Another approach to making the procedures more efficient is based on utilizing additional constraints; for example, in some medical imaging modalities one can improve the efficiency of data acquisition by taking into account the fact that the reconstructed density is positive.

Creation of three-dimensional images

Volume rendering techniques have been developed to enable CT, MRI and ultrasound scanning software to produce 3D images for the physician. Traditionally CT and MRI scans produced 2D static output on film. To produce 3D images, many scans are made and then combined by computers to produce a 3D model, which can then be manipulated by the physician. 3D ultrasounds are produced using a somewhat similar technique. In diagnosing disease of the viscera of the abdomen, ultrasound is particularly sensitive in imaging of the biliary tract, urinary tract, and female reproductive organs (ovaries, fallopian tubes). For example, a gallstone can be diagnosed by dilatation of the common bile duct and detection of a stone within it. With the ability to visualize important structures in great detail, 3D visualization methods are a valuable resource for the diagnosis and surgical treatment of many pathologies. 3D visualization was a key resource for the famous, but ultimately unsuccessful, attempt by Singaporean surgeons to separate the Iranian twins Ladan and Laleh Bijani in 2003. The 3D equipment had previously been used for similar operations with great success.
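The slice-stacking idea above can be sketched with synthetic data; maximum intensity projection (MIP) is one common way to flatten the assembled volume back into a viewable 2D image (the arrays here are toy values, not scanner output):

```python
import numpy as np

# Stack of simulated 2D slices (e.g., successive CT sections) into a 3D volume.
slices = [np.full((4, 4), z, dtype=float) for z in range(5)]
volume = np.stack(slices, axis=0)   # shape (depth, height, width)

# One simple 3D visualization: maximum intensity projection (MIP) along depth,
# which keeps the brightest voxel encountered along each viewing ray.
mip = volume.max(axis=0)
print(volume.shape)  # (5, 4, 4)
print(mip[0, 0])     # 4.0: the brightest slice dominates the projection
```

Full volume rendering additionally assigns opacity and colour to voxel values and composites them along each ray, but the stack-then-project structure is the same.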

Other techniques have been proposed or developed; some are still at a research stage and not yet used in clinical routine.

Non-diagnostic imaging

Neuroimaging has also been used in experimental circumstances to allow people (especially disabled persons) to control outside devices, acting as a brain computer interface.

Many medical imaging software applications are used for non-diagnostic imaging, specifically because they lack FDA approval and are not allowed to be used in clinical research for patient diagnosis. Note that many clinical research studies are not designed for patient diagnosis anyway.

Archiving and recording

Capturing the image produced by a medical imaging device is required for archiving and telemedicine applications, primarily in ultrasound imaging. In most scenarios, a frame grabber is used to capture the video signal from the medical device and relay it to a computer for further processing and operations.

DICOM

The Digital Imaging and Communication in Medicine (DICOM) Standard is used globally to store, exchange, and transmit medical images. The DICOM Standard incorporates protocols for imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and radiation therapy.
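As a small illustration of the file format: a DICOM Part 10 file begins with a 128-byte preamble followed by the 4-byte magic value "DICM", which is how software recognizes such files. A sketch using synthetic bytes (real files would be parsed with a dedicated library):

```python
def looks_like_dicom(data: bytes) -> bool:
    """Check the DICOM Part 10 file signature: a 128-byte preamble
    followed by the 4-byte magic value b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# A synthetic header: 128 filler bytes, then the magic value, then more data.
fake_file = bytes(128) + b"DICM" + b"\x02\x00\x00\x00"
print(looks_like_dicom(fake_file))     # True
print(looks_like_dicom(b"not dicom"))  # False
```

After the signature come the file meta information group and the data set itself, encoded as tagged data elements.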

Compression of medical images

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.
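The effect of compression on an image with large uniform regions can be illustrated with a stand-in lossless codec from the Python standard library (zlib here purely to show how a compression ratio is measured; DICOM itself specifies JPEG 2000, among other transfer syntaxes):

```python
import zlib

# A synthetic 16-bit grayscale "slice" with large uniform regions, which is
# typical of medical images and compresses well even losslessly.
width, height = 256, 256
pixels = bytearray(width * height * 2)   # all-zero background
for i in range(0, 1000, 2):
    pixels[i] = 0xFF                     # a small bright region

raw = bytes(pixels)
compressed = zlib.compress(raw, level=9)  # stand-in codec, not JPEG 2000
ratio = len(raw) / len(compressed)
print(len(raw), len(compressed))
print(ratio > 10)  # True: uniform regions yield a high compression ratio
```

JPEG 2000 goes further than a byte-stream codec: its wavelet decomposition supports both lossless and lossy modes, and (with JPIP) progressive streaming of a region of interest at reduced resolution.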

Medical imaging in the cloud

There has been a growing trend to migrate from on-premises PACS to cloud-based PACS. A recent article by Applied Radiology said, "As the digital-imaging realm is embraced across the healthcare enterprise, the swift transition from terabytes to petabytes of data has put radiology on the brink of information overload. Cloud computing offers the imaging department of the future the tools to manage data much more intelligently."

Use in pharmaceutical clinical trials

Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and quantitative assessment.

A typical clinical trial goes through multiple phases and can take up to eight years. Clinical endpoints or outcomes are used to determine whether the therapy is safe and effective. Once a patient reaches the endpoint, he or she is generally excluded from further experimental interaction. Trials that rely solely on clinical endpoints are very costly as they have long durations and tend to need large numbers of patients.

In contrast to clinical endpoints, surrogate endpoints have been shown to cut down the time required to confirm whether a drug has clinical benefits. Imaging biomarkers (characteristics objectively measured by an imaging technique and used as indicators of pharmacological response to a therapy) and surrogate endpoints have been shown to facilitate the use of small group sizes, obtaining quick results with good statistical power.
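The link between better biomarkers and smaller groups can be made concrete with the standard normal-approximation sample-size formula for a two-group comparison, n ≈ 2(z₁₋α/2 + z₁₋β)²/d². This is a textbook approximation, not a method from the source; the defaults below assume 5% two-sided significance and 80% power:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-group sample size for a two-sample comparison:
    n = 2 * (z_alpha + z_beta)**2 / d**2 (normal approximation).
    effect_size d is the group difference in standard-deviation units."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A more precise (less noisy) imaging biomarker raises the standardized
# effect size d, shrinking the required group size:
print(n_per_group(0.5))  # moderate effect: 63 per group
print(n_per_group(1.0))  # large effect: 16 per group
```

Doubling the standardized effect size cuts the required sample roughly fourfold, which is the quantitative basis for the small-group claim above.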

Imaging is able to reveal subtle changes that indicate the progression of therapy and that may be missed by more subjective, traditional approaches. Statistical bias is reduced as the findings are evaluated without any direct patient contact.

Imaging techniques such as positron emission tomography (PET) and magnetic resonance imaging (MRI) are routinely used in oncology and neuroscience. For example, measurement of tumour shrinkage is a commonly used surrogate endpoint in solid tumour response evaluation. This allows for faster and more objective assessment of the effects of anticancer drugs. In Alzheimer's disease, MRI scans of the entire brain can accurately assess the rate of hippocampal atrophy, while PET scans can measure the brain's metabolic activity by measuring regional glucose metabolism, and beta-amyloid plaques using tracers such as Pittsburgh compound B (PiB). Historically, less use has been made of quantitative medical imaging in other areas of drug development, although interest is growing.

An imaging-based trial will usually be made up of three components:

  1. A realistic imaging protocol. The protocol is an outline that standardizes (as far as practically possible) the way in which the images are acquired using the various modalities (PET, SPECT, CT, MRI). It covers the specifics in which images are to be stored, processed and evaluated.
  2. An imaging centre that is responsible for collecting the images, performing quality control, and providing tools for data storage, distribution, and analysis. It is important that images acquired at different time points are displayed in a standardised format to maintain the reliability of the evaluation. Certain specialised imaging contract research organizations provide end-to-end medical imaging services, from protocol design and site management through to data quality assurance and image analysis.
  3. Clinical sites that recruit patients to generate the images to send back to the imaging centre.

Shielding

Lead is the main material used for radiographic shielding against scattered X-rays.

In magnetic resonance imaging, there is MRI RF shielding as well as magnetic shielding to prevent external disturbance of image quality.

Privacy protection

Medical imaging is generally covered by laws of medical privacy. For example, in the United States the Health Insurance Portability and Accountability Act (HIPAA) sets restrictions for health care providers on utilizing protected health information (PHI), which is any individually identifiable information relating to the past, present, or future physical or mental health of any individual. While there has not been any definitive legal decision in the matter, at least one study has indicated that medical imaging may contain biometric information that can uniquely identify a person, and so may qualify as PHI.

The UK General Medical Council's ethical guidelines indicate that the Council does not require consent prior to secondary uses of X-ray images.

Industry

Organizations in the medical imaging industry include manufacturers of imaging equipment, freestanding radiology facilities, and hospitals.

The global market for manufactured devices was estimated at $5 billion in 2018. Notable manufacturers as of 2012 included Fujifilm, GE, Siemens Healthineers, Philips, Shimadzu, Toshiba, Carestream Health, Hitachi, Hologic, and Esaote. In 2016, the manufacturing industry was characterized as oligopolistic and mature; new entrants included Samsung and Neusoft Medical.

In the United States, an estimate as of 2015 places the US market for imaging scans at about $100 billion, with 60% occurring in hospitals and 40% in freestanding clinics, such as the RadNet chain.

Copyright

United States

As per chapter 300 of the Compendium of U.S. Copyright Office Practices, "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author," including "medical imaging produced by x-rays, ultrasounds, magnetic resonance imaging, or other diagnostic equipment." This position differs from the broad copyright protections afforded to photographs. While the Copyright Compendium is an agency statutory interpretation and not legally binding, courts are likely to give deference to it if they find it reasonable. Yet there is no U.S. federal case law directly addressing the copyrightability of X-ray images.

Derivatives

In a derivative of a medical image created in the U.S., added annotations and explanations may be copyrightable, but the medical image itself remains in the public domain.

An extensive definition of the term derivative work is given by the United States Copyright Act in 17 U.S.C. § 101:

A “derivative work” is a work based upon one or more preexisting works, such as a translation... art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a “derivative work”.

17 U.S.C. § 103(b) provides:

The copyright in a compilation or derivative work extends only to the material contributed by the author of such work, as distinguished from the preexisting material employed in the work, and does not imply any exclusive right in the preexisting material. The copyright in such work is independent of, and does not affect or enlarge the scope, duration, ownership, or subsistence of, any copyright protection in the preexisting material.

Germany

In Germany, X-ray images, as well as MRI, medical ultrasound, PET and scintigraphy images, are protected by (copyright-like) related rights or neighbouring rights. This protection does not require creativity (as would be necessary for regular copyright protection) and lasts 50 years after image creation if the image is not published within that time, or 50 years after the first legitimate publication. The letter of the law grants this right to the "Lichtbildner", i.e. the person who created the image. The literature seems to uniformly consider the medical doctor, dentist or veterinary physician to be the rights holder, which may result from the circumstance that in Germany many x-rays are performed in ambulatory settings.

United Kingdom

Medical images created in the United Kingdom will normally be protected by copyright due to "the high level of skill, labour and judgement required to produce a good quality x-ray, particularly to show contrast between bones and various soft tissues". The Society of Radiographers believes this copyright is owned by the employer (unless the radiographer is self-employed, though even then their contract might require them to transfer ownership to the hospital). The copyright owner can grant certain permissions to whomever they wish without giving up ownership of the copyright. So the hospital and its employees will be given permission to use such radiographic images for the various purposes they require for medical care. Physicians employed at the hospital will, in their contracts, be given the right to publish patient information in journal papers or books they write (provided the patients are made anonymous). Patients may also be granted permission to "do what they like with" their own images.

Sweden

The Cyber Law in Sweden states: "Pictures can be protected as photographic works or as photographic pictures. The former requires a higher level of originality; the latter protects all types of photographs, also the ones taken by amateurs, or within medicine or science. The protection requires some sort of photographic technique being used, which includes digital cameras as well as holograms created by laser technique. The difference between the two types of work is the term of protection, which amounts to seventy years after the death of the author of a photographic work as opposed to fifty years, from the year in which the photographic picture was taken."

Medical imaging may possibly be included in the scope of "photography", similarly to a U.S. statement that "MRI images, CT scans, and the like are analogous to photography."

Health information technology

https://en.wikipedia.org/wiki/Health_information_technology

Health information technology (HIT) is health technology, particularly information technology, applied to health and health care. It supports health information management across computerized systems and the secure exchange of health information between consumers, providers, payers, and quality monitors. Based on an often-cited 2008 report on a small series of studies conducted at four sites that provide ambulatory care – three U.S. medical centers and one in the Netherlands – the use of electronic health records (EHRs) was viewed as the most promising tool for improving the overall quality, safety and efficiency of the health delivery system. According to a 2006 report by the Agency for Healthcare Research and Quality, in an ideal world, broad and consistent utilization of HIT would:

  • improve health care quality or effectiveness
  • increase health care productivity or efficiency
  • prevent medical errors and increase health care accuracy and procedural correctness
  • reduce health care costs
  • increase administrative efficiencies and healthcare work processes
  • decrease paperwork and unproductive or idle work time
  • extend real-time communications of health informatics among health care professionals
  • expand access to affordable care

Risk-based regulatory framework for health IT

On September 4, 2013, the Health IT Policy Committee (HITPC) accepted and approved recommendations from the Food and Drug Administration Safety and Innovation Act (FDASIA) working group for a risk-based regulatory framework for health information technology. The Food and Drug Administration (FDA), the Office of the National Coordinator for Health IT (ONC), and the Federal Communications Commission (FCC) kicked off the FDASIA workgroup of the HITPC to provide stakeholder input into a report on a risk-based regulatory framework that promotes safety and innovation and reduces regulatory duplication, consistent with section 618 of FDASIA. This provision permitted the Secretary of Health and Human Services (HHS) to form a workgroup in order to obtain broad stakeholder input from across the health care, IT, patient, and innovation spectrum. The FDA, ONC, and FCC actively participated in these discussions with those stakeholders.

HIMSS Good Informatics Practices (GIP) is aligned with the FDA's risk-based regulatory framework for health information technology. GIP development began in 2004 with risk-based IT technical guidance. Today the peer-reviewed, published GIP modules are widely used as a tool for educating health IT professionals.

Interoperable HIT will improve individual patient care, but it will also bring many public health benefits including:

  • early detection of infectious disease outbreaks around the country;
  • improved tracking of chronic disease management;
  • evaluation of health care based on value enabled by the collection of de-identified price and quality information that can be compared

According to an article published in the International Journal of Medical Informatics, health information sharing between patients and providers helps to improve diagnosis, promotes self-care, and keeps patients better informed about their own health. The use of electronic medical records (EMRs) is still scarce but is increasing in Canadian, American, and British primary care. Healthcare information in EMRs is an important source for clinical, research, and policy questions. Health information privacy (HIP) and security have been major concerns for patients and providers, and studies in Europe have evaluated the threats that electronic health information exchange poses to medical records and the exchange of personal information. Moreover, software traceability features allow hospitals to collect detailed information about the preparations dispensed, creating a database of every treatment that can be used for research purposes.

Concepts and definitions

Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, health data, and knowledge for communication and decision making". Technology is a broad concept that deals with a species' usage and knowledge of tools and crafts, and how it affects a species' ability to control and adapt to its environment. However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. For HIT, technology represents computers and communications attributes that can be networked to build systems for moving health information. Informatics is yet another integral aspect of HIT.

Informatics refers to the science of information, the practice of information processing, and the engineering of information systems. Informatics underlies the academic investigation and practitioner application of computing and communications technology to healthcare, health education, and biomedical research. Health informatics refers to the intersection of information science, computer science, and health care. Health informatics describes the use and sharing of information within the healthcare industry with contributions from computer science, mathematics, and psychology. It deals with the resources, devices, and methods required for optimizing the acquisition, storage, retrieval, and use of information in health and biomedicine. Health informatics tools include not only computers but also clinical guidelines, formal medical terminologies, and information and communication systems. Medical informatics, nursing informatics, public health informatics, pharmacy informatics, and translational bioinformatics are subdisciplines that inform health informatics from different disciplinary perspectives. The processes and people of concern or study are the main variables.

Implementation

The Institute of Medicine's (2001) call for the use of electronic prescribing systems in all healthcare organizations by 2010 heightened the urgency to accelerate United States hospitals' adoption of CPOE systems. In 2004, President Bush signed an Executive Order titled the President's Health Information Technology Plan, which established a ten-year plan to develop and implement electronic medical record systems across the US to improve the efficiency and safety of care. According to a study by RAND Health, the US healthcare system could save more than $81 billion annually, reduce adverse healthcare events and improve the quality of care if it were to widely adopt health information technology.

The American Recovery and Reinvestment Act, signed into law in 2009 under the Obama Administration, has provided approximately $19 billion in incentives for hospitals to shift from paper to electronic medical records. Meaningful Use, as a part of the 2009 Health Information Technology for Economic and Clinical Health Act (HITECH), was the incentive that included over $20 billion for the implementation of HIT alone, and provided further indication of the growing consensus regarding the potential salutary effect of HIT. The American Recovery and Reinvestment Act has set aside $2 billion which will go towards programs developed by the National Coordinator and Secretary to help healthcare providers implement HIT and provide technical assistance through various regional centers. The other $17 billion in incentives comes from Medicare and Medicaid funding for those who adopt HIT before 2015. Healthcare providers who implement electronic records can receive up to $44,000 over four years in Medicare funding and $63,750 over six years in Medicaid funding. The sooner that healthcare providers adopt the system, the more funding they receive. Those who do not adopt electronic health record systems before 2015 do not receive any federal funding.

While electronic health records have potentially many advantages in terms of providing efficient and safe care, recent reports have brought to light some challenges with implementing electronic health records. The most immediate barriers for widespread adoption of this technology have been the high initial cost of implementing the new technology and the time required for doctors to train and adapt to the new system. There have also been suspected cases of fraudulent billing, where hospitals inflate their billings to Medicare. Given that healthcare providers have not reached the deadline (2015) for adopting electronic health records, it is unclear what effects this policy will have long term.

One approach to reducing the costs and promoting wider use is to develop open standards related to EHRs. In 2014 there was widespread interest in a new HL7 draft standard, Fast Healthcare Interoperability Resources (FHIR), which is designed to be open, extensible, and easier to implement, benefiting from modern web technologies.
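FHIR represents clinical concepts as small JSON (or XML) resources exchanged over ordinary web APIs. The sketch below shows, in Python, what a minimal FHIR R4 Patient resource looks like; `resourceType`, `name`, and `birthDate` are standard FHIR elements, while the `to_fhir_patient` helper and its example values are invented here purely for illustration:

```python
import json

def to_fhir_patient(family, given, birth_date):
    """Build a minimal FHIR R4 Patient resource as a JSON string.

    Only a few of the many optional Patient elements are shown;
    a real system would add identifiers, gender, addresses, etc.
    """
    return json.dumps({
        "resourceType": "Patient",          # mandatory FHIR discriminator
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,            # ISO 8601 date
    })

# Such a payload would typically be POSTed to a server's /Patient endpoint.
print(to_fhir_patient("Doe", "Jane", "1980-01-01"))
```

Because resources are plain JSON over HTTP, FHIR clients can be built with nothing more than a standard web stack, which is a large part of why the draft attracted such broad interest.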

Types of technology

In a 2008 study about the adoption of technology in the United States, Furukawa and colleagues classified applications for prescribing to include electronic medical records (EMR), clinical decision support (CDS), and computerized physician order entry (CPOE). They further defined applications for dispensing to include bar-coding at medication dispensing (BarD), robots for medication dispensing (ROBOT), and automated dispensing machines (ADM). They defined applications for administration to include electronic medication administration records (eMAR) and bar-coding at medication administration (BarA or BCMA).

Electronic health record (EHR)

US medical groups' adoption of EHR (2005)

Although the electronic health record (EHR), previously known as the electronic medical record (EMR), is frequently cited in the literature, there is no consensus about the definition. However, there is consensus that EMRs can reduce several types of errors, including those related to prescription drugs, to preventive care, and to tests and procedures. Recurring alerts remind clinicians of intervals for preventive care and track referrals and test results. Clinical guidelines for disease management have a demonstrated benefit when accessible within the electronic record during the process of treating the patient. Advances in health informatics and widespread adoption of interoperable electronic health records promise access to a patient's records at any health care site. A 2005 report noted that medical practices in the United States are encountering barriers to adopting an EHR system, such as training, costs and complexity, but the adoption rate continues to rise (see chart to right). Since 2002, the National Health Service of the United Kingdom has placed emphasis on introducing computers into healthcare. As of 2005, one of the largest projects for a national EHR is by the National Health Service (NHS) in the United Kingdom. The goal of the NHS is to have 60,000,000 patients with a centralized electronic health record by 2010. The plan involves a gradual roll-out commencing May 2006, providing general practices in England access to the National Programme for IT (NPfIT), the NHS component of which is known as the "Connecting for Health Programme". However, recent surveys have shown physicians' deficiencies in understanding the patient safety features of the NPfIT-approved software.

A main problem in HIT adoption is perceived by physicians, important stakeholders in the EHR process. Thorn et al. found that emergency physicians felt health information exchange disrupted workflow and was less desirable to use, even though the main goal of EHR is improving coordination of care. The problem was that the exchanges did not address the needs of end users, e.g. simplicity, a user-friendly interface, and system speed. Bhattacherjee et al. reported a similar finding in an earlier article focused on CPOE and physician resistance to its use.

One opportunity for EHRs is to utilize natural language processing for searches. One systematic review of the literature found that notes and free text that would otherwise be inaccessible for review can be searched and analyzed, provided software developers and end-users of natural language processing tools within EHRs collaborate more closely.

Clinical point of care technology

Computerized provider (physician) order entry

Prescribing errors are the largest identified source of preventable errors in hospitals. A 2006 report by the Institute of Medicine estimated that a hospitalized patient is exposed to a medication error each day of his or her stay. Computerized provider order entry (CPOE), also called computerized physician order entry, can reduce total medication error rates by 80%, and adverse (serious, with harm to the patient) errors by 55%. A 2004 survey found that 16% of US clinics, hospitals and medical practices were expected to be utilizing CPOE within 2 years. In addition to electronic prescribing, a standardized bar code system for dispensing drugs could prevent a quarter of drug errors. Consumer information about the risks of the drugs and improved drug packaging (clear labels, avoiding similar drug names and dosage reminders) are other error-proofing measures. Despite ample evidence of the potential to reduce medication errors, competing systems of barcoding and electronic prescribing have slowed adoption of this technology by doctors and hospitals in the United States, due to concern with interoperability and compliance with future national standards. Such concerns are not inconsequential; standards for electronic prescribing for Medicare Part D conflict with regulations in many US states. And, aside from regulatory concerns, for the small-practice physician, utilizing CPOE requires a major change in practice work flow and an additional investment of time. Many physicians are not full-time hospital staff; entering orders for their hospitalized patients means taking time away from scheduled patients.

Technological innovations, opportunities, and challenges

One of the rapidly growing areas of health care innovation lies in the advanced use of data science and machine learning. The key opportunities here are:

  • health monitoring and diagnosis
  • medical treatment and patient care
  • pharmaceutical research and development
  • clinic performance optimization

Handwritten reports or notes, manual order entry, non-standard abbreviations and poor legibility lead to substantial errors and injuries, according to the Institute of Medicine (2000) report. The follow-up IOM (2001) report, Crossing the Quality Chasm: A New Health System for the 21st Century, advised rapid adoption of electronic patient records and electronic medication ordering, with computer- and internet-based information systems to support clinical decisions. However, many system implementations have experienced costly failures. Furthermore, there is evidence that CPOE may actually contribute to some types of adverse events and other medical errors. For example, the period immediately following CPOE implementation resulted in significant increases in reported adverse drug events in at least one study, and evidence of other errors has been reported. Collectively, these reported adverse events describe phenomena related to the disruption of the complex adaptive system resulting from poorly implemented or inadequately planned technological innovation.

Technological iatrogenesis

Technology may introduce new sources of error. Technologically induced errors are significant and increasingly more evident in care delivery systems. Terms to describe this new area of error production include the label technological iatrogenesis for the process and e-iatrogenic for the individual error. The sources for these errors include:

  • prescriber and staff inexperience may lead to a false sense of security: a belief that when technology suggests a course of action, errors are avoided.
  • shortcut or default selections can override non-standard medication regimens for elderly or underweight patients, resulting in toxic doses.
  • CPOE and automated drug dispensing were identified as a cause of error by 84% of over 500 health care facilities participating in a surveillance system by the United States Pharmacopoeia.
  • irrelevant or frequent warnings can interrupt work flow.

Healthcare information technology can also result in iatrogenesis if design and engineering are substandard, as illustrated in a 14-part detailed analysis done at the University of Sydney.

Revenue Cycle HIT

The HIMSS Revenue Cycle Improvement Task Force was formed to prepare for IT changes in the U.S. (e.g. the American Recovery and Reinvestment Act of 2009 (HITECH), the Affordable Care Act, 5010 electronic exchanges, and ICD-10). An important change to the revenue cycle is the transition of the International Classification of Diseases (ICD) codes from version 9 to version 10. ICD-9 codes use three to five alphanumeric characters and represent about 4,000 different types of procedures, while ICD-10 codes use three to seven alphanumeric characters, increasing the number of procedural codes to about 70,000. ICD-9 was outdated because there were more procedures than available codes; to document procedures lacking an ICD-9 code, unspecified codes were used, which did not fully capture the procedures or the work involved, in turn affecting reimbursement. Hence, ICD-10 was introduced to reduce reliance on unspecified codes and bring the standards closer to world standards (ICD-11). One of the main parts of revenue cycle HIT is charge capture, which uses codes to capture costs for reimbursement from different payers, such as CMS.
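The difference in code shape can be illustrated with a rough structural check. The regular expressions below are a simplified sketch based only on the lengths and first characters described above (codes written without the decimal point); they are not a substitute for the official code tables, which real billing systems must consult:

```python
import re

# Illustrative patterns only: they capture code *shape*, not validity.
# ICD-9 diagnosis codes: 3-5 characters, numeric (or E/V prefixed).
ICD9 = re.compile(r"^[0-9EV][0-9]{2,4}$")
# ICD-10-CM diagnosis codes: 3-7 characters, letter first, digit second.
ICD10 = re.compile(r"^[A-Z][0-9][A-Z0-9]{1,5}$")

def looks_like_icd9(code):
    return bool(ICD9.match(code))

def looks_like_icd10(code):
    return bool(ICD10.match(code))

print(looks_like_icd10("E119"))   # True: letter-first, ICD-10 shape
print(looks_like_icd10("2500"))   # False: digit-first, ICD-9 shape
print(looks_like_icd9("2500"))    # True
```

The extra length and the alphanumeric positions are what let ICD-10 encode laterality, encounter type, and other detail that ICD-9's shorter numeric codes could not express.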

International comparisons through HIT

International comparisons of health system performance are important for understanding health system complexities and finding opportunities for improvement, and health information technology makes such comparisons possible. It gives policy makers the chance to compare and contrast systems through established indicators, which matters because inaccurate comparisons can lead to adverse policies.

Information and communications technology

A concept map on the use of information and communication technology (ICT) in education, as per the International Federation of ICT (IFGICT)

Information and communications technology (ICT) is an extensional term for information technology (IT) that stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals) and computers, as well as the necessary enterprise software, middleware, storage and audiovisual systems, that enable users to access, store, transmit, understand and manipulate information, as per the International Federation of ICT.

The term ICT is also used to refer to the convergence of audiovisual and telephone networks with computer networks through a single cabling or link system. There are large economic incentives to merge the telephone network with the computer network system using a single unified system of cabling, signal distribution, and management. ICT is an umbrella term that includes any communication device, encompassing radio, television, cell phones, computer and network hardware, satellite systems and so on, as well as the various services and appliances with them such as video conferencing and distance learning.

ICT is a broad subject and the concepts are evolving. It covers any product that will store, retrieve, manipulate, transmit, or receive information electronically in a digital form (e.g., personal computers including smartphones, digital television, email, or robots). Theoretical differences between interpersonal-communication technologies and mass-communication technologies have been identified by the philosopher Piyush Mathur. Skills Framework for the Information Age is one of many models for describing and managing competencies for ICT professionals for the 21st century.

Etymology

The phrase "information and communication technologies" has been used by academic researchers since the 1980s. The abbreviation "ICT" became popular after it was used in a report to the UK government by Dennis Stevenson in 1997, and then in the revised National Curriculum for England, Wales and Northern Ireland in 2000. However, in 2012, the Royal Society recommended that the use of the term "ICT" should be discontinued in British schools "as it has attracted too many negative connotations". From 2014, the National Curriculum has used the word computing, which reflects the addition of computer programming into the curriculum.

Variations of the phrase have spread worldwide. The United Nations has created a "United Nations Information and Communication Technologies Task Force" and an internal "Office of Information and Communications Technology".

Monetisation

The money spent on IT worldwide has been estimated at US$3.8 trillion in 2017 and has been growing at less than 5% per year since 2009. Estimated 2018 growth for the entire ICT sector is 5%. The biggest growth, of 16%, is expected in the area of new technologies (IoT, robotics, AR/VR, and AI).

The 2014 IT budget of the US federal government was nearly $82 billion. IT costs, as a percentage of corporate revenue, have grown 50% since 2002, putting a strain on IT budgets. When looking at current companies' IT budgets, 75% are recurrent costs, used to "keep the lights on" in the IT department, and 25% are the cost of new initiatives for technology development.

The average IT budget has the following breakdown:

  • 31% personnel costs (internal)
  • 29% software costs (external/purchasing category)
  • 26% hardware costs (external/purchasing category)
  • 14% costs of external service providers (external/services).

The estimate of money to be spent in 2022 is just over US$6 trillion.

Technological capacity

The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, 295 (optimally compressed) exabytes in 2007, and some 5 zettabytes in 2014. This is the informational equivalent of 1.25 stacks of CD-ROMs from the earth to the moon in 2007, and of 4,500 stacks of printed books from the earth to the sun in 2014. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in 2000, and 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, 65 (optimally compressed) exabytes in 2007, and some 100 exabytes in 2014. The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007.
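As a rough illustration of the growth rates involved, the storage figures above (2.6 exabytes in 1986 to 295 exabytes in 2007) imply a compound annual growth rate of roughly 25%:

```python
def cagr(start, end, years):
    """Compound annual growth rate as a fraction, e.g. 0.25 for 25%/yr."""
    return (end / start) ** (1 / years) - 1

# Storage capacity figures quoted above: 2.6 EB (1986) -> 295 EB (2007).
growth = cagr(2.6, 295, 2007 - 1986)
print(f"{growth:.1%} per year")  # roughly 25% per year
```

Sustained growth of that order means capacity more than doubles every three to four years, which is why the quoted figures jump by two orders of magnitude over two decades.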

ICT sector in the OECD

The following is a list of OECD countries by share of ICT sector in total value added in 2013.

Rank Country ICT sector in %
1 South Korea 10.7
2 Japan 7.02
3 Ireland 6.99
4 Sweden 6.82
5 Hungary 6.09
6 United States 5.89
7 India 5.87
8 Czech Republic 5.74
9 Finland 5.60
10 United Kingdom 5.53
11 Estonia 5.33
12 Slovakia 4.87
13 Germany 4.84
14 Switzerland 4.63
15 Luxembourg 4.54
16 France 4.33
17 Slovenia 4.26
18 Denmark 4.06
19 Spain 4.00
20 Canada 3.86
21 Italy 3.72
22 Belgium 3.72
23 Austria 3.56
24 Portugal 3.43
25 Poland 3.33
26 Norway 3.32
27 Greece 3.31
28 Iceland 2.87
29 Mexico 2.77
 

ICT Development Index

The ICT Development Index (IDI) ranks and compares the level of ICT use and access across countries around the world. In 2014, the ITU (International Telecommunication Union) released the latest IDI rankings, with Denmark attaining the top spot, followed by South Korea. The top 30 countries in the rankings include most high-income countries where the quality of life is higher than average, including countries from Europe and other regions such as "Australia, Bahrain, Canada, Japan, Macao (China), New Zealand, Singapore, and the United States; almost all countries surveyed improved their IDI ranking this year."

The WSIS process and ICT development goals

On 21 December 2001, the United Nations General Assembly approved Resolution 56/183, endorsing the holding of the World Summit on the Information Society (WSIS) to discuss the opportunities and challenges facing today's information society. According to this resolution, the General Assembly related the Summit to the United Nations Millennium Declaration's goal of implementing ICT to achieve Millennium Development Goals. It also emphasized a multi-stakeholder approach to achieve these goals, using all stakeholders including civil society and the private sector, in addition to governments.

To help anchor and expand ICT to every habitable part of the world, "2015 is the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000."

In education

Today's society is increasingly computer-centric, a shift that includes the rapid influx of computers into the modern classroom.

There is evidence that, to be effective in education, ICT must be fully integrated into the pedagogy. Specifically, when teaching literacy and math, using ICT in combination with Writing to Learn produces better results than traditional methods alone or ICT alone. The United Nations Educational, Scientific and Cultural Organization (UNESCO), a division of the United Nations, has made integrating ICT into education part of its efforts to ensure equity and access to education. The following, taken directly from a UNESCO publication on educational ICT, explains the organization's position on the initiative.

Information and Communication Technology can contribute to universal access to education, equity in education, the delivery of quality learning and teaching, teachers' professional development and more efficient education management, governance, and administration. UNESCO takes a holistic and comprehensive approach to promote ICT in education. Access, inclusion, and quality are among the main challenges they can address. The Organization's Intersectoral Platform for ICT in education focuses on these issues through the joint work of three of its sectors: Communication & Information, Education and Science.

OLPC laptops at a school in Rwanda

Despite the power of computers to enhance and reform teaching and learning practices, improper implementation is a widespread issue beyond the reach of increased funding and technological advances, and there is little evidence that teachers and tutors are properly integrating ICT into everyday learning. Intrinsic barriers, such as a belief in more traditional teaching practices, individual attitudes towards computers in education, and teachers' own comfort with computers and ability to use them, all result in varying effectiveness in the integration of ICT in the classroom.

Mobile learning for refugees

School environments play an important role in facilitating language learning. However, language and literacy barriers are obstacles preventing refugees from accessing and attending school, especially outside camp settings.

Mobile-assisted language learning apps are key tools for language learning. Mobile solutions can provide support for refugees’ language and literacy challenges in three main areas: literacy development, foreign language learning and translations. Mobile technology is relevant because communicative practice is a key asset for refugees and immigrants as they immerse themselves in a new language and a new society. Well-designed mobile language learning activities connect refugees with mainstream cultures, helping them learn in authentic contexts.

Developing countries

Africa

Representatives meet for a policy forum on M-Learning at UNESCO's Mobile Learning Week in March 2017

ICT has been employed as an educational enhancement in Sub-Saharan Africa since the 1960s. Beginning with television and radio, it extended the reach of education from the classroom to the living room, and to geographical areas that had been beyond the reach of the traditional classroom. As the technology evolved and became more widely used, efforts in Sub-Saharan Africa were also expanded. In the 1990s a massive effort to push computer hardware and software into schools was undertaken, with the goal of familiarizing both students and teachers with computers in the classroom. Since then, multiple projects have endeavoured to continue the expansion of ICT's reach in the region, including the One Laptop Per Child (OLPC) project, which by 2015 had distributed over 2.4 million laptops to nearly 2 million students and teachers.

The inclusion of ICT in the classroom, often referred to as M-learning, has expanded the reach of educators and improved their ability to track student progress in Sub-Saharan Africa. In particular, the mobile phone has been most important in this effort. Mobile phone use is widespread, and mobile networks cover a wider area than internet networks in the region. The devices are familiar to students, teachers, and parents, and allow increased communication and access to educational materials. In addition to benefits for students, M-learning also offers the opportunity for better teacher training, which leads to a more consistent curriculum across the educational service area. In 2011, UNESCO started a yearly symposium called Mobile Learning Week with the purpose of gathering stakeholders to discuss the M-learning initiative.

Implementation is not without its challenges. While mobile phone and internet use are increasing much more rapidly in Sub-Saharan Africa than in other developing regions, progress is still slow compared to the developed world, with smartphone penetration expected to reach only 20% by 2017. Additionally, there are gender, social, and geo-political barriers to educational access, and the severity of these barriers varies greatly by country. Overall, 29.6 million children in Sub-Saharan Africa were not in school in the year 2012, owing not just to the geographical divide, but also to political instability, the importance of social origins, social structure, and gender inequality. Once in school, students also face barriers to quality education, such as teacher competency, training and preparedness, access to educational materials, and lack of information management.

Modern ICT

In modern society, ICT is ever-present, with over three billion people having access to the Internet. With approximately 8 out of 10 Internet users owning a smartphone, information and data are increasing by leaps and bounds. This rapid growth, especially in developing countries, has led ICT to become a keystone of everyday life, in which life without some facet of technology renders most clerical work and routine tasks dysfunctional.

The most recent authoritative data, released in 2014, shows "that Internet use continues to grow steadily, at 6.6% globally in 2014 (3.3% in developed countries, 8.7% in the developing world); the number of Internet users in developing countries has doubled in five years (2009-2014), with two-thirds of all people online now living in the developing world."

However, hurdles are still large. "Of the 4.3 billion people not yet using the Internet, 90% live in developing countries. In the world's 42 Least Connected Countries (LCCs), which are home to 2.5 billion people, access to ICTs remains largely out of reach, particularly for these countries' large rural populations." ICT has yet to penetrate the remote areas of some countries, and many developing countries lack any type of Internet access. This also includes the availability of telephone lines, particularly cellular coverage, and other forms of electronic transmission of data. The latest "Measuring the Information Society Report" cautioned that the growth in cellular coverage may be more apparent than real, as "many users have multiple subscriptions, with global growth figures sometimes translating into little real improvement in the level of connectivity of those at the very bottom of the pyramid; an estimated 450 million people worldwide live in places which are still out of reach of mobile cellular service."

Favourably, the gap between Internet access and mobile coverage has decreased substantially in the last fifteen years, in which "2015 [was] the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000, and the new data show ICT progress and highlight remaining gaps." ICT continues to take on new forms, with nanotechnology set to usher in a new wave of ICT electronics and gadgets. ICT's newest additions to the modern electronic world include smartwatches such as the Apple Watch, smart wristbands such as the Nike+ FuelBand, and smart TVs such as Google TV. With desktops becoming part of a bygone era and laptops the preferred method of computing, ICT continues to insinuate itself into the ever-changing globe.

Information communication technologies play a role in facilitating accelerated pluralism in new social movements today. The internet, according to Bruce Bimber, is "accelerating the process of issue group formation and action"; he coined the term accelerated pluralism to explain this new phenomenon. ICTs are tools for "enabling social movement leaders and empowering dictators", in effect promoting societal change. ICTs can be used to garner grassroots support for a cause, because the internet allows for political discourse and direct interventions in state policy, and can change the way complaints from the populace are handled by governments. Furthermore, ICTs in a household are associated with women rejecting justifications for intimate partner violence. According to a study published in 2017, this is likely because "[a]ccess to ICTs exposes women to different ways of life and different notions about women's role in society and the household, especially in culturally conservative regions where traditional gender expectations contrast observed alternatives."

Models of access to ICT

Scholar Mark Warschauer defines a "models of access" framework for analyzing ICT accessibility. In the second chapter of his book, Technology and Social Inclusion: Rethinking the Digital Divide, he describes three models of access to ICTs: devices, conduits, and literacy. Devices and conduits are the most common descriptors for access to ICTs, but they are insufficient for meaningful access without the third model, literacy. Combined, these three models roughly incorporate all twelve of the criteria of "Real Access" to ICT use, conceptualized by the non-profit organization Bridges.org in 2005:

  1. Physical access to technology
  2. Appropriateness of technology
  3. Affordability of technology and technology use
  4. Human capacity and training
  5. Locally relevant content, applications, and services
  6. Integration into daily routines
  7. Socio-cultural factors
  8. Trust in technology
  9. Local economic environment
  10. Macro-economic environment
  11. Legal and regulatory framework
  12. Political will and public support

Devices

The most straightforward model of access for ICT in Warschauer’s theory is devices. In this model, access is defined most simply as the ownership of a device such as a phone or computer. Warschauer identifies many flaws with this model, including its inability to account for additional costs of ownership such as software, access to telecommunications, knowledge gaps surrounding computer use, and the role of government regulation in some countries. Therefore, Warschauer argues that considering only devices understates the magnitude of digital inequality. For example, the Pew Research Center notes that 96% of Americans own a smartphone, although most scholars in this field would contend that comprehensive access to ICT in the United States is likely much lower than that.

Conduits

A conduit requires a connection to a supply line, which for ICT could be a telephone line or Internet line. Accessing the supply requires investment in the proper infrastructure from a commercial company or local government, and recurring payments from the user once the line is set up. For this reason, conduits usually divide people based on their geographic locations. As a Pew Research Center poll reports, rural Americans are 12% less likely to have broadband access than other Americans, thereby making them less likely to own the devices. Additionally, these costs can be prohibitive to lower-income families accessing ICTs. These difficulties have led to a shift toward mobile technology; fewer people are purchasing broadband connections and are instead relying on their smartphones for Internet access, which can be found for free at public places such as libraries. Indeed, smartphones are on the rise, with 37% of Americans using smartphones as their primary medium for internet access and 96% of Americans owning a smartphone.

Literacy

Youth and adults with ICT skills, 2017

In 1981, Sylvia Scribner and Michael Cole studied a tribe in Liberia, the Vai people, that has its own local language. Since about half of those literate in Vai had never had formal schooling, Scribner and Cole were able to test more than 1,000 subjects to compare the mental capabilities of literates and non-literates. This research, which they laid out in their book The Psychology of Literacy, allowed them to study whether the literacy divide exists at the individual level. Warschauer applied their literacy research to ICT literacy as part of his model of ICT access.

Scribner and Cole found no generalizable cognitive benefits from Vai literacy; instead, individual differences on cognitive tasks were due to other factors, like schooling or living environment. The results suggested that there is “no single construct of literacy that divides people into two cognitive camps; [...] rather, there are gradations and types of literacies, with a range of benefits closely related to the specific functions of literacy practices.” Furthermore, literacy and social development are intertwined, and the literacy divide does not exist on the individual level.

Warschauer draws on Scribner and Cole’s research to argue that ICT literacy functions similarly to literacy acquisition, as they both require resources rather than a narrow cognitive skill. Conclusions about literacy serve as the basis for a theory of the digital divide and ICT access, as detailed below:

There is not just one type of ICT access, but many types. The meaning and value of access varies in particular social contexts. Access exists in gradations rather than in a bipolar opposition. Computer and Internet use brings no automatic benefit outside of its particular functions. ICT use is a social practice, involving access to physical artifacts, content, skills, and social support. And acquisition of ICT access is a matter not only of education but also of power.

Therefore, Warschauer concludes that access to ICT cannot rest on devices or conduits alone; it must also engage physical, digital, human, and social resources. Each of these categories of resources has iterative relations with ICT use. If ICT is used well, it can promote these resources, but if it is used poorly, it can contribute to a cycle of underdevelopment and exclusion.
