Wednesday, November 14, 2018

Telemedicine

From Wikipedia, the free encyclopedia

Telemedicine is the use of telecommunication and information technology to provide clinical health care from a distance. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities. It is also used to save lives in critical care and emergency situations.

Although there were distant precursors to telemedicine, it is essentially a product of 20th century telecommunication and information technologies. These technologies permit communications between patient and medical staff with both convenience and fidelity, as well as the transmission of medical, imaging and health informatics data from one site to another.

Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and additionally with telemedical devices to support in-home care.

Disambiguation

The definition of telemedicine is somewhat controversial. Some definitions (such as the definition given by the World Health Organization) include all aspects of healthcare including preventive care. The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.

eHealth is another related term, used particularly in the U.K. and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.

Benefits and drawbacks

Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without the patient having to travel to visit them. Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place. Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care. Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.

Telemedicine also can eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely; for example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.

The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission. There is also a concern that telemedicine may actually decrease time efficiency due to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation. Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor. Other obstacles to the implementation of telemedicine include unclear legal regulation for some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.

Another disadvantage of telemedicine is the inability to begin treatment immediately. For example, a patient suffering from a bacterial infection might be given an antibiotic by hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form.

History

In the early 1900s, people living in remote areas of Australia used two-way radios, powered by a dynamo driven by a set of bicycle pedals, to communicate with the Royal Flying Doctor Service of Australia.

In 1967 one of the first telemedicine clinics was founded by Kenneth Bird at Massachusetts General Hospital. The clinic addressed the fundamental problem of delivering occupational and emergency health services to employees and travellers at Boston's Logan International Airport, located three congested miles from the hospital. Over 1,000 patients are documented as having received remote treatment from doctors at MGH using the clinic's two-way audiovisual microwave circuit. The timing of Bird's clinic more or less coincided with NASA's foray into telemedicine through the use of physiologic monitors for astronauts. Other pioneering programs in telemedicine were designed to deliver healthcare services to people in rural settings. The first interactive telemedicine system, operating over standard telephone lines, designed to remotely diagnose and treat patients requiring cardiac resuscitation (defibrillation) was developed and launched by an American company, MedPhone Corporation, in 1989. A year later under the leadership of its President/CEO S Eric Wachtel, MedPhone introduced a mobile cellular version, the MDPhone. Twelve hospitals in the U.S. served as receiving and treatment centers.

Types

Categories

Telemedicine can be broken into three main categories: store-and-forward, remote patient monitoring and (real-time) interactive services.

Store and forward

Store-and-forward telemedicine involves acquiring medical data (such as medical images or biosignals) and then transmitting this data to a doctor or medical specialist for offline assessment at a convenient time. It does not require both parties to be present at the same time. Dermatology (cf. teledermatology), radiology, and pathology are common specialties conducive to asynchronous telemedicine. A properly structured medical record, preferably in electronic form, should be a component of this transfer. A key difference between traditional in-person patient meetings and telemedicine encounters is the omission of an actual physical examination and in-person history taking. The store-and-forward process requires the clinician to rely on a history report and audio/video information in lieu of a physical examination.
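The asynchronous workflow described above can be sketched in a few lines of Python. This is a minimal illustration only, with hypothetical function and directory names: the acquiring site stores a study (history report plus a reference to the image data), and the reviewing specialist retrieves pending studies later, so neither party needs to be online at the same time.

```python
import json
import time
from pathlib import Path

# Hypothetical inbox directory where acquired studies wait for review.
INBOX = Path("telemed_inbox")
INBOX.mkdir(exist_ok=True)

def submit_study(patient_id: str, history: str, image_file: str) -> Path:
    """Acquiring site: bundle the history report with a reference to the
    image data and store it for later, offline review."""
    record = {
        "patient_id": patient_id,
        "history": history,        # stands in for the structured record
        "image_file": image_file,  # e.g. a dermatology photo or an X-ray
        "submitted_at": time.time(),
    }
    path = INBOX / f"{patient_id}_{int(record['submitted_at'])}.json"
    path.write_text(json.dumps(record))
    return path

def review_pending() -> list[dict]:
    """Reviewing specialist: fetch everything submitted so far, at a
    convenient time -- no simultaneous presence required."""
    return [json.loads(p.read_text()) for p in sorted(INBOX.glob("*.json"))]

submit_study("p001", "itchy rash, 2 weeks", "rash_photo.jpg")
pending = review_pending()
print(len(pending), pending[0]["patient_id"])
```

A real system would of course transmit the record over a secure network and attach a structured electronic medical record rather than free text, but the store-then-forward decoupling is the essential idea.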

Remote monitoring

Telehealth Blood Pressure Monitor

Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma. These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective. Examples include home-based nocturnal dialysis and improved joint management.

Real-time interactive

Electronic consultations are possible through interactive telemedicine services which provide real-time interactions between patient and provider. Videoconferencing has been used in a wide range of clinical disciplines and settings for various purposes including management, diagnosis, counselling and monitoring of patients.

Emergency

U.S. Navy medical staff being trained in the use of handheld telemedical devices (2006).

Common daily emergency telemedicine is performed by SAMU Regulator Physicians in France, Spain, Chile and Brazil. Aircraft and maritime emergencies are also handled by SAMU centres in Paris, Lisbon and Toulouse.

A recent study identified three major barriers to the adoption of telemedicine in emergency and critical care units:
  • regulatory challenges, including the difficulty and cost of obtaining licensure across multiple states, malpractice protection, and privileges at multiple facilities;
  • lack of acceptance and reimbursement by government payers and some commercial insurance carriers, a major financial barrier that places the investment burden squarely on the hospital or healthcare system;
  • cultural barriers arising from the unwillingness of some physicians to adapt clinical paradigms for telemedicine applications.

Telemedicine system, Federal Center of Neurosurgery in Tyumen, 2013

Telenursing

Telenursing refers to the use of telecommunications and information technology to provide nursing services in health care whenever a large physical distance exists between patient and nurse, or between any number of nurses. As a field it is part of telehealth, and it has many points of contact with other medical and non-medical applications, such as telediagnosis, teleconsultation, and telemonitoring.

Telenursing is growing rapidly in many countries, driven by several factors: pressure to reduce the costs of health care, a growing aging and chronically ill population, and expanding coverage of health care to distant, rural, small, or sparsely populated regions. Among its benefits, telenursing may help relieve increasing shortages of nurses, reduce distances and travel time, and keep patients out of hospital. Telenurses have also reported a greater degree of job satisfaction.

Baby Eve with Georgia for the Breastfeeding Support Project

In Australia, in January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers. The application, the Google Glass Breastfeeding app trial, allowed mothers to nurse their babies while viewing instructions about common breastfeeding issues (latching on, posture, etc.) or to call a lactation consultant via a secure Google Hangout, who could view the issue through the mother's Google Glass camera. The trial concluded successfully in Melbourne in April 2014, with 100% of participants breastfeeding confidently.

Telepharmacy

Pharmacists filling prescriptions at a computer
Pharmacy personnel deliver medical prescriptions electronically; remote delivery of pharmaceutical care is an example of telemedicine.

Telepharmacy is the delivery of pharmaceutical care via telecommunications to patients in locations where they may not have direct contact with a pharmacist. It is an instance of the wider phenomenon of telemedicine, as implemented in the field of pharmacy. Telepharmacy services include drug therapy monitoring, patient counseling, prior authorization and refill authorization for prescription drugs, and monitoring of formulary compliance with the aid of teleconferencing or videoconferencing. Remote dispensing of medications by automated packaging and labeling systems can also be thought of as an instance of telepharmacy. Telepharmacy services can be delivered at retail pharmacy sites or through hospitals, nursing homes, or other medical care facilities.

The term can also refer to the use of videoconferencing in pharmacy for other purposes, such as providing education, training, and management services to pharmacists and pharmacy staff remotely.

Teleneuropsychology

Teleneuropsychology (Cullum et al., 2014) is the use of telehealth/videoconference technology for the remote administration of neuropsychological tests. Neuropsychological tests are used to evaluate the cognitive status of individuals with known or suspected brain disorders and to provide a profile of cognitive strengths and weaknesses. A growing body of studies shows that remote videoconference-based administration of many standard neuropsychological tests yields findings similar to traditional in-person evaluations, establishing a basis for the reliability and validity of teleneuropsychological assessment.

Telerehabilitation

Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunication networks and the Internet. Most types of services fall into two categories: clinical assessment (the patient’s functional abilities in his or her environment), and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance.

Most telerehabilitation is highly visual. As of 2014, the most commonly used media are webcams, videoconferencing, phone lines, videophones and webpages containing rich Internet applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces or artificial limbs; and in speech-language pathology. Rich Internet applications for neuropsychological rehabilitation (also known as cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has expanded into a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. Currently, telerehabilitation in the practice of occupational therapy and physical therapy is limited, perhaps because these two disciplines are more "hands on".

Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice, in the future.

In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR) supports research and the development of telerehabilitation. NIDRR's grantees include the Rehabilitation Engineering and Research Center (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington, DC. Other federal funders of research are the Veterans Health Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense. Outside the United States, excellent research is conducted in Australia and Europe.

Only a few health insurers in the United States, and about half of Medicaid programs, reimburse for telerehabilitation services. If the research shows that teleassessments and teletherapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services.

Teletrauma care

Telemedicine can be utilized to improve the efficiency and effectiveness of the delivery of care in a trauma environment. Examples include:

Telemedicine for trauma triage: using telemedicine, trauma specialists can interact with personnel on the scene of a mass casualty or disaster situation, via the internet using mobile devices, to determine the severity of injuries. They can provide clinical assessments and determine whether those injured must be evacuated for necessary care. Remote trauma specialists can provide the same quality of clinical assessment and plan of care as a trauma specialist located physically with the patient.

Telemedicine for intensive care unit (ICU) rounds: Telemedicine is also being used in some trauma ICUs to reduce the spread of infections. Rounds are usually conducted at hospitals across the country by a team of approximately ten or more people, including attending physicians, fellows, residents and other clinicians. This group usually moves from bed to bed in a unit discussing each patient. This aids in the transition of care for patients from the night shift to the morning shift, but also serves as an educational experience for new residents on the team. A new approach features the team conducting rounds from a conference room using a video-conferencing system. The trauma attending, residents, fellows, nurses, nurse practitioners, and pharmacists are able to watch a live video stream from the patient's bedside. They can see the vital signs on the monitor, view the settings on the respiratory ventilator, and/or view the patient's wounds. Video-conferencing allows the remote viewers two-way communication with clinicians at the bedside.

Telemedicine for trauma education: some trauma centers are delivering trauma education lectures to hospitals and health care providers worldwide using video conferencing technology. Each lecture provides fundamental principles, firsthand knowledge and evidenced-based methods for critical analysis of established clinical practice standards, and comparisons to newer advanced alternatives. The various sites collaborate and share their perspective based on location, available staff, and available resources.

Telemedicine in the trauma operating room: trauma surgeons are able to observe and consult on cases from a remote location using video conferencing. This capability allows the attending to view the residents in real time. The remote surgeon has the capability to control the camera (pan, tilt and zoom) to get the best angle of the procedure while at the same time providing expertise in order to provide the best possible care to the patient.

Specialist care delivery

Telemedicine can facilitate specialty care delivered by primary care physicians according to a controlled study of the treatment of hepatitis C. Various specialties are contributing to telemedicine, in varying degrees.

Telecardiology

ECGs (electrocardiograms) can be transmitted over telephone and wireless links. Willem Einthoven, the inventor of the ECG, experimented with transmission of ECGs via telephone lines, because the hospital did not allow him to move patients outside the hospital to his laboratory for testing of his new device. In 1906 Einthoven devised a way to transmit the data from the hospital directly to his lab.

Remotely treating ventricular fibrillation. MedPhone Corporation, 1989

Teletransmission of ECG using methods indigenous to Asia

One of the oldest known telecardiology systems for teletransmissions of ECGs was established in Gwalior, India in 1975 at GR Medical college by Ajai Shanker, S. Makhija, P.K. Mantri using an indigenous technique for the first time in India.

This system enabled wireless transmission of ECGs from a moving ICU van or the patient's home to the central station in the ICU of the department of Medicine. Wireless transmission used frequency modulation, which eliminated noise. Transmission was also done over telephone lines: the ECG output was connected to the telephone input through a modulator, which converted the ECG into a high-frequency sound, and at the other end a demodulator reconverted the sound into an ECG with good gain accuracy. The ECG was converted to sound waves with a frequency varying from 500 Hz to 2500 Hz, with 1500 Hz at baseline.
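The frequency mapping described above can be sketched as a simple linear encode/decode pair. The 500–2500 Hz band and 1500 Hz baseline come from the text; the ±1 mV full-scale amplitude is an assumption added purely for illustration, as is the linearity of the mapping.

```python
# Encode an ECG sample as an audio tone frequency and decode it back.
# Band edges and baseline are from the text; the +/-1 mV full-scale
# range and the linear mapping are illustrative assumptions.

F_BASE, F_MIN, F_MAX = 1500.0, 500.0, 2500.0
V_FULL_SCALE_MV = 1.0   # assumed amplitude that maps to the band edges

def modulate(ecg_mv: float) -> float:
    """Map an ECG sample (millivolts) to a tone frequency (Hz)."""
    ecg_mv = max(-V_FULL_SCALE_MV, min(V_FULL_SCALE_MV, ecg_mv))  # clip
    return F_BASE + (F_MAX - F_BASE) * (ecg_mv / V_FULL_SCALE_MV)

def demodulate(freq_hz: float) -> float:
    """Inverse mapping at the receiving end: tone frequency back to mV."""
    return (freq_hz - F_BASE) / (F_MAX - F_BASE) * V_FULL_SCALE_MV

print(modulate(0.0))      # baseline -> 1500.0 Hz
print(modulate(1.0))      # full positive deflection -> 2500.0 Hz
print(demodulate(500.0))  # lowest tone -> -1.0 mV
```

The actual 1975 system measured the incoming tone's frequency from the audio signal itself, but the amplitude-to-frequency correspondence is the core of the scheme.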

This system was also used to monitor patients with pacemakers in remote areas. The central control unit at the ICU was able to correctly interpret arrhythmias. This technique helped medical aid reach remote areas.

In addition, electronic stethoscopes can be used as recording devices, which is helpful for purposes of telecardiology. There are many examples of successful telecardiology services worldwide.

In Pakistan, three pilot projects in telemedicine were initiated by the Ministry of IT & Telecom, Government of Pakistan (MoIT) through the Electronic Government Directorate, in collaboration with Oratier Technologies (a pioneering Pakistani company in healthcare and HMIS) and PakDataCom (a bandwidth provider). Three hub stations were linked via the Pak Sat-I communications satellite, and four districts were linked with another hub. A 312 kbit/s link was established with the remote sites, and 1 Mbit/s of bandwidth was provided at each hub. Three hubs were established: Mayo Hospital (the largest hospital in Asia), JPMC Karachi, and Holy Family Rawalpindi. Twelve remote sites were connected, with an average of 1,500 patients treated per month per hub. The project was still running smoothly after two years.

Telepsychiatry

Telepsychiatry, another aspect of telemedicine, also utilizes videoconferencing to give patients residing in underserved areas access to psychiatric services. It offers a wide range of services to patients and providers, such as consultation between psychiatrists, educational clinical programs, diagnosis and assessment, medication therapy management, and routine follow-up meetings. Most telepsychiatry is undertaken in real time (synchronously), although in recent years research at UC Davis has developed and validated the process of asynchronous telepsychiatry. Recent reviews of the literature by Hilty et al. in 2013 and by Yellowlees et al. in 2015 confirmed that telepsychiatry is as effective as in-person psychiatric consultations for diagnostic assessment, is at least as good for the treatment of disorders such as depression and post-traumatic stress disorder, and may be better than in-person treatment in some groups of patients, notably children, veterans and individuals with agoraphobia.

As of 2011, the following are some of the model programs and projects deploying telepsychiatry in rural areas of the United States:
  1. The University of Colorado Health Sciences Center (UCHSC) supports two programs for American Indian and Alaska Native populations:
     a. the Center for Native American Telehealth and Tele-education (CNATT), and
     b. Telemental Health Treatment for American Indian Veterans with Post-traumatic Stress Disorder (PTSD).
  2. Military Psychiatry, Walter Reed Army Medical Center.
  3. In 2009, the South Carolina Department of Mental Health established a partnership with the University of South Carolina School of Medicine and the South Carolina Hospital Association to form a statewide telepsychiatry program that provides access to psychiatrists 16 hours a day, 7 days a week, to treat patients with mental health issues who present at rural emergency departments in the network.
  4. Between 2007 and 2012, the University of Virginia Health System hosted a videoconferencing project that allowed child psychiatry fellows to conduct approximately 12,000 sessions with children and adolescents living in rural parts of the state.
There are a growing number of HIPAA compliant technologies for performing telepsychiatry. There is an independent comparison site of current technologies.

Links for several sites related to telemedicine, telepsychiatry policy, guidelines, and networking are available at the website for the American Psychiatric Association.

There has also been a trend towards video CBT sites, following the endorsement and support of CBT by the National Health Service (NHS) in the United Kingdom.

In April 2012, a Manchester-based video CBT pilot project called InstantCBT was launched to provide live video therapy sessions for those with depression, anxiety, and stress-related conditions. At launch the site supported a variety of video platforms (including Skype, GChat, Yahoo, and MSN, as well as bespoke services) and was aimed at lowering waiting times for mental health patients. It is a commercial, for-profit business.

In the United States, the American Telemedicine Association and the Center of Telehealth and eHealth are among the most reputable sources of information about telemedicine.

The Health Insurance Portability and Accountability Act (HIPAA) is a United States federal law that applies to all modes of electronic information exchange, including video-conferencing mental health services. In the United States, Skype, GChat, Yahoo, and MSN are not permitted to conduct video-conferencing services unless these companies sign a Business Associate Agreement stating that their employees are HIPAA trained. For this reason, most companies provide their own specialized videotelephony services. Violating HIPAA in the United States can result in penalties of hundreds of thousands of dollars.

The momentum of telemental health and telepsychiatry is growing. In June 2012 the U.S. Veterans Administration announced an expansion of its successful telemental health pilot, with a target of 200,000 cases in 2012.


The SATHI Telemental Health Support project is another example of successful telemental health support; see also SCARF India.

Teleradiology

A CT exam displayed through teleradiology

Teleradiology is the ability to send radiographic images (X-rays, CT, MR, PET/CT, SPECT/CT, MG, US, etc.) from one location to another. For this process to be implemented, three essential components are required: an image sending station, a transmission network, and a receiving image-review station. The most typical implementation is two computers connected via the Internet. The computer at the receiving end must have a high-quality display screen that has been tested and cleared for clinical purposes. Sometimes the receiving computer will have a printer so that images can be printed for convenience.

The teleradiology process begins at the image sending station. The radiographic image and a modem or other connection are required for this first step. The image is scanned and then sent via the network connection to the receiving computer.
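The three components named above (a sending station, a transmission network, and a receiving station) can be sketched as a toy Python program. Real teleradiology uses standards such as DICOM for both the image format and the network protocol; the length-prefixed framing over a loopback TCP connection below is purely illustrative, and all names in it are hypothetical.

```python
import socket
import struct
import threading

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the connection."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed early")
        data += chunk
    return data

def receive_image(server_sock: socket.socket) -> bytes:
    """Receiving station: read a 4-byte length header, then the image."""
    conn, _ = server_sock.accept()
    with conn:
        (length,) = struct.unpack("!I", _recv_exact(conn, 4))
        return _recv_exact(conn, length)

def send_image(host: str, port: int, image_bytes: bytes) -> None:
    """Sending station: transmit the image with a length prefix."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(image_bytes)) + image_bytes)

# The "transmission network" here is just a local TCP connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # pick any free local port
server.listen(1)
port = server.getsockname()[1]

result = {}
receiver = threading.Thread(
    target=lambda: result.update(img=receive_image(server)))
receiver.start()
send_image("127.0.0.1", port, b"\x89FAKE-XRAY-PIXELS")
receiver.join()
server.close()
print(result["img"] == b"\x89FAKE-XRAY-PIXELS")
```

A production system would add encryption, authentication, and error recovery on top of this basic send/receive pattern.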

Today's high-speed broadband Internet enables new technologies for teleradiology: the image reviewer can access distant servers in order to view an exam. Reviewers therefore do not need dedicated workstations to view the images; a standard personal computer (PC) and a digital subscriber line (DSL) connection are enough to reach a central image server. No particular software is necessary on the PC, and the images can be viewed from anywhere in the world.

Teleradiology is the most popular use for telemedicine and accounts for at least 50% of all telemedicine usage.

Telepathology

Telepathology is the practice of pathology at a distance. It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research. Performance of telepathology requires that a pathologist select the video images for analysis and render the diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields-of-view for analysis and diagnosis.

A pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In an editorial in a medical journal, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services. He and his collaborators published the first scientific paper on robotic telepathology. Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks, and is known to many as the "father of telepathology". In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989, which is still in operation decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia.

Telepathology has been successfully used for many applications, including rendering histopathology tissue diagnoses at a distance, education, and research. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries.

Teledermatology

Teledermatology allows dermatology consultations over a distance using audio, visual and data communication, and has been found to improve efficiency. Applications comprise health care management such as diagnosis, consultation and treatment as well as (continuing medical) education. The dermatologists Perednia and Brown coined the term "teledermatology" in 1995, describing in a scientific publication the value of a teledermatologic service in a rural area underserved by dermatologists.

Teledentistry

Teledentistry is the use of information technology and telecommunications for dental care, consultation, education, and public awareness in the same manner as telehealth and telemedicine.

Teleaudiology

Tele-audiology is the utilization of telehealth to provide audiological services and may include the full scope of audiological practice. This term was first used by Dr Gregg Givens in 1999 in reference to a system being developed at East Carolina University in North Carolina, USA.

Teleophthalmology

Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Today, applications of teleophthalmology encompass access to eye specialists for patients in remote areas, ophthalmic disease screening, diagnosis and monitoring, as well as distance learning. Teleophthalmology may help reduce disparities by providing remote, low-cost screening tests, such as diabetic retinopathy screening, to low-income and uninsured patients. In Mizoram, India, a hilly area with poor roads, teleophthalmology provided care to over 10,000 patients between 2011 and 2015. These patients were examined locally by ophthalmic assistants, while eye surgeons in a hospital 6–12 hours away reviewed the patient images online and scheduled surgery by appointment. Instead of an average of five trips for, say, a cataract procedure, only one trip was required, for the surgery itself, as even post-operative care such as stitch removal and fitting of glasses was done locally, yielding large savings in travel costs.

Licensure

U.S. licensing and regulatory issues

Restrictive licensure laws in the United States require a practitioner to obtain a full license to deliver telemedicine care across state lines. Typically, states with restrictive licensure laws also have several exceptions (varying from state to state) that may release an out-of-state practitioner from the additional burden of obtaining such a license. A number of states require practitioners who regularly deliver interstate care for compensation to acquire a full license.

If a practitioner serves several states, obtaining a license in each state can be an expensive and time-consuming proposition. Even if the practitioner never practices medicine face-to-face with a patient in another state, they must still meet a variety of other individual state requirements, including paying substantial licensure fees, passing additional oral and written examinations, and traveling for interviews.

In 2008, the U.S. passed the Ryan Haight Act, which requires a face-to-face or valid telemedicine consultation before a patient may receive a prescription for a controlled substance.

State medical licensing boards have sometimes opposed telemedicine; for example, in 2012 electronic consultations were illegal in Idaho, and an Idaho-licensed general practitioner was punished by the board for prescribing an antibiotic, triggering reviews of her licensure and board certifications across the country. Subsequently, in 2015 the state legislature legalized electronic consultations.

In 2015, Teladoc filed suit against the Texas Medical Board over a rule that required in-person consultations initially; the judge refused to dismiss the case, noting that antitrust laws apply to state medical boards.

Companies

In the United States, the major companies offering primary care for non-acute illnesses include Teladoc, American Well, and PlushCare. Companies such as Grand Rounds offer remote access to specialty care. Additionally, telemedicine companies are collaborating with health insurers and other telemedicine providers to expand market share and patient access to telemedicine consultations. For example, in 2015, UnitedHealthcare announced that it would cover a range of video visits from Doctor On Demand, American Well's AmWell, and its own Optum NowClinic, a white-labeled American Well offering. On November 30, 2017, PlushCare launched Pre-Exposure Prophylaxis (PrEP) therapy for the prevention of HIV in some U.S. states. In this PrEP initiative, PlushCare does not require an initial check-up and provides consistent online doctor visits, regular local laboratory testing, and prescriptions filled at partner pharmacies.

Advanced and experimental services

Telesurgery

Remote surgery (also known as telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location. It is a form of telepresence. Remote surgery combines elements of robotics, cutting edge communication technology such as high-speed data connections, haptics and elements of management information systems. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery.

Remote surgery is essentially advanced telecommuting for surgeons, where the physical distance between the surgeon and the patient is immaterial. It promises to allow the expertise of specialized surgeons to be available to patients worldwide, without the need for patients to travel beyond their local hospital.

Because telesurgery depends on a robotic teleoperator system controlled by the surgeon over a network, a critical limiting factor is the speed, latency and reliability of the communication link between the surgeon and the patient, though trans-Atlantic surgeries have been demonstrated. The remote system may also provide tactile feedback to the operator.
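
As a rough illustration of why link latency matters, the propagation delay of a long-distance connection can be estimated from the cable length alone. The route length and processing overhead below are assumed figures for illustration, not measurements from any real telesurgery system:

```python
# Back-of-envelope latency budget for a long-distance telesurgery link.
# All figures are illustrative assumptions.

def round_trip_latency_ms(distance_km, processing_ms=0.0):
    """Two-way propagation delay over optical fibre plus fixed overhead."""
    light_in_fibre_km_per_ms = 200.0  # light travels ~2/3 of c in fibre
    one_way = distance_km / light_in_fibre_km_per_ms
    return 2 * one_way + processing_ms

# Assume a ~7,000 km trans-Atlantic fibre route and 30 ms of
# codec/robot-control overhead:
rtt = round_trip_latency_ms(7000, processing_ms=30.0)  # 100.0 ms
```

Even before any network congestion, physics alone puts a trans-Atlantic round trip in the tens of milliseconds, which is why latency and reliability dominate the feasibility of remote surgery.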

Enabling technologies

Videotelephony

Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real-time.

At the dawn of the technology, videotelephony also included image phones which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow scan TV systems.

Currently, videotelephony is particularly useful to deaf and speech-impaired people, who can use it with sign language and with a video relay service, as well as to those with mobility issues or those located in distant places who need telemedical or tele-educational services.

Developing countries

For developing countries, telemedicine and eHealth can be the only means of healthcare provision in remote areas. For example, the difficult financial situation in many African states and the lack of trained health professionals have meant that the majority of the people in sub-Saharan Africa are badly disadvantaged in medical care, and in remote areas with low population density, direct healthcare provision is often very poor. However, provision of telemedicine and eHealth from urban centres or from other countries is hampered by the lack of communications infrastructure, with no landline phone or broadband internet connection, little or no mobile connectivity, and often not even a reliable electricity supply.

The Satellite African eHEalth vaLidation (SAHEL) demonstration project has shown how satellite broadband technology can be used to establish telemedicine in such areas. SAHEL was started in 2010 in Kenya and Senegal, providing self-contained, solar-powered internet terminals to rural villages for use by community nurses for collaboration with distant health centres for training, diagnosis and advice on local health issues.

In 2014, the government of Luxembourg, along with satellite operator, SES and NGOs, Archemed, Fondation Follereau, Friendship Luxembourg, German Doctors and Médecins Sans Frontières, established SATMED, a multilayer eHealth platform to improve public health in remote areas of emerging and developing countries, using the Emergency.lu disaster relief satellite platform and the Astra 2G TV satellite. SATMED was first deployed in response to a report in 2014 by German Doctors of poor communications in Sierra Leone hampering the fight against Ebola, and SATMED equipment arrived in the Serabu clinic in Sierra Leone in December 2014. In June 2015 SATMED was deployed at Maternité Hospital in Ahozonnoude, Benin to provide remote consultation and monitoring, and is the only effective communication link between Ahozonnoude, the capital and a third hospital in Allada, since land routes are often inaccessible due to flooding during the rainy season.

A Simple Explanation Of 'The Internet Of Things'




The "Internet of things" (IoT) is an increasingly popular topic of conversation both in the workplace and outside of it. It's a concept that has the potential to impact not only how we live but also how we work. But what exactly is the "Internet of things" and what impact is it going to have on you, if any? There are a lot of complexities around the "Internet of things" but I want to stick to the basics. Lots of technical and policy-related conversations are being had, but many people are still just trying to grasp the foundation of what the heck these conversations are about.

Let's start with understanding a few things.

Broadband Internet is becoming more widely available, the cost of connecting is decreasing, more devices are being created with Wi-Fi capabilities and sensors built into them, technology costs are going down, and smartphone penetration is sky-rocketing. All of these things are creating a "perfect storm" for the IoT.


So What Is The Internet Of Things?

Simply put, the IoT is the concept of connecting any device with an on and off switch to the Internet (and/or to each other). This includes everything from cellphones, coffee makers, washing machines, headphones, lamps, wearable devices and almost anything else you can think of. This also applies to components of machines, for example a jet engine of an airplane or the drill of an oil rig. As I mentioned, if it has an on and off switch then chances are it can be a part of the IoT. The analyst firm Gartner says that by 2020 there will be over 26 billion connected devices... That's a lot of connections (some even estimate this number to be much higher, over 100 billion). The IoT is a giant network of connected "things" (which also includes people). The relationship will be between people-people, people-things, and things-things.

How Does This Impact You?

The new rule for the future is going to be, "Anything that can be connected, will be connected." But why on earth would you want so many connected devices talking to each other? There are many examples for what this might look like or what the potential value might be. Say for example you are on your way to a meeting; your car could have access to your calendar and already know the best route to take. If the traffic is heavy your car might send a text to the other party notifying them that you will be late. What if your alarm clock wakes you up at 6 a.m. and then notifies your coffee maker to start brewing coffee for you? What if your office equipment knew when it was running low on supplies and automatically re-ordered more? What if the wearable device you used in the workplace could tell you when and where you were most active and productive and shared that information with other devices that you used while working?

On a broader scale, the IoT can be applied to things like transportation networks: "smart cities" can help us reduce waste and improve efficiency for things such as energy use, helping us understand and improve how we work and live.


The reality is that the IoT allows for virtually endless opportunities and connections to take place, many of which we can't even think of or fully understand the impact of today. It's not hard to see how and why the IoT is such a hot topic today; it certainly opens the door to a lot of opportunities but also to many challenges. Security is a big issue that is oftentimes brought up. With billions of devices being connected together, what can people do to make sure that their information stays secure? Will someone be able to hack into your toaster and thereby get access to your entire network? The IoT also opens up companies all over the world to more security threats. Then we have the issue of privacy and data sharing. This is a hot-button topic even today, so one can only imagine how the conversation and concerns will escalate when we are talking about many billions of devices being connected. Another issue that many companies specifically are going to be faced with is around the massive amounts of data that all of these devices are going to produce. Companies need to figure out a way to store, track, analyze and make sense of the vast amounts of data that will be generated.

So What Now?

Conversations about the IoT are (and have been for several years) taking place all over the world as we seek to understand how this will impact our lives. We are also trying to understand what the many opportunities and challenges are going to be as more and more devices start to join the IoT. For now the best thing that we can do is educate ourselves about what the IoT is and the potential impacts that can be seen on how we work and live.

Jacob Morgan is a keynote speaker, author (most recently of The Future of Work), and futurist.

Wireless sensor network

From Wikipedia, the free encyclopedia

Typical multi-hop wireless sensor network architecture

Wireless sensor network (WSN) refers to a group of spatially dispersed and dedicated sensors for monitoring and recording the physical conditions of the environment and organizing the collected data at a central location. WSNs measure environmental conditions like temperature, sound, pollution levels, humidity, wind, and so on.

These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and spontaneous formation of networks so that sensor data can be transported wirelessly. The sensors are spatially distributed and autonomous, monitoring physical or environmental conditions such as temperature, sound and pressure, and cooperatively passing their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.

The WSN is built of "nodes" – from a few to several hundreds or even thousands, where each node is connected to one (or sometimes several) sensors. Each such sensor network node has typically several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from that of a shoebox down to the size of a grain of dust, although functioning "motes" of genuine microscopic dimensions have yet to be created. The cost of sensor nodes is similarly variable, ranging from a few to hundreds of dollars, depending on the complexity of the individual sensor nodes. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed and communications bandwidth. The topology of the WSNs can vary from a simple star network to an advanced multi-hop wireless mesh network. The propagation technique between the hops of the network can be routing or flooding.
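
The two propagation techniques mentioned above differ in cost: routing requires route state at each node, while flooding needs none but makes every node rebroadcast each packet once. A minimal sketch of flooding over a multi-hop topology (the topology below is invented for illustration):

```python
from collections import deque

def flood(adjacency, source):
    """Simulate simple flooding: each node rebroadcasts a packet the first
    time it sees it. Returns (set of nodes reached, total broadcasts)."""
    seen = {source}
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        transmissions += 1          # this node broadcasts once to all neighbours
        for neighbour in adjacency[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen, transmissions

# A small multi-hop topology: a star around node A plus a two-hop chain.
topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
reached, sent = flood(topology, "A")  # reaches all 4 nodes with 4 broadcasts
```

In a dense network the number of broadcasts grows with every node that hears the packet, which is why flooding is simple but energy-hungry compared with routing.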

In computer science and telecommunications, wireless sensor networks are an active research area with numerous workshops and conferences arranged each year, for example IPSN, SenSys, and EWSN.

Application

Area monitoring

Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines.

Health care monitoring

Sensor networks for medical applications can be of several types: implanted, wearable, and environment-embedded. Implantable medical devices are inserted inside the human body. Wearable devices are used on the body surface or in close proximity to the user. Environment-embedded systems employ sensors contained in the environment. Possible applications include body position measurement, location of persons, and overall monitoring of ill patients in hospitals and at home. Devices embedded in the environment track the physical state of a person for continuous health diagnosis, using as input the data from a network of depth cameras, a sensing floor, or similar devices. Body-area networks can collect information about an individual's health, fitness, and energy expenditure. In health care applications the privacy and authenticity of user data are of prime importance, and the integration of sensor networks with the IoT makes user authentication more challenging; recent work has proposed solutions for this setting.

Environmental/Earth sensing

There are many applications in monitoring environmental parameters, examples of which are given below. They share the extra challenges of harsh environments and reduced power supply.

Air pollution monitoring

Wireless sensor networks have been deployed in several cities (Stockholm, London, and Brisbane) to monitor the concentration of dangerous gases for citizens. These can take advantage of the ad hoc wireless links rather than wired installations, which also make them more mobile for testing readings in different areas.

Forest fire detection

A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes can be equipped with sensors to measure the temperature, humidity and gases produced by fire in the trees or vegetation. Early detection is crucial for successful action by the firefighters; thanks to wireless sensor networks, the fire brigade can know when a fire has started and how it is spreading.

Landslide detection

A landslide detection system makes use of a wireless sensor network to detect the slight movements of soil and changes in various parameters that may occur before or during a landslide. Through the data gathered, it may be possible to detect an impending landslide long before it actually occurs.

Water quality monitoring

Water quality monitoring involves analyzing water properties in dams, rivers, lakes and oceans, as well as underground water reserves. The use of many wireless distributed sensors enables the creation of a more accurate map of the water status, and allows the permanent deployment of monitoring stations in locations of difficult access, without the need of manual data retrieval.

Natural disaster prevention

Wireless sensor networks can effectively act to prevent the consequences of natural disasters, like floods. Wireless nodes have successfully been deployed in rivers where changes of the water levels have to be monitored in real time.

Industrial monitoring

Machine health monitoring

Wireless sensor networks have been developed for machinery condition-based maintenance (CBM) as they offer significant cost savings and enable new functionality.

Wireless sensors can be placed in locations difficult or impossible to reach with a wired system, such as rotating machinery and untethered vehicles.

Data center monitoring

Due to the high density of server racks in a data center, cabling and IP addresses are often an issue. To overcome that problem, more and more racks are fitted with wireless temperature sensors to monitor the intake and exhaust temperatures of racks. As ASHRAE recommends up to six temperature sensors per rack, meshed wireless temperature technology has an advantage over traditional cabled sensors.

Data logging

Wireless sensor networks are also used to collect data for monitoring environmental information; this can range from something as simple as monitoring the temperature in a fridge to monitoring the water level in overflow tanks in nuclear power plants. The statistical information can then be used to show how systems have been working. The advantage of WSNs over conventional loggers is the "live" data feed that is possible.

Water/waste water monitoring

Monitoring the quality and level of water includes many activities, such as checking the quality of underground or surface water and monitoring a country's water infrastructure for the benefit of both humans and animals. It may also be used to prevent the wastage of water.

Structural health monitoring

Wireless sensor networks can be used to monitor the condition of civil infrastructure and related geo-physical processes close to real time, and over long periods through data logging, using appropriately interfaced sensors.

Wine production

Wireless sensor networks are used to monitor wine production, both in the field and the cellar.

Characteristics

The main characteristics of a WSN include
  • Power consumption constraints for nodes using batteries or energy harvesting. Examples of suppliers are ReVibe Energy and Perpetuum
  • Ability to cope with node failures (resilience)
  • Some mobility of nodes (for highly mobile nodes see MWSNs)
  • Heterogeneity of nodes
  • Homogeneity of nodes
  • Scalability to large scale of deployment
  • Ability to withstand harsh environmental conditions
  • Ease of use
  • Cross-layer design
Cross-layer design is becoming an important area of study for wireless communications. The traditional layered approach presents three main problems:
  1. Different layers cannot share information, so each layer has incomplete information and optimization of the entire network cannot be guaranteed.
  2. The layered approach cannot adapt to environmental change.
  3. Because of interference between different users, access conflicts, fading, and changing conditions in wireless sensor networks, a layered approach designed for wired networks is not applicable to wireless networks.
Cross-layer design can therefore be used to select the optimal modulation and improve transmission performance in terms of data rate, energy efficiency, QoS (Quality of Service), and so on.

Sensor nodes can be imagined as small computers which are extremely basic in terms of their interfaces and their components. They usually consist of a processing unit with limited computational power and limited memory, sensors or MEMS (including specific conditioning circuitry), a communication device (usually a radio transceiver or, alternatively, an optical one), and a power source, usually in the form of a battery. Other possible inclusions are energy harvesting modules, secondary ASICs, and possibly a secondary communication interface (e.g. RS-232 or USB).

The base stations are one or more components of the WSN with much more computational, energy and communication resources. They act as a gateway between sensor nodes and the end user as they typically forward data from the WSN on to a server. Other special components in routing based networks are routers, designed to compute, calculate and distribute the routing tables.

Platforms

Hardware

One major challenge in a WSN is to produce low cost and tiny sensor nodes. There are an increasing number of small companies producing WSN hardware and the commercial situation can be compared to home computing in the 1970s. Many of the nodes are still in the research and development stage, particularly their software. Also inherent to sensor network adoption is the use of very low power methods for radio communication and data acquisition.

In many applications, a WSN communicates with a Local Area Network or Wide Area Network through a gateway. The gateway acts as a bridge between the WSN and the other network. This enables data to be stored and processed by devices with more resources, for example, in a remotely located server. A wireless wide area network used primarily for low-power devices is known as a Low-Power Wide-Area Network (LPWAN).

Wireless

There are several wireless standards and solutions for sensor node connectivity. Thread and ZigBee can connect sensors operating at 2.4 GHz with a data rate of 250kbit/s. Many use a lower frequency to increase radio range (typically 1 km), for example Z-wave operates at 915 MHz and in the EU 868 MHz has been widely used but these have a lower data rate (typically 50 kb/s). The IEEE 802.15.4 working group provides a standard for low power device connectivity and commonly sensors and smart meters use one of these standards for connectivity. With the emergence of Internet of Things, many other proposals have been made to provide sensor connectivity. LORA is a form of LPWAN which provides long range low power wireless connectivity for devices, which has been used in smart meters. Wi-SUN connects devices at home. NarrowBand IOT and LTE-M can connect up to millions of sensors and devices using cellular technology.
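
To get a rough feel for what these data rates mean in practice, the idealised time on air for a single reading can be computed. The payload size below is an assumption, and framing, retries and regulatory duty-cycle limits are ignored:

```python
def airtime_ms(payload_bytes, data_rate_bit_s):
    """Idealised time on air for a payload, ignoring framing and retries."""
    return payload_bytes * 8 / data_rate_bit_s * 1000

# Assume a 128-byte reading:
zigbee_ms = airtime_ms(128, 250_000)  # 2.4 GHz at 250 kbit/s -> ~4.1 ms
subghz_ms = airtime_ms(128, 50_000)   # sub-GHz at 50 kbit/s  -> ~20.5 ms
```

The lower-frequency bands buy range at the cost of roughly five times the transmit time per packet, which in turn costs energy on a battery-powered node.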

Software

Energy is the scarcest resource of WSN nodes, and it determines the lifetime of WSNs. WSNs may be deployed in large numbers in various environments, including remote and hostile regions, where ad hoc communications are a key component. For this reason, algorithms and protocols need to address the following issues:
  • Increased lifespan
  • Robustness and fault tolerance
  • Self-configuration
Lifetime maximization: Energy/Power Consumption of the sensing device should be minimized and sensor nodes should be energy efficient since their limited energy resource determines their lifetime. To conserve power, wireless sensor nodes normally power off both the radio transmitter and the radio receiver when not in use.
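
The payoff of powering the radio off can be sketched with a back-of-envelope lifetime estimate. The battery capacity and current draws below are assumed, typical-order figures rather than data for any particular mote:

```python
def lifetime_days(battery_mah, active_ma, sleep_ma, duty_cycle):
    """Estimated node lifetime for a given radio duty cycle.
    duty_cycle is the fraction of time the radio is on (0..1)."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma / 24  # mAh / mA = hours; /24 = days

# Assumed figures: 2400 mAh battery pack, 20 mA with the radio on,
# 0.02 mA in deep sleep.
always_on = lifetime_days(2400, 20, 0.02, 1.0)     # ~5 days
one_percent = lifetime_days(2400, 20, 0.02, 0.01)  # ~455 days
```

Dropping the radio duty cycle from 100% to 1% stretches the same battery from days to over a year, which is why duty cycling dominates WSN protocol design.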

Routing Protocols

Wireless sensor networks are composed of low-energy, small-size, and low-range unattended sensor nodes. Recently, it has been observed that by periodically turning the sensing and communication capabilities of sensor nodes on and off, we can significantly reduce the active time and thus prolong network lifetime. However, this duty cycling may result in high network latency, routing overhead, and neighbor discovery delays due to asynchronous sleep and wake-up scheduling. These limitations call for a countermeasure for duty-cycled wireless sensor networks which should minimize routing information, routing traffic load, and energy consumption. Researchers from Sungkyunkwan University have proposed a lightweight non-increasing delivery-latency interval routing scheme referred to as LNDIR. This scheme can discover minimum-latency routes at each non-increasing delivery-latency interval instead of each time slot. Simulation experiments demonstrated the validity of this novel approach in minimizing the routing information stored at each sensor. Furthermore, this routing can also guarantee the minimum delivery latency from each source to the sink. Performance improvements of up to 12-fold and 11-fold are observed in terms of routing traffic load reduction and energy efficiency, respectively, compared to existing schemes.

Operating systems

Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems. They more strongly resemble embedded systems, for two reasons. First, wireless sensor networks are typically deployed with a particular application in mind, rather than as a general platform. Second, a need for low costs and low power leads most wireless sensor nodes to have low-power microcontrollers ensuring that mechanisms such as virtual memory are either unnecessary or too expensive to implement.

It is therefore possible to use embedded operating systems such as eCos or uC/OS for sensor networks. However, such operating systems are often designed with real-time properties.

TinyOS is perhaps the first operating system specifically designed for wireless sensor networks. TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS signals the appropriate event handler to handle the event. Event handlers can post tasks that are scheduled by the TinyOS kernel some time later.
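
The run-to-completion model described above can be sketched in a few lines. This is a toy Python analogy of the event/task split, not TinyOS's actual nesC interfaces:

```python
from collections import deque

class TinyScheduler:
    """Toy model of TinyOS-style execution: event handlers run immediately
    and may post tasks; posted tasks later run to completion from a FIFO
    queue, with no preemption between tasks."""
    def __init__(self):
        self.tasks = deque()
        self.log = []

    def post(self, task):
        self.tasks.append(task)      # defer work, like TinyOS's task post

    def signal(self, handler, *args):
        handler(self, *args)         # handlers are kept short: they record
                                     # the event and post the heavy work

    def run(self):
        while self.tasks:            # each task runs to completion
            self.tasks.popleft()(self)

def on_packet(sched, payload):
    sched.log.append(("event", payload))
    sched.post(lambda s: s.log.append(("task", payload)))

sched = TinyScheduler()
sched.signal(on_packet, 42)   # the event handler runs now...
sched.run()                   # ...the posted task runs afterwards
# sched.log == [("event", 42), ("task", 42)]
```

Because tasks never preempt each other, no locking is needed between tasks, which keeps the kernel small enough for memory-constrained motes.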

LiteOS is a newly developed OS for wireless sensor networks, which provides UNIX-like abstraction and support for the C programming language.

Contiki is an OS which uses a simpler programming style in C while providing advances such as 6LoWPAN and Protothreads.

RIOT (operating system) is a more recent real-time OS including similar functionality to Contiki.

PreonVM is an OS for wireless sensor networks, which provides 6LoWPAN based on Contiki and support for the Java programming language.

Online collaborative sensor data management platforms

Online collaborative sensor data management platforms are on-line database services that allow sensor owners to register and connect their devices to feed data into an online database for storage and also allow developers to connect to the database and build their own applications based on that data. Examples include Xively and the Wikisensing platform. Such platforms simplify online collaboration between users over diverse data sets ranging from energy and environment data to that collected from transport services. Other services include allowing developers to embed real-time graphs & widgets in websites; analyse and process historical data pulled from the data feeds; send real-time alerts from any datastream to control scripts, devices and environments.
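
Feeding a reading into such a platform usually amounts to posting a small JSON document to the service's API. The schema below is illustrative only and does not reproduce the real Xively or Wikisensing API; the feed and stream names are invented:

```python
import json
import time

def datapoint(feed_id, stream, value, timestamp=None):
    """Build a JSON datapoint in the general style of online sensor data
    platforms (hypothetical schema, for illustration only)."""
    return json.dumps({
        "feed": feed_id,
        "datastreams": [{
            "id": stream,
            "current_value": value,
            "at": timestamp if timestamp is not None else int(time.time()),
        }],
    })

# A sensor owner posts one temperature reading for an invented feed:
payload = datapoint("greenhouse-7", "temperature", 21.5, timestamp=1700000000)
```

The platform side then stores the datapoint, serves it back to application developers, and can trigger alerts when a datastream crosses a threshold.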

The architecture of the Wikisensing system describes the key components of such systems to include APIs and interfaces for online collaborators, a middleware containing the business logic needed for the sensor data management and processing and a storage model suitable for the efficient storage and retrieval of large volumes of data.

Simulation

At present, agent-based modeling and simulation is the only paradigm which allows the simulation of complex behavior in the environments of wireless sensors (such as flocking). Agent-based simulation of wireless sensor and ad hoc networks is a relatively new paradigm. Agent-based modelling was originally based on social simulation.

Network simulators like Opnet, Tetcos NetSim and NS can be used to simulate a wireless sensor network.

Other concepts

Security

Infrastructure-less architecture (i.e. no gateways are included, etc.) and inherent requirements (i.e. unattended working environment, etc.) of WSNs might pose several weak points that attract adversaries. Therefore, security is a big concern when WSNs are deployed for special applications such as military and healthcare. Owing to their unique characteristics, traditional security methods of computer networks would be useless (or less effective) for WSNs. Hence, lack of security mechanisms would cause intrusions towards those networks. These intrusions need to be detected and mitigation methods should be applied. More interested readers would refer to Butun et al.'s paper regarding intrusion detection systems devised for WSNs.

Distributed sensor network

If a centralized architecture is used in a sensor network and the central node fails, then the entire network will collapse, however the reliability of the sensor network can be increased by using a distributed control architecture. Distributed control is used in WSNs for the following reasons:
  1. Sensor nodes are prone to failure,
  2. For better collection of data,
  3. To provide nodes with backup in case of failure of the central node.
There is also no centralised body to allocate the resources and they have to be self organized.

Data integration and sensor web

The data gathered from wireless sensor networks is usually saved in the form of numerical data in a central base station. Additionally, the Open Geospatial Consortium (OGC) is specifying standards for interoperability interfaces and metadata encodings that enable real time integration of heterogeneous sensor webs into the Internet, allowing any individual to monitor or control wireless sensor networks through a web browser.

In-network processing

To reduce communication costs, some algorithms remove or reduce nodes' redundant sensor information and avoid forwarding data that is of no use. As nodes can inspect the data they forward, they can measure, for example, averages or directionality of readings from other nodes. In sensing and monitoring applications, neighboring sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy, due to the spatial correlation between sensor observations, inspires techniques for in-network data aggregation and mining. Aggregation reduces the amount of network traffic, which helps to reduce energy consumption on sensor nodes. It has also been found that network gateways play an important role in improving the energy efficiency of sensor nodes by scheduling more resources for the nodes with the most critical energy needs; advanced energy-efficient scheduling algorithms need to be implemented at network gateways to improve overall network energy efficiency.
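
In-network aggregation can be sketched as a recursive reduction over a routing tree, where each link carries one (mean, count) pair instead of every raw reading. The topology and readings below are invented for illustration:

```python
def aggregate_tree(readings, children, node):
    """Combine a node's own reading with its children's aggregates, so each
    link carries one (mean, count) pair instead of every raw reading."""
    total, count = readings[node], 1
    for child in children.get(node, []):
        mean, n = aggregate_tree(readings, children, child)
        total += mean * n   # reconstruct the child's subtree sum
        count += n
    return total / count, count

# Sink S aggregates over a relay R and two leaf sensors A and B:
readings = {"S": 20.0, "R": 22.0, "A": 24.0, "B": 26.0}
children = {"S": ["R"], "R": ["A", "B"]}
mean, n = aggregate_tree(readings, children, "S")  # mean of 4 readings = 23.0
```

Here the link from R to S carries a single pair instead of three raw readings, and the saving grows with the size of the subtree below each relay.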

Secure data aggregation

This is a form of in-network processing where sensor nodes are assumed to be unsecured with limited available energy, while the base station is assumed to be secure with unlimited available energy. Aggregation complicates the already existing security challenges for wireless sensor networks and requires new security techniques tailored specifically for this scenario. Providing security to aggregate data in wireless sensor networks is known as secure data aggregation in WSN; several early works discussed techniques for secure data aggregation in wireless sensor networks.

Two main security challenges in secure data aggregation are the confidentiality and integrity of data. While encryption is traditionally used to provide end-to-end confidentiality in wireless sensor networks, the aggregators in a secure data aggregation scenario need to decrypt the encrypted data to perform aggregation. This exposes the plaintext at the aggregators, making the data vulnerable to attacks from an adversary. Similarly, an aggregator can inject false data into the aggregate and make the base station accept false data. Thus, while data aggregation improves the energy efficiency of a network, it complicates the existing security challenges.
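
One family of techniques sidesteps the aggregator-decryption problem by letting the aggregator combine masked values whose masks cancel only in the sum. The sketch below uses pairwise additive masks and is illustrative only; a real scheme also needs key agreement, dropout handling and integrity protection:

```python
import random

def masked_sum(values, modulus=2**16, seed=0):
    """Toy secure aggregation: every pair of nodes shares a random mask;
    one adds it and the other subtracts it, so each masked reading looks
    random on its own but the masks cancel in the sum at the base station.
    (Illustrative only - not a production scheme.)"""
    rng = random.Random(seed)  # stands in for pairwise shared keys
    n = len(values)
    masked = [v % modulus for v in values]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.randrange(modulus)
            masked[i] = (masked[i] + mask) % modulus
            masked[j] = (masked[j] - mask) % modulus
    return masked

readings = [21, 23, 19, 25]
masked = masked_sum(readings)
# The aggregator sums masked values without learning any single reading:
assert sum(masked) % 2**16 == sum(readings)
```

The aggregator here never sees a plaintext reading, yet the base station still recovers the exact total, addressing confidentiality; integrity against a malicious aggregator requires additional mechanisms.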

Wearable computer

From Wikipedia, the free encyclopedia

The Apple Watch, released in 2015

Wearable computers, also known as wearables or body-borne computers, are small computing devices (nowadays usually electronic) that are worn under, with, or on top of clothing.

The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches. This article uses the broadest definition.

Wearables may be for general use, in which case they are just a particularly small example of mobile computing. Alternatively, they may be for specialized purposes such as fitness trackers. They may incorporate special sensors such as accelerometers, thermometers and heart rate monitors, or novel user interfaces such as Google Glass, an optical head-mounted display controlled by gestures. It may be that specialized wearables will evolve into general all-in-one devices, as happened with the convergence of PDAs and mobile phones into smartphones.

Wearables are typically worn on the wrist (e.g. fitness trackers), hung from the neck (like a necklace), strapped to the arm or leg (smartphones when exercising), or worn on the head (as glasses or a helmet), though some have been located elsewhere (e.g. on a finger or in a shoe). Devices carried in a pocket or bag – such as smartphones and, before them, pocket calculators and PDAs – may or may not be regarded as 'worn'.

Wearable computers share various technical issues with other forms of mobile computing, such as batteries, heat dissipation, software architectures, wireless and personal area networks, and data management. Many wearable computers are active all the time, e.g. processing or recording data continuously.

Applications

Wearable computers are not limited to devices worn on the wrist, such as fitness trackers; they also include wearables such as heart pacemakers and other prosthetics. They are used most often in research that focuses on behavioral modeling, health monitoring systems, and IT and media development, where the person wearing the computer actually moves or is otherwise engaged with his or her surroundings.
Wearable computing is the subject of active research, especially the form-factor and location on the body, with areas of study including user interface design, augmented reality, and pattern recognition. The use of wearables for specific applications, for compensating disabilities or supporting elderly people steadily increases.

History

Evolution of Steve Mann's WearComp wearable computer from
backpack based systems of the 1980s to his current covert systems

Due to the varied definitions of "wearable" and "computer", the first wearable computer could be as early as the first abacus on a necklace, a 16th-century abacus ring, a wristwatch and 'finger-watch' owned by Queen Elizabeth I of England, or the covert timing devices hidden in shoes to cheat at roulette by Thorp and Shannon in the 1960s and 1970s.

However, a computer is not merely a time-keeping or calculating device, but rather a user-programmable item for complex algorithms, interfacing, and data management. By this definition, the wearable computer was invented by Steve Mann, in the late 1970s:
Steve Mann, a professor at the University of Toronto, was hailed as the father of the wearable computer and the ISSCC's first virtual panelist, by moderator Woodward Yang of Harvard University (Cambridge Mass.).
— IEEE ISSCC 8 Feb. 2000
The development of wearable items has taken several steps of miniaturization, from discrete electronics through hybrid designs to fully integrated designs, where just one processor chip, a battery and some interface conditioning items make up the whole unit.

1500s

Queen Elizabeth I of England received a watch from Robert Dudley in 1571, as a New Year present; it may have been worn on the forearm rather than the wrist. She also possessed a 'finger-watch' set in a ring, with an alarm that prodded her finger. 

1600s

The Qing Dynasty saw the introduction of a fully functional abacus on a ring, which could be used while it was being worn.

1960s

In 1961, mathematicians Edward O. Thorp and Claude Shannon built some computerized timing devices to help them win at a game of roulette. One such timer was concealed in a shoe and another in a pack of cigarettes. Various versions of this apparatus were built in the 1960s and 1970s. Detailed pictures of a shoe-based timing device can be viewed at www.eyetap.org.

Thorp refers to himself as the inventor of the first "wearable computer". In other variations, the system was a concealed cigarette-pack-sized analog computer designed to predict the motion of roulette wheels. A data-taker would use microswitches hidden in his shoes to indicate the speed of the roulette wheel, and the computer would indicate an octant of the roulette wheel to bet on by sending musical tones via radio to a miniature speaker hidden in a collaborator's ear canal. The system was successfully tested in Las Vegas in June 1961, but hardware issues with the speaker wires prevented it from being used beyond test runs. This was not a wearable computer, because it could not be re-purposed during use; rather, it was an example of task-specific hardware. This work was kept secret until it was first mentioned in Thorp's book Beat the Dealer (revised ed.) in 1966 and later published in detail in 1969.

1970s

Pocket calculators became mass-market devices from 1970, starting in Japan. Programmable calculators followed in the late 1970s, being somewhat more general-purpose computers. The HP-01 algebraic calculator watch by Hewlett-Packard was released in 1977.

A camera-to-tactile vest for the blind, launched by C.C. Collins in 1977, converted images into a 1024-point, 10-inch square tactile grid on a vest.

1980s

The 1980s saw the rise of more general-purpose wearable computers. In 1981, Steve Mann designed and built a backpack-mounted 6502-based wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability (cameras and other photographic systems). Mann went on to be an early and active researcher in the wearables field, especially known for his 1994 creation of the Wearable Wireless Webcam, the first example of Lifelogging.

Seiko Epson released the RC-20 Wrist Computer in 1984. It was an early smartwatch, powered by a computer on a chip.

In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scans a vertical array of LEDs across the visual field using a vibrating mirror. This display gave rise to several hobbyist and research wearables, including Gerald "Chip" Maguire's IBM / Columbia University Student Electronic Notebook, Doug Platt's Hip-PC, and Carnegie Mellon University's VuMan 1 in 1991.

The Student Electronic Notebook consisted of the Private Eye, Toshiba diskless AIX notebook computers (prototypes), a stylus based input system and a virtual keyboard. It used direct-sequence spread spectrum radio links to provide all the usual TCP/IP based services, including NFS mounted file systems and X11, which all ran in the Andrew Project environment.

The Hip-PC included an Agenda palmtop used as a chording keyboard attached to the belt and a 1.44 megabyte floppy drive. Later versions incorporated additional equipment from Park Engineering. The system debuted at "The Lap and Palmtop Expo" on 16 April 1991.

VuMan 1 was developed as part of a Summer-term course at Carnegie Mellon's Engineering Design Research Center, and was intended for viewing house blueprints. Input was through a three-button unit worn on the belt, and output was through Reflection Tech's Private Eye. The CPU was an 8 MHz 80188 processor with 0.5 MB ROM.

1990s

In the 1990s PDAs became widely used, and in 1999 were combined with mobile phones in Japan to produce the first mass-market smartphone.

Timex Datalink USB Dress edition with the Invasion video game. The watch crown (icontrol) can be used to move the defender left and right, and the fire control is the Start/Split button on the lower side of the watch face at 6 o'clock.

In 1993, the Private Eye was used in Thad Starner's wearable, based on Doug Platt's system and built from a kit from Park Enterprises, a Private Eye display on loan from Devon Sean McCullough, and the Twiddler chording keyboard made by Handykey. Many iterations later this system became the MIT "Tin Lizzy" wearable computer design, and Starner went on to become one of the founders of MIT's wearable computing project. 1993 also saw Columbia University's augmented-reality system known as KARMA (Knowledge-based Augmented Reality for Maintenance Assistance). Users would wear a Private Eye display over one eye, giving an overlay effect when the real world was viewed with both eyes open. KARMA would overlay wireframe schematics and maintenance instructions on top of whatever was being repaired. For example, graphical wireframes on top of a laser printer would explain how to change the paper tray. The system used sensors attached to objects in the physical world to determine their locations, and the entire system ran tethered from a desktop computer.

In 1994, Edgar Matias and Mike Ruicci of the University of Toronto debuted a "wrist computer". Their system presented an alternative approach to the emerging head-up display plus chord keyboard wearable. The system was built from a modified HP 95LX palmtop computer and a Half-QWERTY one-handed keyboard. With the keyboard and display modules strapped to the operator's forearms, text could be entered by bringing the wrists together and typing. The same technology was used by IBM researchers to create the half-keyboard "belt computer". Also in 1994, Mik Lamming and Mike Flynn at Xerox EuroPARC demonstrated the Forget-Me-Not, a wearable device that would record interactions with people and devices and store this information in a database for later query. It interacted via wireless transmitters in rooms and with equipment in the area to remember who was there, who was being talked to on the telephone, and what objects were in the room, allowing queries like "Who came by my office while I was on the phone to Mark?". As with the Toronto system, Forget-Me-Not was not based on a head-mounted display.

Also in 1994, DARPA started the Smart Modules Program to develop a modular, humionic approach to wearable and carryable computers, with the goal of producing a variety of products including computers, radios, navigation systems and human-computer interfaces that have both military and commercial use. In July 1996, DARPA went on to host the "Wearables in 2005" workshop, bringing together industrial, university, and military visionaries to work on the common theme of delivering computing to the individual. A follow-up conference was hosted by Boeing in August 1996, where plans were finalized to create a new academic conference on wearable computing. In October 1997, Carnegie Mellon University, MIT, and Georgia Tech co-hosted the IEEE International Symposium on Wearable Computers (ISWC) in Cambridge, Massachusetts. The symposium was a full academic conference with published proceedings and papers ranging from sensors and new hardware to new applications for wearable computers, with 382 people registered for the event.

In 1998, Steve Mann invented and built the world's first smartwatch. It was featured on the cover of Linux Journal in 2000, and demonstrated at ISSCC 2000.

2000s

Dr. Bruce H Thomas and Dr. Wayne Piekarski developed the Tinmith wearable computer system to support augmented reality. This work was first published internationally in 2000 at the ISWC conference. The work was carried out at the Wearable Computer Lab in the University of South Australia.

In 2002, as part of Kevin Warwick's Project Cyborg, Warwick's wife, Irena, wore a necklace which was electronically linked to Warwick's nervous system via an implanted electrode array. The color of the necklace changed between red and blue depending on the signals on Warwick's nervous system.

Also in 2002, Xybernaut released a wearable computer called the Xybernaut Poma Wearable PC, Poma for short; Poma stood for Personal Media Appliance. The user would wear a head-mounted optical piece, a CPU that could be clipped onto clothing, and a mini keyboard attached to the user's arm. The product failed chiefly because the equipment was expensive and clunky.

GoPro released their first product, the GoPro HERO 35mm, which began a successful franchise of wearable cameras. The cameras can be worn atop the head or around the wrist and are shockproof and waterproof. GoPro cameras are used by many athletes and extreme sports enthusiasts, a trend that became especially apparent during the early 2010s.

In the late 2000s, various Chinese companies began producing mobile phones in the form of wristwatches, the descendants of which as of 2013 include the i5 and i6, which are GSM phones with 1.8 inch displays, and the ZGPAX s5 Android wristwatch phone.

2010s

LunaTik, a machined wristband attachment for the 6th-generation iPod Nano

Standardization by the IEEE, the IETF, and several industry groups (e.g. Bluetooth) led to more varied interfacing under the WPAN (wireless personal area network), and the WBAN (wireless body area network) offered new classes of designs for interfacing and networking. The 6th-generation iPod Nano, released in September 2010, has a wristband attachment available to convert it into a wearable wristwatch computer.

The development of wearable computing spread to encompass rehabilitation engineering, ambulatory intervention treatment, life guard systems, and defense wearable systems.

Sony produced a wristwatch called Sony SmartWatch that must be paired with an Android phone. Once paired, it becomes an additional remote display and notification tool.

Fitbit released several wearable fitness trackers and the Fitbit Surge, a full smartwatch that is compatible with Android and iOS.

On April 11, 2012, Pebble launched a Kickstarter campaign to raise $100,000 for their initial smartwatch model. The campaign ended on May 18 with $10,266,844, over 100 times the fundraising target. Pebble has released several smartwatches since, including the Pebble Time and the Pebble Round.

Google Glass, Google's head-mounted display, which was launched in 2013.

Google launched its optical head-mounted display (OHMD), Google Glass, to a test group of users in 2013, before it became available to the public on May 15, 2014. Google's mission was to produce a mass-market ubiquitous computer that displays information in a smartphone-like hands-free format and can interact with the Internet via natural-language voice commands. Google Glass received criticism over privacy and safety concerns. On January 15, 2015, Google announced that it would stop producing the Google Glass prototype but would continue to develop the product. According to Google, Project Glass was ready to "graduate" from Google X, the experimental phase of the project.

Thync, a headset launched in 2014, is a wearable that stimulates the brain with mild electrical pulses, causing the wearer to feel energized or calm based on input into a phone app. The device is attached to the temple and to the back of the neck with an adhesive strip.

In 2014, Macrotellect launched two portable brainwave (EEG) sensing devices, the BrainLink Pro and BrainLink Lite, which allow families and meditation students to work on mental fitness and stress relief with more than 20 brain-fitness apps on the Apple and Android app stores.

In January 2015, Intel announced the sub-miniature Intel Curie for wearable applications, based on its Intel Quark platform. As small as a button, it features a 6-axis accelerometer, a DSP sensor hub, a Bluetooth LE unit, and a battery charge controller. It was scheduled to ship in the second half of the year.

On April 24, 2015, Apple released their take on the smartwatch, known as the Apple Watch. The Apple Watch features a touchscreen, many applications, and a heart-rate sensor.

Commercialization

Image of the ZYPAD wrist wearable computer from Eurotech

The commercialization of general-purpose wearable computers, as led by companies such as Xybernaut, CDI and ViA, Inc., has thus far been met with limited success. Publicly traded Xybernaut tried forging alliances with companies such as IBM and Sony in order to make wearable computing widely available, and managed to get its equipment seen on such shows as The X-Files, but in 2005 its stock was delisted and the company filed for Chapter 11 bankruptcy protection amid financial scandal and federal investigation. Xybernaut emerged from bankruptcy protection in January 2007. ViA, Inc. filed for bankruptcy in 2001 and subsequently ceased operations.

In 1998, Seiko marketed the Ruputer, a computer in a (fairly large) wristwatch, to mediocre returns. In 2001, IBM developed and publicly displayed two prototypes for a wristwatch computer running Linux; the last news about them dates to 2004, saying the device would cost about $250 but was still under development. In 2002, Fossil, Inc. announced the Fossil Wrist PDA, which ran the Palm OS. Its release date was set for the summer of 2003, but it was delayed several times and was finally made available on January 5, 2005. The Timex Datalink is another example of a practical wearable computer. Hitachi launched a wearable computer called Poma in 2002. Eurotech offers the ZYPAD, a wrist-wearable touch-screen computer with GPS, Wi-Fi and Bluetooth connectivity that can run a number of custom applications. In 2013, a wrist-worn wearable computing device to control body temperature was developed at MIT.

Evidence of weak market acceptance was demonstrated when Panasonic Computer Solutions Company's product failed. Panasonic has specialized in mobile computing with its Toughbook line for over 10 years and has extensive market research into the field of portable, wearable computing products. In 2002, Panasonic introduced a wearable brick computer coupled with a handheld or a touchscreen worn on the arm. The "brick" computer was the CF-07 Toughbook: it had dual batteries (the screen used the same batteries as the base), an 800 × 600 display, optional GPS and WWAN, and one M-PCI slot and one PCMCIA slot for expansion. The CPU was a 600 MHz Pentium 3, factory-underclocked to 300 MHz so it could stay cool passively, as it had no fan, and its Micro DIMM RAM was upgradeable. The screen could be used wirelessly with other computers: the brick communicated wirelessly with the screen while concurrently communicating wirelessly with the Internet or other networks. The wearable brick was quietly pulled from the market in 2005, while the screen evolved into a thin-client touchscreen used with a handstrap.

Google has announced that it has been working on a head-mounted display-based wearable "augmented reality" device called Google Glass. An early version of the device was available to the US public from April 2013 until January 2015. Despite ending sales of the device through their Explorer Program, Google has stated that they plan to continue developing the technology.

LG and iriver produce earbud wearables measuring heart rate and other biometrics, as well as various activity metrics.

Commercialization has met with a greater response when devices are created with designated purposes rather than for all-purpose use. One example is the WSS1000, a wearable computer designed to make the work of inventory employees easier and more efficient. The device allowed workers to scan the barcode of an item and immediately enter the information into the company system. This removed the need to carry a clipboard, eliminated the error and confusion of handwritten notes, and left workers both hands free while working; the system improves accuracy as well as efficiency.

Popular culture

Many technologies for wearable computers derive their ideas from science fiction. There are many examples of ideas from popular movies that have become technologies or are technologies currently being developed.
  • 3D User Interface: Devices that display usable, tactile interfaces that can be manipulated in front of the user. Examples include the glove-operated hologram computer featured at the Pre-Crime headquarters in the beginning of Minority Report and the computers used by the gate workers at Zion in The Matrix trilogy.
  • Intelligent Textiles: Clothing that can relay and collect information. Examples include Tron and its sequel, and also many sci-fi military films.
  • Threat Glasses: Scan others in vicinity and assess threat-to-self level. Examples include Terminator 2, 'Threep' Technology in Lock-In, and Kill switch.
  • Computerized Contact Lenses: Special contact lenses used to confirm one's identity. Used in Mission Impossible 4.
  • Combat Suit Armor: A wearable exoskeleton that provides protection to its wearer and is typically equipped with powerful weapons and a computer system. Examples include numerous Iron Man suits, along with Samus Aran's Power Suit and Fusion Suit in the Metroid video game series.
  • Brain Nano-Bots to Store Memories in the Cloud: Used in Total Recall.
  • Infrared Headsets: Can help identify suspects and see through walls. Examples include Robocop's special eye system, as well as some more advanced visors that Samus Aran uses in the Metroid Prime trilogy.
  • Wrist-Worn Computers: Provide various abilities and information, such as data about the wearer, a vicinity map, a flashlight, a communicator, a poison detector or an enemy-tracking device. Examples include the Pip-Boy 3000 from the Fallout games and Leela's Wrist Device from the Futurama TV sitcom.
  • On-Chest Devices and Smart Necklaces: Wearable computers in an on-chest or smart-necklace form factor appear in many sci-fi movies, including Prometheus and Iron Man; placing one's most precious possession at that location echoes the long history of wearing amulets and charms.

Military use

The wearable computer was introduced to the US Army in 1989, as a small computer that was meant to assist soldiers in battle. Since then, the concept has grown to include the Land Warrior program and proposals for future systems. The most extensive military program in the wearables arena is the US Army's Land Warrior system, which will eventually be merged into the Future Force Warrior system. There is also research into increasing the reliability of terrestrial navigation.

F-INSAS is an Indian Military Project, designed largely with wearable computing.
