Sunday, April 25, 2021

Science and technology studies

Science and technology studies, or science, technology and society studies (STS), is the interdisciplinary study of how society, politics, and culture affect scientific research and technological innovation, and how these, in turn, affect society, politics, and culture.

History

Like most interdisciplinary fields of study, STS emerged from the confluence of a variety of disciplines and disciplinary subfields, all of which had developed an interest—typically, during the 1960s or 1970s—in viewing science and technology as socially embedded enterprises. The key disciplinary components of STS took shape independently, beginning in the 1960s, and developed in isolation from each other well into the 1980s, although Ludwik Fleck's (1935) monograph Genesis and Development of a Scientific Fact anticipated many of STS's key themes. In the 1970s Elting E. Morison founded the STS program at Massachusetts Institute of Technology (MIT), which served as a model. By 2011, 111 STS research centres and academic programs were counted worldwide.

Key themes

  • History of technology, which examines technology in its social and historical context. Starting in the 1960s, some historians questioned technological determinism, a doctrine that can induce public passivity toward "natural" technological and scientific development. At the same time, some historians began to develop similarly contextual approaches to the history of medicine.
  • History and philosophy of science (1960s). After the publication of Thomas Kuhn's well-known The Structure of Scientific Revolutions (1962), which attributed changes in scientific theories to changes in underlying intellectual paradigms, programs were founded at the University of California, Berkeley and elsewhere that brought historians of science and philosophers together in unified programs.
  • Science, technology, and society. In the mid- to late-1960s, student and faculty social movements in the U.S., UK, and European universities helped to launch a range of new interdisciplinary fields (such as women's studies) that were seen to address relevant topics that the traditional curriculum ignored. One such development was the rise of "science, technology, and society" programs, which are also—confusingly—known by the STS acronym. Drawn from a variety of disciplines, including anthropology, history, political science, and sociology, scholars in these programs created undergraduate curricula devoted to exploring the issues raised by science and technology. Feminist scholars in this and other emerging STS areas addressed themselves to the exclusion of women from science and engineering.
  • Science, engineering, and public policy studies emerged in the 1970s from the same concerns that motivated the founders of the science, technology, and society movement: A sense that science and technology were developing in ways that were increasingly at odds with the public's best interests. The science, technology, and society movement tried to humanize those who would make tomorrow's science and technology, but this discipline took a different approach: It would train students with the professional skills needed to become players in science and technology policy. Some programs came to emphasize quantitative methodologies, and most of these were eventually absorbed into systems engineering. Others emphasized sociological and qualitative approaches, and found that their closest kin could be found among scholars in science, technology, and society departments.

During the 1970s and 1980s, leading universities in the US, UK, and Europe began drawing these various components together in new, interdisciplinary programs. For example, in the 1970s, Cornell University developed a new program that united science studies and policy-oriented scholars with historians and philosophers of science and technology. Each of these programs developed unique identities due to variation in the components that were drawn together, as well as their location within the various universities. For example, the University of Virginia's STS program united scholars drawn from a variety of fields (with particular strength in the history of technology); however, the program's teaching responsibilities—it is located within an engineering school and teaches ethics to undergraduate engineering students—mean that all of its faculty share a strong interest in engineering ethics.

The "turn to technology" (and beyond)

A decisive moment in the development of STS was the mid-1980s addition of technology studies to the range of interests reflected in science studies. During that decade, two works appeared in quick succession that signaled what Steve Woolgar was to call the "turn to technology". In a seminal 1984 article, Trevor Pinch and Wiebe Bijker extended the sociology of scientific knowledge to technology by showing how the sociology of technology could proceed along the theoretical and methodological lines established by the sociology of scientific knowledge. This became the intellectual foundation of the field they called the social construction of technology. Donald MacKenzie and Judy Wajcman primed the pump by publishing a collection of articles attesting to the influence of society on technological design (The Social Shaping of Technology, 1985).

The "turn to technology" helped to cement an already growing awareness of underlying unity among the various emerging STS programs. More recently, there has been an associated turn to ecology, nature, and materiality in general, whereby the socio-technical and natural/material co-produce each other. This is especially evident in work in STS analyses of biomedicine (such as Carl May and Annemarie Mol) and ecological interventions (such as Bruno Latour, Sheila Jasanoff, Matthias Gross, and S. Lochlann Jain).

Important concepts

Social construction(s)

Social constructions are human-created ideas, objects, or events brought about by a series of choices and interactions. These interactions have consequences that change the perception that different groups of people have of these constructs. Some examples of social construction include class, race, money, and citizenship.

The concept also alludes to the notion that not everything is fixed: a circumstance or result could potentially have turned out one way or the other. According to the article "What is Social Construction?" by Laura Flores, "Social construction work is critical of the status quo. Social constructionists about X tend to hold that:

  1. X need not have existed, or need not be at all as it is. X, or X as it is at present, is not determined by the nature of things; it is not inevitable

Very often they go further, and urge that:

  2. X is quite bad as it is.
  3. We would be much better off if X were done away with, or at least radically transformed."

In the past, some viewpoints were widely regarded as fact until they were called into question by the introduction of new knowledge. One such viewpoint is the discredited belief in a correlation between intelligence and a human's ethnicity or race (X need not be at all as it is).

An example of the evolution and interaction of various social constructions within science and technology can be found in the development of both the high-wheel bicycle, or velocipede, and of the modern bicycle. The velocipede was widely used in the latter half of the 19th century, when a social need was first recognized for a more efficient and rapid means of transportation. The velocipede answered that need: by replacing the front wheel with one of much larger radius, it could reach higher speeds than the smaller non-geared bicycles of the day. One notable trade-off was decreased stability and a greater risk of falling, and many riders had accidents after losing their balance or being thrown over the handlebars.

The first "social construction," the demand for speed that the velocipede satisfied, exposed the need for a newer "social construction": a safer bicycle design. The velocipede was consequently developed into what is now commonly known as the "bicycle," which fit society's newer standard of higher vehicle safety. The popularity of the modern geared design ultimately ended the widespread use of the velocipede, since the new design better satisfied both social constructions at once: the original need for greater speed and the newer need for greater safety.

Technoscience

Technoscience is a subset of science, technology, and society studies that focuses on the inseparable connection between science and technology. It holds that the two fields are linked and grow together, and that scientific knowledge requires an infrastructure of technology in order to remain stationary or move forward. Technological development and scientific discovery thus drive one another toward further advancement. Technoscience also excels at shaping human thought and behavior by opening up new possibilities that gradually, or quickly, come to be perceived as necessities.

Recently, the Italian sociologist Guglielmo Rinzivillo has studied technoscience's relationship with the history of science, a relationship he argues is underestimated by modern STS sociologists. On this view, it is worth emphasizing the links between the production of books on the history of science and technology and the study of the relationship between science and technology within a framework of social development. One must always consider the generational leap between historical periods in scientific discovery, machine building, and the creation of tools, in relation to technological change occurring in very specific situations. From this point of view, the study of the motives of scientific history is important for studying the development of technoscience, and also for its sociological benefit (cf. Guglielmo Rinzivillo, Raccontare la tecnoscienza. Storia di macchine, strumenti e idee per fare funzionare il mondo, Roma, Edizioni Nuova Cultura, 2020, ISBN 978-88-3365-349-5; ISSN 2284-0567).

Technosocial

"Technological action is a social process." Social factors and technology are intertwined and depend on each other. This includes the notion that social, political, and economic factors are inherent in technology, and that social structure influences which technologies are pursued. In other words, "technoscientific phenomena combined inextricably with social/political/economic/psychological phenomena, so 'technology' includes a spectrum of artifacts, techniques, organizations, and systems." Langdon Winner expands on this idea: "in the late twentieth century technology and society, technology and culture, technology and politics are by no means separate."

Examples

  • Ford Pinto – Ford Motor Company produced and sold the Pinto during the 1970s. A design flaw in the rear gas tank caused a fiery explosion upon impact, and the exploding fuel tanks killed and injured hundreds of people. Internal test documents proved that Ford CEO Lee Iacocca and company engineers were aware of the flaw. The company decided against improving the technology because of profit-driven motives, strict internal control, and competition from foreign manufacturers such as Volkswagen. Ford conducted a cost-benefit analysis to determine whether altering the Pinto was feasible; the analysis, conducted by Ford employees, argued against a new design because of increased cost. Employees were also under tight control by the CEO, who rushed the Pinto through production to increase profits. Ford finally changed the design after public scrutiny, and safety organizations later influenced this technology by requiring stricter safety standards for motor vehicles.
  • DDT/toxins – DDT was a common and highly effective insecticide used from the 1940s until its ban in the early 1970s. It was utilized during World War II to combat the insect-borne human diseases that plagued military members and civilian populations, and people and companies soon realized its benefits for agricultural purposes. Rachel Carson became worried about the effects of widespread use on public health and the environment. Her book Silent Spring left an imprint on the industry by claiming linkage of DDT to many serious illnesses, such as cancer, and drew criticism from chemical companies that felt their reputations and businesses were threatened by such claims. DDT was eventually banned by the United States Environmental Protection Agency (EPA) after a long and arduous process of research on the chemical substance. The main cause for the removal of DDT was the public's decision that any benefits were outweighed by the potential health risks.
  • Autopilots/computer-aided tasks (CATs) – From a safety point of view, making a task more computer-driven favors technological advance, because a computer requires less reaction time and makes fewer computational errors than a human pilot. Owing to these reduced errors and reaction times, flights using autopilot have been shown to be safer on average. Thus the technology has a direct impact on people by increasing their safety, and society affects the technology because people, wanting to be safer, constantly push to improve autopilot systems.
  • Cell phones – Cell phone technology emerged in the early 1920s after advancements were made in radio technology. Engineers at Bell Laboratories, the research and development division of AT&T, discovered that cell towers can transmit and receive signals to and from many directions. This discovery revolutionized the capabilities and reach of cellular technology, which improved further once mobile phone users could communicate outside of a designated area. First-generation mobile phones were first created and sold by Motorola, and were intended only for use in cars. Second-generation capabilities improved with the switch to digital: phones were faster, which enhanced customers' ability to communicate, and they were sleeker and weighed less than bulky first-generation devices. These advances boosted customer satisfaction and broadened cell phone companies' customer bases. Third-generation technology changed the way people interact with others, giving customers access to Wi-Fi, texting, and other applications. Mobile phones are now entering the fourth generation. Cellular and mobile phones revolutionized the way people socialize and communicate, establishing a modern social structure. People have affected the development of this technology by demanding features such as larger screens, touch capabilities, and internet accessibility.
  • Internet – The internet arose from extensive research on ARPANET between various universities, corporations, and ARPA (Advanced Research Projects Agency), an agency of the Department of Defense. Scientists theorized a network of computers connected to one another, and computing capabilities contributed to the development of the modern-day computer and laptop. The internet has become a normal part of life and business, to such a degree that the United Nations views access to it as a basic human right. The internet is also growing: ever more activities, such as banking, are being moved into the digital world because of demand. It has drastically changed the way most people go about their daily habits.

Deliberative democracy

Deliberative democracy is a reform of representative or direct democracy that mandates discussion and debate of the popular topics which affect society. It is a tool for making decisions, and it can be traced back to Aristotle's writings. More recently, the term was coined by Joseph Bessette in his 1980 work Deliberative Democracy: The Majority Principle in Republican Government, where he uses the idea in opposition to elitist interpretations of the United States Constitution, with emphasis on public discussion.

Deliberative democracy can lead to more legitimate, credible, and trustworthy outcomes. Deliberative democracy allows for "a wider range of public knowledge", and it has been argued that this can lead to "more socially intelligent and robust" science. One major shortcoming of deliberative democracy is that many models insufficiently ensure critical interaction.

According to Ryfe, there are five mechanisms that stand out as critical to the successful design of deliberative democracy:

  • Rules of equality, civility, and inclusivity may prompt deliberation even when our first impulse is to avoid it.
  • Stories anchor reality by organizing experience and instilling a normative commitment to civic identities and values, and function as a medium for framing discussions.
  • Leadership provides important cues to individuals in deliberative settings, and can keep groups on a deliberative track when their members slip into routine and habit.
  • Individuals are more likely to sustain deliberative reasoning when they have a stake in the outcomes.
  • Apprenticeship teaches citizens to deliberate well. We might do well to imagine education as a form of apprenticeship learning, in which individuals learn to deliberate by doing it in concert with others more skilled in the activity.

Importance

Recently, there has been a movement towards greater transparency in the fields of policy and technology. Jasanoff concludes that the question is no longer whether the public should participate more in decisions about science and technology, but how to make the conversation between the public and those developing the technology more meaningful.

In practice

Bruce Ackerman and James S. Fishkin offered an example of such a reform in their paper "Deliberation Day." The proposal aims to enhance public understanding of popular, complex, and controversial issues through devices such as Fishkin's deliberative polling, though implementation of these reforms is unlikely in a government as large as that of the United States. However, similar measures have been implemented in small, local governments such as New England towns and villages, whose town hall meetings are a good example of deliberative democracy in a realistic setting.

An ideal deliberative democracy balances the voice and influence of all participants. While the main aim is to reach consensus, deliberative democracy should encourage the voices of those with opposing viewpoints, concerns due to uncertainties, and questions about assumptions made by other participants. It should take its time and ensure that those participating understand the topics on which they debate. Independent managers of debates should also have substantial grasp of the concepts discussed, but must "[remain] independent and impartial as to the outcomes of the process."

Tragedy of the commons

In 1968, Garrett Hardin popularised the phrase "tragedy of the commons": an economic theory in which rational individuals act against the best interest of the group by depleting a common resource. Since then, the tragedy of the commons has been used to symbolize the degradation of the environment whenever many individuals use a common resource. Although Hardin was not an STS scholar, the concept of the tragedy of the commons still applies to science, technology, and society.
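The dynamic Hardin described—each user captures the full private benefit of exploiting the resource while the cost of degradation is shared by everyone—can be sketched in a toy simulation. All parameters below (herder count, pasture capacity, round count, gain and cost formulas) are hypothetical illustrations, not anything from Hardin's essay:

```python
# Toy model of the tragedy of the commons: each herder keeps the full
# benefit of adding an animal, while the cost of overgrazing is split
# among all herders -- so adding is always "rational," and the shared
# pasture is pushed far past its capacity.

def graze(n_herders=10, capacity=100, rounds=20):
    """Return the herd size after each herder repeatedly decides
    whether to add one more animal to the shared pasture."""
    herd = 0
    for _ in range(rounds):
        for _ in range(n_herders):
            private_gain = 1.0                               # herder keeps the whole animal
            shared_cost = (herd + 1) / capacity / n_herders  # degradation is split N ways
            if private_gain > shared_cost:                   # individually rational choice
                herd += 1
    return herd

print(graze())  # far beyond the pasture's capacity of 100
```

Because the shared cost each herder perceives is divided by the number of herders, no individual ever has an incentive to stop, which is the core of the dilemma.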

In a contemporary setting, the Internet acts as an example of the tragedy of the commons through the exploitation of digital resources and private information. Data and internet passwords can be stolen much more easily than physical documents, and virtual spying is almost free compared to the costs of physical spying. Additionally, net neutrality can be seen as an example of the tragedy of the commons in an STS context: the movement for net neutrality argues that the Internet should not be a resource dominated by one particular group, specifically those with more money to spend on Internet access.

A counterexample to the tragedy of the commons is offered by Andrew Kahrl. Privatization can be a way to deal with the tragedy of the commons. However, Kahrl suggests that the privatization of beaches on Long Island, in an attempt to combat the overuse of Long Island beaches, made the residents of Long Island more susceptible to flood damage from Hurricane Sandy. The privatization of these beaches took away from the protection offered by the natural landscape. Tidal lands that offer natural protection were drained and developed. This attempt to combat the tragedy of the commons by privatization was counter-productive. Privatization actually destroyed the public good of natural protection from the landscape.

Alternative modernity

Alternative modernity is a conceptual tool conventionally used to represent the state of present western society. Modernity represents the political and social structures of the society, the sum of interpersonal discourse, and ultimately a snapshot of society's direction at a point in time. Unfortunately, conventional modernity is incapable of modeling alternative directions for further growth within our society. It is also ineffective at analyzing similar but unique modern societies, such as those found in the diverse cultures of the developing world. The problems can be summarized as two elements: an inward failure to analyze the growth potential of a given society, and an outward failure to model different cultures and social structures and predict their growth potential.

Previously, modernity carried a connotation of the current state of being modern, and its evolution through European colonialism. The process of becoming "modern" is believed to occur in a linear, pre-determined way, and is seen by Philip Brey as a way to interpret and evaluate social and cultural formations. This thought ties in with modernization theory, the thought that societies progress from "pre-modern" to "modern" societies.

Within the field of science and technology, there are two main lenses with which to view modernity. The first is as a way for society to quantify what it wants to move towards. In effect, we can discuss the notion of "alternative modernity" (as described by Andrew Feenberg) and which of these we would like to move towards. Alternatively, modernity can be used to analyze the differences in interactions between cultures and individuals. From this perspective, alternative modernities exist simultaneously, based on differing cultural and societal expectations of how a society (or an individual within society) should function. Because of different types of interactions across different cultures, each culture will have a different modernity.

Pace of innovation

Pace of innovation is the speed at which technological innovation or advancement occurs; the most apparent problem cases are paces that are either too slow or too rapid. Both extremes affect the people who get to use the technology.

No innovation without representation

"No innovation without representation" is a democratic ideal of ensuring that everyone involved gets a chance to be represented fairly in technological developments.

  • Langdon Winner states that groups and social interests likely to be affected by a particular kind of technological change ought to be represented at an early stage in defining exactly what that technology will be. It is the idea that relevant parties have a say in technological developments and are not left in the dark.
  • The ideal has also been discussed by Massimiano Bucchi.
  • As Steven L. Goldman discusses, this ideal does not require the public to become expert in science and engineering; it asks only that opinions and ideas be heard before drastic decisions are made.

Privileged positions of business and science

The privileged positions of business and science refer to the unique authority that people in these areas hold in economic, political, and technosocial affairs. Businesses have strong decision-making power over the functioning of society, essentially choosing which technological innovations to develop. Scientists and technologists have valuable knowledge and the ability to pursue the technological innovations they want, and they proceed largely without public scrutiny, as if they had the consent of those potentially affected by their discoveries and creations.

Legacy thinking

Legacy thinking is defined as an inherited method of thinking imposed from an external source without objection by the individual, because it is already widely accepted by society.

Legacy thinking can impair the ability to drive technology for the betterment of society by blinding people to innovations that do not fit into their accepted model of how society works. By accepting ideas without questioning them, people often see all solutions that contradict these accepted ideas as impossible or impractical. Legacy thinking tends to advantage the wealthy, who have the means to project their ideas onto the public, and it may be used by the wealthy as a vehicle to drive technology in their favor rather than for the greater good.

The role of citizen participation and representation in politics provides an excellent example of legacy thinking in society. The belief that one can spend money freely to gain influence has been popularized, leading to public acceptance of corporate lobbying. As a result, a self-established role in politics has been cemented in which the public does not exercise, to the fullest extent, the power ensured to them by the Constitution. This can become a barrier to political progress, as corporations with capital to spend have the potential to wield great influence over policy. Legacy thinking keeps the population from acting to change this, despite Harris Interactive polls reporting that over 80% of Americans feel that big business holds too much power in government. Americans are therefore beginning to steer away from this line of thought, rejecting legacy thinking and demanding less corporate, and more public, participation in political decision making.

Additionally, an examination of net neutrality functions as a separate example of legacy thinking. Since the dial-up era, the internet has been viewed as a private luxury good, yet today it is a vital part of modern life that members of society use every day. Because the American public is so dependent upon the internet, corporations are able to mislabel and greatly overcharge for their internet services, and there is little the public can do about it. Legacy thinking has kept this pattern on track despite growing movements arguing that the internet should be considered a utility: the widely accepted, advertising-driven idea that the internet is a luxury rather than a utility has prevented progress. Due to pressure from grassroots movements, the Federal Communications Commission (FCC) has redefined the requirements for broadband, and the internet in general, as a utility. Now AT&T and other major internet providers are lobbying against this action, and are in large part able to delay the movement's onset because of legacy thinking's grip on American culture and politics.

For example, those who cannot overcome the barrier of legacy thinking may not consider the privatization of clean drinking water an issue, partly because access to water has become such a given fact of life for them. A person living in such circumstances may find it widely acceptable not to concern themselves with drinking water because they have never needed to be concerned with it before. Likewise, a person living in an area that need not worry about its water supply or the sanitation of that supply is less likely to be concerned with the privatization of water.

This notion can be examined through the thought experiment of the "veil of ignorance". Legacy thinking leaves people particularly ignorant of the implications of applying the "you get what you pay for" mentality to a life necessity. The "veil of ignorance" helps overcome this barrier, as it requires a person to imagine that they are unaware of their own circumstances, freeing them from externally imposed thoughts and widely accepted ideas.

Related concepts

  • Technoscience – The perception that science and technology are intertwined and depend on each other.
  • Technosociety – An industrially developed society with a reliance on technology.
  • Technological utopianism – A positive outlook on the effect technology has on social welfare. Includes the perception that technology will one day enable society to reach a utopian state.
  • Technosocial systems – people and technologies that combine to work as heterogeneous but functional wholes.
  • Critical Technical Practice – the practice of technological creation while simultaneously critiquing and maintaining awareness of the inherent biases and value systems which become embedded in those technologies.

Classifications

  • Technological optimism – The opinion that technology has positive effects on society and should be used in order to improve the welfare of people.
  • Technological pessimism – The opinion that technology has negative effects on society and should be discouraged from use.
  • Technological neutrality – "maintains that a given technology has no systematic effects on society: individuals are perceived as ultimately responsible, for better or worse, because technologies are merely tools people use for their own ends."
  • Technological determinism – "maintains that technologies are understood as simply and directly causing particular societal outcomes."
  • Scientism – The belief in the total separation of facts and values.
  • Technological progressivism – The view that technology is a means to an end in itself and an inherently positive pursuit.

Academic programs

STS is taught in several countries. According to the STS wiki, STS programs can be found in twenty countries, including 45 programs in the United States, three programs in India, and eleven programs in the UK. STS programs can be found in Canada, Germany, Israel, Malaysia, and Taiwan. Some examples of institutions offering STS programs are Stanford University, Harvard University, the University of Oxford, Mines ParisTech, Bar-Ilan University, and York University.

Professional associations

The field has professional associations in regions and countries around the world.

In Europe

  • In Europe, the European Association for the Study of Science and Technology (EASST) was founded in 1981 to "improve scholarly communication and exchange in the field", "increase the visibility of the subject to policy-makers and to the general public", and "stimulate and support teaching on the subject at all levels". Similarly, the European Inter-University Association on Society, Science and Technology (ESST) researches and studies science and technology in society, in both historical and contemporary perspectives.
  • In European nation states and language communities, a range of STS associations exist, including in the UK, Spain, Germany, Austria, and Turkey. For instance, in 2015 the UK-based Association for Studies in Innovation, Science and Technology (AsSIST-UK) was established, chaired by Andrew Webster (York) and Robin Williams (Edinburgh), principally to foster stronger integration between the innovation studies and STS fields. In 2021 it had a membership of 380. It holds annual conferences and has built strong links to policy practitioners in Westminster.
  • In Italy, STS Italia – The Italian Society for Social Studies of Science and Technology was founded in 2005. Its mission is "to build up an Italian network of researchers oriented to study Science and Technology starting from the social dynamics which characterize and interweave science and technology themselves".

In Asia

  • The Asia Pacific Science Technology & Society Network (APSTSN) primarily has members from Australasia, Southeast and East Asia and Oceania.
  • In Japan, the Japanese Society for Science and Technology Studies (JSSTS) was founded in 2001.

In Latin America

  • Estudios Sociales de la Ciencia y la Tecnología (ESOCITE) is the region's largest association for science and technology studies. The study of STS (CyT in Spanish, CTS in Portuguese) here was shaped by authors like Amílcar Herrera, Jorge Sabato, and Oscar Varsavsky in Argentina, José Leite Lopes in Brazil, Miguel Wionczek in Mexico, Francisco Sagasti in Peru, Máximo Halty Carrere in Uruguay, and Marcel Roche in Venezuela.

In North America

  • Founded in 1975, the Society for Social Studies of Science initially provided scholarly communication facilities, including a journal (Science, Technology, and Human Values) and annual meetings that were mainly attended by science studies scholars. The society has since grown into the most important professional association of science and technology studies scholars worldwide. The Society for Social Studies of Science members also include government and industry officials concerned with research and development as well as science and technology policy; scientists and engineers who wish to better understand the social embeddedness of their professional practice; and citizens concerned about the impact of science and technology in their lives.
  • Founded in 1958, the Society for the History of Technology initially attracted members from the history profession who had interests in the contextual history of technology. After the "turn to technology" in the mid-1980s, the society's well-regarded journal (Technology and Culture) and its annual meetings began to attract considerable interest from non-historians with technology studies interests.
  • Less identified with STS, but also of importance to many STS scholars, are the History of Science Society, the Philosophy of Science Association, and the American Association for the History of Medicine.
  • Additionally, within the US there are significant STS-oriented special interest groups within major disciplinary associations, including the American Anthropological Association, the American Political Science Association, the National Women's Studies Association, and the American Sociological Association.

Technology and society

From Wikipedia, the free encyclopedia

Technology and society (or technology and culture) refers to the inter-dependency, co-dependence, co-influence, and co-production of technology and society upon one another. Evidence for this synergy has been found since humanity first started using simple tools. The inter-relationship has continued as modern technologies such as the printing press and computers have helped shape society. The first scientific approach to this relationship occurred with the development of tektology, the "science of organization", in early twentieth century Imperial Russia. In modern academia, the interdisciplinary study of the mutual impacts of science, technology, and society is called science and technology studies.

The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times, such as the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale.

Technology has developed advanced economies, such as the modern global economy, and has led to the rise of a leisure class. Many technological processes produce by-products known as pollution, and deplete natural resources to the detriment of Earth's environment. Innovations influence the values of society and raise new questions in the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.

Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people. However, proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.

Pre-historical

The hunting hypothesis holds that stone tools, which appeared circa 2.5 million years ago, were fundamental to human development.

Primatologist Richard Wrangham theorizes that the control of fire by early humans and the associated development of cooking was the spark that radically changed human evolution. Texts such as Guns, Germs, and Steel suggest that early advances in plant agriculture and animal husbandry fundamentally shifted the way that collective groups of individuals, and eventually societies, developed.

Modern examples and effects

Technology has become a major part of society and day-to-day life. As societies learn more about the development of a technology, they become better able to take advantage of it. Once an innovation has been introduced and promoted and reaches a certain level of adoption, it becomes part of society. The use of technology in education provides students with technology literacy, information literacy, the capacity for life-long learning, and other skills necessary for the 21st-century workplace. Digital technology has entered nearly every process and activity of the social system and has, in effect, constructed a new worldwide communication system alongside the original one.

A 1982 study by The New York Times described a technology assessment study by the Institute for the Future, "peering into the future of an electronic world." The study focused on the emerging videotex industry, formed by the marriage of two older technologies, communications and computing. It estimated that 40 percent of American households would have two-way videotex service by the end of the century. By comparison, it took television 16 years to penetrate 90 percent of households from the time commercial service began.

The creation of computers enabled a far better approach to transmitting and storing data. Digital technology became commonly used for downloading music and for watching movies at home, whether on DVD or purchased online. Digital music recordings differ from traditional recording media in that they are reproducible, portable, and free to copy.

Around the globe, many primary schools, colleges, and universities have implemented educational technology. In the early 1990s, the use of the Internet in schools was, on average, only 2–3%; by the end of the 1990s it had risen rapidly to 60%, and by 2008 nearly 100% of schools used the Internet in some educational form. According to ISTE researchers, technological improvements can lead to numerous achievements in classrooms: e-learning systems, student collaboration on project-based learning, and the acquisition of technological skills all result in greater student motivation.

Although these previous examples only show a few of the positive aspects of technology in society, there are negative side effects as well. Within this virtual realm, social media platforms such as Instagram, Facebook, and Snapchat have altered the way Generation Y understands the world and, in turn, how its members view themselves. In recent years, there has been more research on the development of social media depression among users of such sites. "Facebook depression" occurs when users are so affected by their friends' posts and lives that jealousy depletes their sense of self-worth: they compare themselves to the posts made by their peers and feel unworthy or monotonous because their own lives seem far less exciting than the lives of others.

Technology also has a serious effect on young people's health. Overuse of technology is said to be associated with sleep deprivation, which is linked to obesity and poor academic performance among adolescents.

Economics and technological development

Nuclear power plant, Doel, Belgium

In ancient history, economics began when spontaneous exchange of goods and services was replaced over time by deliberate trade structures. Makers of arrowheads, for example, might have realized they could do better by concentrating on making arrowheads and bartering for their other needs. Regardless of the goods and services bartered, some amount of technology was involved, if only in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginning, technology can be said to have spurred the development of more elaborate economies; technology is seen as a primary driver of economic development.

Technological advancement and economic growth are closely related: the level of technology helps determine the rate of economic growth, and it is technological progress that keeps the economy moving.

In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually unlimited. However, while in the beginning technological investment involved little more than the time, efforts, and skills of one or a few individuals, today such investment may involve the collective labor and skills of many millions.

Funding

Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part. It is conventional to divide up funding sources into governmental (involving whole, or nearly whole, social enterprises) and private (involving more limited, but generally more sharply focused) business or individual enterprises.

Government funding for new technology

The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology.

In 1980, the UK government invested just over six million pounds in a four-year program, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, software, training materials, and extensive teacher training. Similar programs have been instituted by governments around the world.

Technology has frequently been driven by the military, with many modern applications developed for the military before they were adapted for civilian use. However, this has always been a two-way flow, with industry often developing and adopting a technology only later adopted by the military.

Entire government agencies are specifically dedicated to research, such as America's National Science Foundation, the United Kingdom's scientific research institutes, and America's Small Business Innovation Research program. Many other government agencies dedicate a major portion of their budget to research and development.

Private funding

Research and development is one of the largest areas of investment made by corporations toward new and innovative technology.

Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD, about two-thirds of research and development in scientific and technical fields is carried out by industry, and 20 percent and 10 percent, respectively, by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 10 percent.

Kickstarter, founded in 2009, allows individuals to receive funding via crowdfunding for many technology-related products, including both new physical creations and documentaries, films, and web series that focus on technology management. This circumvents the corporate or government oversight that most inventors and artists struggle against, but leaves accountability for the project entirely with the individual receiving the funds.

Other economic considerations

Relation to science

Science and technology feed into each other. Science may drive technological development, by generating demand for new instruments to address a scientific question, or by illustrating technical possibilities previously unconsidered. In turn, technology may drive scientific investigation, by creating a need for technological improvements that can only be produced through research, and by raising questions about the underlying principles that a new technology relies on.

For most of human history, technological improvements were arrived at by chance, trial and error, or spontaneous inspiration. When the modern scientific enterprise matured in the Enlightenment, it primarily concerned itself with fundamental questions of nature rather than technical applications. Research and development directed towards immediate technical application is a relatively recent occurrence, arising with the Industrial Revolution and becoming commonplace in the 20th century.

Sociological factors and effects

Values

The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations:

  • Mechanistic world view: viewing the universe as a collection of parts (like a machine) that can be individually analyzed and understood. This is a form of reductionism that is rare nowadays. However, the related "neo-mechanistic world view" holds that nothing in the universe is beyond the understanding of the human intellect. Also, while all things are greater than the sum of their parts (e.g., even if we consider nothing more than the information involved in their combination), in principle even this excess must eventually be understood by human intelligence. That is, no divine or vital principle or essence is involved.
  • Efficiency: A value, originally applied only to machines, but now applied to all aspects of society, so that each element is expected to attain a higher and higher percentage of its maximal possible performance, output, or ability.
  • Social progress: the belief that there is such a thing as social progress, and that, in the main, it is beneficent. Before the Industrial Revolution, and the subsequent explosion of technology, almost all societies believed in a cyclical theory of social movement and, indeed, of all history and the universe. This was based on the cyclicity of the seasons and an agricultural economy's and society's strong ties to that cyclicity. Since much of the world remains close to its agricultural roots, many societies are still much more amenable to cyclicity than to progress in history. This may be seen, for example, in Prabhat Rainjan Sarkar's modern social cycles theory. For a more westernized version of social cyclicity, see Generations: The History of America's Future, 1584 to 2069 by Neil Howe and William Strauss (Harper Perennial, reprint edition, 1992, ISBN 0-688-11912-3), and subsequent books by these authors.

Institutions and groups

Technology often enables organizational and bureaucratic group structures that previously were simply not possible. Examples include:

  • The rise of very large organizations: e.g., governments, the military, health and social welfare institutions, supranational corporations.
  • The commercialization of leisure: sports events, products, etc. (McGinn)
  • The almost instantaneous dispersal of information (especially news) and entertainment around the world.

International

Technology enables greater knowledge of international issues, values, and cultures. Due mostly to mass transportation and mass media, the world seems a much smaller place, as seen in the following:

  • Globalization of ideas
  • Embeddedness of values
  • Population growth and control

Environment

Technology provides an understanding of, and an appreciation for, the world around us.

Most modern technological processes produce unwanted by-products, known as industrial waste and pollution, in addition to the desired products. While most material waste is re-used in the industrial process, many forms are released into the environment, with negative environmental side effects, such as pollution and lack of sustainability. Different social and political systems establish different balances between the value they place on additional goods versus the disvalues of waste products and pollution. Some technologies are designed specifically with the environment in mind, but most are designed first for economic or ergonomic effects. Historically, the value of a clean environment and more efficient productive processes has been the result of an increase in the wealth of society, because once people are able to provide for their basic needs, they are able to focus on less tangible goods such as clean air and water.

The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, ores), and the added pollution of air, water, and land. The more subtle effects include debates over long-term effects (e.g., global warming, deforestation, natural habitat destruction, coastal wetland loss.)

Each wave of technology creates a set of waste previously unknown by humans: toxic waste, radioactive waste, electronic waste.

Electronic waste creates direct environmental impacts through the production and maintenance of the infrastructure necessary for using technology, and indirect impacts by breaking down barriers to global interaction through the use of information and communications technology. The energy consumed in using technology, processing information, and managing infrastructure contributes to what has been called cyber warming.

One of the main problems is the lack of an effective way to remove these pollutants on a large scale expediently. In nature, organisms "recycle" the wastes of other organisms: for example, plants produce oxygen as a by-product of photosynthesis; oxygen-breathing organisms use oxygen to metabolize food, producing carbon dioxide as a by-product; and plants in turn use that carbon dioxide to make sugar, releasing oxygen once again. No such mechanism exists for the removal of technological wastes.

Construction and shaping

Choice

Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include:

  • the channels of distribution: how products go from raw materials through consumption to disposal;
  • the cultural beliefs regarding style, freedom of choice, consumerism, materialism, etc.;
  • the economic values we place on the environment, individual wealth, government control, capitalism, etc.

According to Williams and Edge, the construction and shaping of technology includes the concept of choice (and not necessarily conscious choice). Choice is inherent in both the design of individual artifacts and systems, and in the making of those artifacts and systems.

The idea here is that a single technology may not emerge from the unfolding of a predetermined logic or a single determinant; rather, technology could be a garden of forking paths, with different paths potentially leading to different technological outcomes. This position has been developed in detail by Judy Wajcman. Therefore, choices can have differing implications for society and for particular social groups.

Autonomous technology

In one line of thought, technology develops autonomously; in other words, technology seems to feed on itself, moving forward with a force irresistible by humans. To these thinkers, technology is "inherently dynamic and self-augmenting."

Jacques Ellul is one proponent of the idea that technology is irresistible to humans. He holds that humanity cannot resist the temptation of expanding our knowledge and our technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the perceived autonomy arises because humans do not adequately consider the responsibility that is inherent in technological processes.

Langdon Winner critiques the idea that technological evolution is essentially beyond the control of individuals or society in his book Autonomous Technology. He argues instead that the apparent autonomy of technology is a result of "technological somnambulism," the tendency of people to uncritically and unreflectively embrace and utilize new technologies without regard for their broader social and political effects.

In 1980, Mike Cooley published a critique of the automation and computerisation of engineering work under the title "Architect or Bee? The human/technology relationship". The title alludes to a comparison made by Karl Marx on the issue of the creative achievements of human imaginative power. According to Cooley, "Scientific and technological developments have invariably proved to be double-edged. They produced the beauty of Venice and the hideousness of Chernobyl; the caring therapies of Röntgen's X-rays and the destruction of Hiroshima."

Government

Individuals rely on governmental assistance to control the side effects and negative consequences of technology.

  • Supposed independence of government. An assumption commonly made about the government is that its governance role is neutral or independent. However, some argue that governing is a political process, so government will be influenced by political winds. In addition, because government provides much of the funding for technological research and development, it has a vested interest in certain outcomes. Others point out that the world's biggest ecological disasters, such as the Aral Sea, Chernobyl, and Lake Karachay, have been caused by government projects, which are not accountable to consumers.
  • Liability. One means for controlling technology is to place responsibility for the harm with the agent causing the harm. Government can allow more or less legal liability to fall to the organizations or individuals responsible for damages.
  • Legislation. A source of controversy is the role of industry versus that of government in maintaining a clean environment. While it is generally agreed that industry needs to be held responsible when pollution harms other people, there is disagreement over whether this should be prevented by legislation or civil courts, and whether ecological systems as such should be protected from harm by governments.

Recently, the social shaping of technology has had new influence in the fields of e-science and e-social science in the United Kingdom, which has made centers focusing on the social shaping of science and technology a central part of their funding programs.

Health 2.0


"Health 2.0" is a term introduced in the mid-2000s, as the subset of health care technologies mirroring the wider Web 2.0 movement. It has been defined variously as including social media, user-generated content, and cloud-based and mobile technologies. Some Health 2.0 proponents see these technologies as empowering patients to have greater control over their own health care and diminishing medical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy.

History

Health 2.0 built on the possibilities for changing health care that began with the introduction of eHealth in the mid-1990s, following the emergence of the World Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy-to-use tools for communication, social networking, and self-publishing, there was a spate of media attention to, and increasing interest from patients, clinicians, and medical librarians in, using these tools for health care and medical purposes.

Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities, podcasts, search, tagging, Twitter, videos, wikis, and more) by actors in health care including doctors, patients, and scientists, using principles of open source and user-generated content, and the power of networks and social networks in order to personalize health care, to collaborate, and to promote health education. Possible explanations why health care has generated its own "2.0" term are the availability and proliferation of Health 2.0 applications across health care in general, and the potential for improving public health in particular.

Current use

While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, and social networking, in recent years the term "Health 2.0" has evolved to refer to the role of SaaS and cloud-based technologies and their associated applications on multiple devices. Health 2.0 describes the integration of these into much of general clinical and administrative workflow in health care. As of 2014, approximately 3,000 companies were offering products and services matching this definition, with venture capital funding in the sector exceeding $2.3 billion in 2013.

Definitions

The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration: "The use of social software and light-weight tools to promote collaboration between patients, their caregivers, medical professionals, and other stakeholders in health."

In 2011, Indu Subaiya redefined Health 2.0 as the use in health care of new cloud, SaaS, mobile, and device technologies that are:

  1. Adaptable technologies which easily allow other tools and applications to link and integrate with them, primarily through use of accessible APIs
  2. Focused on the user experience, bringing in the principles of user-centered design
  3. Data driven, in that they both create data and present data to the user in order to help improve decision making
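As a concrete, entirely hypothetical sketch of the third point, the snippet below aggregates patient-entered readings into a JSON summary that another application could consume through an API; the function name, data schema, and values are all invented for illustration:

```python
import json
from statistics import mean

def summarize_readings(readings):
    """Aggregate patient-entered blood-pressure readings (hypothetical
    schema) into a JSON summary that a linked application could consume."""
    systolic = [r["systolic"] for r in readings]
    diastolic = [r["diastolic"] for r in readings]
    summary = {
        "count": len(readings),
        "avg_systolic": round(mean(systolic), 1),
        "avg_diastolic": round(mean(diastolic), 1),
    }
    return json.dumps(summary)

readings = [
    {"systolic": 120, "diastolic": 80},
    {"systolic": 130, "diastolic": 85},
]
print(summarize_readings(readings))
```

The point of the sketch is the design choice, not the arithmetic: by emitting plain JSON, the tool both creates data and presents it in a form other tools can link to and integrate with.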

This wider definition allows recognition of what is or is not a Health 2.0 technology. Typically, enterprise-based, customized client–server systems are not, while more open, cloud-based systems fit the definition. However, this line was blurring by 2011–12 as more enterprise vendors started to introduce cloud-based systems and native applications for new devices like smartphones and tablets.

In addition, Health 2.0 has several competing terms, each with its own followers, if not exact definitions, including Connected Health, Digital Health, Medicine 2.0, and mHealth. All of these support a goal of wider change to the health care system, using technology-enabled reform that usually changes the relationship between patient and professional, through elements such as:

  1. Personalized search that looks into the long tail but cares about the user experience
  2. Communities that capture the accumulated knowledge of patients, caregivers, and clinicians, and explains it to the world
  3. Intelligent tools for content delivery—and transactions
  4. Better integration of data with content

Wider health system definitions

In the late 2000s, several commentators used Health 2.0 as a moniker for a wider concept of system reform, seeking a participatory process between patient and clinician: "New concept of health care wherein all the constituents (patients, physicians, providers, and payers) focus on health care value (outcomes/price) and use competition at the medical condition level over the full cycle of care as the catalyst for improving the safety, efficiency, and quality of health care".

Health 2.0 defines the combination of health data and health information with (patient) experience, through the use of ICT, enabling the citizen to become an active and responsible partner in his/her own health and care pathway.

Health 2.0 is participatory healthcare. Enabled by information, software, and communities that we collect or create, we the patients can be effective partners in our own healthcare, and we the people can participate in reshaping the health system itself.

Definitions of Medicine 2.0 appear to be very similar but typically include more scientific and research aspects: "Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups." A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0.

Overview

A model of Health 2.0

Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves through blogs, Internet forums, online communities, patient-to-physician communication systems, and other more advanced systems. A key concept is that patients themselves should have greater insight into, and control over, information generated about them. Additionally, Health 2.0 relies on the use of modern cloud and mobile-based technologies.

Much of the potential for change from Health 2.0 is facilitated by combining technology driven trends such as Personal Health Records with social networking —"[which] may lead to a powerful new generation of health applications, where people share parts of their electronic health records with other consumers and 'crowdsource' the collective wisdom of other patients and professionals." Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could only be accessed by a physician or other medical professional. Physicians acted as gatekeepers to this information, telling patients test results when and if they deemed it necessary. Such a model operates relatively well in situations such as acute care, where information about specific blood results would be of little use to a lay person, or in general practice where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders, or diseases of unknown etiology patients were at risk of being left without well-coordinated care because data about them was stored in a variety of disparate places and in some cases might contain the opinions of healthcare professionals which were not to be shared with the patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine.

A hypothetical example demonstrates the increased engagement of a patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record was up to date via the Internet. The treating physician might make a diagnosis or send for tests, the results of which could be transmitted directly to the patient's electronic medical record. If a second appointment is needed, the patient will have had time to research what the results might mean for them, what diagnoses may be likely, and may have communicated with other patients who have had a similar set of results in the past. On a second visit a referral might be made to a specialist. The patient might have the opportunity to search for the views of other patients on the best specialist to go to, and in combination with their primary care physician decides whom to see. The specialist gives a diagnosis along with a prognosis and potential options for treatment. The patient has the opportunity to research these treatment options and take a more proactive role in coming to a joint decision with their healthcare provider. They can also choose to submit more data about themselves, such as through a personalized genomics service to identify any risk factors that might improve or worsen their prognosis. As treatment commences, the patient can track their health outcomes through a data-sharing patient community to determine whether the treatment is having an effect for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world.

Level of use of Web 2.0 in health care

Partly due to weak definitions, the novelty of the endeavor and its nature as an entrepreneurial (rather than academic) movement, little empirical evidence exists to explain how much Web 2.0 is being used in general. While it has been estimated that nearly one-third of the 100 million Americans who have looked for health information online say that they or people they know have been significantly helped by what they found, this study considers only the broader use of the Internet for health management.

A study examining physician practices has suggested that some 245,000 physicians in the U.S. are using Web 2.0 in their practice, indicating that use among physicians has moved beyond the early-adopter stage.

Types of Web 2.0 technology in health care

Web 2.0 is commonly associated with technologies such as podcasts, RSS feeds, social bookmarking, weblogs (health blogs), wikis, and other forms of many-to-many publishing; social software; and web application programming interfaces (APIs).
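Of the technologies listed above, RSS feeds are one of the simplest to work with programmatically. The following is a minimal sketch, using only Python's standard library, of the "staying informed" use case: parsing an RSS 2.0 feed and listing its item titles. The inline feed XML is an invented stand-in for a real health-news feed URL.

```python
import xml.etree.ElementTree as ET

# An inline stand-in for XML fetched from a (hypothetical) health-news feed.
rss = """<rss version="2.0"><channel>
<title>Hypothetical Health Feed</title>
<item><title>New diabetes guideline published</title></item>
<item><title>Trial results for migraine drug</title></item>
</channel></rss>"""

root = ET.fromstring(rss)

# Each <item> in an RSS 2.0 channel is one article; collect the headlines.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)
```

In practice the XML would be fetched from a feed URL on a schedule, which is how podcast clients and feed readers keep subscribers up to date.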

The following are examples of uses that have been documented in academic literature.

  • Staying informed: used to keep up with the latest developments in a particular field, for example via podcasts, RSS feeds, and search tools. Users: all (medical professionals and the public).
  • Medical education: used for professional development for doctors, and for public health promotion by public health professionals to the general public; for example, podcasts can be used on the move to increase total available educational time, and these tools have many applications in public health. Users: all (medical professionals and the public).
  • Collaboration and practice: Web 2.0 tools used in daily practice by medical professionals to find information and make decisions; in a 2005 study, Google searches revealed the correct diagnosis in 15 of 26 cases (58%, 95% confidence interval 38% to 77%). Users: doctors and nurses.
  • Managing a particular disease: patients using search tools to find information about a particular condition; patients have been shown to have different patterns of usage depending on whether they are newly diagnosed or managing a severe long-term illness, and long-term patients are more likely to connect to a community in Health 2.0. Users: the public.
  • Sharing data for research: completing patient-reported outcomes and aggregating the data for personal and scientific research; disease-specific communities for patients with rare conditions aggregate data on treatments, symptoms, and outcomes to improve their decision-making and to carry out scientific research such as observational trials. Users: all (medical professionals and the public).

Criticism of the use of Web 2.0 in health care

Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0. These concern:

  1. the lack of clear definitions
  2. issues around the loss of control over information that doctors perceive
  3. safety and the dangers of inaccurate information
  4. issues of ownership and privacy

Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for Medical Doctors (MDs), as it may be effective only for conditions with unique symptoms and signs that can easily be used as search terms. Studies of its accuracy have returned varying results, and this remains in dispute. Secondly, long-held concerns exist about the effects of patients obtaining information online, such as the idea that patients may delay seeking medical advice or accidentally reveal private medical data. Finally, concerns exist about the quality of user-generated content leading to misinformation, such as perpetuating the discredited claim that the MMR vaccine may cause autism. In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong. In a 2007 Pew Research Center survey of Americans, only 3% reported that online advice had caused them serious harm, while nearly one-third reported that they or their acquaintances had been helped by online health advice.

Saturday, April 24, 2021

eHealth

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/EHealth

eHealth (also written e-health) is a relatively recent healthcare practice supported by electronic processes and communication, dating back to at least 1999. Usage of the term varies: it covers not just "Internet medicine", as the term was conceived at the time, but also "virtually everything related to computers and medicine". A study in 2005 found 51 unique definitions. Some argue that it is interchangeable with health informatics, with a broad definition covering electronic/digital processes in health, while others use it in the narrower sense of healthcare practice using the Internet. It can also include health applications and links on mobile phones, referred to as mHealth or m-Health.

Types

The term can encompass a range of services or systems that are at the edge of medicine/healthcare and information technology, including:

  • Electronic health record: enabling the communication of patient data between different healthcare professionals (GPs, specialists, etc.);
  • Computerized physician order entry: a means of requesting diagnostic tests and treatments electronically and receiving the results;
  • ePrescribing: access to prescribing options, printing prescriptions for patients, and sometimes electronic transmission of prescriptions from doctors to pharmacists;
  • Clinical decision support system: providing information electronically about protocols and standards for healthcare professionals to use in diagnosing and treating patients;
  • Telemedicine: physical and psychological diagnosis and treatment at a distance, including the telemonitoring of patients' functions;
  • Telerehabilitation: providing rehabilitation services over a distance through telecommunications;
  • Telesurgery: the use of robots and wireless communication to perform surgery remotely;
  • Teledentistry: the exchange of clinical information and images over a distance;
  • Consumer health informatics: the use of electronic resources on medical topics by healthy individuals or patients;
  • Health knowledge management: e.g. overviews of the latest medical journals, best practice guidelines, or epidemiological tracking (examples include physician resources such as Medscape and MDLinx);
  • Virtual healthcare teams: healthcare professionals who collaborate and share information on patients through digital equipment (for transmural care);
  • mHealth or m-Health: the use of mobile devices to collect aggregate and patient-level health data, provide healthcare information to practitioners, researchers, and patients, monitor patient vitals in real time, and provide care directly (via mobile telemedicine);
  • Medical research using grids: powerful computing and data management capabilities for handling large amounts of heterogeneous data;
  • Health informatics / healthcare information systems: often also refers to software solutions for appointment scheduling, patient data management, work schedule management, and other administrative tasks surrounding health.

Contested definition

Several authors have noted the variable usage in the term; from being specific to the use of the Internet in healthcare to being generally around any use of computers in healthcare. Various authors have considered the evolution of the term and its usage and how this maps to changes in health informatics and healthcare generally. Oh et al., in a 2005 systematic review of the term's usage, offered the definition of eHealth as a set of technological themes in health today, more specifically based on commerce, activities, stakeholders, outcomes, locations, or perspectives. One thing that all sources seem to agree on is that e-health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement.

eHealth literacy

eHealth literacy is defined as "the ability to seek, find, understand and appraise health information from electronic sources and apply knowledge gained to addressing or solving a health problem." According to this definition, eHealth literacy encompasses six types of literacy: traditional (literacy and numeracy), information, media, health, computer, and scientific. Of these, media and computer literacies are unique to the Internet context, with eHealth media literacy being the awareness of media bias or perspective and the ability to discern both explicit and implicit meaning from media messages. The literature includes other definitions of perceived media capability or efficacy, but these were not specific to health information on the Internet. Having the composite skills of eHealth literacy allows health consumers to achieve positive outcomes from using the Internet for health purposes. eHealth literacy has the potential both to protect consumers from harm and to empower them to participate fully in informed health-related decision making. People with high levels of eHealth literacy are also more aware of the risk of encountering unreliable information on the Internet. On the other hand, the extension of digital resources to the health domain in the form of eHealth literacy can also create new gaps between health consumers. eHealth literacy hinges not on mere access to technology, but rather on the skill to apply the accessed knowledge.

Data exchange

One of the factors blocking widespread acceptance of e-health tools is concern about privacy issues regarding patient records, most specifically the EPR (electronic patient record). The main concern has to do with the confidentiality of the data. There is, however, also concern about non-confidential data. Each medical practice has its own jargon and diagnostic tools. To standardize the exchange of information, various coding schemes may be used in combination with international medical standards. Systems that deal with these transfers are often referred to as Health Information Exchange (HIE). Of the forms of e-health already mentioned, there are roughly two types: front-end data exchange and back-end exchange.

Front-end exchange typically involves the patient, while back-end exchange does not. A common example of a rather simple front-end exchange is a patient taking a photo of a healing wound with a mobile phone and sending it by email to the family doctor for review. Such an action may avoid the cost of an expensive hospital visit.

A common example of a back-end exchange is when a patient on vacation visits a doctor who then may request access to the patient's health records, such as medicine prescriptions, x-ray photographs, or blood test results. Such an action may reveal allergies or other prior conditions that are relevant to the visit.

Thesaurus

Successful e-health initiatives such as e-Diabetes have shown that for data exchange to be facilitated either at the front-end or the back-end, a common thesaurus is needed for terms of reference. Various medical practices in chronic patient care (such as for diabetic patients) already have a well-defined set of terms and actions, which makes standard communication exchange easier, whether the exchange is initiated by the patient or the caregiver.

In general, explanatory diagnostic information (such as the standard ICD-10) may be exchanged insecurely, and private information (such as personal information from the patient) must be secured. E-health manages both flows of information, while ensuring the quality of the data exchange.
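The two information flows described above can be sketched in code. The following is a hypothetical illustration only: the record layout and field names are invented, and real HIE systems use standardized formats rather than ad hoc dictionaries. The point is the split itself, in which explanatory diagnostic codes (such as ICD-10) may travel over less protected channels, while personally identifying fields must be secured.

```python
# Invented record layout for illustration; not a real HIE message format.
record = {
    "patient_name": "Jane Doe",        # private: must be secured
    "date_of_birth": "1970-01-01",     # private: must be secured
    "icd10_codes": ["E11.9", "I10"],   # explanatory diagnostic codes
}

PRIVATE_FIELDS = {"patient_name", "date_of_birth"}

def split_record(rec):
    """Separate a record into a shareable part and a part requiring encryption."""
    shareable = {k: v for k, v in rec.items() if k not in PRIVATE_FIELDS}
    secured = {k: v for k, v in rec.items() if k in PRIVATE_FIELDS}
    return shareable, secured

shareable, secured = split_record(record)
print(shareable)  # only the diagnostic codes leave over the insecure channel
```

In a real system the "secured" portion would be encrypted and access-controlled rather than merely set aside, but the partitioning step is where the quality of the exchange is managed.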

Early adopters

Patients living with long-term conditions (also called chronic conditions) often acquire, over time, a high level of knowledge about the processes involved in their own care, and often develop a routine in coping with their condition. For these patients, front-end e-health solutions tend to be relatively easy to implement.

E-mental health

E-mental health is frequently used to refer to internet-based interventions and support for mental health conditions. However, it can also refer to the use of information and communication technologies more broadly, including social media, landline and mobile phones. E-mental health services can include information, peer support services, computer- and internet-based programs, virtual applications and games, as well as real-time interaction with trained clinicians. Programs can also be delivered using telephones and interactive voice response (IVR).

Mental disorders include a range of conditions such as alcohol and drug use disorders, mood disorders such as depression, dementia and Alzheimer's disease, delusional disorders such as schizophrenia, and anxiety disorders. The majority of e-mental health interventions have focused on the treatment of depression and anxiety. There are also e-mental health programs available for other interventions, such as smoking cessation, gambling, and post-disaster mental health.

Advantages and disadvantages

E-mental health has a number of advantages such as being low cost, easily accessible and providing anonymity to users. However, there are also a number of disadvantages such as concerns regarding treatment credibility, user privacy and confidentiality. Online security involves the implementation of appropriate safeguards to protect user privacy and confidentiality. This includes appropriate collection and handling of user data, the protection of data from unauthorized access and modification and the safe storage of data.

E-mental health has been gaining momentum in academic research as well as in practical arenas across a wide variety of disciplines, such as psychology, clinical social work, family and marriage therapy, and mental health counseling. Testifying to this momentum, the e-mental health movement has its own international organization, the International Society for Mental Health Online. However, the implementation of e-mental health in clinical practice and healthcare systems remains limited and fragmented.

Programs

There are at least five programs currently available to treat anxiety and depression. Several programs have been identified by the UK National Institute for Health and Care Excellence as cost-effective for use in primary care. These include Fearfighter, a text-based cognitive behavioural therapy (CBT) program to treat people with phobias, and Beating the Blues, an interactive text, cartoon and video CBT program for anxiety and depression. Two programs have been supported for use in primary care by the Australian Government. The first is Anxiety Online, a text-based program for anxiety, depressive and eating disorders, and the second is THIS WAY UP, a set of interactive text, cartoon and video programs for anxiety and depressive disorders. Another is iFightDepression, a multilingual, free-to-use, web-based tool for self-management of less severe forms of depression, for use under the guidance of a GP or psychotherapist.

There are a number of online programs relating to smoking cessation. QuitCoach is a personalised quit plan based on the user's responses to questions about giving up smoking, tailored individually each time the user logs into the site. Freedom From Smoking takes users through lessons grouped into modules that provide information and assignments to complete. The modules guide participants through steps such as preparing to quit smoking, stopping smoking, and preventing relapse.

Other internet programs have been developed specifically as part of research into treatment for specific disorders. For example, an online self-directed therapy for problem gambling was developed to specifically test this as a method of treatment. All participants were given access to a website. The treatment group was provided with behavioural and cognitive strategies to reduce or quit gambling. This was presented in the form of a workbook which encouraged participants to self-monitor their gambling by maintaining an online log of gambling and gambling urges. Participants could also use a smartphone application to collect self-monitoring information. Finally participants could also choose to receive motivational email or text reminders of their progress and goals.

An internet-based intervention was also developed for use after Hurricane Ike (2008). During this study, 1,249 disaster-affected adults were randomly recruited to take part. Participants were given a structured interview and then invited to access the web intervention using a unique password. Access to the website was provided for a four-month period. As participants accessed the site, they were randomly assigned to either the intervention or a comparison condition; those assigned to the intervention were provided with modules consisting of information about effective coping strategies for managing mental health and health-risk behaviour.

eHealth programs have been found to be effective in treating Borderline Personality Disorder (BPD).

Cybermedicine

Cybermedicine is the use of the Internet to deliver medical services, such as medical consultations and drug prescriptions. It is the successor to telemedicine, wherein doctors would consult and treat patients remotely via telephone or fax.

Cybermedicine is already being used in small projects where images are transmitted from a primary care setting to a medical specialist, who comments on the case and suggests which intervention might benefit the patient. A field that lends itself to this approach is dermatology, where images of an eruption are communicated to a hospital specialist who determines if referral is necessary.

The field has also expanded to include online "ask the doctor" services that allow patients direct, paid access to consultations (with varying degrees of depth) with medical professionals (examples include Bundoo.com, Teladoc, and Ask The Doctor).

A Cyber Doctor, known in the UK as a Cyber Physician, is a medical professional who conducts consultations via the internet, treating virtual patients whom they may never meet face to face. This is a new area of medicine that has been utilized by the armed forces, and by teaching hospitals that offer online consultations to patients before they decide to travel for unique medical treatment offered only at a particular medical facility.

Self-monitoring healthcare devices

Self-monitoring is the use of sensors or tools that are readily available to the general public to track and record personal data. The sensors are usually wearable devices, and the tools are available digitally through mobile applications. Self-monitoring devices were created to make personal data instantly available to the individual for analysis. At present, fitness and health monitoring are the most popular applications for self-monitoring devices. Their biggest benefit is eliminating the need for third parties such as hospitals to run tests, which are both expensive and lengthy. These devices are an important advancement in the field of personal health management.

Self-monitoring healthcare devices exist in many forms. An example is the Nike+ FuelBand, a modified version of the original pedometer. This device is worn on the wrist and allows one to set a personal goal for a daily energy burn. It records the calories burned and the number of steps taken each day while simultaneously functioning as a watch. To add to the ease of the user interface, it includes both numeric and visual indicators of whether or not the individual has achieved his or her daily goal. Finally, it syncs to an iPhone app that allows for tracking and sharing of personal records and achievements.
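The goal-tracking behaviour described above reduces to a simple comparison between a recorded measurement and a user-chosen target. The sketch below illustrates that logic only; the field names and numbers are invented for this example and do not reflect the actual Nike+ firmware or API.

```python
def goal_met(calories_burned, daily_goal):
    """Report whether the day's recorded energy burn reached the user's target."""
    return calories_burned >= daily_goal

# Hypothetical day of wearable data and a user-chosen energy-burn target.
day_log = {"steps": 8432, "calories_burned": 2150}
daily_goal = 2000

print(goal_met(day_log["calories_burned"], daily_goal))
```

The numeric and visual goal indicators on such devices are, in essence, different renderings of this comparison updated throughout the day.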

Other monitoring devices have more medical relevance. A well-known device of this type is the blood glucose monitor. Its use is restricted to diabetic patients and allows users to measure the blood glucose levels in their body. It is highly quantitative, and the results are available instantaneously. However, it is not as independent a self-monitoring device as the Nike+ FuelBand, because it requires some patient education before use. One needs to be able to make connections between the levels of glucose and the effects of diet and exercise. In addition, users must also understand how the treatment should be adjusted based on the results. In other words, the results are not just static measurements.
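The interpretation step described above, in which a reading only becomes actionable once the user understands what range it falls in, can be sketched as a simple classifier. The thresholds below follow commonly cited fasting blood-glucose ranges in mg/dL (normal below 100, elevated 100-125, high at 126 and above); this is an illustration of the concept, and real treatment adjustments must come from a clinician, not a script.

```python
def classify_fasting_glucose(mg_dl):
    """Map a fasting blood-glucose reading (mg/dL) to a coarse category."""
    if mg_dl < 100:
        return "normal"
    elif mg_dl < 126:
        return "elevated"
    return "high"

# Three hypothetical readings from a self-monitoring log.
for reading in (92, 110, 140):
    print(reading, classify_fasting_glucose(reading))
```

This is the kind of feedback loop that distinguishes a glucose monitor from a pedometer: the reading feeds a decision (about diet, exercise, or medication) rather than simply being displayed.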

The demand for self-monitoring health devices is skyrocketing, as wireless health technologies have become especially popular in the last few years. In fact, it is expected that by 2016, self-monitoring health devices will account for 80% of wireless medical devices. The key selling point for these devices is the mobility of information for consumers. The accessibility of mobile devices such as smartphones and tablets has increased significantly within the past decade, making it easier for users to access real-time information from a number of peripheral devices.

There are still many future improvements for self-monitoring healthcare devices. Although most of these wearable devices have been excellent at providing direct data to the individual user, the biggest task which remains at hand is how to effectively use this data. Although the blood glucose monitor allows the user to take action based on the results, measurements such as the pulse rate, EKG signals, and calories do not necessarily serve to actively guide an individual's personal healthcare management. Consumers are interested in qualitative feedback in addition to the quantitative measurements recorded by the devices.

Evaluation

Knowledge of the socio-economic performance of eHealth is limited, and findings from evaluations are often challenging to transfer to other settings. Socio-economic evaluations of some narrow types of mHealth can rely on health economic methodologies, but larger scale eHealth may have too many variables, and tortuous, intangible cause and effect links may need a wider approach.

In developing countries

eHealth in general, and telemedicine in particular, is a vital resource for remote regions of emerging and developing countries, but it is often difficult to establish because of the lack of communications infrastructure. In Benin, for example, hospitals can often become inaccessible due to flooding during the rainy season, and across Africa the low population density, severe weather conditions, and difficult financial situation of many states have left the majority of people badly disadvantaged in medical care. In many regions there is not only a significant lack of facilities and trained health professionals, but also no access to eHealth, because remote villages have no internet access or even a reliable electricity supply.

Internet connectivity, and the benefits of eHealth, can be brought to these regions using satellite broadband technology, and satellite is often the only solution where terrestrial access may be limited, or poor quality, and one that can provide a fast connection over a vast coverage area.
