
Saturday, June 21, 2025

Digital divide

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Digital_divide

The digital divide is the unequal access to digital technology, including smartphones, tablets, laptops, and the internet. The digital divide worsens inequality around access to information and resources. In the Information Age, people without access to the Internet and other technology are at a disadvantage, for they are unable or less able to connect with others, find and apply for jobs, shop, and learn.

People who are homeless, living in poverty, elderly people, and those living in rural communities may have limited access to the Internet; in contrast, urban middle class and upper-class people have easy access to the Internet. Another divide is between producers and consumers of Internet content, which could be a result of educational disparities. While social media use varies across age groups, a US 2010 study reported no racial divide.

History

The historical roots of the digital divide in America refer to the widening gap that emerged during the early modern period between those who could and could not access the real-time forms of calculation, decision-making, and visualization offered via written and printed media. Within this context, ethical discussions regarding the relationship between education and the free distribution of information were raised by thinkers such as Immanuel Kant, Jean-Jacques Rousseau (1712–1778), and Mary Wollstonecraft. Rousseau advocated that governments should intervene to ensure that any society's economic benefits are fairly and meaningfully distributed. Amid the Industrial Revolution in Great Britain, Rousseau's ideas helped to justify poor laws that created a safety net for those harmed by new forms of production. Later, as telegraph and postal systems evolved, many used Rousseau's ideas to argue for full access to those services, even if it meant subsidizing hard-to-serve citizens. Thus, "universal service" referred to innovations in regulation and taxation that would allow phone companies such as AT&T in the United States to serve hard-to-reach rural users. In 1996, as telecommunications companies merged with Internet companies, the Federal Communications Commission implemented the Telecommunications Act of 1996, considering regulatory strategies and taxation policies to close the digital divide. Though the term "digital divide" was coined among consumer groups that sought to tax and regulate information and communications technology (ICT) companies to close the divide, the topic soon moved onto a global stage. The focus shifted to the World Trade Organization, which adopted a telecommunications services agreement that resisted regulation of ICT companies, so that they would not be required to serve hard-to-serve individuals and communities.
In 1999, to assuage anti-globalization forces, the WTO hosted the "Financial Solutions to Digital Divide" conference in Seattle, US, co-organized by Craig Warren Smith of the Digital Divide Institute and Bill Gates Sr., chairman of the Bill & Melinda Gates Foundation. It catalyzed a full-scale global movement to close the digital divide, which quickly spread to all sectors of the global economy. In 2000, US president Bill Clinton mentioned the term in his State of the Union Address.

During the COVID-19 pandemic

At the outset of the COVID-19 pandemic, governments worldwide issued stay-at-home orders that established lockdowns, quarantines, restrictions, and closures. The resulting interruptions to schooling, public services, and business operations drove nearly half of the world's population to seek alternative methods of living in isolation. These methods included telemedicine, virtual classrooms, online shopping, technology-based social interactions, and remote work, all of which require high-speed or broadband internet access and digital technologies. A Pew Research Center study reports that 90% of Americans described the use of the Internet as "essential" during the pandemic. The accelerated use of digital technologies creates a landscape where the ability, or lack thereof, to access digital spaces becomes a crucial factor in everyday life.

According to the Pew Research Center, 59% of children from lower-income families were likely to face digital obstacles in completing school assignments. These obstacles included having to complete homework on a cellphone, having to use public Wi-Fi because of unreliable home internet service, and lacking access to a computer at home. This difficulty, termed the "homework gap", affects more than 30% of K-12 students living below the poverty threshold and disproportionately affects American Indian/Alaska Native, Black, and Hispanic students. These interruptions and privilege gaps in education exemplify the systemic marginalization of historically oppressed groups in primary education. The pandemic exposed inequities that cause discrepancies in learning.

A lack of "tech readiness", that is, confident and independent use of devices, was reported among the US elderly population, with more than 50% reporting inadequate knowledge of devices and more than one-third reporting a lack of confidence. Moreover, according to a UN research paper, similar results can be found across various Asian countries, with those above the age of 74 reporting lower and less confident use of digital devices. This aspect of the digital divide became more acute during the pandemic, as healthcare providers increasingly relied upon telemedicine to manage chronic and acute health conditions.

Aspects

There are various definitions of the digital divide, all with slightly different emphasis, which is evidenced by related concepts like digital inclusion, digital participation, digital skills, media literacy, and digital accessibility.

Infrastructure

Infrastructure refers to the physical means by which individuals, households, businesses, and communities connect to the Internet, such as desktop computers, laptops, basic mobile phones or smartphones, iPods or other MP3 players, gaming consoles such as Xbox or PlayStation, electronic book readers, and tablets such as iPads.

[Figure: Gini coefficients for telecommunication capacity (in kbit/s) among individuals worldwide, showing that the digital divide measured in terms of bandwidth is not closing but fluctuating up and down.]

Traditionally, the nature of the divide has been measured in terms of the existing numbers of subscriptions and digital devices. Given the increasing number of such devices, some have concluded that the digital divide among individuals has increasingly been closing as the result of a natural and almost automatic process. Others point to persistent lower levels of connectivity among women, racial and ethnic minorities, people with lower incomes, rural residents, and less educated people as evidence that addressing inequalities in access to and use of the medium will require much more than the passing of time. Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita).

As shown in the figure, the digital divide in kbit/s is not monotonically decreasing but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, just as "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 5G and fiber optics FTTH). Measurement methodologies of the digital divide, most notably the Integrated Contextual Iterative (ICI) approach and the Digital Divide Gap (DDG) measurement model, are used to analyze the gap between developed and developing countries, and the gap among the 27 member states of the European Union. The Good Things Foundation, a UK non-profit organisation, collates data on the extent and impact of the digital divide in the UK and lobbies the government to address digital exclusion.
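The Gini coefficient used in these bandwidth studies is the standard inequality measure, applied to kbit/s per capita instead of income. A minimal sketch of the computation, using invented sample values rather than the studies' actual data:

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 = perfect equality, ->1 = maximal inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Closed form over sorted data: G = (2 * sum(i * x_i)) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical per-capita bandwidth figures in kbit/s (illustration only)
equal_access = [100, 100, 100, 100]
skewed_access = [1, 5, 10, 500]
print(gini(equal_access))   # 0.0
print(gini(skewed_access))  # close to 1: most capacity held by one individual
```

A rising coefficient after an innovation like broadband captures exactly the "re-opening" pattern described above: early adopters' capacity jumps while everyone else's stays flat.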

Skills and digital literacy

Research from 2001 showed that the digital divide is more than just an access issue and cannot be alleviated merely by providing the necessary equipment. There are at least three factors at play: information accessibility, information utilization, and information receptiveness. More than just accessibility, the digital divide consists of society's lack of knowledge on how to make use of the information and communication tools once they exist within a community. Information professionals have the ability to help bridge the gap by providing reference and information services to help individuals learn and utilize the technologies to which they do have access, regardless of the economic status of the individual seeking help.

Location

One can connect to the internet in a variety of locations, such as homes, offices, schools, libraries, public spaces, and Internet cafes. Levels of connectivity often vary between rural, suburban, and urban areas.

In 2017, the Wireless Broadband Alliance published the white paper The Urban Unconnected, which highlighted that in the eight countries with the world's highest GNP about 1.75 billion people had no internet connection, and one third of them lived in major urban centers. Delhi (5.3 million, 9% of the total population), São Paulo (4.3 million, 36%), New York (1.6 million, 19%), and Moscow (2.1 million, 17%) registered the highest numbers of citizens without internet access of any type.

As of 2021, only about half of the world's population had access to the internet, leaving 3.7 billion people without it. A majority of those are in developing countries, and a large portion of them are women. Governments of different countries also have different policies on privacy, data governance, freedom of speech, and many other factors. Government restrictions make it challenging for technology companies to provide services in certain countries. This disproportionately impacts different regions of the world: Europe has the highest percentage of the population online, while Africa has the lowest. From 2010 to 2014, Europe went from 67% to 75% online, while over the same span Africa went from 10% to 19%.

Network speeds play a large role in the quality of an internet connection. Large cities and towns may have better access to high speed internet than rural areas, which may have limited or no service. Households can be locked into a specific service provider, since it may be the only carrier that even offers service to the area. This applies to regions that have developed networks, like the United States, but also applies to developing countries, so that very large areas have virtually no coverage. In those areas there are very limited actions that a consumer could take, since the issue is mainly infrastructure. Technologies that provide an internet connection through satellite are becoming more common, like Starlink, but they are still not available in many regions.

Based on location, a connection may be so slow as to be virtually unusable, solely because a network provider has limited infrastructure in the area. For example, to download 5 GB of data in Taiwan it might take about 8 minutes, while the same download might take 30 hours in Yemen.
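The Taiwan-versus-Yemen contrast above is simple arithmetic over link speed. The sketch below (an illustration, not from the source) converts a file size and a link speed into a download time; the two speeds are back-calculated assumptions chosen to reproduce the roughly 8-minute and 30-hour figures:

```python
def download_seconds(size_gb, speed_mbps):
    """Seconds to download size_gb gigabytes (decimal GB) at speed_mbps megabits per second."""
    bits = size_gb * 8 * 1_000_000_000   # GB -> bits
    return bits / (speed_mbps * 1_000_000)

# Assumed link speeds, not measured figures:
# ~83 Mbit/s reproduces the ~8-minute download; ~0.37 Mbit/s the ~30-hour one.
print(download_seconds(5, 83) / 60)      # about 8 (minutes)
print(download_seconds(5, 0.37) / 3600)  # about 30 (hours)
```

The point of the exercise: a two-orders-of-magnitude difference in available bandwidth turns a coffee-break download into an overnight one, which is why slow infrastructure can render a nominally "connected" region effectively offline.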

From 2020 to 2022, average download speeds in the EU climbed from 70 Mbps to more than 120 Mbps, owing mostly to the demand for digital services during the pandemic. There is still a large rural-urban disparity in internet speeds, with metropolitan areas in France and Denmark reaching rates of more than 150 Mbps, while many rural areas in Greece, Croatia, and Cyprus have speeds of less than 60 Mbps.

The EU aims for complete gigabit coverage by 2030; however, as of 2022, just over 60% of Europe had high-speed internet infrastructure, signalling the need for further improvements.

Applications

Common Sense Media, a nonprofit group based in San Francisco, surveyed almost 1,400 parents and reported in 2011 that 47 percent of families with incomes more than $75,000 had downloaded apps for their children, while only 14 percent of families earning less than $30,000 had done so.

Reasons and correlating variables

As of 2014, the digital divide was known to persist for a number of reasons. Obtaining access to ICTs and using them actively has been linked to demographic and socio-economic characteristics including income, education, race, gender, geographic location (urban-rural), age, skills, awareness, and political, cultural, and psychological attitudes. Multiple regression analysis across countries has shown that income level and educational attainment provide the most powerful explanatory variables for ICT access and usage. Evidence was found that Caucasians are much more likely than non-Caucasians to own a computer and to have access to the Internet in their homes. As for geographic location, people living in urban centers have more access to and show more usage of computer services than those in rural areas.

In developing countries, a digital divide between women and men is apparent in tech usage, with men more likely to be competent tech users. Controlled statistical analysis has shown that income, education, and employment act as confounding variables, and that women with the same level of income, education, and employment actually embrace ICT more than men (see Women and ICT4D). This argues against any suggestion that women are "naturally" more technophobic or less tech-savvy. However, each nation has its own set of causes of the digital divide. For example, the digital divide in Germany is unique because it is not largely due to differences in quality of infrastructure.

The correlation between income and internet use suggests that the digital divide persists at least in part due to income disparities. Most commonly, a digital divide stems from poverty and the economic barriers that limit resources and prevent people from obtaining or otherwise using newer technologies.

In research, while each explanation is examined, others must be controlled for to eliminate interaction effects or mediating variables, but these explanations are meant to stand as general trends, not direct causes. Measurements of the intensity of usage, such as incidence and frequency, vary by study. Some report usage as access to the Internet and ICTs, while others report usage as having previously connected to the Internet. Some studies focus on specific technologies, others on a combination (such as Infostate, proposed by Orbicom-UNESCO, the Digital Opportunity Index, or ITU's ICT Development Index).

Economic gap in the United States

During the mid-1990s, the United States Department of Commerce's National Telecommunications and Information Administration (NTIA) began publishing reports about the Internet and access to and usage of the resource. The first of three reports is titled "Falling Through the Net: A Survey of the 'Have Nots' in Rural and Urban America" (1995), the second is "Falling Through the Net II: New Data on the Digital Divide" (1998), and the final report is "Falling Through the Net: Defining the Digital Divide" (1999). The NTIA's final report attempted to define the term digital divide clearly as "the divide between those with access to new technologies and those without". Since the introduction of the NTIA reports, much of the early, relevant literature began to reference the NTIA's definition. The digital divide is commonly defined as the gap between the "haves" and "have-nots".

The U.S. Federal Communications Commission's (FCC) 2019 Broadband Deployment Report indicated that 21.3 million Americans do not have access to wired or wireless broadband internet. As of 2020, BroadbandNow, an independent research company studying access to internet technologies, estimated that the actual number of Americans without high-speed internet is twice that figure. According to a 2021 Pew Research Center report, smartphone ownership and internet use have increased for all Americans; however, a significant gap still exists between those with lower incomes and those with higher incomes: U.S. households earning $100K or more are twice as likely to own multiple devices and have home internet service as those making between $30K and $100K, and three times as likely as those earning less than $30K per year. The same research indicated that 13% of the lowest-income households had no access to internet or digital devices at home, compared to only 1% of the highest-income households.

According to a Pew Research Center survey of U.S. adults conducted from January 25 to February 8, 2021, the digital lives of Americans with high and low incomes differ markedly, even though the proportion of Americans using home internet or cell phones remained constant between 2019 and 2021. A quarter of those with yearly earnings under $30,000 (24%) say they do not own a smartphone. Four out of every ten lower-income people (43%) do not have home internet access or a computer. Furthermore, a majority of lower-income Americans do not own a tablet device.

On the other hand, these technologies are practically universal among people earning $100,000 or more per year. Americans with larger family incomes are also more likely to buy a variety of internet-connected products. Home Wi-Fi, a smartphone, a computer, and a tablet are all used by around six out of ten households making $100,000 or more per year, compared to 23 percent of lower-income households.

Racial gap in the United States

Although many groups in society are affected by a lack of access to computers or the Internet, communities of color are observed to be specifically negatively affected by the digital divide. Pew research shows that as of 2021, home broadband rates are 81% for White households, 71% for Black households, and 65% for Hispanic households. While 63% of Black adults say the lack of broadband is a disadvantage, only 49% of White adults do. Smartphone and tablet ownership remains consistent, with about 8 out of 10 Black, White, and Hispanic individuals reporting owning a smartphone and half owning a tablet. A 2021 survey found that a quarter of Hispanic adults rely on their smartphone for internet access and do not have broadband at home.

Physical and mental disability gap

Inequities in access to information technologies exist between individuals living with a physical disability and those who are not. In 2011, according to the Pew Research Center, 54% of households with a person who had a disability had home Internet access, compared to 81% of households without. The type of disability an individual has can prevent them from interacting with computer and smartphone screens, for example quadriplegia or a disability affecting the hands. There is also a lack of access to technology and home Internet among those who have cognitive and auditory disabilities. There is concern about whether the increasing use of information technologies will increase equality by offering opportunities for individuals living with disabilities, or whether it will only add to present inequalities and leave individuals living with disabilities behind in society. Issues such as the perception of disabilities in society, national and regional government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. In 2022, a survey of people in the UK with severe mental illness found that 42% lacked basic digital skills, such as changing passwords or connecting to Wi-Fi.

People with disabilities are also the targets of online abuse. Online disability hate crimes have increased by 33% across the UK between 2016–17 and 2017–18 according to a report published by Leonard Cheshire, a health and welfare charity. Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019 when model Katie Price's son was the target of online abuse that was attributed to him having a disability. In response to the abuse, a campaign was launched by Price to ensure that Britain's MPs held accountable those who perpetuate online abuse towards those with disabilities. Online abuse towards individuals with disabilities is a factor that can discourage people from engaging online which could prevent people from learning information that could improve their lives. Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and "faking" their disability for financial gain, which in some cases leads to unnecessary investigations.

Gender gap

Due to the rapidly declining price of connectivity and hardware, skills deficits have eclipsed barriers of access as the primary contributor to the gender digital divide. Studies show that women are less likely to know how to leverage devices and Internet access to their full potential, even when they do use digital technologies. In rural India, for example, a study found that the majority of women who owned mobile phones only knew how to answer calls. They could not dial numbers or read messages without assistance from their husbands, due to a lack of literacy and numeracy skills. A survey of 3,000 respondents across 25 countries found that adolescent boys with mobile phones used them for a wider range of activities, such as playing games and accessing financial services online. Adolescent girls in the same study tended to use just the basic functionalities of their phone, such as making calls and using the calculator. Similar trends can be seen even in areas where Internet access is near-universal. A survey of women in nine cities around the world revealed that although 97% of women were using social media, only 48% of them were expanding their networks, and only 21% of Internet-connected women had searched online for information related to health, legal rights or transport. In some cities, less than one quarter of connected women had used the Internet to look for a job.

Abilities and perceptions of abilities

Studies show that despite strong performance in computer and information literacy (CIL), girls do not have confidence in their ICT abilities. According to the International Computer and Information Literacy Study (ICILS) assessment, girls' self-efficacy scores (their perceived, as opposed to their actual, abilities) for advanced ICT tasks were lower than boys'.

A paper published by J. Cooper of Princeton University points out that learning technology is designed to be receptive to men instead of women. Overall, the study presents the problem of perspectives in society that result from gendered socialization patterns: computers are believed to be part of the male experience because they have traditionally been presented as toys for boys. This divide follows children as they grow older, and young girls are not encouraged as much to pursue degrees in IT and computer science. In 1990, the percentage of women in computing jobs was 36%; by 2016, this number had fallen to 25%. This can be seen in the underrepresentation of women in IT hubs such as Silicon Valley.

Algorithmic bias has also been shown in machine learning algorithms implemented by major companies. In 2015, Amazon had to abandon a recruiting algorithm that produced different ratings for candidates for software developer and other technical jobs. The machine-learning algorithm was found to be biased against women, favoring male resumes over female resumes, because Amazon's models were trained to vet patterns in resumes collected over a 10-year period. During that period, the majority of the resumes belonged to men, a reflection of male dominance across the tech industry.

Age gap

The age gap contributes to the digital divide because people born before 1983 did not grow up with the internet. According to Marc Prensky, people in this age range are classified as "digital immigrants": "a person born or brought up before the widespread use of digital technology." The internet became officially available for public use on January 1, 1983; anyone born before then has had to adapt to the new age of technology. By contrast, people born after 1983 are considered "digital natives", defined as people born or brought up during the age of digital technology.

Across the globe, there is a 10-percentage-point difference in internet usage between people aged 15–24 and people aged 25 or older. According to the International Telecommunication Union (ITU), 75% of people aged 15–24 used the internet in 2022, compared to 65% of people aged 25 or older. The largest generational divide occurs in Africa, where 55% of the younger age group uses the internet compared to 36% of those aged 25 or older. The smallest divide occurs in the Commonwealth of Independent States, where 91% of the younger age group uses the internet compared to 83% of those aged 25 or older.

In addition to being less connected with the internet, older generations are less likely to use financial technology, also known as fintech. Fintech is any way of managing money via digital devices. Some examples of fintech include digital payment apps such as Venmo and Apple Pay, tax services such as TurboTax, or applying for a mortgage digitally. In data from World Bank Findex, 40% of people younger than 40 years old utilized fintech compared to less than 25% of people aged 60 years or older.

Global level

The divide between differing countries or regions of the world is referred to as the global digital divide, which examines the technological gap between developing and developed countries. The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. In contrast, the global digital divide describes disparities in access to computing and information resources, and the opportunities derived from such access. As the internet rapidly expands, it is difficult for developing countries to keep up with the constant changes. In 2014, only three countries (China, the US, and Japan) hosted 50% of the globally installed bandwidth potential. This concentration is not new: historically, only ten countries have hosted 70–75% of the global telecommunication capacity. The U.S. lost its global leadership in installed bandwidth in 2011, replaced by China, which by 2014 hosted more than twice as much national bandwidth potential (29% versus 13% of the global total).

Some zero-rating programs such as Facebook Zero offer free/subsidized data access to certain websites. Critics object that this is an anti-competitive program that undermines net neutrality and creates a "walled garden". A 2015 study reported that 65% of Nigerians, 61% of Indonesians, and 58% of Indians agree with the statement that "Facebook is the Internet" compared with only 5% in the US.

Implications

Social capital

Once an individual is connected, Internet connectivity and ICTs can enhance his or her future social and cultural capital. Social capital is acquired through repeated interactions with other individuals or groups of individuals. Connecting to the Internet creates another set of means by which to achieve repeated interactions. ICTs and Internet connectivity enable repeated interactions through access to social networks, chat rooms, and gaming sites. Once an individual has access to connectivity, obtains infrastructure by which to connect, and can understand and use the information that ICTs and connectivity provide, that individual is capable of becoming a "digital citizen."

Economic disparity

In the United States, research by Unguarded Availability Services notes a direct correlation between a company's access to technological advancements and its overall success in bolstering the economy. The study, which surveyed over 2,000 IT executives and staff officers, indicates that 69 percent of employees feel they do not have access to sufficient technology to make their jobs easier, while 63 percent believe the lack of technological mechanisms hinders their ability to develop new work skills. Additional analysis shows how the digital divide also affects economies around the world. A BCG report suggests that in countries like Sweden, Switzerland, and the U.K., digital connection among communities is easier, allowing their populations to obtain a much larger share of the economy via digital business; in these places, that share is approximately 2.5 percentage points higher. During a meeting with the United Nations, a Bangladesh representative expressed concern that poor and undeveloped countries would be left behind due to a lack of funds to bridge the digital gap.

Education

The digital divide impacts children's ability to learn and grow in low-income school districts. Without Internet access, students cannot cultivate the technological skills needed to understand today's dynamic economy. The need for the internet starts while children are in school, for matters such as school portal access, homework submission, and assignment research. A report by the Federal Communications Commission's Broadband Task Force showed that about 70% of teachers give students homework that demands access to broadband. Approximately 65% of young scholars use the Internet at home to complete assignments and to connect with teachers and other students via discussion boards and shared files. A recent study indicates that approximately 50% of students say they are unable to finish their homework because they cannot connect to the Internet or, in some cases, find a computer. Additionally, the Public Policy Institute of California reported in 2023 that 27% of the state's school children lack the broadband necessary to attend school remotely, and 16% have no internet connection at all.

This disadvantage has measurable consequences: 42% of students say they received a lower grade because of it. According to research conducted by the Center for American Progress, "if the United States were able to close the educational achievement gaps between native-born white children and black and Hispanic children, the U.S. economy would be 5.8 percent—or nearly $2.3 trillion—larger in 2050".

In a reverse of this idea, well-off families, especially the tech-savvy parents in Silicon Valley, carefully limit their own children's screen time. The children of wealthy families attend play-based preschool programs that emphasize social interaction instead of time spent in front of computers or other digital devices, and they pay to send their children to schools that limit screen time. American families that cannot afford high-quality childcare options are more likely to use tablet computers filled with apps for children as a cheap replacement for a babysitter, and their government-run schools encourage screen time during school. Students in school are also learning about the digital divide.

To reduce the impact of the digital divide and increase digital literacy in young people at an early age, governments have begun to develop policy on embedding digital literacies in both student and educator programs, for instance, in Initial Teacher Education programs in Scotland. The National Framework for Digital Literacies in Initial Teacher Education was developed by representatives from Higher Education institutions that offer Initial Teacher Education (ITE) programs, in conjunction with the Scottish Council of Deans of Education (SCDE) and with the support of the Scottish Government. This policy-driven approach aims to establish an academic grounding in the exploration of learning and teaching digital literacies and their impact on pedagogy, as well as ensuring educators are equipped to teach in a rapidly evolving digital environment and to continue their own professional development.

Demographic differences

Factors such as nationality, gender, and income contribute to the digital divide across the globe, and a person's demographics can reduce their likelihood of having internet access. According to a study conducted by the ITU in 2022, Africa has the lowest share of people on the internet at 40%; the next lowest is the Asia-Pacific region at 64%. Internet access remains a problem in Least Developed Countries and Landlocked Developing Countries: in both groups, 36% of people use the internet, compared with a world average of 66%.

Men generally have more access to the internet around the world. The global gender parity score is 0.92; a gender parity score is calculated by dividing the percentage of women who use the internet by the percentage of men who use the internet. Ideally, countries want gender parity scores between 0.98 and 1.02. The region with the least gender parity is Africa, with a score of 0.75; the next lowest score belongs to the Arab States at 0.87. The Americas, the Commonwealth of Independent States, and Europe have the highest gender parity scores, all between 0.98 and 1. Gender parity scores are often affected by income: low-income regions have a score of 0.65, while upper-middle-income and high-income regions have a score of 0.99.
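The parity calculation described above reduces to a one-line formula. The sketch below, with an invented function name, simply divides the two usage percentages; the sample figures are illustrative, not taken from the ITU data cited above.

```python
def gender_parity_score(pct_women_online: float, pct_men_online: float) -> float:
    """Share of women using the internet divided by the share of men
    using it; 1.0 means exact parity, below 1.0 favors men."""
    return pct_women_online / pct_men_online

# Illustrative figures: 60% of women online vs. 80% of men
print(round(gender_parity_score(60.0, 80.0), 2))  # 0.75
```

A score of 0.75 on these made-up inputs happens to match the value reported above for Africa, the region furthest from parity.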

The difference between economic classes remains a prevalent dimension of the digital divide. In low-income countries, 26% of people use the internet, followed by 56% in lower-middle-income, 79% in upper-middle-income, and 92% in high-income countries. The staggering gap between low-income and high-income individuals can be traced to the affordability of mobile products. Products are becoming more affordable as the years pass; according to the ITU, "the global median price of mobile-broadband services dropped from 1.9 percent to 1.5 percent of average gross national income (GNI) per capita." There is still plenty of work to be done, as a 66-percentage-point gap separates internet access in low-income and high-income countries.

Facebook divide

The Facebook divide, a concept derived from the "digital divide", is the phenomenon with regard to access to, use of, and impact of Facebook on society. It was coined at the International Conference on Management Practices for the New Economy (ICMAPRANE-17) on February 10–11, 2017.

Additional concepts of Facebook Native and Facebook Immigrants were suggested at the conference. Facebook divide, Facebook native, Facebook immigrants, and Facebook left-behind are concepts for social and business management research. Facebook immigrants utilize Facebook to accumulate both bonding and bridging social capital. Together, Facebook natives, Facebook immigrants, and the Facebook left-behind give rise to Facebook inequality. In February 2018, the Facebook Divide Index was introduced at the ICMAPRANE conference in Noida, India, to illustrate the Facebook divide phenomenon.

Solutions

In 2000, the United Nations Volunteers (UNV) programme launched its Online Volunteering service, which uses ICT as a vehicle for and in support of volunteering. It is an example of a volunteering initiative that effectively contributes to bridging the digital divide, and ICT-enabled volunteering has a clear added value for development: if more people collaborate online with more development institutions and initiatives, the person-hours dedicated to development cooperation increase at essentially no additional cost. This is the most visible effect of online volunteering for human development.

Since May 17, 2006, the United Nations has raised awareness of the divide through World Information Society Day. In 2001, it set up the Information and Communications Technology (ICT) Task Force. Later UN initiatives in this area include the World Summit on the Information Society, held since 2003, and the Internet Governance Forum, set up in 2006.

As of 2009, the borderline between ICT as a necessity good and ICT as a luxury good was roughly US$10 per person per month, or US$120 per year, meaning that people treat ICT expenditure of about US$120 per year as a basic necessity. Since more than 40% of the world population lives on less than US$2 per day, and around 20% on less than US$1 per day (less than US$365 per year), these income segments would have to spend one third of their income on ICT (120/365 ≈ 33%), while the global average of ICT spending is a mere 3% of income. Potential solutions include driving down the costs of ICT, including low-cost technologies and shared access through telecentres.
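The affordability arithmetic above can be checked directly: a fixed US$120-per-year ICT expenditure divided by annual income. This is a minimal sketch with an invented function name, using only the figures quoted in the paragraph.

```python
def ict_share_of_income(annual_ict_cost: float, annual_income: float) -> float:
    """Fraction of annual income consumed by a fixed ICT expenditure."""
    return annual_ict_cost / annual_income

# US$120/year necessity threshold vs. an income of US$1/day (US$365/year)
print(f"{ict_share_of_income(120, 365):.0%}")  # 33%
```

The same US$120 against a global-average income large enough that ICT is only 3% of spending (US$4,000/year) shows how steeply the burden falls as income drops.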

In 2022, the US Federal Communications Commission started a proceeding "to prevent and eliminate digital discrimination and ensure that all people of the United States benefit from equal access to broadband internet access service, consistent with Congress's direction in the Infrastructure Investment and Jobs Act".

Social media websites serve both as manifestations of the digital divide and as means of combating it. The former is visible in the divided demographics of users on sites such as Facebook, WordPress, and Instagram; at the same time, each of these sites hosts communities that engage otherwise marginalized populations.

Libraries

A laptop lending kiosk at Texas A&M University–Commerce's Gee Library

In 2010, an "online indigenous digital library as part of public library services" was created in Durban, South Africa to narrow the digital divide by not only giving the people of the Durban area access to this digital resource, but also by incorporating the community members into the process of creating it.

In 2002, the Gates Foundation started the Gates Library Initiative which provides training assistance and guidance in libraries.

In Kenya, lack of funding, language barriers, and technology illiteracy contributed to an overall lack of computer skills and educational advancement. This slowly began to change when foreign investment began. In the early 2000s, the Carnegie Foundation funded a revitalization project through the Kenya National Library Service. Those resources enabled public libraries to provide information and communication technologies to their patrons. In 2012, public libraries in the Busia and Kibera communities introduced technology resources to supplement the curriculum for primary schools. By 2013, the program had expanded into ten schools.

Effective use

Even though individuals might be capable of accessing the Internet, many still face barriers to entry, such as a lack of infrastructure or an inability to comprehend or filter the information that the Internet provides. Some individuals can connect but do not have the knowledge to use the information that ICTs and Internet technologies provide them. This leads to a focus on capabilities, skills, and awareness in moving from mere access to effective usage of ICT.

Community informatics (CI) focuses on issues of "use" rather than "access". CI is concerned with ensuring the opportunity not only for ICT access at the community level but also, according to Michael Gurstein, that the means for the "effective use" of ICTs for community betterment and empowerment are available. Gurstein has also extended the discussion of the digital divide to include issues around access to and the use of "open data" and coined the term "data divide" to refer to this issue area.

Criticism

Knowledge divide

Since gender, age, race, income, and educational digital divides have lessened compared to the past, some researchers suggest that the digital divide is shifting from a gap in access and connectivity to a knowledge divide. A knowledge divide concerning technology suggests that the gap has moved beyond having access and the resources to connect to ICTs, to interpreting and understanding the information presented once connected.

Second-level digital divide

The second-level digital divide, also referred to as the production gap, describes the gap that separates the consumers of content on the Internet from the producers of content. As the technological digital divide is decreasing between those with access to the Internet and those without, the meaning of the term digital divide is evolving. Previously, digital divide research was focused on accessibility to the Internet and Internet consumption. However, with an increasing number of the population gaining access to the Internet, researchers are examining how people use the Internet to create content and what impact socioeconomics are having on user behavior.

New applications have made it possible for anyone with a computer and an Internet connection to be a creator of content, yet the majority of user-generated content available widely on the Internet, like public blogs, is created by a small portion of the Internet-using population. Web 2.0 technologies like Facebook, YouTube, Twitter, and Blogs enable users to participate online and create content without having to understand how the technology actually works, leading to an ever-increasing digital divide between those who have the skills and understanding to interact more fully with the technology and those who are passive consumers of it.

Some of the reasons for this production gap include material factors like the type of Internet connection one has and the frequency of access to the Internet. The more frequently a person has access to the Internet and the faster the connection, the more opportunities they have to gain the technology skills and the more time they have to be creative.

Other reasons include cultural factors often associated with class and socioeconomic status. Users of lower socioeconomic status are less likely to participate in content creation due to disadvantages in education and a lack of the free time needed for the work involved in creating and maintaining a blog or website. Additionally, there is evidence of a second-level digital divide at the K-12 level based on how educators use technology for instruction. Schools' economic factors have been found to explain variation in how teachers use technology to promote higher-order thinking skills.

IBM PC compatible

From Wikipedia, the free encyclopedia
(Redirected from IBM PC–compatible)
The Compaq Portable was one of the first nearly 100% IBM-compatible PCs.

An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models. Like the original IBM PC, an IBM PC–compatible computer uses an x86-based central processing unit, sourced from Intel or a second-source vendor such as AMD, Cyrix, Texas Instruments, Fujitsu, OKI, Mitsubishi, or NEC, and is capable of using interchangeable commodity hardware such as expansion cards. Initially such computers were referred to as PC clones, IBM clones, or IBM PC clones, but the term "IBM PC compatible" is now a historical description only, as the vast majority of microcomputers produced since the 1990s are IBM compatible. IBM itself no longer sells personal computers, having sold its division to Lenovo in 2005. "Wintel" is a similar description more commonly used for modern computers.

The designation "PC", as used in much of personal computer history, has not meant "personal computer" generally, but rather an x86 computer capable of running the same software that a contemporary IBM or Lenovo PC could. The term was initially in contrast to the variety of home computer systems available in the early 1980s, such as the Apple II, TRS-80, and Commodore 64. Later, the term was primarily used in contrast to Commodore's Amiga and Apple's Macintosh computers.

Overview

These "clones" duplicated almost all the significant features of the original IBM PC architectures. This was facilitated by IBM's choice of commodity hardware components, which were cheap, and by various manufacturers' ability to reverse-engineer the BIOS firmware using a "clean room design" technique. Columbia Data Products built the first clone of the IBM personal computer, the MPC 1600 by a clean-room reverse-engineered implementation of its BIOS. Other rival companies, Corona Data Systems, Eagle Computer, and the Handwell Corporation were threatened with legal action by IBM, who settled with them. Soon after in 1982, Compaq released the very successful Compaq Portable, also with a clean-room reverse-engineered BIOS, and also not challenged legally by IBM.

Almost all home computers since the 1990s are technically IBM PC-compatibles.

Early IBM PC compatibles used the same computer buses as their IBM counterparts, switching from the 8-bit IBM PC and XT bus to the 16-bit IBM AT bus with the release of the AT. IBM's introduction of the proprietary Micro Channel architecture (MCA) in its Personal System/2 (PS/2) series resulted in the establishment of the Extended Industry Standard Architecture bus open standard by a consortium of IBM PC compatible vendors, redefining the 16-bit IBM AT bus as the Industry Standard Architecture (ISA) bus. Additional bus standards were subsequently adopted to improve compatibility between IBM PC compatibles, including the VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), and the Accelerated Graphics Port (AGP).

Descendants of the x86 IBM PC compatibles, namely 64-bit computers based on x86-64/AMD64 chips, comprise the majority of desktop computers on the market as of 2021, with Microsoft Windows as the dominant operating system. Interoperability with the bus structure and peripherals of the original PC architecture may be limited or non-existent. Many modern computers cannot use old software or hardware that depends on portions of the IBM PC compatible architecture which are missing or have no equivalents in modern computers. For example, computers that boot using Unified Extensible Firmware Interface-based firmware and lack a Compatibility Support Module (CSM), which emulates the old BIOS-based firmware interface, or that have their CSM disabled, cannot natively run MS-DOS, since MS-DOS depends on a BIOS interface to boot.

Only the Macintosh has kept significant market share without IBM PC compatibility, except in the period between the transition to Intel processors and the transition to Apple silicon, when Macs used standard PC components and could dual-boot Windows with Boot Camp.

Origins

The original IBM PC (Model 5150) motivated the production of clones during the early 1980s.

IBM decided in 1980 to market a low-cost single-user computer as quickly as possible. On August 12, 1981, the first IBM PC went on sale. There were three operating systems (OS) available for it. The least expensive and most popular was PC DOS made by Microsoft. In a crucial concession, IBM's agreement allowed Microsoft to sell its own version, MS-DOS, for non-IBM computers. The only component of the original PC architecture exclusive to IBM was the BIOS (Basic Input/Output System).

IBM at first asked developers to avoid writing software that addressed the computer's hardware directly and to instead make standard calls to BIOS functions that carried out hardware-dependent operations. This software would run on any machine using MS-DOS or PC DOS. Software that directly addressed the hardware instead of making standard calls was faster, however; this was particularly relevant to games. Software addressing IBM PC hardware in this way would not run on MS-DOS machines with different hardware (for example, the PC-98). The IBM PC was sold in high enough volumes to justify writing software specifically for it, and this encouraged other manufacturers to produce machines that could use the same programs, expansion cards, and peripherals as the PC. The x86 computer marketplace rapidly excluded all machines which were not hardware-compatible or software-compatible with the PC. The 640 KB barrier on "conventional" system memory available to MS-DOS is a legacy of that period; other non-clone machines, while subject to a limit, could exceed 640 KB.

Rumors of "lookalike," compatible computers, created without IBM's approval, began almost immediately after the IBM PC's release. InfoWorld wrote on the first anniversary of the IBM PC that

The dark side of an open system is its imitators. If the specs are clear enough for you to design peripherals, they are clear enough for you to design imitations. Apple ... has patents on two important components of its systems ... IBM, which reportedly has no special patents on the PC, is even more vulnerable. Numerous PC-compatible machines—the grapevine says 60 or more—have begun to appear in the marketplace.

By June 1983 PC Magazine defined "PC 'clone'" as "a computer [that can] accommodate the user who takes a disk home from an IBM PC, walks across the room, and plugs it into the 'foreign' machine".[7] Demand for the PC by then was so strong that dealers received 60% or less of the inventory they wanted, and many customers purchased clones instead. Columbia Data Products produced the first computer more or less compatible with the IBM PC standard in June 1982, soon followed by Eagle Computer. Compaq announced its first product, the IBM PC-compatible Compaq Portable, in November 1982; it was the first sewing-machine-sized portable computer that was essentially 100% PC-compatible. The court decision in Apple v. Franklin established that BIOS code was protected by copyright law; a clone maker could, however, legally reverse-engineer the IBM BIOS and then write its own BIOS using clean room design. Note that this decision came over a year after Compaq released the Portable; the money and research Compaq put into reverse-engineering the BIOS was a calculated risk.

Compatibility issues

Non-compatible MS-DOS computers: Workalikes

The DEC Rainbow 100 runs MS-DOS but is not compatible with the IBM PC.

At the same time, many manufacturers such as Tandy/RadioShack, Xerox, Hewlett-Packard, Digital Equipment Corporation, Sanyo, Texas Instruments, Tulip, Wang and Olivetti introduced personal computers that supported MS-DOS, but were not completely software- or hardware-compatible with the IBM PC.

Tandy described the Tandy 2000, for example, as having a "'next generation' true 16-bit CPU", and with "More speed. More disk storage. More expansion" than the IBM PC or "other MS-DOS computers". While admitting in 1984 that many PC DOS programs did not work on the computer, the company stated that "the most popular, sophisticated software on the market" was available, either immediately or "over the next six months".

Like IBM, Microsoft's apparent intention was that application writers would write to the application programming interfaces in MS-DOS or the firmware BIOS, and that this would form what would now be termed a hardware abstraction layer. Each computer would have its own Original Equipment Manufacturer (OEM) version of MS-DOS, customized to its hardware. Any software written for MS-DOS would operate on any MS-DOS computer, despite variations in hardware design.

This expectation seemed reasonable in the computer marketplace of the time. Until then Microsoft's business was based primarily on computer languages such as BASIC. The established small system operating software was CP/M from Digital Research which was in use both at the hobbyist level and by the more professional of those using microcomputers. To achieve such widespread use, and thus make the product viable economically, the OS had to operate across a range of machines from different vendors that had widely varying hardware. Those customers who needed other applications than the starter programs could reasonably expect publishers to offer their products for a variety of computers, on suitable media for each.

Microsoft's competing OS was intended initially to operate on a similar varied spectrum of hardware, although all based on the 8086 processor. Thus, MS-DOS was for several years sold only as an OEM product. There was no Microsoft-branded MS-DOS: MS-DOS could not be purchased directly from Microsoft, and each OEM release was packaged with the trade dress of the given PC vendor. Malfunctions were to be reported to the OEM, not to Microsoft. However, as machines that were compatible with IBM hardware—thus supporting direct calls to the hardware—became widespread, it soon became clear that the OEM versions of MS-DOS were virtually identical, except perhaps for the provision of a few utility programs.

MS-DOS provided adequate functionality for character-oriented applications such as those that could have been implemented on a text-only terminal. Had the bulk of commercially important software been of this nature, low-level hardware compatibility might not have mattered. However, in order to provide maximum performance and leverage hardware features (or work around hardware bugs), PC applications quickly developed beyond the simple terminal applications that MS-DOS supported directly. Spreadsheets, WYSIWYG word processors, presentation software and remote communication software established new markets that exploited the PC's strengths, but required capabilities beyond what MS-DOS provided. Thus, from very early in the development of the MS-DOS software environment, many significant commercial software products were written directly to the hardware, for a variety of reasons:

  • MS-DOS itself did not provide any way to position the text cursor other than to advance it after displaying each letter (teletype mode). While the BIOS video interface routines were adequate for rudimentary output, they were necessarily less efficient than direct hardware addressing, as they added extra processing; they did not have "string" output, only character-by-character teletype output, and they inserted delays to prevent CGA hardware "snow" (a display artifact of CGA cards produced when writing directly to screen memory), an especially bad slowdown since the routines were called via interrupts, making multitasking very difficult. A program that wrote directly to video memory could achieve output rates 5 to 20 times faster than making system calls. Turbo Pascal used this technique from its earliest versions.
  • Graphics capability was not taken seriously in the original IBM design brief; graphics were considered only from the perspective of generating static business graphics such as charts and graphs. MS-DOS did not have an API for graphics, and the BIOS only included the rudimentary graphics functions such as changing screen modes and plotting single points. To make a BIOS call for every point drawn or modified increased overhead considerably, making the BIOS interface notoriously slow. Because of this, line-drawing, arc-drawing, and blitting had to be performed by the application to achieve acceptable speed, which was usually done by bypassing the BIOS and accessing video memory directly. Software written to address IBM PC hardware directly would run on any IBM clone, but would have to be rewritten especially for each non-PC-compatible MS-DOS machine.
  • Video games, even early ones, mostly required a true graphics mode. They also performed any machine-dependent trick the programmers could think of in order to gain speed. Though initially the major market for the PC was for business applications, games capability became an important factor motivating PC purchases as prices decreased. The availability and quality of games could mean the difference between the purchase of a PC compatible or a different platform with the ability to exchange data like the Amiga.
  • Communications software directly accessed the UART serial port chip, because the MS-DOS API and the BIOS did not provide full support and were too slow to keep up with hardware that could transfer data at 19,200 bit/s.
  • Even for standard business applications, speed of execution was a significant competitive advantage. Integrated software Context MBA preceded Lotus 1-2-3 to market and included more functions. Context MBA was written in UCSD p-System, making it very portable but too slow to be truly usable on a PC. 1-2-3 was written in x86 assembly language and performed some machine-dependent tricks. It was so much faster that it quickly surpassed Context MBA's sales.
  • Disk copy-protection schemes, in common use at the time, worked by reading nonstandard data patterns on the diskette to verify originality. These patterns were impossible to detect using standard DOS or BIOS calls, so direct access to the disk controller hardware was necessary for the protection to work.
  • Some software was designed to run only on a true IBM PC, and checked for an actual IBM BIOS.
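The direct video-memory technique described in the first bullet above can be modeled in a few lines. This is a sketch only, not period code: on real hardware the 80x25 text buffer lived at segment 0xB800 and was written with far pointers or assembly, whereas here a `bytearray` stands in for it, and the helper names are invented for illustration. The layout, however, is the real one: each character cell is two bytes, a character code followed by an attribute byte.

```python
COLUMNS, ROWS = 80, 25  # CGA 80x25 text mode

def cell_offset(row: int, col: int) -> int:
    """Byte offset of a character cell: two bytes per cell,
    laid out row-major across the 80-column screen."""
    return (row * COLUMNS + col) * 2

def write_string(vram: bytearray, row: int, col: int, text: str, attr: int = 0x07) -> None:
    """Poke a string straight into the (simulated) frame buffer,
    bypassing any per-character 'teletype' BIOS call."""
    off = cell_offset(row, col)
    for ch in text:
        vram[off] = ord(ch)   # character code
        vram[off + 1] = attr  # attribute byte; 0x07 = light grey on black
        off += 2

vram = bytearray(COLUMNS * ROWS * 2)  # real hardware: memory at segment 0xB800
write_string(vram, 0, 0, "Hello")
```

Because this skips the BIOS's per-character processing, inserted delays, and interrupt overhead, the same pattern on real hardware produced the 5-to-20-fold speedups mentioned above, at the cost of running only on machines with PC-compatible video memory.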

First-generation PC workalikes by IBM competitors

Computer name | Manufacturer | Date introduced | CPU | Clock rate | Max RAM | Floppy disk capacity | Notable features
Hyperion | Dynalogic | Jan 1983 | 8088 | 4.77 MHz | 640 KB | 320 KB | Canadian; licensed but never sold by Commodore
Olivetti M24 / AT&T 6300 / Logabax Persona 1600 | Olivetti, marketed by AT&T | 1983 (AT&T 6300 June 1984) | 8086 | 8 MHz (later 10 MHz) | 640 KB | 360 KB (later 720 KB) | true IBM compatible; optional 640x400 color graphics
Zenith Z-100 | Zenith Data Systems | June 1982 | 8088 | 4.77 MHz | 768 KB | 360 KB | optional 8-color 640x255 graphics, external 8" floppy drives
HP-150 | Hewlett-Packard | Nov 1983 | 8088 | 8 MHz | 640 KB | 270 KB (later 710 KB) | primitive touchscreen
Compaq Portable | Compaq | Jan 1983 | 8088 | 4.77 MHz | 640 KB | 360 KB | sold as a true IBM compatible
Compaq Deskpro | Compaq | 1984 | 8086 | 8 MHz | 640 KB | 360 KB | sold as a true IBM XT compatible
MPC 1600 | Columbia Data Products | June 1982 | 8088 | 4.77 MHz | 640 KB | 360 KB | true IBM compatible, credited as first PC clone
Eagle PC / 1600 series | Eagle Computer | 1982 | 8086 | 4.77 MHz | 640 KB | 360 KB | 750x352 mono graphics, first with 8086 CPU
TI Professional Computer | Texas Instruments | Jan 1983 | 8088 | 5 MHz | 256 KB | 320 KB | 720x300 color graphics
DEC Rainbow | Digital Equipment Corporation | 1982 | 8088 | 4.81 MHz | 768 KB | 400 KB | 132x24 text mode, 8088 and Z80 CPUs
Wang PC | Wang Laboratories | Aug 1985 | 8086 | 8 MHz | 512 KB | 360 KB | 800x300 mono graphics
MBC-550 | Sanyo | 1982 | 8088 | 3.6 MHz | 256 KB | 360 KB (later 720 KB) | 640x200 8-color graphics (R, G, B bitplanes)
Apricot PC | Apricot Computers | 1983 | 8086 | 4.77 MHz | 768 KB | 720 KB | 800x400 mono graphics, 132x50 text mode
TS-1603 | TeleVideo | Apr 1983 | 8088 | 4.77 MHz | 256 KB | 737 KB | keyboard with palm rests and 16 function keys; built-in modem
Tava PC | Tava Corporation | Oct 1983 | 8088 | 4.77 MHz | 640 KB | 360 KB | true IBM compatible, credited as first private-label clone sold by manufacturer's stores
Tandy 2000 | Tandy Corporation | Sep 1983 | 80186 | 8 MHz | 768 KB | 720 KB | redefinable character set; optional 640x400 8-color or mono graphics

"Operationally Compatible"

The first thing to think about when considering an IBM-compatible computer is, "How compatible is it?"

— BYTE, September 1983

In May 1983, Future Computing defined four levels of compatibility:

  • Operationally Compatible. Can run "the top selling" IBM PC software, use PC expansion boards, and read and write PC disks. Has "complementary features" like portability or a lower price that distinguish the computer from the PC, which is sold in the same store. Examples: (Best) Columbia Data Products, Compaq; (Better) Corona; (Good) Eagle.
  • Functionally Compatible. Runs own version of popular PC software. Cannot use PC expansion boards but can read and write PC disks. Cannot become Operationally Compatible. Example: TI Professional.
  • Data Compatible. May not run top PC software. Can read and/or write PC disks. Can become Functionally Compatible. Examples: NCR Decision Mate, Olivetti M20, Wang PC, Zenith Z-100.
  • Incompatible. Cannot read PC disks. Can become Data Compatible. Examples: Altos 586, DEC Rainbow 100, Grid Compass, Victor 9000.
MS-DOS version 1.12 for Compaq Personal Computers

During development, Compaq engineers found that Microsoft Flight Simulator would not run because of what subLOGIC's Bruce Artwick described as "a bug in one of Intel's chips", forcing them to make their new computer bug compatible with the IBM PC. At first, few clones other than Compaq's offered truly full compatibility. Jerry Pournelle purchased an IBM PC in mid-1983, "rotten keyboard and all", because he had "four cubic feet of unevaluated software, much of which won't run on anything but an IBM PC. Although a lot of machines claim to be 100 percent IBM PC compatible, I've yet to have one arrive ... Alas, a lot of stuff doesn't run with Eagle, Z-100, Compupro, or anything else we have around here". Columbia Data Products's November 1983 sales brochure stated that during tests with retail-purchased computers in October 1983, its own and Compaq's products were compatible with all tested PC software, while Corona and Eagle's were less compatible. Columbia University reported in January 1984 that Kermit ran without modification on Compaq and Columbia Data Products clones, but not on those from Eagle or Seequa. Other MS-DOS computers also required custom code.

By December 1983 Future Computing stated that companies like Compaq, Columbia Data Products, and Corona that emphasized IBM PC compatibility had been successful, while non-compatible computers had hurt the reputations of others like TI and DEC despite superior technology. At a San Francisco meeting it warned 200 attendees, from many American and foreign computer companies as well as IBM itself, to "Jump on the IBM PC-compatible bandwagon—quickly, and as compatibly as possible". Future Computing said in February 1984 that some computers were "press-release compatible", exaggerating their actual compatibility with the IBM PC. Many companies were reluctant to have their products' PC compatibility tested. When PC Magazine requested samples from computer manufacturers that claimed to produce compatibles for an April 1984 review, 14 of 31 declined. Corona specified that "Our systems run all software that conforms to IBM PC programming standards. And the most popular software does." When a BYTE journalist asked to test Peachtext at the Spring 1983 COMDEX, Corona representatives "hemmed and hawed a bit, but they finally led me ... off in the corner where no one would see it should it fail". The magazine reported that "Their hesitancy was unnecessary. The disk booted up without a problem". Zenith Data Systems was bolder, bragging that its Z-150 ran all applications people brought to test with at the 1984 West Coast Computer Faire.

Creative Computing in 1985 stated, "we reiterate our standard line regarding the IBM PC compatibles: try the package you want to use before you buy the computer." Companies modified their computers' BIOS to work with newly discovered incompatible applications, and reviewers and users developed stress tests to measure compatibility; by 1984 the ability to operate Lotus 1-2-3 and Flight Simulator became the standard, with compatibles specifically designed to run them and prominently advertising their compatibility.

IBM believed that some companies, such as Eagle, Corona, and Handwell, infringed on its copyright, and after Apple Computer, Inc. v. Franklin Computer Corp. it successfully forced those clone makers to stop using its BIOS. The Phoenix BIOS in 1984, however, and similar products such as AMI BIOS, permitted computer makers to legally build essentially 100%-compatible clones without having to reverse-engineer the PC BIOS themselves. A September 1985 InfoWorld chart listed seven compatibles with 256 KB RAM, two disk drives, and monochrome monitors for $1,495 to $2,320, while the equivalent IBM PC cost $2,820. The Zenith Z-150 and the inexpensive Leading Edge Model D were even compatible with IBM's proprietary diagnostic software, unlike the Compaq Portable. By 1986 Compute! stated that "clones are generally reliable and about 99 percent compatible", and a 1987 survey of the clone industry in the magazine did not mention software compatibility, stating that "PC by now has come to stand for a computer capable of running programs that are managed by MS-DOS".

The decreasing influence of IBM

The main reason why an IBM standard is not worrying is that it can help competition to flourish. IBM will soon be as much a prisoner of its standards as its competitors are. Once enough IBM machines have been bought, IBM cannot make sudden changes in their basic design; what might be useful for shedding competitors would shake off even more customers.

— The Economist, November 1983
[Images: the PowerPak 286, an IBM PC compatible running AutoCAD under MS-DOS; IBM PC compatibles with Intel 80386 and 80486 processors; an IBM 300 PL with an Intel Pentium and Windows 95; and a Dell OptiPlex with an Intel Pentium 4]

In February 1984 Byte wrote that "IBM's burgeoning influence in the PC community is stifling innovation because so many other companies are mimicking Big Blue", but The Economist stated in November 1983, "The main reason why an IBM standard is not worrying is that it can help competition to flourish".

By 1983, IBM had about 25% of sales of personal computers between $1,000 and $10,000, and computers with some PC compatibility were another 25%. As the market and competition grew IBM's influence diminished. Writing that even "IBM has to continue to be IBM compatible", in November 1985 PC Magazine stated "Now that it has created the [PC] market, the market doesn't necessarily need IBM for the machines. It may depend on IBM to set standards and to develop higher-performance machines, but IBM had better conform to existing standards so as to not hurt users". Observers noted IBM's silence when the industry that year quickly adopted the expanded memory standard, created by Lotus and Intel without IBM's participation. In January 1987, Bruce Webster wrote in Byte of rumors that IBM would introduce proprietary personal computers with a proprietary operating system: "Who cares? If IBM does it, they will most likely just isolate themselves from the largest marketplace, in which they really can't compete anymore anyway". He predicted that in 1987 the market "will complete its transition from an IBM standard to an Intel/MS-DOS/expansion bus standard ... Folks aren't so much concerned about IBM compatibility as they are about Lotus 1-2-3 compatibility". By 1992, Macworld stated that because of clones, "IBM lost control of its own market and became a minor player with its own technology".

The Economist predicted in 1983 that "IBM will soon be as much a prisoner of its standards as its competitors are", because "Once enough IBM machines have been bought, IBM cannot make sudden changes in their basic design; what might be useful for shedding competitors would shake off even more customers". After the Compaq Deskpro 386 became the first 80386-based PC, PC wrote that owners of the new computer did not need to fear that future IBM products would be incompatible with the Compaq, because such changes would also affect millions of real IBM PCs: "In sticking it to the competition, IBM would be doing the same to its own people". After IBM announced the OS/2-oriented PS/2 line in early 1987, sales of existing DOS-compatible PC compatibles rose, in part because the proprietary operating system was not available. In 1988, Gartner Group estimated that the public purchased 1.5 clones for every IBM PC. By 1989 Compaq was so influential that industry executives spoke of "Compaq compatible", with observers stating that customers saw the company as IBM's equal or superior. A 1990 American Institute of Certified Public Accountants member survey found that 23% of respondents used IBM computer hardware, and 16% used Compaq.

After 1987, IBM PC compatibles dominated both the home and business markets for commodity computers, while other notable architectures held niche markets: Apple Inc.'s Macintosh computers, used mainly for desktop publishing at the time; the aging 8-bit Commodore 64, by then selling for $150 and the world's best-selling computer; the 32-bit Commodore Amiga line, used for television and video production; and the 32-bit Atari ST, used by the music industry. However, IBM itself lost the leading role in the market for IBM PC compatibles by 1990. In retrospect, several events were important:

  • IBM designed the PC with an open architecture which permitted clone makers to use freely available non-proprietary components.
  • Microsoft included a clause in its contract with IBM which permitted the sale of the finished PC operating system (PC DOS) to other computer manufacturers. These IBM competitors licensed it, as MS-DOS, in order to offer PC compatibility for less cost.
  • The 1982 introduction of the Columbia Data Products MPC 1600, the first 100% IBM PC compatible computer.
  • The 1983 introduction of the Compaq Portable, providing portability unavailable from IBM at the time.
  • An Independent Business Unit (IBU) within IBM developed the IBM PC and XT. IBUs did not share in corporate R&D expense. After the IBU became the Entry Systems Division it lost this benefit, greatly decreasing margins.
  • The availability by 1986 of sub-$1,000 "Turbo XT" PC XT compatibles, including early offerings from Dell Computer, reducing demand for IBM's models. It was possible to buy two of these "generic" systems for less than the cost of one IBM-branded PC AT, and many companies did just that.
  • By integrating more peripherals into the computer itself, compatibles like the Model D offered more free ISA slots than the PC.
  • Compaq was the first to release an Intel 80386-based computer, almost a year before IBM, with the Compaq Deskpro 386. Bill Gates later said that it was "the first time people started to get a sense that it wasn't just IBM setting the standards".
  • IBM's 1987 introduction of the incompatible and proprietary MicroChannel Architecture (MCA) computer bus, for its Personal System/2 (PS/2) line.
  • The split of the IBM-Microsoft partnership in development of OS/2. Tensions caused by the market success of Windows 3.0 ruptured the joint effort because IBM was committed to the 286's protected mode, which stunted OS/2's technical potential. Windows could take full advantage of the modern and increasingly affordable 386 / 386SX architecture. As well, there were cultural differences between the partners, and Windows was often bundled with new computers while OS/2 was only available for extra cost. The split left IBM the sole steward of OS/2 and it failed to keep pace with Windows.
  • The 1988 introduction by the "Gang of Nine" companies of a rival bus, Extended Industry Standard Architecture, intended to compete with, rather than copy, MCA.
  • The duelling expanded memory (EMS) and extended memory (XMS) standards of the late 1980s, both developed without input from IBM.

Despite the popularity of its ThinkPad line of laptop PCs, IBM finally relinquished its role as a consumer PC manufacturer in April 2005, when it sold its laptop and desktop PC divisions (ThinkPad/ThinkCentre) to Lenovo for US$1.75 billion.

As of October 2007, Hewlett-Packard and Dell had the largest shares of the PC market in North America. They were also successful overseas, with Acer, Lenovo, and Toshiba also notable. Worldwide, a huge number of PCs are "white box" systems assembled by myriad local systems builders. Despite advances in computer technology, IBM PC compatibles remained very much compatible with the original IBM PC, although most of the components implement the compatibility through special backward-compatibility modes used only during system boot. It is often more practical to run old software on a modern system using an emulator rather than relying on these features.

In 2014 Lenovo acquired IBM's x86-based server (System x) business for US$2.1 billion.

Expandability

One of the strengths of the PC-compatible design is its modular hardware design. End-users could readily upgrade peripherals and, to some degree, processor and memory without modifying the computer's motherboard or replacing the whole computer, as was the case with many of the microcomputers of the time. However, as processor speed and memory width increased, the limits of the original XT/AT bus design were soon reached, particularly when driving graphics video cards. IBM did introduce an upgraded bus in the IBM PS/2 computer that overcame many of the technical limits of the XT/AT bus, but this was rarely used as the basis for IBM-compatible computers since it required license payments to IBM both for the PS/2 bus and any prior AT-bus designs produced by the company seeking a license. This was unpopular with hardware manufacturers and several competing bus standards were developed by consortiums, with more agreeable license terms. Various attempts to standardize the interfaces were made, but in practice, many of these attempts were either flawed or ignored. Even so, there were many expansion options, and despite the confusion of its users, the PC compatible design advanced much faster than other competing designs of the time, even if only because of its market dominance.

"IBM PC compatible" becomes "Wintel"

During the 1990s, IBM's influence on PC architecture started to decline, and the term "IBM PC compatible" gave way to "standard PC", and later, in the 2000s, to "ACPI PC". An IBM-brand PC became the exception rather than the rule. Instead of placing importance on compatibility with the IBM PC, vendors began to emphasize compatibility with Windows. In 1993, a version of Windows NT was released that could run on processors other than the x86 family. While it required that applications be recompiled, which most developers did not do, its hardware independence was used for Silicon Graphics (SGI) x86 workstations: thanks to NT's hardware abstraction layer (HAL), they could run NT (and its vast application library).

No mass-market personal computer hardware vendor dared to be incompatible with the latest version of Windows, and Microsoft's annual WinHEC conferences provided a setting in which Microsoft could lobby for—and in some cases dictate—the pace and direction of the hardware of the PC industry. Microsoft and Intel had become so important to the ongoing development of PC hardware that industry writers began using the word Wintel to refer to the combined hardware-software system.

This terminology itself is becoming a misnomer, as Intel has lost absolute control over the direction of x86 hardware development with AMD's AMD64. Additionally, non-Windows operating systems like macOS and Linux have established a presence on the x86 architecture.

Design limitations and more compatibility issues

Although the IBM PC was designed for expandability, its designers could not anticipate the hardware developments of the 1980s, nor the size of the industry they would engender. To make things worse, IBM's choice of the Intel 8088 as the CPU introduced several limitations for developing software for the PC-compatible platform. For example, the 8088 had only a 20-bit memory address space, limiting it to one megabyte. To expand PCs beyond one megabyte, Lotus, Intel, and Microsoft jointly created expanded memory (EMS), a bank-switching scheme in which additional memory provided by add-in hardware was accessed through a set of four 16-kilobyte "windows" inside the 20-bit address space. Later Intel CPUs had larger address spaces and could directly address 16 MB (80286) or more, leading Microsoft to develop extended memory (XMS), which did not require additional hardware.
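The addressing arithmetic behind that limitation and the EMS workaround can be sketched in a few lines of Python. This is an illustrative model only, not real driver code: the page-frame address and the in-memory window table are assumptions for the sketch, and real EMS drivers were invoked through interrupt 67h rather than function calls.

```python
def physical_address(segment, offset):
    """Real-mode address: segment * 16 + offset, wrapped to 20 bits (1 MB)."""
    return (segment * 16 + offset) & 0xFFFFF

# The 8088's 20 address lines give exactly 1 MB of addressable memory.
assert physical_address(0xFFFF, 0x000F) == 0xFFFFF  # top of the 1 MB space

PAGE_SIZE = 16 * 1024      # EMS pages are 16 KB
FRAME_BASE = 0xD0000       # a typical page-frame address (assumed for this sketch)
window_map = [None] * 4    # the four 16 KB windows of the page frame

def map_page(window, logical_page):
    """Bank switch: make one 16 KB expanded-memory page visible in a window."""
    window_map[window] = logical_page

def frame_address(window):
    """Physical address where the given window appears in the 20-bit space."""
    return FRAME_BASE + window * PAGE_SIZE
```

By remapping pages into the fixed 64 KB page frame, software could reach many megabytes of expanded memory while the CPU itself never addressed more than 1 MB.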

"Expanded" and "extended" memory have incompatible interfaces, so anyone writing software that used more than one megabyte had to provide for both systems for the greatest compatibility until MS-DOS began including EMM386, which simulated EMS memory using XMS memory. A protected mode OS can also be written for the 80286, but DOS application compatibility was more difficult than expected, not only because most DOS applications accessed the hardware directly, bypassing BIOS routines intended to ensure compatibility, but also that most BIOS requests were made by the first 32 interrupt vectors, which were marked as "reserved" for protected mode processor exceptions by Intel.

Video cards suffered from their own incompatibilities. There was no standard interface for using the higher-resolution SVGA graphics modes supported by later video cards. Each manufacturer developed its own methods of accessing the screen memory, including different mode numberings and different bank-switching arrangements; bank switching allowed large frame buffers to be addressed through a single 64 KB segment of memory. Previously, the VGA standard had used planar video memory arrangements to the same effect, but these did not extend easily to the greater color depths and higher resolutions offered by SVGA adapters. An attempt at creating a standard named VESA BIOS Extensions (VBE) was made, but not all manufacturers used it.
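The bank-switching arithmetic can be illustrated with a hypothetical sketch. The resolution, color depth, and bank size below are chosen only for illustration; the actual bank-select mechanism (typically vendor-specific port writes, or the VBE window-control function where supported) differed from card to card.

```python
# A 640x480 frame buffer at 8 bits per pixel is 307,200 bytes, which
# cannot fit in the single 64 KB real-mode window at segment 0xA000,
# so SVGA cards exposed it one "bank" at a time.

BANK_SIZE = 64 * 1024  # size of the window at segment 0xA000

def pixel_location(x, y, width=640):
    """Return (bank number, offset within bank) for an 8-bpp pixel."""
    linear = y * width + x          # byte offset in the full frame buffer
    return linear // BANK_SIZE, linear % BANK_SIZE
```

Drawing across the whole screen therefore required switching banks whenever the linear offset crossed a 64 KB boundary, and because each vendor selected banks differently, software needed per-card code unless it could rely on VBE.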

When the 386 was introduced, a protected-mode OS could again be written for it. This time, DOS compatibility was much easier because of virtual 8086 mode. Unfortunately, programs could not switch directly between the two modes, so eventually new memory-model APIs were developed, VCPI and DPMI, the latter becoming the most popular.

Because of the great number of third-party adapters and no standard for them, programming the PC could be difficult. Professional developers would operate a large test-suite of various known-to-be-popular hardware combinations.

To give consumers some idea of what sort of PC they would need to run their software, the Multimedia PC (MPC) standard was set in 1990. A PC that met the minimum MPC standard could be marketed with the MPC logo, giving consumers an easy-to-understand specification to look for. Software that could run on the most minimally MPC-compliant PC was guaranteed to run on any MPC. The MPC Level 2 and MPC Level 3 standards were set later, but the term "MPC compliant" never became popular. After MPC Level 3 in 1996, no further MPC standards were established.

Challenges to Wintel domination

New shipments of personal computer operating systems (thousands of units)

Operating system (vendor)              1990     1992
MS-DOS (Microsoft)                   11,648   18,525
  (of which, with Windows)              490   11,056
PC DOS (IBM)                          3,031    2,315
DR DOS (Digital Research/Novell)      1,737    1,617
Macintosh System (Apple)              1,411    2,570
Unix (various)                          357      797
OS/2 (IBM/Microsoft)                      0      409
Others (NEC, Commodore etc.)          5,079    4,458

By the late 1990s, the success of Microsoft Windows had driven rival commercial operating systems into near-extinction, and had ensured that the "IBM PC compatible" computer was the dominant computing platform. This meant that if a developer made their software only for the Wintel platform, they would still be able to reach the vast majority of computer users. The only major competitor to Windows with more than a few percentage points of market share was Apple Inc.'s Macintosh. The Mac started out billed as "the computer for the rest of us", but high prices and closed architecture drove the Macintosh into an education and desktop publishing niche, from which it only emerged in the mid-2000s. By the mid-1990s the Mac's market share had dwindled to around 5% and introducing a new rival operating system had become too risky a commercial venture. Experience had shown that even if an operating system was technically superior to Windows, it would be a failure in the market (BeOS and OS/2 for example). In 1989, Steve Jobs said of his new NeXT system, "It will either be the last new hardware platform to succeed, or the first to fail." Four years later in 1993, NeXT announced it was ending production of the NeXTcube and porting NeXTSTEP to Intel processors.

Very early in PC history, some companies introduced their own XT-compatible chipsets. For example, Chips and Technologies introduced its 82C100 XT Controller, which integrated and replaced six of the original XT circuits: the 8237 DMA controller, 8253 programmable interval timer, 8255 programmable peripheral interface, 8259 interrupt controller, 8284 clock generator, and 8288 bus controller. Similar non-Intel chipsets appeared for AT compatibles, for example OPTi's 82C206 or 82C495XLC, which were found in many 486 and early Pentium systems. The x86 chipset market was very volatile, though. In 1993, VLSI Technology had become the dominant market player, only to be virtually wiped out by Intel a year later. Intel has been the uncontested leader ever since. As the "Wintel" platform gained dominance, Intel gradually abandoned the practice of licensing its technologies to other chipset makers; in 2010 Intel was involved in litigation related to its refusal to license its processor bus and related technologies to companies like Nvidia.

Companies such as AMD and Cyrix developed alternative x86 CPUs that were functionally compatible with Intel's. Towards the end of the 1990s, AMD was taking an increasing share of the CPU market for PCs. AMD even ended up playing a significant role in directing the development of the x86 platform when its Athlon line of processors continued to develop the classic x86 architecture as Intel deviated with its NetBurst architecture for the Pentium 4 CPUs and the IA-64 architecture for the Itanium line of server CPUs. AMD developed AMD64, the first major extension of the architecture not created by Intel, which Intel later adopted as x86-64. In 2006 Intel began abandoning NetBurst with the release of its line of "Core" processors, which represented a development of the earlier Pentium III.

A major alternative to Wintel domination has been the rise of alternative operating systems since the early 2000s, which marked the start of the post-PC era. This includes both the rapid growth of smartphones (running Android or iOS) as an alternative to the personal computer, and the increasing prevalence of Linux and other Unix-like operating systems in the server farms of large corporations such as Google and Amazon.

The IBM PC compatible today

The term "IBM PC compatible" is not commonly used presently because many current mainstream desktop and laptop computers are based on the PC architecture, and IBM no longer makes PCs. The competing hardware architectures have either been discontinued or, like the Amiga, have been relegated to niche, enthusiast markets. The most successful exception is Apple's Macintosh platform, which has used non-Intel processors for the majority of its existence. Macintosh was initially based on the Motorola 68000 series, then transitioned to the PowerPC architecture in 1994 before transitioning to Intel processors beginning in 2006. Until the transition to the internally developed ARM-based Apple silicon in 2020, Macs shared the same system architecture as their Wintel counterparts and could boot Microsoft Windows without a DOS Compatibility Card.

The processor speed and memory capacity of modern PCs are many orders of magnitude greater than those of the original IBM PC, and yet backward compatibility has been largely maintained: a 32-bit operating system released in the 2000s can still run many of the simpler programs written for operating systems of the early 1980s without needing an emulator, though an emulator like DOSBox now has near-native functionality at full speed (and is necessary for certain games which may run too fast on modern processors). Additionally, many modern PCs can still run DOS directly, although special options such as USB legacy mode and SATA-to-PATA emulation may need to be set in the BIOS setup utility. Computers using UEFI may need to be set to legacy BIOS mode to be able to boot DOS. However, the BIOS/UEFI options in most mass-produced consumer-grade computers are very limited and cannot be configured to fully support operating systems such as the original variants of DOS.

The spread of the x86-64 architecture has further distanced current computers' and operating systems' internal similarity with the original IBM PC by introducing yet another processor mode with an instruction set modified for 64-bit addressing, but x86-64 capable processors also retain standard x86 compatibility.
