
Wednesday, May 18, 2022

Wearable computer

From Wikipedia, the free encyclopedia
 
Smartwatches are an example of a wearable computer.

A wearable computer, also known as a wearable or body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.

Wearables may be for general use, in which case they are just a particularly small example of mobile computing. Alternatively, they may serve specialized purposes such as fitness tracking. They may incorporate special sensors such as accelerometers, heart rate monitors, or, on the more advanced side, electrocardiogram (ECG) and blood oxygen saturation (SpO2) monitors. The definition of wearable computer also covers novel user interfaces such as Google Glass, an optical head-mounted display controlled by gestures. Specialized wearables may well evolve into general all-in-one devices, as happened with the convergence of PDAs and mobile phones into smartphones.

Wearables are typically worn on the wrist (e.g. fitness trackers), hung from the neck (like a necklace), strapped to the arm or leg (smartphones when exercising), or on the head (as glasses or a helmet), though some have been located elsewhere (e.g. on a finger or in a shoe). Devices carried in a pocket or bag, such as smartphones and, before them, pocket calculators and PDAs, may or may not be regarded as 'worn'.

Wearable computers share technical issues with other forms of mobile computing, such as batteries, heat dissipation, software architectures, wireless and personal area networks, and data management. Many wearable computers are active all the time, e.g. processing or recording data continuously.

Applications

Smartphones and smartwatches

Wearable computers are not limited to devices such as fitness trackers that are worn on the wrist; they also include wearables such as heart pacemakers and other prosthetics. They are used most often in research focused on behavioral modeling, health monitoring systems, and IT and media development, where the person wearing the computer moves about or is otherwise engaged with his or her surroundings.

Wearable computing is the subject of active research, especially concerning form factor and placement on the body, with areas of study including user interface design, augmented reality, and pattern recognition. The use of wearables for specific applications, such as compensating for disabilities or supporting elderly people, is steadily increasing.

Operating systems

The dominant operating systems for wearable computing are:

  • Wear OS (previously known as Android Wear) from Google
  • watchOS from Apple
  • Tizen OS from Samsung (in May 2021, Google and Samsung announced that Wear OS and Tizen OS would merge into a single platform called simply Wear)

History

Evolution of Steve Mann's WearComp wearable computer from backpack based systems of the 1980s to his current covert systems

Due to the varied definitions of wearable and computer, the first wearable computer could be as early as the first abacus on a necklace, a 16th-century abacus ring, a wristwatch and 'finger-watch' owned by Queen Elizabeth I of England, or the covert timing devices hidden in shoes to cheat at roulette by Thorp and Shannon in the 1960s and 1970s.

However, a general-purpose computer is not merely a time-keeping or calculating device, but rather a user-programmable device for arbitrarily complex algorithms, interfacing, and data management. By this definition, the wearable computer was invented by Steve Mann in the late 1970s:

Steve Mann, a professor at the University of Toronto, was hailed as the father of the wearable computer and the ISSCC's first virtual panelist, by moderator Woodward Yang of Harvard University (Cambridge Mass.).

— IEEE ISSCC 8 Feb. 2000

The development of wearable items has taken several steps of miniaturization, from discrete electronics through hybrid designs to fully integrated designs, where just one processor chip, a battery, and some interface-conditioning components make up the whole unit.

1500s

Queen Elizabeth I of England received a watch from Robert Dudley in 1571, as a New Year present; it may have been worn on the forearm rather than the wrist. She also possessed a 'finger-watch' set in a ring, with an alarm that prodded her finger.

1600s

The Qing Dynasty saw the introduction of a fully functional abacus on a ring, which could be used while it was being worn.

1960s

In 1961, mathematicians Edward O. Thorp and Claude Shannon built some computerized timing devices to help them win at a game of roulette. One such timer was concealed in a shoe and another in a pack of cigarettes. Various versions of this apparatus were built in the 1960s and 1970s.

Thorp refers to himself as the inventor of the first "wearable computer". In other variations, the system was a concealed, cigarette-pack-sized analog computer designed to predict the motion of roulette wheels. A data-taker would use microswitches hidden in his shoes to indicate the speed of the roulette wheel, and the computer would indicate an octant of the roulette wheel to bet on by sending musical tones via radio to a miniature speaker hidden in a collaborator's ear canal. The system was successfully tested in Las Vegas in June 1961, but hardware issues with the speaker wires prevented it from being used beyond test runs. This was not a wearable computer in the general-purpose sense, because it could not be re-purposed during use; rather, it was an example of task-specific hardware. The work was kept secret until it was first mentioned in Thorp's book Beat the Dealer (revised ed.) in 1966, and it was later published in detail in 1969.
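The prediction step in such a device can be illustrated with a toy sketch. The actual Thorp–Shannon machine was analog, and the constant-speed assumption and numbers below are purely illustrative:

```python
import math

def predict_octant(t1, t2, t_predict):
    """Estimate which octant (0-7) of the wheel a reference point
    will occupy at a future time, assuming constant angular speed.

    t1, t2: timestamps (seconds) of two successive passes of the
            reference point past a fixed mark, one revolution apart.
    t_predict: the future time at which to predict the position.
    """
    period = t2 - t1                      # seconds per revolution
    omega = 2 * math.pi / period          # angular velocity (rad/s)
    angle = (omega * (t_predict - t2)) % (2 * math.pi)
    return int(angle // (math.pi / 4))    # 8 octants of pi/4 each

# A wheel turning once every 2 s, predicted 3.3 s after the second pass:
print(predict_octant(0.0, 2.0, 5.3))  # prints 5
```

In the real device the two shoe-mounted microswitch presses played the role of the two timestamps, and the octant number was conveyed as one of eight musical tones.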

1970s

Pocket calculators became mass-market devices from 1970, starting in Japan. Programmable calculators followed in the late 1970s, being somewhat more general-purpose computers. The HP-01 algebraic calculator watch by Hewlett-Packard was released in 1977.

A camera-to-tactile vest for the blind, developed by C.C. Collins in 1977, converted images into a 1024-point, ten-inch-square tactile grid on a vest.

1980s

The 1980s saw the rise of more general-purpose wearable computers. In 1981, Steve Mann designed and built a backpack-mounted 6502-based wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability (cameras and other photographic systems). Mann went on to be an early and active researcher in the wearables field, especially known for his 1994 creation of the Wearable Wireless Webcam, the first example of Lifelogging.

Seiko Epson released the RC-20 Wrist Computer in 1984. It was an early smartwatch, powered by a computer on a chip.

In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. This display gave rise to several hobbyist and research wearables, including Gerald "Chip" Maguire's IBM/Columbia University Student Electronic Notebook, Doug Platt's Hip-PC, and Carnegie Mellon University's VuMan 1 in 1991.

The Student Electronic Notebook consisted of the Private Eye, Toshiba diskless AIX notebook computers (prototypes), a stylus based input system and a virtual keyboard. It used direct-sequence spread spectrum radio links to provide all the usual TCP/IP based services, including NFS mounted file systems and X11, which all ran in the Andrew Project environment.

The Hip-PC included an Agenda palmtop used as a chording keyboard attached to the belt and a 1.44 megabyte floppy drive. Later versions incorporated additional equipment from Park Engineering. The system debuted at "The Lap and Palmtop Expo" on 16 April 1991.

VuMan 1 was developed as part of a Summer-term course at Carnegie Mellon's Engineering Design Research Center, and was intended for viewing house blueprints. Input was through a three-button unit worn on the belt, and output was through Reflection Tech's Private Eye. The CPU was an 8 MHz 80188 processor with 0.5 MB ROM.

1990s

In the 1990s PDAs became widely used, and in 1999 were combined with mobile phones in Japan to produce the first mass-market smartphone.

Timex Datalink USB Dress edition with the Invasion video game. The watch crown (iControl) moves the defender left and right, and fire is controlled with the Start/Split button on the lower side of the watch face, at 6 o'clock.

In 1993, the Private Eye was used in Thad Starner's wearable, based on Doug Platt's system and built from a kit from Park Enterprises, a Private Eye display on loan from Devon Sean McCullough, and the Twiddler chording keyboard made by Handykey. Many iterations later this system became the MIT "Tin Lizzy" wearable computer design, and Starner went on to become one of the founders of MIT's wearable computing project. 1993 also saw Columbia University's augmented-reality system known as KARMA (Knowledge-based Augmented Reality for Maintenance Assistance). Users would wear a Private Eye display over one eye, giving an overlay effect when the real world was viewed with both eyes open. KARMA would overlay wireframe schematics and maintenance instructions on top of whatever was being repaired. For example, graphical wireframes on top of a laser printer would explain how to change the paper tray. The system used sensors attached to objects in the physical world to determine their locations, and the entire system ran tethered from a desktop computer.

In 1994, Edgar Matias and Mike Ruicci of the University of Toronto debuted a "wrist computer". Their system presented an alternative approach to the emerging head-up-display-plus-chording-keyboard wearable. The system was built from a modified HP 95LX palmtop computer and a Half-QWERTY one-handed keyboard. With the keyboard and display modules strapped to the operator's forearms, text could be entered by bringing the wrists together and typing. The same technology was used by IBM researchers to create the half-keyboard "belt computer". Also in 1994, Mik Lamming and Mike Flynn at Xerox EuroPARC demonstrated the Forget-Me-Not, a wearable device that would record interactions with people and devices and store this information in a database for later query. It interacted via wireless transmitters in rooms and with equipment in the area to remember who was there, who was being talked to on the telephone, and what objects were in the room, allowing queries like "Who came by my office while I was on the phone to Mark?". As with the Toronto system, the Forget-Me-Not was not based on a head-mounted display.

Also in 1994, DARPA started the Smart Modules Program to develop a modular, humionic approach to wearable and carryable computers, with the goal of producing a variety of products, including computers, radios, navigation systems, and human-computer interfaces, with both military and commercial uses. In July 1996, DARPA went on to host the "Wearables in 2005" workshop, bringing together industrial, university, and military visionaries to work on the common theme of delivering computing to the individual. A follow-up conference was hosted by Boeing in August 1996, where plans were finalized to create a new academic conference on wearable computing. In October 1997, Carnegie Mellon University, MIT, and Georgia Tech co-hosted the IEEE International Symposium on Wearable Computers (ISWC) in Cambridge, Massachusetts. The symposium was a full academic conference with published proceedings and papers ranging from sensors and new hardware to new applications for wearable computers, with 382 people registered for the event. In 1998, the Microelectronic and Computer Technology Corporation created the Wearable Electronics consortium program for industrial companies in the U.S. to rapidly develop wearable computers. The program preceded the MCC Heterogeneous Component Integration Study, an investigation of the technology, infrastructure, and business challenges surrounding the continued development and integration of micro-electro-mechanical systems (MEMS) with other system components.

In 1998, Steve Mann invented and built the world's first Linux-based smartwatch. It was featured on the cover of Linux Journal in 2000 and demonstrated at ISSCC 2000.

2000s

Dr. Bruce H. Thomas and Dr. Wayne Piekarski developed the Tinmith wearable computer system to support augmented reality. This work was first published internationally in 2000 at the ISWC conference. The work was carried out at the Wearable Computer Lab in the University of South Australia.

In 2002, as part of Kevin Warwick's Project Cyborg, Warwick's wife, Irena, wore a necklace which was electronically linked to Warwick's nervous system via an implanted electrode array. The color of the necklace changed between red and blue dependent on the signals on Warwick's nervous system.

Also in 2002, Xybernaut released a wearable computer called the Xybernaut Poma Wearable PC, Poma for short. Poma stood for Personal Media Appliance. The product failed for a few reasons, chief among them that the equipment was expensive and clunky. The user would wear a head-mounted optical piece, a CPU that could be clipped onto clothing, and a mini keyboard attached to the arm.

GoPro released its first product, the GoPro HERO 35mm, which began a successful franchise of wearable cameras. The cameras can be worn atop the head or around the wrist and are shockproof and waterproof. GoPro cameras are used by many athletes and extreme-sports enthusiasts, a trend that became very apparent during the early 2010s.

In the late 2000s, various Chinese companies began producing mobile phones in the form of wristwatches, the descendants of which as of 2013 include the i5 and i6, which are GSM phones with 1.8-inch displays, and the ZGPAX s5 Android wristwatch phone.

2010s

LunaTik, a machined wristband attachment for the 6th-generation iPod Nano

Standardization by the IEEE, the IETF, and several industry groups (e.g. Bluetooth) led to more varied interfacing under the WPAN (wireless personal area network). It also led to the WBAN (wireless body area network), which offers new classes of design for interfacing and networking. The 6th-generation iPod Nano, released in September 2010, has a wristband attachment available to convert it into a wearable wristwatch computer.

The development of wearable computing spread to encompass rehabilitation engineering, ambulatory intervention treatment, life guard systems, and defense wearable systems.

Sony produced a wristwatch called Sony SmartWatch that must be paired with an Android phone. Once paired, it becomes an additional remote display and notification tool.

Fitbit released several wearable fitness trackers and the Fitbit Surge, a full smartwatch that is compatible with Android and iOS.

On 11 April 2012, Pebble launched a Kickstarter campaign to raise $100,000 for their initial smartwatch model. The campaign ended on 18 May with $10,266,844, over 100 times the fundraising target. Pebble released several smartwatches, including the Pebble Time and the Pebble Round.

Google Glass, Google's head-mounted display, launched in 2013.

Google launched its Google Glass optical head-mounted display (OHMD) to a test group of users in 2013, before it became available to the public on 15 May 2014. Google's mission was to produce a mass-market ubiquitous computer that displays information in a smartphone-like, hands-free format and can interact with the Internet via natural-language voice commands. Google Glass received criticism over privacy and safety concerns. On 15 January 2015, Google announced that it would stop producing the Google Glass prototype but would continue to develop the product. According to Google, Project Glass was ready to "graduate" from Google X, the experimental phase of the project.

Thync, a headset launched in 2014, is a wearable that stimulates the brain with mild electrical pulses, causing the wearer to feel energized or calm based on input into a phone app. The device is attached to the temple and to the back of the neck with an adhesive strip.

Macrotellect launched two portable brainwave (EEG) sensing devices, the BrainLink Pro and BrainLink Lite, in 2014; they allow families and meditation students to work on mental fitness and stress relief with more than 20 brain-fitness apps on the Apple and Android app stores.

In January 2015, Intel announced the sub-miniature Intel Curie for wearable applications, based on its Intel Quark platform. As small as a button, it features a six-axis accelerometer, a DSP sensor hub, a Bluetooth LE unit, and a battery charge controller. It was scheduled to ship in the second half of the year.

On 24 April 2015, Apple released their take on the smartwatch, known as the Apple Watch. The Apple Watch features a touchscreen, many applications, and a heart-rate sensor.

Some advanced VR headsets require the user to wear a desktop-sized computer as a backpack to enable them to move around freely.

Commercialization

Image of the ZYPAD wrist wearable computer from Eurotech
 

The commercialization of general-purpose wearable computers, as led by companies such as Xybernaut, CDI, and ViA, Inc., has thus far been met with limited success. Publicly traded Xybernaut tried forging alliances with companies such as IBM and Sony in order to make wearable computing widely available, and managed to get its equipment seen on shows such as The X-Files, but in 2005 its stock was delisted and the company filed for Chapter 11 bankruptcy protection amid financial scandal and a federal investigation. Xybernaut emerged from bankruptcy protection in January 2007. ViA, Inc. filed for bankruptcy in 2001 and subsequently ceased operations.

In 1998, Seiko marketed the Ruputer, a computer in a (fairly large) wristwatch, to mediocre returns. In 2001, IBM developed and publicly displayed two prototypes for a wristwatch computer running Linux; the last news about them dates to 2004, when the device was said to cost about $250 and to still be under development. In 2002, Fossil, Inc. announced the Fossil Wrist PDA, which ran Palm OS. Its release was set for the summer of 2003, but was delayed several times; it was finally made available on 5 January 2005. The Timex Datalink is another example of a practical wearable computer. Hitachi launched a wearable computer called Poma in 2002. Eurotech offers the ZYPAD, a wrist-wearable touchscreen computer with GPS, Wi-Fi, and Bluetooth connectivity that can run a number of custom applications. In 2013, a wrist-worn wearable computing device to control body temperature was developed at MIT.

Evidence of weak market acceptance was demonstrated when Panasonic Computer Solutions Company's product failed. Panasonic has specialized in mobile computing with its Toughbook line since 1996 and has extensive market research into the field of portable, wearable computing products. In 2002, Panasonic introduced a wearable brick computer coupled with a handheld or a touchscreen worn on the arm. The "brick" computer was the CF-07 Toughbook: it had dual batteries (the screen used the same batteries as the base), an 800 x 600 display, optional GPS and WWAN, and one M-PCI slot and one PCMCIA slot for expansion. The CPU was a 600 MHz Pentium III, factory-underclocked to 300 MHz so that it could stay cool passively, without a fan, and the micro-DIMM RAM was upgradeable. The screen could be used wirelessly with other computers: the brick communicated wirelessly with the screen and concurrently with the internet or other networks. The wearable brick was quietly pulled from the market in 2005, while the screen evolved into a thin-client touchscreen used with a handstrap.

Google has announced that it has been working on a head-mounted display-based wearable "augmented reality" device called Google Glass. An early version of the device was available to the US public from April 2013 until January 2015. Despite ending sales of the device through their Explorer Program, Google has stated that they plan to continue developing the technology.

LG and iriver produce earbud wearables measuring heart rate and other biometrics, as well as various activity metrics.

Greater commercial success has been found in creating devices with designated purposes rather than all-purpose ones. One example is the WSS1000, a wearable computer designed to make the work of inventory employees easier and more efficient. The device allows workers to scan an item's barcode and immediately enter the information into the company's system. This removed the need to carry a clipboard, eliminated the error and confusion of handwritten notes, and gave workers the freedom of both hands while working; the system improves accuracy as well as efficiency.

Popular culture

Many technologies for wearable computers derive their ideas from science fiction. There are many examples of ideas from popular movies that have become technologies or are technologies currently being developed.

3D user interface
Devices that display usable, tactile interfaces that can be manipulated in front of the user. Examples include the glove-operated hologram computer featured at the Pre-Crime headquarters in the beginning of Minority Report and the computers used by the gate workers at Zion in The Matrix trilogy.
Intelligent textiles or smartwear
Clothing that can relay and collect information. Examples include Tron and its sequel, and also many sci-fi military films.
Threat glasses
Scan others in vicinity and assess threat-to-self level. Examples include Terminator 2, 'Threep' Technology in Lock-In, and Kill switch.
Computerized contact lenses
Special contact lenses that are used to confirm one's identity. Used in Mission Impossible 4.
Combat suit armor
A wearable exoskeleton that provides protection to its wearer and is typically equipped with powerful weapons and a computer system. Examples include numerous Iron Man suits, the Predator suit, along with Samus Aran's Power Suit and Fusion Suit in the Metroid video game series.
Brain nano-bots to store memories in the cloud
Used in Total Recall.
Infrared headsets
Can help identify suspects and see through walls. Examples include Robocop's special eye system, as well as some more advanced visors that Samus Aran uses in the Metroid Prime trilogy.
Wrist-worn computers
Provide various abilities and information, such as data about the wearer, a vicinity map, a flashlight, a communicator, a poison detector or an enemy-tracking device. Examples included are the Pip-Boy 3000 from the Fallout games and Leela's Wrist Device from the Futurama TV sitcom.
On-chest or smart necklace
This form-factor of wearable computer has been shown in many sci-fi movies, including Prometheus and Iron Man.

Advancement with wearable technology over years

Technology has advanced with continuous change in wearable computers. Wearable technologies are increasingly used in healthcare; for instance, portable sensors serve as medical devices that help patients with diabetes keep track of exercise-related data. Many people think of wearable technology as a new trend, but companies have been trying to develop and design wearable technologies for decades. The spotlight has more recently shifted to new types of technology focused on improving efficiency in the wearer's life.

The main elements of wearable computers

  • the display, which allows the user to see their work;
  • the computer, which allows the user to run applications or access the internet;
  • the commands, which allow the user to control the machine.

Challenges with wearable computers

Wearable technology comes with many challenges, including data security, trust, and regulatory and ethical issues. Since 2010, wearable technologies have been seen mostly as fitness-focused, but they also have the potential to improve operations in healthcare and many other professions. As wearable devices proliferate, privacy and security issues become very important, especially for health devices. The FDA treats most wearable devices as "general wellness products", and in the US they are not covered by any specific federal law; however, health data that qualifies as Protected Health Information (PHI) is subject to regulation, with enforcement handled by the Office for Civil Rights (OCR). Sensor-equipped devices can create security issues, as companies must be more alert to protect user data, yet cybersecurity regulation of such devices in the US is not strict. Likewise, the National Institute of Standards and Technology (NIST) publishes the NIST Cybersecurity Framework, but compliance with it is not mandatory.

Consequently, the lack of specific regulations for wearable devices, especially medical devices, increases the risk of threats and other vulnerabilities. For instance, Google Glass raised major privacy concerns: Congress investigated the privacy risks posed to consumers using Google Glass and how the data would be used. The product can be used to track not only its users but also the people around them, often without their awareness. Moreover, all the data captured with Google Glass was stored on Google's cloud servers, giving the company access to it. The device also raised questions about women's safety, since it could allow stalkers or harassers to take intrusive pictures of women's bodies without any fear of being caught.

Wearable technologies like smart glasses can also raise cultural and social issues. Even though wearable technologies can make life easier and more enjoyable, their adoption reshapes the social conventions that govern human-to-human communication. Correspondingly, wearable devices like Bluetooth headphones can make people more dependent on technology than on interaction with the people nearby. Society treats these technologies as luxury accessories, and there is peer pressure to own similar products in order not to feel left out. These products also raise challenges of social and moral discipline: wearable devices can act as objects of discipline and control, mediating cultural ideologies. For instance, wearing a smartwatch can be a way to fit in with the standards of male-dominated fields. Wearable technologies thus touch on issues of biopolitics, in terms of how wearables shape the way people act.

Despite the fact that the demand for this technology is increasing, one of the biggest challenges is the price. For example, the price of an Apple Watch ranges from $399 to $1,399, which for a normal consumer can be prohibitively expensive.

Future innovations

Augmented reality allows a new generation of displays. As opposed to virtual reality, the user does not inhabit a virtual world; instead, information is superimposed on the real world.

These displays can be easily portable, such as the Vufine+; others are quite bulky, like the HoloLens 2. Some headsets, such as the Oculus Quest 2, are autonomous; the others are, in contrast to a computer, more like a terminal module.

Single-board computers (SBCs) are improving in performance and becoming cheaper. Some boards are inexpensive, such as the Raspberry Pi Zero and Pi 4, while others cost more but are closer to a normal PC, like the Hackboard and the LattePanda.

One main domain of future research could be the method of control. Today computers are commonly controlled through the keyboard and the mouse, which could change in the future. For example, the words-per-minute rate on a keyboard could be improved with the BÉPO layout. Ergonomics could also change the results, with split keyboards and minimalist keyboards (which use one key for more than one letter or symbol). The extreme is chorded stenotype input, as with Plover and steno keyboards, which use very few keys, pressed several at a time to produce a letter or word.

Furthermore, the pointer could be improved from a basic mouse to an accelerometer-based pointer.

Gesture control systems are evolving from image-based capture (e.g. the Leap Motion camera) to integrated capture (e.g. Zack Freedman's prototype data glove). One direction could be to build computers integrated with the AR system and controlled with ergonomic controllers, yielding a universal machine as portable as a mobile phone and as capable as a desktop computer.

Military use

Wristband computer

The wearable computer was introduced to the US Army in 1989 as a small computer meant to assist soldiers in battle. Since then, the concept has grown to include the Land Warrior program and proposals for future systems. The most extensive military program in the wearables arena is the US Army's Land Warrior system, which will eventually be merged into the Future Force Warrior system. There is also research into increasing the reliability of terrestrial navigation.

F-INSAS is an Indian military project designed largely around wearable computing.

Phoneme

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Phoneme

In phonology and linguistics, a phoneme (/ˈfoʊniːm/) is a unit of sound that can distinguish one word from another in a particular language.

For example, in most dialects of English, with the notable exception of the West Midlands and the north-west of England, the sound patterns /sɪn/ (sin) and /sɪŋ/ (sing) are two separate words that are distinguished by the substitution of one phoneme, /n/, for another phoneme, /ŋ/. Two words like this that differ in meaning through the contrast of a single phoneme form a minimal pair. If, in another language, any two sequences differing only by pronunciation of the final sounds [n] or [ŋ] are perceived as being the same in meaning, then these two sounds are interpreted as phonetic variants of a single phoneme in that language.

Phonemes that are established by the use of minimal pairs, such as tap vs tab or pat vs bat, are written between slashes: /p/, /b/. To show pronunciation, linguists use square brackets: [pʰ] (indicating an aspirated p in pat).
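The minimal-pair test described above is simple enough to state as code. A minimal sketch, in which phoneme sequences are represented as plain lists of symbol strings (an illustrative convention, not tied to any particular transcription library):

```python
def is_minimal_pair(a, b):
    """True if two phoneme sequences have equal length and differ
    in exactly one position, e.g. /pæt/ vs /bæt/."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

# tap vs tab and pat vs bat are minimal pairs; pat vs tab is not.
print(is_minimal_pair(["t", "æ", "p"], ["t", "æ", "b"]))  # True
print(is_minimal_pair(["p", "æ", "t"], ["b", "æ", "t"]))  # True
print(is_minimal_pair(["p", "æ", "t"], ["t", "æ", "b"]))  # False
```

Representing each phoneme as its own list element keeps multi-character symbols (such as affricates) intact, which a character-by-character string comparison would not.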

There are differing views as to exactly what phonemes are and how a given language should be analyzed in phonemic (or phonematic) terms. However, a phoneme is generally regarded as an abstraction of a set (or equivalence class) of speech sounds (phones) that are perceived as equivalent to each other in a given language. For example, the English k sounds in the words kill and skill are not identical (as described below), but they are distributional variants of a single phoneme /k/. Speech sounds that differ but do not create a meaningful change in the word are known as allophones of the same phoneme. Allophonic variation may be conditioned, in which case a certain phoneme is realized as a certain allophone in particular phonological environments, or it may otherwise be free, and may vary by speaker or by dialect. Therefore, phonemes are often considered to constitute an abstract underlying representation for segments of words, while speech sounds make up the corresponding phonetic realization, or the surface form.

Notation

Phonemes are conventionally placed between slashes in transcription, whereas speech sounds (phones) are placed between square brackets. Thus, /pʊʃ/ represents a sequence of three phonemes, /p/, /ʊ/, /ʃ/ (the word push in Standard English), and [pʰʊʃ] represents the phonetic sequence of sounds [pʰ] (aspirated p), [ʊ], [ʃ] (the usual pronunciation of push). This should not be confused with the similar convention of the use of angle brackets to enclose the units of orthography, graphemes. For example, ⟨f⟩ represents the written letter (grapheme) f.

The symbols used for particular phonemes are often taken from the International Phonetic Alphabet (IPA), the same set of symbols most commonly used for phones. (For computer-typing purposes, systems such as X-SAMPA exist to represent IPA symbols using only ASCII characters.) However, descriptions of particular languages may use different conventional symbols to represent the phonemes of those languages. For languages whose writing systems employ the phonemic principle, ordinary letters may be used to denote phonemes, although this approach is often hampered by the complexity of the relationship between orthography and pronunciation (see § Correspondence between letters and phonemes below).
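The idea behind an ASCII encoding such as X-SAMPA can be sketched as a simple lookup table. The handful of correspondences below are standard X-SAMPA assignments, but the table is only an illustrative fragment, not a complete converter:

```python
# A toy fragment of the X-SAMPA-to-IPA mapping (illustrative only).
XSAMPA_TO_IPA = {
    "S": "ʃ",   # voiceless postalveolar fricative
    "Z": "ʒ",   # voiced postalveolar fricative
    "N": "ŋ",   # velar nasal
    "T": "θ",   # voiceless dental fricative
    "D": "ð",   # voiced dental fricative
    "I": "ɪ",   # near-close near-front vowel
    "@": "ə",   # schwa
}

def xsampa_to_ipa(symbols):
    """Convert a list of X-SAMPA symbols to an IPA string.

    Symbols without an entry (e.g. plain 's') are the same in both systems
    here and pass through unchanged.
    """
    return "".join(XSAMPA_TO_IPA.get(s, s) for s in symbols)

print(xsampa_to_ipa(["s", "I", "N"]))  # prints sɪŋ (the word 'sing')
```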

Assignment of speech sounds to phonemes

A simplified procedure for determining whether two sounds represent the same or different phonemes

A phoneme is a sound or a group of different sounds perceived to have the same function by speakers of the language or dialect in question. An example is the English phoneme /k/, which occurs in words such as cat, kit, scat, skit. Although most native speakers do not notice this, in most English dialects, the "c/k" sounds in these words are not identical: in kit  [kʰɪt], the sound is aspirated, but in skill  [skɪl], it is unaspirated. The words, therefore, contain different speech sounds, or phones, transcribed [kʰ] for the aspirated form and [k] for the unaspirated one. These different sounds are nonetheless considered to belong to the same phoneme, because if a speaker used one instead of the other, the meaning of the word would not change: using the aspirated form [kʰ] in skill might sound odd, but the word would still be recognized. By contrast, some other sounds would cause a change in meaning if substituted: for example, substitution of the sound [t] would produce the different word still, and that sound must therefore be considered to represent a different phoneme (the phoneme /t/).

The above shows that in English, [k] and [kʰ] are allophones of a single phoneme /k/. In some languages, however, [kʰ] and [k] are perceived by native speakers as different sounds, and substituting one for the other can change the meaning of a word. In those languages, therefore, the two sounds represent different phonemes. For example, in Icelandic, [kʰ] is the first sound of kátur, meaning "cheerful", but [k] is the first sound of gátur, meaning "riddles". Icelandic, therefore, has two separate phonemes /kʰ/ and /k/.

Minimal pairs

A pair of words like kátur and gátur (above) that differ only in one phone is called a minimal pair for the two alternative phones in question (in this case, [kʰ] and [k]). The existence of minimal pairs is a common test to decide whether two phones represent different phonemes or are allophones of the same phoneme.

To take another example, the minimal pair tip and dip illustrates that in English, [t] and [d] belong to separate phonemes, /t/ and /d/; since both words have different meanings, English-speakers must be conscious of the distinction between the two sounds.
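The minimal-pair test can be expressed as a small check: two phone sequences of equal length that differ in exactly one segment. This is a toy sketch of the test, not a linguistic analysis tool:

```python
def is_minimal_pair(word_a, word_b):
    """Return True if the two phone sequences differ in exactly one segment.

    Each word is given as a list of phones, so that multi-character IPA
    symbols (e.g. 'kʰ') count as single segments.
    """
    if len(word_a) != len(word_b):
        return False
    differences = sum(a != b for a, b in zip(word_a, word_b))
    return differences == 1

# 'tip' vs 'dip': differ only in the initial segment -> a minimal pair
print(is_minimal_pair(["t", "ɪ", "p"], ["d", "ɪ", "p"]))  # True
# 'tip' vs 'dim': two differing segments -> not a minimal pair
print(is_minimal_pair(["t", "ɪ", "p"], ["d", "ɪ", "m"]))  # False
```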

Signed languages, such as American Sign Language (ASL), also have minimal pairs, differing only in (exactly) one of the signs' parameters: handshape, movement, location, palm orientation, and nonmanual signal or marker. A minimal pair may exist in the signed language if the basic sign stays the same, but one of the parameters changes.

However, the absence of minimal pairs for a given pair of phones does not always mean that they belong to the same phoneme: they may be so dissimilar phonetically that it is unlikely for speakers to perceive them as the same sound. For example, English has no minimal pair for the sounds [h] (as in hat) and [ŋ] (as in bang), and the fact that they can be shown to be in complementary distribution could be used to argue for their being allophones of the same phoneme. However, they are so dissimilar phonetically that they are considered separate phonemes.

Phonologists have sometimes had recourse to "near minimal pairs" to show that speakers of the language perceive two sounds as significantly different even if no exact minimal pair exists in the lexicon. It is virtually impossible to find a minimal pair to distinguish English /ʃ/ from /ʒ/, yet it seems uncontroversial to claim that the two consonants are distinct phonemes. The two words 'pressure' /ˈprɛʃər/ and 'pleasure' /ˈplɛʒər/ can serve as a near minimal pair.

Suprasegmental phonemes

Besides segmental phonemes such as vowels and consonants, there are also suprasegmental features of pronunciation (such as tone and stress, syllable boundaries and other forms of juncture, nasalization and vowel harmony), which, in many languages, can change the meaning of words and so are phonemic.

Phonemic stress is encountered in languages such as English. For example, the word invite stressed on the second syllable is a verb, but when stressed on the first syllable (without changing any of the individual sounds), it becomes a noun. The position of the stress in the word affects the meaning, so a full phonemic specification (providing enough detail to enable the word to be pronounced unambiguously) would include indication of the position of the stress: /ɪnˈvaɪt/ for the verb, /ˈɪnvaɪt/ for the noun. In other languages, such as French, word stress cannot have this function (its position is generally predictable) and is therefore not phonemic (and is not usually indicated in dictionaries).

Phonemic tones are found in languages such as Mandarin Chinese, in which a given syllable can have five different tonal pronunciations:

Minimal set for phonemic tone in Mandarin Chinese

Tone number  1       2      3      4      5
Hanzi        妈      麻     马     骂     吗
Pinyin       mā      má     mǎ     mà     ma
IPA          [má]    [mǎ]   [mà]   [mâ]   [ma]
Gloss        mother  hemp   horse  scold  question particle

The tone "phonemes" in such languages are sometimes called tonemes. Languages such as English do not have phonemic tone, although they use intonation for functions such as emphasis and attitude.

Distribution of allophones

When a phoneme has more than one allophone, the one actually heard at a given occurrence of that phoneme may be dependent on the phonetic environment (surrounding sounds) – allophones that normally cannot appear in the same environment are said to be in complementary distribution. In other cases the choice of allophone may depend on the individual speaker or other unpredictable factors; such allophones are said to be in free variation.
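Complementary distribution amounts to a disjointness check on the sets of environments in which each phone occurs. The sketch below uses invented, drastically simplified environment labels for the English [kʰ]/[k] example; real distributional analysis works over much richer contexts:

```python
def in_complementary_distribution(envs_a, envs_b):
    """Two phones are in complementary distribution if they never occur
    in the same phonetic environment, i.e. their environment sets are disjoint."""
    return not (set(envs_a) & set(envs_b))

# Toy data for English [kʰ] vs [k]; the environment labels are assumptions
# made for this illustration only.
aspirated_envs = {"syllable-initial before stressed vowel"}
plain_envs = {"after /s/"}

print(in_complementary_distribution(aspirated_envs, plain_envs))  # True
print(in_complementary_distribution({"word-final"}, {"word-final"}))  # False
```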

Background and related ideas

The term phonème (from Ancient Greek: φώνημα, romanized: phōnēma, "sound made, utterance, thing spoken, speech, language") was reportedly first used by A. Dufriche-Desgenettes in 1873, but it referred only to a speech sound. The term phoneme as an abstraction was developed by the Polish linguist Jan Niecisław Baudouin de Courtenay and his student Mikołaj Kruszewski during 1875–1895. The term used by these two was fonema, the basic unit of what they called psychophonetics. Daniel Jones became the first linguist in the Western world to use the term phoneme in its current sense, employing the word in his article "The phonetic structure of the Sechuana Language". The concept of the phoneme was then elaborated in the works of Nikolai Trubetzkoy and others of the Prague School (during the years 1926–1935), and in those of structuralists like Ferdinand de Saussure, Edward Sapir, and Leonard Bloomfield. Some structuralists (though not Sapir) rejected the idea of a cognitive or psycholinguistic function for the phoneme.

Later, it was used and redefined in generative linguistics, most famously by Noam Chomsky and Morris Halle, and remains central to many accounts of the development of modern phonology. As a theoretical concept or model, though, it has been supplemented and even replaced by others.

Some linguists (such as Roman Jakobson and Morris Halle) proposed that phonemes may be further decomposable into features, such features being the true minimal constituents of language. Features overlap each other in time, as do suprasegmental phonemes in oral language and many phonemes in sign languages. Features could be characterized in different ways: Jakobson and colleagues defined them in acoustic terms, Chomsky and Halle used a predominantly articulatory basis, though retaining some acoustic features, while Ladefoged's system is a purely articulatory system apart from the use of the acoustic term 'sibilant'.

In the description of some languages, the term chroneme has been used to indicate contrastive length or duration of phonemes. In languages in which tones are phonemic, the tone phonemes may be called tonemes. Though not all scholars working on such languages use these terms, they are by no means obsolete.

By analogy with the phoneme, linguists have proposed other sorts of underlying objects, giving them names with the suffix -eme, such as morpheme and grapheme. These are sometimes called emic units. The latter term was first used by Kenneth Pike, who also generalized the concepts of emic and etic description (from phonemic and phonetic respectively) to applications outside linguistics.

Restrictions on occurrence

Languages do not generally allow words or syllables to be built of any arbitrary sequences of phonemes; there are phonotactic restrictions on which sequences of phonemes are possible and in which environments certain phonemes can occur. Phonemes that are significantly limited by such restrictions may be called restricted phonemes.

In English, examples of such restrictions include:

  • /ŋ/, as in sing, occurs only at the end of a syllable, never at the beginning (in many other languages, such as Māori, Swahili, Tagalog, and Thai, /ŋ/ can appear word-initially).
  • /h/ occurs only before vowels and at the beginning of a syllable, never at the end (a few languages, such as Arabic or Romanian, allow /h/ syllable-finally).
  • In non-rhotic dialects, /ɹ/ can only occur immediately before a vowel, never before a consonant.
  • /w/ and /j/ occur only before a vowel, never at the end of a syllable (except in interpretations where a word like boy is analyzed as /bɔj/).
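The restrictions listed above can be sketched as a small syllable checker. This is a deliberately crude toy: the vowel set is a simplified assumption, and real English phonotactics involves far more constraints (consonant clusters, onset/coda structure, and so on):

```python
VOWELS = {"a", "e", "i", "o", "u", "ɪ", "ʊ", "ɛ", "æ", "ʌ", "ə", "ɔ"}

def violates_english_phonotactics(syllable):
    """Check a syllable (a list of phones) against the four restrictions
    listed above. Returns True if any restriction is violated."""
    first, last = syllable[0], syllable[-1]
    if first == "ŋ":                 # /ŋ/ never occurs syllable-initially
        return True
    if last == "h":                  # /h/ never occurs syllable-finally
        return True
    if last in {"w", "j"}:           # glides never occur syllable-finally
        return True
    for phone, nxt in zip(syllable, syllable[1:]):
        if phone == "h" and nxt not in VOWELS:
            return True              # /h/ must be followed by a vowel
    return False

print(violates_english_phonotactics(["ŋ", "ɪ", "s"]))  # True (initial /ŋ/)
print(violates_english_phonotactics(["s", "ɪ", "ŋ"]))  # False ('sing' is fine)
```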

Some phonotactic restrictions can alternatively be analyzed as cases of neutralization. See Neutralization and archiphonemes below, particularly the example of the occurrence of the three English nasals before stops.

Biuniqueness

Biuniqueness is a requirement of classic structuralist phonemics. It means that a given phone, wherever it occurs, must unambiguously be assigned to one and only one phoneme. In other words, the mapping between phones and phonemes is required to be many-to-one rather than many-to-many. The notion of biuniqueness was controversial among some pre-generative linguists and was prominently challenged by Morris Halle and Noam Chomsky in the late 1950s and early 1960s.

An example of the problems arising from the biuniqueness requirement is provided by the phenomenon of flapping in North American English. This may cause either /t/ or /d/ (in the appropriate environments) to be realized with the phone [ɾ] (an alveolar flap). For example, the same flap sound may be heard in the words hitting and bidding, although it is intended to realize the phoneme /t/ in the first word and /d/ in the second. This appears to contradict biuniqueness.
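The non-biuniqueness of flapping can be seen in a toy rule that maps both /t/ and /d/ to [ɾ] between vowels. The rule here is a simplification (real North American English flapping is also sensitive to stress), but it shows two distinct phonemes converging on one phone:

```python
VOWELS = {"ɪ", "i", "ɛ", "æ", "ə", "ʌ", "ɑ"}

def flap(phonemes):
    """Toy flapping rule: /t/ or /d/ between vowels surfaces as the flap [ɾ].
    A simplification: actual flapping is also conditioned by stress."""
    out = list(phonemes)
    for i in range(1, len(out) - 1):
        if out[i] in {"t", "d"} and out[i - 1] in VOWELS and out[i + 1] in VOWELS:
            out[i] = "ɾ"
    return out

# 'hitting' /hɪtɪŋ/ and 'bidding' /bɪdɪŋ/ surface with the same flap, so
# a hearer cannot assign [ɾ] to a unique phoneme from the phone alone.
print(flap(["h", "ɪ", "t", "ɪ", "ŋ"]))  # ['h', 'ɪ', 'ɾ', 'ɪ', 'ŋ']
print(flap(["b", "ɪ", "d", "ɪ", "ŋ"]))  # ['b', 'ɪ', 'ɾ', 'ɪ', 'ŋ']
```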

For further discussion of such cases, see the next section.

Neutralization and archiphonemes

Phonemes that are contrastive in certain environments may not be contrastive in all environments. In the environments where they do not contrast, the contrast is said to be neutralized. In these positions it may become less clear which phoneme a given phone represents. Absolute neutralization is a phenomenon in which a segment of the underlying representation is not realized in any of its phonetic representations (surface forms). The term was introduced by Paul Kiparsky (1968), and contrasts with contextual neutralization where some phonemes are not contrastive in certain environments. Some phonologists prefer not to specify a unique phoneme in such cases, since to do so would mean providing redundant or even arbitrary information – instead they use the technique of underspecification. An archiphoneme is an object sometimes used to represent an underspecified phoneme.

An example of neutralization is provided by the Russian vowels /a/ and /o/. These phonemes are contrasting in stressed syllables, but in unstressed syllables the contrast is lost, since both are reduced to the same sound, usually [ə] (for details, see vowel reduction in Russian). In order to assign such an instance of [ə] to one of the phonemes /a/ and /o/, it is necessary to consider morphological factors (such as which of the vowels occurs in other forms of the words, or which inflectional pattern is followed). In some cases even this may not provide an unambiguous answer. A description using the approach of underspecification would not attempt to assign [ə] to a specific phoneme in some or all of these cases, although it might be assigned to an archiphoneme, written something like //A//, which reflects the two neutralized phonemes in this position, or {a}, reflecting its unmerged values.

A somewhat different example is found in English, with the three nasal phonemes /m, n, ŋ/. In word-final position these all contrast, as shown by the minimal triplet sum /sʌm/, sun /sʌn/, sung /sʌŋ/. However, before a stop such as /p, t, k/ (provided there is no morpheme boundary between them), only one of the nasals is possible in any given position: /m/ before /p/, /n/ before /t/ or /d/, and /ŋ/ before /k/, as in limp, lint, link (/lɪmp/, /lɪnt/, /lɪŋk/). The nasals are therefore not contrastive in these environments, and according to some theorists this makes it inappropriate to assign the nasal phones heard here to any one of the phonemes (even though, in this case, the phonetic evidence is unambiguous). Instead they may analyze these phones as belonging to a single archiphoneme, written something like //N//, and state the underlying representations of limp, lint, link to be //lɪNp//, //lɪNt//, //lɪNk//.

This latter type of analysis is often associated with Nikolai Trubetzkoy of the Prague school. Archiphonemes are often notated with a capital letter within double virgules or pipes, as with the examples //A// and //N// given above. Other ways the second of these has been notated include |m-n-ŋ|, {m, n, ŋ} and //n*//.
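The realization of the nasal archiphoneme //N// described above is a deterministic function of the following stop, which makes it easy to sketch (covering only the environments given in the text):

```python
def realize_nasal_archiphoneme(following_stop):
    """Realize the archiphoneme //N// before a stop, per the
    place-of-articulation pattern described above."""
    if following_stop == "p":
        return "m"        # limp: //lɪNp// -> /lɪmp/
    if following_stop in {"t", "d"}:
        return "n"        # lint: //lɪNt// -> /lɪnt/
    if following_stop == "k":
        return "ŋ"        # link: //lɪNk// -> /lɪŋk/
    raise ValueError("no neutralization rule given for this environment")

print(realize_nasal_archiphoneme("p"))  # m
print(realize_nasal_archiphoneme("k"))  # ŋ
```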

Another example from English, but this time involving complete phonetic convergence as in the Russian example, is the flapping of /t/ and /d/ in some American English (described above under Biuniqueness). Here the words betting and bedding might both be pronounced [ˈbɛɾɪŋ]. Under the generative grammar theory of linguistics, if a speaker applies such flapping consistently, morphological evidence (the pronunciation of the related forms bet and bed, for example) would reveal which phoneme the flap represents, once it is known which morpheme is being used. However, other theorists would prefer not to make such a determination, and simply assign the flap in both cases to a single archiphoneme, written (for example) //D//.

Further mergers in English are plosives after /s/, where /p, t, k/ conflate with /b, d, ɡ/, as suggested by the alternative spellings sketti and sghetti. That is, there is no particular reason to transcribe spin as /ˈspɪn/ rather than as /ˈsbɪn/, other than its historical development, and it might be less ambiguously transcribed //ˈsBɪn//.

Morphophonemes

A morphophoneme is a theoretical unit at a deeper level of abstraction than traditional phonemes, and is taken to be a unit from which morphemes are built up. A morphophoneme within a morpheme can be expressed in different ways in different allomorphs of that morpheme (according to morphophonological rules). For example, the English plural morpheme -s appearing in words such as cats and dogs can be considered to be a single morphophoneme, which might be transcribed (for example) //z// or |z|, and which is realized phonemically as /s/ after most voiceless consonants (as in cats) and as /z/ in other cases (as in dogs).
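The voicing rule for the plural morphophoneme //z// can be sketched as a one-line decision. Only the two allomorphs named above are covered; the /ɪz/ allomorph that appears after sibilants (as in buses) is omitted for simplicity, and the voiceless-consonant set is a small illustrative subset:

```python
def realize_plural(stem_final_phone):
    """Realize the English plural morphophoneme //z// after the given
    stem-final phone: /s/ after (most) voiceless consonants, /z/ otherwise.
    (The /ɪz/ allomorph after sibilants is deliberately omitted here.)"""
    VOICELESS = {"p", "t", "k", "f", "θ"}
    return "s" if stem_final_phone in VOICELESS else "z"

print(realize_plural("t"))  # 's' -> cats /kæts/
print(realize_plural("ɡ"))  # 'z' -> dogs /dɒɡz/
```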

Numbers of phonemes in different languages

All known languages use only a small subset of the many possible sounds that the human speech organs can produce, and, because of allophony, the number of distinct phonemes will generally be smaller than the number of identifiably different sounds. Different languages vary considerably in the number of phonemes they have in their systems (although apparent variation may sometimes result from the different approaches taken by the linguists doing the analysis). The total phonemic inventory in languages varies from as few as 11 in Rotokas and Pirahã to as many as 141 in !Xũ.

The number of phonemically distinct vowels can be as low as two, as in Ubykh and Arrernte. At the other extreme, the Bantu language Ngwe has 14 vowel qualities, 12 of which may occur long or short, making 26 oral vowels, plus six nasalized vowels, long and short, making a total of 38 vowels; while !Xóõ achieves 31 pure vowels, not counting its additional variation by vowel length, by varying the phonation. As regards consonant phonemes, Puinave and the Papuan language Tauade each have just seven, and Rotokas has only six. !Xóõ, on the other hand, has somewhere around 77, and Ubykh 81. The English language uses a rather large set of 13 to 21 vowel phonemes, including diphthongs, although its 22 to 26 consonants are close to average. Across all languages, the average number of consonant phonemes per language is about 22, while the average number of vowel phonemes is about 8.

Some languages, such as French, have no phonemic tone or stress, while Cantonese and several of the Kam–Sui languages have nine tones, and one of the Kru languages, Wobé, has been claimed to have 14, though this is disputed.

The most common vowel system consists of the five vowels /i/, /e/, /a/, /o/, /u/. The most common consonants are /p/, /t/, /k/, /m/, /n/. Relatively few languages lack any of these consonants, although it does happen: for example, Arabic lacks /p/, standard Hawaiian lacks /t/, Mohawk and Tlingit lack /p/ and /m/, Hupa lacks both /p/ and a simple /k/, colloquial Samoan lacks /t/ and /n/, while Rotokas and Quileute lack /m/ and /n/.

The non-uniqueness of phonemic solutions

During the development of phoneme theory in the mid-20th century phonologists were concerned not only with the procedures and principles involved in producing a phonemic analysis of the sounds of a given language, but also with the reality or uniqueness of the phonemic solution. These were central concerns of phonology. Some writers took the position expressed by Kenneth Pike: "There is only one accurate phonemic analysis for a given set of data", while others believed that different analyses, equally valid, could be made for the same data. Yuen Ren Chao (1934), in his article "The non-uniqueness of phonemic solutions of phonetic systems" stated "given the sounds of a language, there are usually more than one possible way of reducing them to a set of phonemes, and these different systems or solutions are not simply correct or incorrect, but may be regarded only as being good or bad for various purposes". The linguist F. W. Householder referred to this argument within linguistics as "God's Truth" (i.e. the stance that a given language has an intrinsic structure to be discovered) vs. "hocus-pocus" (i.e. the stance that any proposed, coherent structure is as good as any other).

Different analyses of the English vowel system may be used to illustrate this. The article English phonology states that "English has a particularly large number of vowel phonemes" and that "there are 20 vowel phonemes in Received Pronunciation, 14–16 in General American and 20–21 in Australian English". Although these figures are often quoted as fact, they actually reflect just one of many possible analyses, and later in the English Phonology article an alternative analysis is suggested in which some diphthongs and long vowels may be interpreted as comprising a short vowel linked to either /j/ or /w/. The fullest exposition of this approach is found in Trager and Smith (1951), where all long vowels and diphthongs ("complex nuclei") are made up of a short vowel combined with either /j/, /w/ or /h/ (plus /r/ for rhotic accents), each comprising two phonemes. The transcription for the vowel normally transcribed /aɪ/ would instead be /aj/, /aʊ/ would be /aw/ and /ɑː/ would be /ah/, or /ar/ in a rhotic accent if there is an ⟨r⟩ in the spelling. It is also possible to treat English long vowels and diphthongs as combinations of two vowel phonemes, with long vowels treated as a sequence of two short vowels, so that 'palm' would be represented as /paam/. English can thus be said to have around seven vowel phonemes, or even six if schwa were treated as an allophone of /ʌ/ or of other short vowels.
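The Trager–Smith re-analysis described above amounts to rewriting each complex nucleus as a short vowel plus /j/, /w/ or /h/. A minimal sketch, covering only the three correspondences named in the text:

```python
# The three re-analyses given above; a full Trager-Smith treatment covers
# every long vowel and diphthong.
TRAGER_SMITH = {
    "aɪ": "aj",
    "aʊ": "aw",
    "ɑː": "ah",
}

def reanalyze(transcription):
    """Rewrite complex nuclei as short vowel + /j/, /w/ or /h/."""
    for complex_nucleus, sequence in TRAGER_SMITH.items():
        transcription = transcription.replace(complex_nucleus, sequence)
    return transcription

print(reanalyze("paɪ"))  # 'paj' (the word 'pie')
```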

In the same period there was disagreement about the correct basis for a phonemic analysis. The structuralist position was that the analysis should be made purely on the basis of the sound elements and their distribution, with no reference to extraneous factors such as grammar, morphology or the intuitions of the native speaker; this position is strongly associated with Leonard Bloomfield. Zellig Harris claimed that it is possible to discover the phonemes of a language purely by examining the distribution of phonetic segments. Referring to mentalistic definitions of the phoneme, Twaddell (1935) stated "Such a definition is invalid because (1) we have no right to guess about the linguistic workings of an inaccessible 'mind', and (2) we can secure no advantage from such guesses. The linguistic processes of the 'mind' as such are quite simply unobservable; and introspection about linguistic processes is notoriously a fire in a wooden stove." This approach was opposed to that of Edward Sapir, who gave an important role to native speakers' intuitions about where a particular sound or group of sounds fitted into a pattern. Using English [ŋ] as an example, Sapir argued that, despite the superficial appearance that this sound belongs to a group of three nasal consonant phonemes (/m/, /n/ and /ŋ/), native speakers feel that the velar nasal is really the sequence /ŋɡ/. The theory of generative phonology which emerged in the 1960s explicitly rejected the structuralist approach to phonology and favoured the mentalistic or cognitive view of Sapir.

These topics are discussed further in English phonology#Controversial issues.

Correspondence between letters and phonemes

Phonemes are considered to be the basis for alphabetic writing systems. In such systems the written symbols (graphemes) represent, in principle, the phonemes of the language being written. This is most obviously the case when the alphabet was invented with a particular language in mind; for example, the Latin alphabet was devised for Classical Latin, and therefore the Latin of that period enjoyed a near one-to-one correspondence between phonemes and graphemes in most cases, though the devisers of the alphabet chose not to represent the phonemic effect of vowel length. However, because changes in the spoken language are often not accompanied by changes in the established orthography (as well as other reasons, including dialect differences, the effects of morphophonology on orthography, and the use of foreign spellings for some loanwords), the correspondence between spelling and pronunciation in a given language may be highly distorted; this is the case with English, for example.

The correspondence between symbols and phonemes in alphabetic writing systems is not necessarily a one-to-one correspondence. A phoneme might be represented by a combination of two or more letters (digraph, trigraph, etc.), like ⟨sh⟩ in English or ⟨sch⟩ in German (both representing phonemes /ʃ/). Also a single letter may represent two phonemes, as in English ⟨x⟩ representing /gz/ or /ks/. There may also exist spelling/pronunciation rules (such as those for the pronunciation of ⟨c⟩ in Italian) that further complicate the correspondence of letters to phonemes, although they need not affect the ability to predict the pronunciation from the spelling and vice versa, provided the rules are known.
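Handling digraphs in grapheme-to-phoneme conversion requires consuming the longest matching grapheme first, so that ⟨sh⟩ is read as one unit rather than as ⟨s⟩ + ⟨h⟩. The rule table below is a tiny invented fragment for illustration, nowhere near a full English ruleset:

```python
def graphemes_to_phonemes(word, mapping):
    """Greedy longest-match conversion of spelling to phonemes, so that
    multi-letter graphemes such as 'sh' are consumed as single units."""
    phonemes, i = [], 0
    # Try longer graphemes before shorter ones at each position.
    lengths = sorted({len(g) for g in mapping}, reverse=True)
    while i < len(word):
        for n in lengths:
            chunk = word[i:i + n]
            if chunk in mapping:
                phonemes.extend(mapping[chunk])
                i += len(chunk)
                break
        else:
            i += 1  # no rule in the toy table: skip the letter
    return phonemes

# A toy rule fragment (assumption: these spellings only, one reading each).
TOY_RULES = {"sh": ["ʃ"], "s": ["s"], "x": ["k", "s"], "i": ["ɪ"], "p": ["p"]}

print(graphemes_to_phonemes("ship", TOY_RULES))  # ['ʃ', 'ɪ', 'p']
print(graphemes_to_phonemes("six", TOY_RULES))   # ['s', 'ɪ', 'k', 's']
```

Note how ⟨sh⟩ yields one phoneme while the single letter ⟨x⟩ yields two, the two directions of many-to-one correspondence mentioned above.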

In sign languages

Sign language phonemes are bundles of articulation features. Stokoe was the first scholar to describe the phonemic system of ASL. He identified the bundles tab (elements of location, from Latin tabula), dez (the handshape, from designator), sig (the motion, from signation). Some researchers also discern ori (orientation), facial expression or mouthing. Just as with spoken languages, when features are combined, they create phonemes. As in spoken languages, sign languages have minimal pairs which differ in only one phoneme. For instance, the ASL signs for father and mother differ minimally with respect to location while handshape and movement are identical; location is thus contrastive.

Stokoe's terminology and notation system are no longer used by researchers to describe the phonemes of sign languages; William Stokoe's research, while still considered seminal, has been found not to characterize American Sign Language or other sign languages sufficiently. For instance, non-manual features are not included in Stokoe's classification. More sophisticated models of sign language phonology have since been proposed by Brentari, Sandler, and Van der Kooij.

Chereme

Cherology and chereme (from Ancient Greek: χείρ "hand") are synonyms of phonology and phoneme previously used in the study of sign languages. A chereme, as the basic unit of signed communication, is functionally and psychologically equivalent to the phonemes of oral languages, and has been replaced by that term in the academic literature. Cherology, as the study of cheremes in language, is thus equivalent to phonology. The terms are not in use anymore. Instead, the terms phonology and phoneme (or distinctive feature) are used to stress the linguistic similarities between signed and spoken languages.

The terms were coined in 1960 by William Stokoe at Gallaudet University to describe sign languages as true and full languages. Once a controversial idea, the position is now universally accepted in linguistics. Stokoe's terminology, however, has been largely abandoned.
