Sunday, May 3, 2026

Speech recognition

From Wikipedia, the free encyclopedia

Speech recognition, also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text (STT), is a sub-field of computational linguistics concerned with methods and technologies that translate spoken language into text or other interpretable forms.

Speech recognition applications include voice user interfaces, where the user speaks to a device, which "listens" and processes the audio. Common voice applications include interpreting commands for calling, call routing, home automation, and aircraft control. These applications are called direct voice input. Productivity applications include searching audio recordings, creating transcripts, and dictation.

Speech recognition can be used to analyse speaker characteristics, such as identifying native language using pronunciation assessment.

Voice recognition (speaker identification) refers to identifying the speaker, rather than speech contents. Recognizing the speaker can simplify the task of translating speech in systems trained on a specific person's voice. It can also be used to authenticate the speaker as part of a security process.

History

Applications for speech recognition developed over many decades, with progress accelerated due to advances in deep learning and the use of big data. These advances are reflected in an increase in academic papers, and greater system adoption.

Key areas of growth include vocabulary size, more accurate recognition for unfamiliar speakers (speaker independence), and faster processing speed.

Pre-1970

Raj Reddy was the first person to work on continuous speech recognition, as a graduate student at Stanford University in the late 1960s. Previous systems required users to pause after each word. Reddy's system could interpret spoken commands for playing chess.

Around this time, Soviet researchers invented the dynamic time warping (DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary. DTW processed speech by dividing it into short frames (e.g. 10 ms segments) and treating each frame as a unit. Speaker independence, however, remained unsolved.

1970–1990

  • 1971 – DARPA funded a five-year speech recognition research project, Speech Understanding Research, seeking a minimum vocabulary size of 1,000 words. The project considered speech understanding a key to achieving progress in speech recognition, which was later disproved. BBN, IBM, Carnegie Mellon (CMU), and Stanford Research Institute participated.
  • 1972 – The IEEE Acoustics, Speech, and Signal Processing group held a conference in Newton, Massachusetts.
  • 1976 – The first ICASSP was held in Philadelphia, which became a major venue for publishing on speech recognition.

During the late 1960s, Leonard Baum developed the mathematics of Markov chains at the Institute for Defense Analysis. A decade later, at CMU, Raj Reddy's students James Baker and Janet M. Baker began using the hidden Markov model (HMM) for speech recognition. James Baker had learned about HMMs while at the Institute for Defense Analysis. HMMs enabled researchers to combine sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model.

By the mid-1980s, Fred Jelinek's team at IBM created a voice-activated typewriter called Tangora, which could handle a 20,000-word vocabulary. Jelinek's statistical approach placed less emphasis on emulating human brain processes in favor of statistical modelling. (Jelinek's group independently discovered the application of HMMs to speech.) This was controversial among linguists since HMMs are too simplistic to account for many features of human languages. However, the HMM proved to be a highly useful way for modelling speech and replaced dynamic time warping as the dominant speech recognition algorithm in the 1980s.

  • 1982 – Dragon Systems, founded by James and Janet M. Baker, was one of IBM's few competitors.

Practical speech recognition

The 1980s also saw the introduction of the n-gram language model.

  • 1987 – The back-off model enabled language models to use multiple-length n-grams, and CSELT used HMM to recognize languages (in software and hardware, e.g. RIPAC).

At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM. It could take up to 100 minutes to decode 30 seconds of speech.

Practical products included:

  • 1984 – the Apricot Portable was released with support for a vocabulary of up to 4096 words, of which only 64 could be held in RAM at a time.
  • 1987 – a recognizer from Kurzweil Applied Intelligence
  • 1990 – Dragon Dictate, a consumer dictation product.
  • 1992 – AT&T deployed the Voice Recognition Call Processing service to route telephone calls without a human operator. The technology was developed by Lawrence Rabiner and others at Bell Labs.

By the early 1990s, the vocabulary of the typical commercial speech recognition system had exceeded the average human vocabulary. Reddy's former student, Xuedong Huang, developed the Sphinx-II system at CMU. Sphinx-II was the first to do speaker-independent, large vocabulary, continuous speech recognition, and it won DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone. Huang later founded the speech recognition group at Microsoft in 1993. Reddy's student Kai-Fu Lee joined Apple, where, in 1992, he helped develop the Casper speech interface prototype.

Lernout & Hauspie, a Belgium-based speech recognition company, acquired other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. L&H was used in Windows XP. L&H was an industry leader until an accounting scandal destroyed it in 2001. L&H speech technology was bought by ScanSoft, which became Nuance in 2005. Apple licensed Nuance software for its digital assistant Siri.

2000s

In the 2000s, DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002, followed by Global Autonomous Language Exploitation (GALE) in 2005. Four teams participated in EARS: IBM; a team led by BBN with LIMSI and the University of Pittsburgh; Cambridge University; and a team composed of ICSI, SRI, and the University of Washington. EARS funded the collection of the Switchboard telephone speech corpus, which contained 260 hours of recorded conversations from over 500 speakers. The GALE program focused on Arabic and Mandarin broadcast news. Google's first effort at speech recognition came in 2007 after recruiting Nuance researchers. Its first product, GOOG-411, was a telephone-based directory service.

Since at least 2006, the U.S. National Security Agency has employed keyword spotting, allowing analysts to index large volumes of recorded conversations and identify speech containing "interesting" keywords. Other government research programs focused on intelligence applications, such as DARPA's EARS program and IARPA's Babel program.

In the early 2000s, speech recognition was dominated by hidden Markov models combined with feed-forward artificial neural networks (ANN). Later, speech recognition was taken over by long short-term memory (LSTM), a recurrent neural network (RNN) published by Sepp Hochreiter & Jürgen Schmidhuber in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks that require memories of events that happened thousands of discrete time steps earlier, which is important for speech.

Around 2007, LSTMs trained with Connectionist Temporal Classification (CTC) began to outperform traditional speech recognition in certain applications. In 2015, Google reported a 49 percent error-rate reduction in its speech recognition via CTC-trained LSTM. Transformers, a type of neural network based solely on attention, were adopted in computer vision and language modelling, and were then applied to speech recognition.

Deep feed-forward (non-recurrent) networks for acoustic modelling were introduced in 2009 by Geoffrey Hinton and his students at the University of Toronto, and by Li Deng and colleagues at Microsoft Research. In contrast to the prior incremental improvements, deep learning decreased error rates by 30%.

Both shallow and deep forms (e.g., recurrent nets) of ANNs had been explored since the 1980s. However, these methods never outperformed the non-uniform, internally handcrafted Gaussian mixture model/hidden Markov model (GMM-HMM) technology. Difficulties analyzed in the 1990s included vanishing gradients and weak temporal correlation structure in the neural models, compounded by insufficient training data and limited computing power. Most speech recognition research therefore pursued generative modelling approaches until deep learning prevailed; Hinton et al. and Deng et al. later described how their collaboration with colleagues across several groups revived deep neural networks for speech recognition.

2010s

By the early 2010s, speech recognition was clearly differentiated from speaker recognition, and speaker independence was considered a major breakthrough. Until then, systems required a "training" period for each voice.

In 2017, Microsoft researchers reached the human parity milestone of transcribing conversational speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to improve accuracy. The error rate was reported to be as low as that of four professional human transcribers working together on the same benchmark.

Models, methods, and algorithms

Both acoustic modeling and language modeling are important parts of statistically-based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modelling is also used in many other natural language processing applications, such as document classification or statistical machine translation.

Hidden Markov models

Speech recognition systems are based on HMMs. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time scale (e.g. 10 milliseconds), speech can be approximated as a stationary process. Speech can be thought of as a Markov model for many stochastic purposes.

HMMs are popular because they can be trained automatically and are simple and computationally feasible. An HMM outputs a sequence of n-dimensional real-valued vectors (where n is an integer such as 10), outputting one every 10 milliseconds. The vectors consist of cepstral coefficients, obtained by a Fourier transform of a short window of speech and decorrelating the spectrum using a cosine transform, then taking the first (most significant) coefficients. The HMM tends to have, in each state, a statistical distribution that is a mixture of diagonal covariance Gaussians, which give a likelihood for each observed vector. Each word, or (for more general speech recognition systems), each phoneme, has a different output distribution; an HMM for a sequence of words or phonemes is made by concatenating the individual trained HMMs for the separate words and phonemes.
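
To make the feature-extraction recipe above concrete, the following is a minimal illustrative sketch (not part of the original article) in Python. It frames a signal into short windows, takes a Fourier transform, and applies a cosine transform to obtain a handful of cepstral coefficients per frame; a real front end would typically also apply a mel filterbank before the cosine transform, and the sample rate, window length, and number of coefficients used here are arbitrary choices.

    import numpy as np
    from scipy.fftpack import dct

    def cepstral_features(signal, sample_rate=16000, frame_ms=25, step_ms=10, n_coeffs=13):
        # Frame the signal into short windows (one every 10 ms by default).
        frame_len = int(sample_rate * frame_ms / 1000)
        step = int(sample_rate * step_ms / 1000)
        window = np.hamming(frame_len)
        features = []
        for start in range(0, len(signal) - frame_len + 1, step):
            frame = signal[start:start + frame_len] * window
            spectrum = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum of the window
            log_spec = np.log(spectrum + 1e-10)                 # compress the dynamic range
            cepstrum = dct(log_spec, norm='ortho')[:n_coeffs]   # cosine transform, keep first coefficients
            features.append(cepstrum)
        return np.array(features)                               # one feature vector per 10 ms frame

    # Example: one second of (synthetic) audio yields roughly 100 frames of 13 coefficients.
    audio = np.random.randn(16000)
    print(cepstral_features(audio).shape)                       # (98, 13)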

Speech recognition systems use combinations of standard techniques to improve results. A typical large-vocabulary system applies context dependency for the phonemes (so that phonemes with different left and right context have different realizations as HMM states). It uses cepstral normalization to handle speaker and recording conditions. It might use vocal tract length normalization (VTLN) for male-female normalization and maximum likelihood linear regression (MLLR) for more general adaptation. The features use delta and delta-delta coefficients to capture speech dynamics, and in addition might use heteroscedastic linear discriminant analysis (HLDA); or might use splicing and LDA-based projection, followed by HLDA or a global semi-tied covariance transform (also known as maximum likelihood linear transform (MLLT)). Many systems use discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximum mutual information (MMI), minimum classification error (MCE), and minimum phone error (MPE).
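
As a rough illustration of two of the techniques just listed, the sketch below (an assumption-laden example, not taken from any particular toolkit) applies cepstral mean normalization and computes delta and delta-delta coefficients over a matrix of cepstral features; the window size and feature dimensions are arbitrary.

    import numpy as np

    def cepstral_mean_normalize(feats):
        # Subtract the per-coefficient mean over the utterance to reduce
        # speaker and recording-channel effects.
        return feats - feats.mean(axis=0, keepdims=True)

    def deltas(feats, window=2):
        # Regression-based delta coefficients capturing feature dynamics over time.
        T = len(feats)
        padded = np.pad(feats, ((window, window), (0, 0)), mode='edge')
        denom = 2 * sum(n * n for n in range(1, window + 1))
        return np.array([
            sum(n * (padded[t + window + n] - padded[t + window - n])
                for n in range(1, window + 1)) / denom
            for t in range(T)
        ])

    feats = np.random.randn(100, 13)            # e.g. 100 frames of 13 cepstral coefficients
    normalized = cepstral_mean_normalize(feats)
    d = deltas(normalized)                      # delta coefficients
    dd = deltas(d)                              # delta-delta (acceleration) coefficients
    full = np.hstack([normalized, d, dd])       # a typical 39-dimensional feature vector
    print(full.shape)                           # (100, 39)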

Dynamic time warping (DTW)-based speech recognition

Dynamic time warping was historically used for speech recognition, but was later displaced by HMM.

Dynamic time warping measures similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns could be detected, even if in one video a person was walking slowly and in another was walking more quickly, or even if accelerations and decelerations came during one observation. DTW has been applied to video, audio, and graphics – any data that can be turned into a linear representation can be analyzed with DTW.

This could handle speech at different speaking speeds. In general, it allows an optimal match between two sequences (e.g., time series) with certain restrictions. The sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of HMMs.
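
The following minimal Python sketch (illustrative only) shows the standard dynamic-programming formulation of DTW: a cost matrix is filled so that each cell extends the cheapest of the three allowed warping moves, yielding an optimal non-linear alignment between two sequences of feature vectors.

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        # seq_a: (n, d) and seq_b: (m, d) arrays of feature vectors (e.g. cepstra).
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local frame distance
                # Extend the cheapest of the three allowed warping moves.
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # A slow and a fast rendition of the same pattern still align closely.
    template = np.sin(np.linspace(0, 3 * np.pi, 60))[:, None]
    utterance = np.sin(np.linspace(0, 3 * np.pi, 90))[:, None]    # same content, "spoken" more slowly
    print(dtw_distance(template, utterance))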

Neural networks

Neural networks became interesting in the late 1980s before beginning to dominate in the 2010s. Neural networks have been used in many aspects of speech recognition, such as phoneme classification, phoneme classification through multi-objective evolutionary algorithms, isolated word recognition, audiovisual speech recognition, audiovisual speaker recognition, and speaker adaptation.

Neural networks make fewer explicit assumptions about feature statistical properties than HMMs. When used to estimate the probabilities of a speech segment, neural networks allow natural and efficient discriminative training. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words, early neural networks were rarely successful for continuous recognition because of their limited ability to model temporal dependencies.

One approach was to use neural networks for feature transformation or dimensionality reduction. More recently, however, LSTM and related recurrent neural networks (RNNs), time delay neural networks (TDNNs), and transformers have demonstrated improved performance.

Deep feedforward and recurrent neural networks

Researchers are exploring deep neural networks (DNNs) and denoising autoencoders. A DNN is a type of artificial neural network that includes multiple hidden layers between the input and output. Like simpler neural networks, DNNs can model complex, non-linear relationships. However, their deeper architecture allows them to build more sophisticated representations that combine features from earlier layers. This gives them a powerful ability to learn and recognize complex patterns in speech data.

A major breakthrough in using DNNs for large vocabulary speech recognition came in 2010. In a collaboration between industry and academia, researchers used DNNs with large output layers based on context-dependent HMM states that were created using decision trees. This approach significantly improved performance.
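
As a rough sketch of what such a hybrid acoustic model can look like (assuming PyTorch; the layer sizes, context window, and number of context-dependent states below are invented for illustration), the code maps a stacked window of feature frames to posteriors over context-dependent HMM states.

    import torch
    import torch.nn as nn

    class HybridAcousticModel(nn.Module):
        def __init__(self, feat_dim=40, context=5, n_senones=6000):
            super().__init__()
            in_dim = feat_dim * (2 * context + 1)     # stacked window of feature frames
            self.net = nn.Sequential(
                nn.Linear(in_dim, 2048), nn.ReLU(),
                nn.Linear(2048, 2048), nn.ReLU(),
                nn.Linear(2048, 2048), nn.ReLU(),
                nn.Linear(2048, n_senones),           # one output per context-dependent HMM state
            )

        def forward(self, x):
            # Log-posteriors over HMM states; a hybrid decoder divides by the
            # state priors to obtain scaled likelihoods for the HMM search.
            return torch.log_softmax(self.net(x), dim=-1)

    model = HybridAcousticModel()
    batch = torch.randn(32, 40 * 11)                  # 32 windows of 11 stacked frames
    print(model(batch).shape)                         # torch.Size([32, 6000])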

A core idea behind deep learning is to eliminate the need for manually designed features and instead learn directly from input data. This was first demonstrated using deep autoencoders trained on raw spectrograms or linear filter-bank features. These models outperformed traditional Mel-Cepstral features, which rely on fixed transformations. More recently, researchers showed that waveforms can achieve excellent results in large-scale speech recognition.

End-to-end learning

Since 2014, much research has considered "end-to-end" ASR. Traditional phonetic-based (i.e., HMM-based) approaches required separate components and training for the pronunciation, acoustic, and language models. End-to-end models learn all of these components at once, which simplifies training and deployment. For example, an n-gram language model is required for all HMM-based systems, and a typical n-gram language model often takes gigabytes of memory, making it impractical to deploy on mobile devices. Consequently, ASR systems from Google and Apple (as of 2017) were deployed on servers and required a network connection to operate.

The first attempt at end-to-end ASR was the Connectionist Temporal Classification (CTC)-based system introduced by Alex Graves of Google DeepMind and Navdeep Jaitly of the University of Toronto in 2014. The model consisted of RNNs and a CTC layer. The RNN-CTC model learns the pronunciation and acoustic model jointly; however, it is incapable of learning the language model due to conditional independence assumptions, similar to an HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but they make many common spelling mistakes and must rely on a separate language model to finalize transcripts. Later, Baidu's Deep Speech 2 (2015) expanded on this approach by replacing hand-engineered pipeline components with a single end-to-end deep neural network trained on over 11,000 hours of English and 9,400 hours of Mandarin speech. The system matched or exceeded human-level transcription accuracy on several benchmarks and demonstrated that a single architecture could generalize across two linguistically distinct languages.
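
A minimal sketch of the RNN-plus-CTC idea, assuming PyTorch, is shown below. The recurrent encoder emits per-frame log-probabilities over characters plus a blank symbol, and the CTC loss sums over all alignments between audio frames and the target character sequence; the vocabulary, feature dimension, and sequence lengths are arbitrary.

    import torch
    import torch.nn as nn

    vocab = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz ")   # blank symbol at index 0

    class CTCRecognizer(nn.Module):
        def __init__(self, feat_dim=40, hidden=256, n_classes=len(vocab)):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
            self.proj = nn.Linear(2 * hidden, n_classes)

        def forward(self, feats):                     # feats: (batch, time, feat_dim)
            out, _ = self.rnn(feats)
            return self.proj(out).log_softmax(-1)     # (batch, time, classes)

    model = CTCRecognizer()
    ctc_loss = nn.CTCLoss(blank=0)

    feats = torch.randn(4, 200, 40)                   # 4 utterances of 200 feature frames
    targets = torch.randint(1, len(vocab), (4, 30))   # character indices (no blanks)
    input_lengths = torch.full((4,), 200, dtype=torch.long)
    target_lengths = torch.full((4,), 30, dtype=torch.long)

    log_probs = model(feats).transpose(0, 1)          # CTCLoss expects (time, batch, classes)
    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                                   # trained end to end on (audio, text) pairs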

In 2016, the University of Oxford presented LipNet, the first end-to-end sentence-level lipreading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance on a restricted dataset. A large-scale convolutional RNN-CTC architecture was presented in 2018 by Google DeepMind, achieving performance 6 times better than human experts. In 2019, Nvidia launched two CNN-CTC ASR models, Jasper and QuartzNet, with an overall word error rate (WER) of 3%. Similar to other deep learning applications, transfer learning and domain adaptation are important strategies for reusing and extending the capabilities of deep learning models, particularly given the small size of available corpora in many languages and/or specific domains.

In 2018, researchers at MIT Media Lab announced preliminary work on AlterEgo, a device that uses electrodes to read the neuromuscular signals users make as they subvocalize. They trained a convolutional neural network to translate the electrode signals into words.

Attention-based models

Attention-based ASR models were introduced by Chan et al. of Carnegie Mellon University and Google Brain, and by Bahdanau et al. of the University of Montreal, in 2016. The model, named "Listen, Attend and Spell" (LAS), literally "listens" to the acoustic signal, pays "attention" to all parts of the signal, and "spells" out the transcript one character at a time. Unlike CTC-based models, attention-based models do not make conditional-independence assumptions and can learn all the components of a speech recognizer directly. This means that during deployment no a priori language model is required, making the approach less demanding for applications with limited memory.
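
The core "attend" step can be sketched as follows (assuming PyTorch; a real LAS model adds a pyramidal listener, learned score projections, and a character-level decoder, all omitted here). At each output step the decoder state is scored against every encoder frame, and the softmax-weighted sum of encoder states forms the context vector used to predict the next character.

    import torch
    import torch.nn.functional as F

    def attend(decoder_state, encoder_states):
        # decoder_state: (batch, dim); encoder_states: (batch, time, dim)
        scores = torch.bmm(encoder_states, decoder_state.unsqueeze(-1)).squeeze(-1)       # (batch, time)
        weights = F.softmax(scores, dim=-1)                                               # attention over audio frames
        context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)              # (batch, dim)
        return context, weights

    encoder_states = torch.randn(2, 150, 256)   # "listen": encoded audio frames
    decoder_state = torch.randn(2, 256)         # current "spell" decoder state
    context, weights = attend(decoder_state, encoder_states)
    print(context.shape, weights.shape)         # torch.Size([2, 256]) torch.Size([2, 150])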

Attention-based models immediately outperformed CTC models (with or without an external language model) and continued improving. Latent Sequence Decomposition (LSD) was proposed by Carnegie Mellon University, MIT, and Google Brain to directly emit sub-word units that are more natural than English characters. The University of Oxford and Google DeepMind extended LAS to "Watch, Listen, Attend and Spell" (WLAS) to handle lip reading and surpassed human-level performance.

Applications

In-car systems

Voice commands may be used to initiate phone calls, select radio stations, or play music. Voice recognition capabilities vary across car make and model. Some models offer natural-language speech recognition, allowing the driver to use full sentences and common phrases in a conversational style. With such systems, fixed commands are not required.

Education

Automatic pronunciation assessment is the use of speech recognition to verify the correctness of speech, as distinguished from assessment by a person. Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription) but instead, compares speech to a reference model for the words spoken, sometimes with inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress. Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams and Amira Learning. Pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia.

Assessing intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments, from words with multiple correct pronunciations, and from phoneme coding errors in digital pronunciation dictionaries. In 2022, researchers found that some newer speech to text systems, based on end-to-end reinforcement learning to map audio signals directly into words, produce word and phrase confidence scores closely correlated with listener intelligibility. In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.

Health care

Medical documentation

In the health care sector, speech recognition can be implemented in front-end or back-end medical documentation processes. In front-end speech recognition, the provider dictates into a speech-recognition engine, words are displayed as they are recognized, and the speaker is responsible for editing and signing off on the document. In back-end or deferred speech recognition the provider speaks into a digital dictation system, the voice is routed through a speech-recognition machine, and a draft document is routed along with the voice file to an editor, who edits/finalizes the draft and final report.

A major issue is that the American Recovery and Reinvestment Act of 2009 (ARRA) provides substantial financial benefits to physicians who utilize an Electronic Health Record (EHR) that complies with "Meaningful Use" standards. These standards require that substantial data be maintained by the EHR. The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note or discharge summary; the ergonomic gains of using speech recognition to enter structured discrete data (e.g., numeric values or codes from a list or a controlled vocabulary) are relatively minimal for people who are sighted and who can operate a keyboard and mouse.

A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities. A large part of a clinician's interaction with EHR involves navigation through the user interface that is heavily dependent on keyboard and mouse; voice-based navigation provides only modest ergonomic benefits. By contrast, many highly customized systems for radiology or pathology dictation implement voice "macros", where the use of certain phrases – e.g., "normal report", will automatically fill in a large number of default values and/or generate boilerplate, which vary with the type of exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system.

Therapeutic use

Prolonged use of speech recognition software in conjunction with word processors has shown benefits to short-term-memory restrengthening in brain AVM patients who have been treated with resection. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.

Military

Aircraft

Substantial efforts have been devoted to the test and evaluation of speech recognition in fighter aircraft. Of particular note have been the US programme in speech recognition for the Advanced Fighter Technology Integration (AFTI)/F-16 aircraft (F-16 VISTA), the programme in France for Mirage aircraft, and UK programmes dealing with a variety of aircraft platforms. In these programmes, speech recognizers have been operated successfully, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight display.

Working with Swedish pilots flying the JAS-39 Gripen, Englund (2004) reported that recognition deteriorated with increasing g-loads. The study concluded that adaptation greatly improved the results in all cases and that the introduction of models for breathing was shown to improve recognition scores significantly. Contrary to what might have been expected, no effects of the broken English of the speakers were found. Spontaneous speech caused problems for the recognizer, as might have been expected. A restricted vocabulary, and above all, a proper syntax, could thus be expected to improve recognition accuracy substantially.

The Eurofighter Typhoon employs a speaker-dependent system, requiring each pilot to create a template. The system is not used for safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but is used for many other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major benefit in reducing pilot workload, and allows the pilot to assign targets to his aircraft with two voice commands or to any of his wingmen with only five commands.

Speaker-independent systems are under test for the F-35 Lightning II (JSF) and the Alenia Aermacchi M-346 Master lead-in fighter trainer. These systems have produced word accuracy scores in excess of 98%.

Helicopters

The problems of achieving high recognition accuracy under stress and noise are particularly relevant in the helicopter environment as well as in the fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, because of the high noise levels and because helicopter pilots generally do not wear a facemask, which would reduce acoustic noise in the microphone. Substantial test and evaluation programmes have been carried out, notably by the U.S. Army Avionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France included speech recognition in the Puma helicopter. Voice applications include control of communication radios, navigation systems, and an automated target handover system.

The overriding issue for voice is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overall speech technology in order to consistently achieve performance improvements in operational settings.

Air traffic control

Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a trainer to act as a "pseudo-pilot", engaging in a voice dialog with the trainee controller, which simulates the dialog that the controller would have with real pilots. Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel.

In theory, air controller tasks are characterized by highly structured speech as the primary output, reducing the difficulty of the speech recognition task. In practice, this is rarely the case. FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives fewer than 150 examples of such phrases, the number of phrases supported by one of the simulation vendors' speech recognition systems is in excess of 500,000.

The USAF, USMC, US Army, US Navy, and FAA as well as international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada use ATC simulators with speech recognition.

People with disabilities

Speech recognition programs can provide many benefits to those with disabilities. For individuals who are deaf or hard of hearing, speech recognition software can be used to generate captions of conversations. Additionally, individuals who are blind (see blindness and education) or have poor vision can benefit from listening to textual content, as well as garner more functionality from a computer by issuing commands with their voice.

The use of voice recognition software, in conjunction with a digital audio recorder and a personal computer running word-processing software, has proven useful for restoring damaged short-term memory capacity in individuals who have suffered a stroke or have undergone a craniotomy.

Speech recognition has proven very useful for those who have difficulty using their hands due to causes ranging from mild repetitive stress injuries to disabilities that preclude the use of conventional computer input devices. Individuals with physical disabilities can use voice commands and transcription to navigate electronics hands-free. In fact, people who developed RSI from keyboard use became an early and urgent market for speech recognition. Speech recognition is used in deaf telephony, such as voicemail to text, relay services, and captioned telephone. Individuals with learning disabilities who struggle with thought-to-paper communication may benefit from the software, but the product's fallibility remains a significant consideration for many. In addition, speech to text technology is only an effective aid for those with intellectual disabilities if the proper training and resources are provided (e.g. in the classroom setting).

This type of technology can help those with dyslexia, but the potential benefits regarding other disabilities are still in question. Mistakes made by the software hinder its effectiveness, since misheard words take more time to fix.

Other domains

ASR is now commonplace in the field of telephony. In telephony systems, ASR is predominantly used in contact centers by integrating it with IVR systems.

It is becoming more widespread in computer gaming and simulation.

Despite the high level of integration with word processing in general personal computing, in the field of document production, ASR has not seen the expected increases in use.

The improvement of mobile processor speeds has made speech recognition practical in smartphones. Speech is used mostly as a part of a user interface, for creating predefined or custom speech commands.

Performance

The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate (WER), whereas speed is measured in elapsed time. Other measures of accuracy include Single Word Error Rate (SWER) and Command Success Rate (CSR).

Speech recognition is complicated by many properties of speech. Vocalizations vary in terms of accent, pronunciation, articulation, roughness, dialect, nasality, pitch, volume, and speed. Speech is distorted by background noise, echoes, and recording characteristics. Accuracy of speech recognition may vary with the following:

  • Vocabulary size and confusability
  • Speaker dependence versus independence
  • Isolated, discontinuous, or continuous speech
  • Task and language constraints
  • Read versus spontaneous speech
  • Adverse conditions

Accuracy

The accuracy of speech recognition may vary depending on the following factors:

  • Error rates increase as the vocabulary size grows:
e.g. the 10 digits "zero" to "nine" can be recognized essentially perfectly, but vocabulary sizes of 200, 5,000, or 100,000 may have error rates of 3%, 7%, or 45% respectively.
  • Vocabulary is hard to recognize if it contains confusing letters:
e.g. the 26 letters of the English alphabet are difficult to discriminate because they are confusable words (most notoriously, the E-set: "B, C, D, E, G, P, T, V, Z" – when "Z" is pronounced "zee" rather than "zed", depending on region); an 8% error rate is considered good for this vocabulary.
  • Speaker dependence vs. independence:
    • A speaker-dependent system is intended for use by a single speaker.
    • A speaker-independent system is intended for use by any speaker (more difficult).
  • Isolated, discontinuous or continuous speech
    • With isolated speech, single words are used, which is easier to recognize.
    • With discontinuous speech, full sentences separated by silence are used. The silence between sentences makes recognition easier, much as with isolated speech.
    • With continuous speech, naturally spoken sentences are used, which are harder to recognize.
  • Task and language constraints can inform the recognition
    • The requesting application may dismiss the hypothesis "The apple is red."
    • Constraints may be semantic; rejecting "The apple is angry."
    • Syntactic; rejecting "Red is apple the."
    • Constraints are often represented by grammar.
  • Read vs. spontaneous speech
    • When a person reads, it is usually in a context that has been previously prepared.
    • When a person speaks spontaneously, recognition must deal with disfluencies (such as "uh" and "um", false starts, incomplete sentences, stuttering, coughing, and laughter) and limited vocabulary.
  • Adverse conditions
    • Environmental noise (e.g., in a car or factory)
    • Acoustic distortions (e.g., echoes, room acoustics)

Speech recognition is a multi-level pattern recognition task.

  • Acoustic signals are structured into a hierarchy of units, e.g. phonemes, words, phrases, and sentences;
  • Each level provides additional constraints; e.g., known word pronunciations or legal word sequences, which can compensate for errors or uncertainties at a lower level;

This hierarchy of constraints improves accuracy. By combining decisions probabilistically at all lower levels, and making ultimate decisions only at the highest level, speech recognition is broken into several phases. Computationally, it is a problem in which a sound pattern has to be recognized or classified into a category that represents a meaning to a human. Every acoustic signal can be broken into smaller sub-signals. As the more complex sound signal is divided, different levels are created: at the top level are complex sounds, made of simpler sounds at the lower levels. At the lowest level, simple and more probabilistic rules apply; as these sounds are combined into more complex sounds at the upper levels, a new set of more deterministic rules predicts what each complex sound represents, and at the highest level deterministic rules determine the meaning of complex expressions. Neural network approaches to this problem use the following steps:

  • Digitize the speech – for telephone speech, 8000 samples per second are captured;
  • Compute spectral-domain features of the speech (with a Fourier transform); features are computed every 10 ms, with one 10 ms section called a frame;

Sound is produced by vibration of air (or some other medium). A sound wave has two measures: amplitude (strength) and frequency (vibrations per second). Accuracy can be computed with the help of WER, which is calculated by aligning the recognized word sequence against the reference word sequence using dynamic string alignment. A complication in computing WER is that the recognized and reference word sequences can differ in length, which the alignment must account for.

The formula to compute the word error rate (WER) is:

WER = (s + d + i) / n

where s is the number of substitutions, d is the number of deletions, i is the number of insertions, and n is the number of words in the reference.

Alternatively, the word recognition rate (WRR) can be computed. The formula is:

WRR = 1 - WER = (n - s - d - i) / n = (h - i) / n

where h = n - (s + d) is the number of correctly recognized words.
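
A small illustrative Python function (not from the article) that computes WER in this way: the recognized and reference word sequences are aligned with dynamic programming (edit distance), and the resulting total of substitutions, deletions, and insertions is divided by the number of reference words.

    def word_error_rate(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        n, m = len(ref), len(hyp)
        # dist[i][j] = minimum edits to turn the first i reference words
        # into the first j hypothesis words.
        dist = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            dist[i][0] = i                      # deletions
        for j in range(m + 1):
            dist[0][j] = j                      # insertions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + sub)  # substitution or match
        return dist[n][m] / n

    print(word_error_rate("the cat sat on the mat", "the cat sat mat"))  # 2 errors / 6 words ≈ 0.33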

Security

Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like "Alexa" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action. Voice-controlled devices may be accessible to unauthorized users. Attackers may be able to gain access to personal information, like calendars, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases.

Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempts to send commands without people noticing. The other adds small, inaudible distortions to other speech or music that are specially crafted to confuse the specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system.

Transcription (linguistics)

From Wikipedia, the free encyclopedia

In linguistics, transcription is the systematic representation of spoken language in written form. The source can either be utterances (speech or sign language) or preexisting text in another writing system.

Transcription should not be confused with translation, which means representing the meaning of a source-language text in a target language (e.g. Los Angeles (from source-language Spanish) means The Angels in the target language English); or with transliteration, which means representing the spelling of a text from one script to another.

In the academic discipline of linguistics, transcription is an essential part of the methodologies of (among others) phonetics, conversation analysis, dialectology, and sociolinguistics. It also plays an important role for several subfields of speech technology. Common examples for transcriptions outside academia are the proceedings of a court hearing such as a criminal trial (by a court reporter) or a physician's recorded voice notes (medical transcription). This article focuses on transcription in linguistics.

Phonetic and orthographic transcription

There are two main types of linguistic transcription. Phonetic transcription focuses on phonetic and phonological properties of spoken language. Systems for phonetic transcription thus furnish rules for mapping individual sounds or phones to written symbols. Systems for orthographic transcription, by contrast, consist of rules for mapping spoken words onto written forms as prescribed by the orthography of a given language. Phonetic transcription operates with specially defined character sets, usually the International Phonetic Alphabet.

The type of transcription chosen depends mostly on the context of usage. Because phonetic transcription strictly foregrounds the phonetic nature of language, it is mostly used for phonetic or phonological analyses. Orthographic transcription, however, has a morphological and a lexical component alongside the phonetic component (which aspect is represented to which degree depends on the language and orthography in question). This form of transcription is thus more convenient wherever semantic aspects of spoken language are transcribed. Phonetic transcription is more systematic in a scientific sense, but it is also more difficult to learn, more time-consuming to carry out and less widely applicable than orthographic transcription.

As a theory

Mapping spoken language onto written symbols is not as straightforward a process as may seem at first glance. Written language is an idealization, made up of a limited set of clearly distinct and discrete symbols. Spoken language, on the other hand, is a continuous (as opposed to discrete) phenomenon, made up of a potentially unlimited number of components. There is no predetermined system for distinguishing and classifying these components and, consequently, no preset way of mapping these components onto written symbols.

The literature is relatively consistent in pointing out the non-neutrality of transcription practices. There is not and cannot be a neutral transcription system; knowledge of social culture enters directly into the making of a transcript and is captured in its texture (Baker, 2005).

Transcription systems

Transcription systems are sets of rules which define how spoken language is to be represented in written symbols. Most phonetic transcription systems are based on the International Phonetic Alphabet or, especially in speech technology, on its derivative SAMPA.

Examples for orthographic transcription systems (all from the field of conversation analysis or related fields) are:

CA (conversation analysis)

Arguably the first system of its kind, originally sketched in (Sacks et al. 1978), later adapted for the use in computer readable corpora as CA-CHAT by (MacWhinney 2000). The field of Conversation Analysis itself includes a number of distinct approaches to transcription and sets of transcription conventions. These include, among others, Jefferson Notation. To analyze conversation, recorded data is typically transcribed into a written form that is agreeable to analysts. There are two common approaches. The first, called narrow transcription, captures the details of conversational interaction such as which particular words are stressed, which words are spoken with increased loudness, points at which the turns-at-talk overlap, how particular words are articulated, and so on. If such detail is less important, perhaps because the analyst is more concerned with the overall gross structure of the conversation or the relative distribution of turns-at-talk amongst the participants, then a second type of transcription known as broad transcription may be sufficient (Williamson, 2009).

Jefferson Transcription System

The Jefferson Transcription System is a set of symbols, developed by Gail Jefferson, which is used for transcribing talk. Having had some previous experience in transcribing when she was hired in 1963 as a clerk typist at the UCLA Department of Public Health to transcribe sensitivity-training sessions for prison guards, Jefferson began transcribing some of the recordings that served as the materials out of which Harvey Sacks' earliest lectures were developed. Over four decades, for the majority of which she held no university position and was unsalaried, Jefferson's research into talk-in-interaction has set the standard for what became known as conversation analysis (CA). Her work has greatly influenced the sociological study of interaction, but also disciplines beyond, especially linguistics, communication, and anthropology. This system is employed universally by those working from the CA perspective and is regarded as having become a near-globalized set of instructions for transcription.

DT (discourse transcription)

A system described in (DuBois et al. 1992), used for transcription of the Santa Barbara Corpus of Spoken American English (SBCSAE), later developed further into DT2.

GAT (Gesprächsanalytisches Transkriptionssystem – Conversation analytic transcription system)

A system described in (Selting et al. 1998), later developed further into GAT2 (Selting et al. 2009), widely used in German speaking countries for prosodically oriented conversation analysis and interactional linguistics.

HIAT (Halbinterpretative Arbeitstranskriptionen – Semiinterpretative working transcriptions)

Arguably the first system of its kind, originally described in (Ehlich and Rehbein 1976) – see (Ehlich 1992) for an English reference - adapted for the use in computer readable corpora as (Rehbein et al. 2004), and widely used in functional pragmatics.[6][7][8]

Software

Transcription was originally a process carried out manually, i.e. with pencil and paper, using an analogue sound recording stored on, e.g., a Compact Cassette. Nowadays, most transcription is done on computers. Recordings are usually digital audio files or video files, and transcriptions are electronic documents. Specialized computer software exists to assist the transcriber in efficiently creating a digital transcription from a digital recording.

Two types of transcription software can be used to assist the process of transcription: one facilitates manual transcription and the other automated transcription. For the former, the work is still very much done by a human transcriber who listens to a recording and types up what is heard on a computer; this type of software is often a multimedia player with functionality such as playback or speed control. For the latter, automated transcription is achieved by a speech-to-text engine that converts audio or video files into electronic text. Some of this software also includes annotation functions.


Friday, May 1, 2026

Uses of open science

From Wikipedia, the free encyclopedia

The open science movement has expanded the uses of scientific output beyond specialized academic circles.

The non-academic audience of journals and other scientific outputs has always been significant but was not recorded by the leading metrics of scientific reception, which favor citation data. In the late 1990s, the first open-access online publications started to attract a large number of individual visits. This transformation has renewed theories of scientific dissemination, as direct access to publications curtailed the classic model of scientific popularization. Social impact and potential uses by lay readers have become focal points of discussion in the development of open science platforms and infrastructures.

Analysis of open science uses has required the development of new methods, including log analysis, crosslinking analysis or altmetrics, as the standard bibliometric approach failed to record the non-academic reception of scientific productions.

In the 2010s, several detailed studies were devoted to the reception of specific open science platforms due to the increasing availability of use data. Log analysis and surveys showed that professional academics do not make up the majority of the audience, as recurrent reader profiles include students, non-academic professionals (policy makers, industrial R&D, knowledge workers) and "private citizens" with various motivations (personal health, curiosity, hobby). Traffic on open science platforms is stimulated by a larger ecosystem of knowledge sharing and popularization, which includes non-academic productions like blogs. Non-academic audiences tend to prefer the use of local language, which has created new incentives in favor of linguistic diversity in science.

Concepts and definition

Bibliometrics and its limitations

After the Second World War, the reception of scientific publications has been increasingly measured by quantitative counts of citations. The field of bibliometrics coalesced in parallel to the development of the first computed search engine, the Science Citation Index, originally established by Eugene Garfield in 1962. Founding figures of the field, like the British historian of science Derek John de Solla Price, were proponents of bibliometric reductionism, i.e., the reduction of all possible bibliometric indicators to citation data and citation graphs. Bibliometric indicators, like the Impact Factor, have had a significant influence over research policy and research evaluation since the 1970s.

Academic search engines, citation data collection, and the related metrics were intentionally designed to favor English-speaking journals. Until the development of open science platforms, "very little [was] actually known about the impact of Latin American journals overall". The use of standard bibliometric indicators like the impact factor yielded a very limited outlook on the breadth and diversity of the academic publishing ecosystem in this region and other non-Western areas: "Putting aside issues of equity, the underrepresentation and sheer low number of journals from developing countries mean that journals that are geared towards the developing world will have less of its citations counted than one geared towards journals that are in the dataset."

In the early developments, the open science movement partly coopted the standard tools of bibliometrics and quantitative evaluation: "the fact that no reference was made to metadata in the main OA declarations (Budapest, Berlin, Bethesda) has led to a paradoxical situation (…) it was through the use of the Web of Science that OA advocates were eager to show how much accessibility led to a citation advantage compared to paywalled articles." After 2000, an important bibliometric literature was devoted to the citation advantage of open access publications.

By the end of the 2000s, the impact factor and other metrics had been increasingly held responsible for a systemic lock-in of prestigious non-accessible sources. Key figures of the open science movement, such as Stevan Harnad, called for the creation of "open access scientometrics" that would take "advantage of the wealth of usage and impact metrics enabled by the multiplication of online, full-text, open access digital archives." As the public of open science expanded beyond academic circles, new metrics should aim for "measuring the broader societal impacts of scientific research."

Non-academic audience

Academic journals have always had a significant non-academic audience, be they students, professionals, or amateurs. In 2000, one-third of these readers had never authored a scientific publication. This rate may be higher for social science journals, which may also act as intellectual periodicals. During the second half of the 20th century, the non-academic audience may have continuously expanded in Western countries, along with the increasing prevalence of high school education: "the percentage of U.S. adults with a minimal level of understanding of the meaning of scientific study has increased from 12 percent in 1957 to 21 percent in 1999".

The prevalence of a non-academic audience raises additional issues about the relevance and scope of classic bibliometric measures, as these readers would "never appear in citation data". The infrastructures and business models put in place by leading scientific publishers do not consider non-academic uses. Following the serials crisis of the 1980s and the inflation of subscription prices, major journals have largely become unattainable for lay readers or independent researchers not affiliated with a large research institution. Search engines and bibliographic databases developed since the 1960s and 1970s were meant to be used by professional librarians. Leading scientific publishers tacitly rely on a "gap" model of scientific reception, in which specialized scientific knowledge is not directly accessible but mediated and popularized.

The shift of academic journals to electronic publishing and open access has underlined the significant discrepancy between citation counts and actual readership. By the late 1990s, online journals and archive repositories had evidently attracted a very large audience: "Within individual disciplines the change has been nearly instantaneous. As an example, in mid-1997 the number of papers downloaded from astronomy's digital library, the Smithsonian/NASA Astrophysics Data System (ADS; ads.harvard.edu) exceeded the sum of all the papers read in all of astronomy's print libraries". Log studies have regularly underlined that open access publications have much higher rates of use and downloading than publications behind a paywall.

The enlargement of the audience of scientific work to non-academic readers has always been a key objective of the open access movement: "even the earliest formulations of the concept of open access included the general public as a potential audience for open access". The Budapest Open Access Initiative of 2001 includes among the beneficiaries of open access "scientists, scholars, teachers, students, and other curious minds".

In an open science context, the non-academic audience has been associated with a wider figure: the lay reader or unexpected reader. Once universally accessible, an academic work can have unplanned readers or users. In 2006, John Willinsky conjectured that "it is not difficult to imagine occasions when a dedicated history teacher, an especially keen high school student, an amateur astronomer, or an ecologically concerned citizen might welcome the opportunity to browse the current and relevant literature pertaining to their interests." Unexpected forms of reception did occur: the editor-in-chief of PLOS once received a promising piece of research on the modelling of pandemics which turned out to be written by "a fifteen-year-old high school student". The lay reader is not necessarily part of a non-academic audience, as a professional scientist may become one when "the information sought is outside his or her area of expertise". Not all unexpected readers behave similarly or have the same capacity to use academic resources. Even when they are not dealing with their main domain of expertise, academic researchers and some professionals (the knowledge workers) have acquired generic skills for bibliographic analysis, such as following citations in the literature.

Unanticipated academic uses

Paywalled journals did not satisfy a larger range of unanticipated academic uses, as subscription access has been conditioned on the field of work or the resources available at the institutional level. In 2011, Michael Carroll introduced a typology of five "unanticipated readers" who fall beyond the reading expectations of online academic journals: serendipitous readers (who discover the publication through complex reading paths), under-resourced readers (presumably uninitiated, like high school students), interdisciplinary readers (scientists belonging to a different field), international readers (scientists working within a different national frame), and machine readers (bots that retrieve a corpus, for instance as part of a text mining project).

The development of academic pirate platforms like Sci-Hub or Libgen highlighted structural inequalities on a global scale: "The geography of Sci-Hub usage generally looks like a map of scientific productivity, but with some of the richer and poorer science-focused nations flipped." Especially high rates of Sci-Hub use have been found in Russia, Algeria, Brazil, Turkey, Mexico, and India, all countries with significant local academic production despite having fewer resources than OECD countries: "relatively to their national scientific production, middle-income countries had the more intensive use of pirated academic works". The audience of pirate academic platforms remains significant even in North American and European universities endowed with large library subscriptions, as access is commonly perceived as more straightforward than through paywalled libraries: "even for journals to which the university has access, Sci-Hub is becoming the go-to resource".

From impact factor to social impact

The development of large open science platforms and infrastructure after 2010 entailed a shift in the measurement of scientific impact, from a strong focus on highly cited English-language journals to an expanded analysis of the social circulation of scientific publications. This transformation has been especially noticeable in Latin America, due to the early development of publicly funded international publishing platforms like Redalyc or SciELO: "There is a definite sense in Latin America that the investment in science will result in development in a more broadly defined sense—beyond simply innovation and economic growth."

In 2015, Juan Pablo Alperin introduced a systematic measure of social impact relying on a diverse set of indicators (log analysis, surveys, and altmetrics). This approach entailed a conceptual redefinition of key concepts of scientific reception, such as impact, reach, or reader:

I turn our attention to these alternative, public forms of research impact and reach by examining the Latin American case. In this study, impact will be assessed through evidence of the research literature being saved, discussed, forwarded, recommended, mentioned, or cited, both within and beyond the academic community (…) Reach refers, in this study, to the extent to which the research literature is viewed or downloaded by members of various audiences, beginning with the traditional academic readership and extending outward through related professions, and perhaps journalists, teachers, enthusiasts, and members of the public (…) By looking at a broad range of indicators of impact and reach, far beyond the typical measures of one article citing another, I argue, it is possible to gain a sense of the people that are using Latin American research, thereby opening the door for others to see the ways in which it has touched those individuals and communities.

The unprecedented focus on the social impact of science fits with alternative models of scientific popularization. In 2009, Alesia Zuccala introduced a radiant model of open science dissemination with a variety of mediated and unmediated connections between non-academic audience and academic production: "Sometimes [research] engages the lay public—this is the co-production model of science communication—and sometimes self-selected intermediaries tell members of the public what they should know—the education model of science communication".

Methods

While open science has been largely theorized to have a significant impact on academic and non-academic access to literature, research in this area has proven challenging: it has been "the subject of many discussions and indeed was the basis for a lot of the advocacy work and many funding agencies' OA policies, but rarely so in formal published studies". By definition, open science productions are non-transactional, and as such their use leaves far fewer traces than the distribution of commercialized scientific outputs. Overall, it is very difficult to retrieve "data on user demographics from currently available information sources (e.g., repositories and publisher platforms)".

The classic methods of bibliometric studies, including citation analysis, are largely unable to capture the new forms of reception created by open science. Alternative approaches had to be developed in the 2000s and the 2010s, and for a long time, open science advocates and policy-makers had to rely on limited evidence.

Survey

Surveys were the primary method of analyzing scientific reception before the development of bibliometrics.

After the development of electronic publishing and open access, survey methods have also migrated online. Pop-up surveys were introduced for academic publications in the early 2000s: they made it possible to query the user at the exact moment when the resource was retrieved and could be correlated with log data. Yet, "response rates of pop-up surveys tend to be low", which may ultimately distort the representativeness of the survey.

Since 2002, large international surveys of the uses of academic resources have been conducted by Simon Inger and Tracy Gardner with the support of several major scientific organizations and publishers. While not specifically focused on open science, the survey strived to include a more diverse subset of potential users beyond academic authors.

Log analysis

Academic publications have been among the earliest corpora used for log analysis. The first applied studies in the area long predate the web, as interconnected scientific infrastructures were already widely used in North America and Europe by the 1970s and 1980s.

In 1983, several studies, pioneered by the Online Computer Library Center, analyzed "transaction logs" left by database users. Logs were stored on magnetic tapes at the time, and a large part of the analysis was devoted to reformatting and standardizing the data. Standard methods of log analysis were already implemented in these early studies, such as the use of probabilistic approaches based on Markov chains to identify the more regular patterns of user behavior, or the comparison of log data with user surveys.
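
A minimal sketch of this kind of transaction-log modelling, for illustration only: it estimates a first-order Markov transition matrix between user actions from a few invented sessions. The action labels and log format are assumptions, not those of the original studies.

```python
# Estimate a first-order Markov transition matrix from (hypothetical) session logs,
# then report the most regular behavioural pattern after each action.
from collections import Counter, defaultdict

sessions = [
    ["search", "view_record", "view_record", "download", "exit"],
    ["search", "refine", "search", "view_record", "exit"],
    ["browse", "view_record", "download", "download", "exit"],
]

transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

# Normalise counts into transition probabilities P(next action | current action).
transition_probs = {
    state: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
    for state, counts in transitions.items()
}

for state, probs in transition_probs.items():
    likeliest = max(probs, key=probs.get)
    print(f"after '{state}', users most often go to '{likeliest}' ({probs[likeliest]:.2f})")
```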

The use of logs and other reader metrics to measure the reception of academic work has remained marginal. Large commercial databases, like the Web of Science and Scopus, had no incentive to divulge reading statistics and mostly used them for internal purposes. Bibliometric indices based on aggregated citation counts, like the impact factor or the h-index, have been favored as the leading measures of academic impact.

Beyond the restrictions imposed by leading publishers, log analysis has raised significant methodological issues. Data logging processes differ significantly depending on the structure of the interface: "The number of full-text downloads may be artificially inflated when publishers require users to view HTML versions before accessing PDF versions or when linking mechanisms (…)". Automated access, including search engine indexers and robots, can also largely distort aggregated visit counts. This uncertainty impedes the comparability of data: "issues such as journal interfaces continue to affect how users interact with content, making even standardized reports difficult, if not impossible, to compare."
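
As an illustration of the cleaning steps this implies, the hedged sketch below filters out obvious crawler traffic and collapses an HTML view followed shortly by a PDF download of the same article into a single counted access. The log format, the user-agent heuristic, and the 30-second window are illustrative assumptions, not a standard.

```python
# Toy log-cleaning pipeline: drop robot hits, then deduplicate HTML+PDF accesses.
from datetime import datetime, timedelta

raw_log = [
    {"ip": "198.51.100.7", "agent": "Googlebot/2.1", "article": "a1", "fmt": "html", "time": "2023-05-01 10:00:00"},
    {"ip": "203.0.113.5",  "agent": "Mozilla/5.0",   "article": "a1", "fmt": "html", "time": "2023-05-01 10:01:00"},
    {"ip": "203.0.113.5",  "agent": "Mozilla/5.0",   "article": "a1", "fmt": "pdf",  "time": "2023-05-01 10:01:20"},
]

def parse(entry):
    entry = dict(entry)
    entry["time"] = datetime.strptime(entry["time"], "%Y-%m-%d %H:%M:%S")
    return entry

# 1. Filter out known crawlers by user-agent substring (a very crude heuristic).
human = [parse(e) for e in raw_log if "bot" not in e["agent"].lower()]

# 2. Collapse requests for the same article from the same IP within 30 seconds
#    (e.g. an HTML view immediately followed by a PDF download) into one access.
window = timedelta(seconds=30)
counted, last_seen = [], {}
for e in sorted(human, key=lambda e: e["time"]):
    key = (e["ip"], e["article"])
    if key in last_seen and e["time"] - last_seen[key] <= window:
        last_seen[key] = e["time"]
        continue  # same reading session, do not double-count
    last_seen[key] = e["time"]
    counted.append(e)

print(f"{len(raw_log)} raw hits -> {len(counted)} counted accesses")
```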

Log analysis has been revived in the 2010s due to technological developments and the emergence of large open science platforms. Standards for the retrieval of academic log data have been introduced in the early 2010s, such as COUNTER, PIRUS or MESUR. These standards were, by design, limited to specialized research use due to their integration into academic infrastructures.

The development of open-source web analytics software like Matomo has established an emerging standard for log collection. During the same period, publicly funded scientific platforms have started to share use data openly, as part of their enlarged commitment to open science. In Latin America, both Redalyc and SciELO "provide such usage statistics to the public", although they have remained largely underused: "It is surprising that given the availability of these data, nobody has conducted a study analyzing different dimensions of downloads, beyond the overall view counts and "top 10" lists of articles available from time to time on the respective Web portals."

In 2011, Michael J. Kurtz and Johan Bollen called for the development of usage bibliometrics, an emerging field that "provides unique opportunities to address the known shortcomings of citation analysis". Increased access to log data from open science platforms has made it possible to publish extensive case studies on SciELO and Redalyc, Érudit, OpenEdition.org, Journal.fi, or The Conversation.

Crosslinking

The web itself and some of its key components (such as search engines) were partly a product of bibliometric theory. In its original form, the web derived from ENQUIRE, a bibliographic scientific infrastructure commissioned from Tim Berners-Lee by CERN for the specific needs of high-energy physics. The onset of the World Wide Web in the mid-1990s made Garfield's citationist dream more likely to come true: "In the world network of hypertexts, not only is the bibliographic reference one of the possible forms taken by a hyperlink inside the electronic version of a scientific article, but the Web itself also exhibits a citation structure, links between web pages being formally similar to bibliographic citations." Consequently, bibliometric concepts have been incorporated into major communication technologies such as Google's search algorithm: "the citation-driven concept of relevance applied to the network of hyperlinks between web pages would revolutionize the way Web search engines let users quickly pick useful materials out of the anarchical universe of digital information."
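
The "citation-driven concept of relevance" can be illustrated with a toy PageRank-style computation over a hand-made link graph. The page names, links, and damping factor below are illustrative assumptions, not a description of any real search engine's implementation.

```python
# Plain power-iteration PageRank over a small, invented hyperlink graph:
# pages gather score from the pages that link to them, much like citations.
links = {
    "journal_article": ["dataset", "blog_post"],
    "blog_post": ["journal_article"],
    "wikipedia_page": ["journal_article", "dataset"],
    "dataset": ["journal_article"],
}

pages = list(links)
damping, n = 0.85, len(pages)
rank = {p: 1 / n for p in pages}

for _ in range(50):  # iterate until the scores stabilise (50 rounds is ample here)
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / n + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```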

While the web immediately affected reading practices by creating seamless connections between texts, it did not transform to a similar extent the quantitative analysis of citation data, which remained mostly focused on academic connections. Global analysis of hyperlinking and backlinks makes it possible to extend citation analysis beyond scholarly publications and to recover the expanding scope of open science circulations: "We have witnessed a proliferation of means of disseminating scholarly publications via academic blogs, scientific magazines destined to a wider audience." In 2011, a log analysis of the Kyoto University website identified a highly diversified set of links to scientific publications. In 2019, a study supported by Aix-Marseille University of crosslinks to the French open science platform OpenEdition highlighted that "scientific literature from a largely open access hosting platform is re-appropriated and repurposed for various uses in the public arena."

Altmetrics

During the 2000s and 2010s, the web was increasingly dominated by very large social media platforms that curate and shape a significant part of the digital public sphere. The public reception of scientific literature has also largely migrated to these platforms. This evolution has prompted the development of new metrics and quantitative methods aiming to map the circulation of publications on social media: the altmetrics.

The concept of altmetrics was introduced in 2009 by Cameron Neylon and Shirley Wu as article-level metrics. In contrast with the focus of leading metrics on journals (impact factor) or, more recently, on individual researchers (h-index), article-level metrics make it possible to track the circulation of individual publications: an "article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero – where we can see and count it". As such, they are more compatible with the diversity of publication strategies that has characterized open science: preprints, reports, or even non-textual outputs like datasets or software may also have associated metrics. In their original research proposition, Neylon and Wu favored the use of data from reference management software like Zotero or Mendeley. The concept of altmetrics evolved and came to cover data extracted "from social media applications, like blogs, Twitter, ResearchGate and Mendeley." Social media sources proved more reliable on a long-term basis, as specialized academic tools like Mendeley came to be integrated into the proprietary ecosystem developed by leading scientific publishers. Major altmetrics services that emerged in the 2010s include Altmetric.com, PlumX, and ImpactStory.
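
A minimal sketch of the article-level logic behind such metrics, assuming hypothetical mention records keyed by DOI; real providers harvest these events from many external services, which this toy example does not do.

```python
# Aggregate mentions of each publication (keyed by DOI) across invented sources,
# producing a per-article tally rather than a journal-level indicator.
from collections import defaultdict

mentions = [
    {"doi": "10.1234/abcd.1", "source": "twitter"},
    {"doi": "10.1234/abcd.1", "source": "blog"},
    {"doi": "10.1234/abcd.1", "source": "mendeley"},
    {"doi": "10.5678/efgh.2", "source": "twitter"},
    {"doi": "10.5678/efgh.2", "source": "news"},
]

per_article = defaultdict(lambda: defaultdict(int))
for m in mentions:
    per_article[m["doi"]][m["source"]] += 1

for doi, sources in per_article.items():
    total = sum(sources.values())
    print(f"{doi}: {total} mentions {dict(sources)}")
```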

As the meaning of altmetrics shifted, the debate over the positive impact of the metrics evolved toward their redefinition in an open science ecosystem: "Discussions on the misuse of metrics and their interpretation put metrics themselves in the center of open science practices." Social media altmetrics are limited to a specific subset of social media platforms and, within those platforms, to numeric traces of reception left by users, such as likes, shares, or comments: "However, 'altmetrics' has continued in the same tradition as the older biblio/scientometrics by basing its indicators on numerical trace, i.e., computing the number of likes, posts, downloads, tweets or retweets a scholarly publication gets on the web with the result that neither of these fields provide information on the actual use of the scholarly publications cited nor the reasons for which they were cited."

While altmetrics were initially conceived for open science publications and their expanded circulation beyond academic circles, their compatibility with the emerging requirements for open metrics has been brought into question: social network data, in particular, is far from transparent and readily accessible. The conversation tracked on social media may not be that representative of the social impact of research, as researchers are overrepresented in these spaces: "about half of the tweets mentioning journal articles are from academics". In 2016, Ulrich Herb published a systematic assessment of leading publication metrics in regard to open science principles and concluded that "neither citation-based impact metrics nor alternative metrics can be labeled open metrics. They all lack scientific foundation, transparency and verifiability."

Current uses

Most empirical information retrieved on open science use is platform-specific.

User demographics

Distribution of the user demographics of SciELO in the survey of Juan Pablo Alperin

Studies of the use of open science resources have generally highlighted the diversity of user profiles, with academic researchers representing only a minor segment of the audience. In 2015, the two leading Latin American platforms, Redalyc and SciELO, had an audience composed mostly of university students (50% and 55% respectively) and professionals in non-academic sectors (20% in SciELO and 17% in Redalyc). Once other university employees are discounted, "researchers only make up 5–6% of the total users". On the Finnish platform Journal.fi, students are also the main demographic group (40% of users), but academic researchers still make up a large group (36%).

Convergent estimations of lay readers have been given by the different open science platform studies: 9% of amateur/personal uses in SciELO, 6% in Redalyc, and 8% of "private citizens" in the reader survey of Journal.fi.

Open science platforms have a balanced gender distribution. The two Latin American platforms, Redalyc and SciELO, tend to have a relative "predominance of women users" (about 60%).

The discipline of the accessed resources has a varying impact on uses. Personal interest is more prevalent in the humanities on SciELO. In contrast, "little variability between disciplines" has been observed on Redalyc. Analysis of the bookmark data left by readers of F1000Prime on Mendeley highlighted a significant share of uses by readers from disciplines entirely distinct from the expected audience.

User practices and motivations

Studies of user practices have mostly focused on specific user profiles; few general surveys have been undertaken. In Japan, a 2011 poll of 800 adults showed that a "majority of respondents (55%) claimed that Open Access is useful or slightly useful to them", suggesting rather broad awareness of open science in a population with a significant share of high school education.

The issues facing medical patients have been especially highlighted. An important field of research on health-information-seeking behavior (HISB) emerged prior to the development of open science. In a 2003 survey, half of American Internet users had attempted to find qualified information about their health, but regularly faced access issues: "Many current Internet health users want to expand access to information-laden sites that are currently closed to non-subscribers". In qualitative research on English medical patients, subscription paywalls were cited as the main barrier to access to scientific knowledge, along with the complexity of scientific terminology. While the specific needs of patients make a strong case for open science, they have also overshadowed the variety of potential uses of academic research: "open access is not just a public health matter: It has a much more general research-enhancing mission".

Research has also focused on professional non-academic uses, due to their potential economic impact. In 2011, a JISC report estimated that there were 1.8 million knowledge workers in the United Kingdom working in R&D, IT, and engineering services, most of whom were "unaffiliated, without corporate library or information center support." Among a representative set of English knowledge workers, 25% stated that access to the literature was fairly or very difficult, and 17% had had a recent access problem that was never resolved. A 2011 survey of Danish businesses highlighted a significant dependence of R&D on academic research: "Forty-eight per cent rated research articles as very or extremely important". The non-profit sector is also significantly affected by increasing access to the literature: a survey of 101 NGOs from the United Kingdom showed that "73% reported using journal articles and 54% used conference proceedings". In 2018, a log analysis on OpenEdition highlighted corporate access as a significant source of readership, especially among "the aircraft industry, the bank, insurance, car selling and energy sectors and, even more significantly for the further circulation of science in the public sphere, media organizations." These results suggested that open access had a direct commercial impact on small and large companies.

Language diversity

Scientific publications in languages other than English have been marginalized in large commercial databases: they represent less than 5% of the publications indexed in the Web of Science.

The development of open science platforms has gradually shifted the focus, with local-language publications becoming acknowledged as important actors in the social dissemination of scientific knowledge. In the 2010s, quantitative studies started to highlight the positive impact of local languages on the reuse of open access resources in varied national contexts such as Finland, Québec, Croatia, or Mexico.

Measures of social impact tend to reverse the incentives of international academic metrics like the impact factor: while less featured in academic indices, publications in a local language fare better with an enlarged audience. In Finland, a majority of the audience of the academic platform Journal.fi favors publications in Finnish (67%). Yet the linguistic practices of visitors vary significantly depending on their academic status. Lay readers (private citizens) and students have a clear preference for the local language (81% and 78% of publications accessed, respectively). In contrast, professional researchers slightly favor English over Finnish (55%).

Due to the ease of access, open science platforms in a local language can also achieve a broader reach. The French-Canadian journal consortium Érudit has a mostly international audience, with less than one-third of readers coming from Canada.

Sharing ecosystem

Open science resources are more likely to be shared in non-scientific settings such as "Twitter, News, Blogs and Policies". In 2011, a log analysis study in Japan highlighted "a remarkable variety of websites linked to these OA papers, including blogs about personal hobbies, websites by patients or their families, Q&A website and Wikipedia."

The diversity of the open science ecosystem has been hypothesized to affect the life-cycle pattern of publications. In the classic framework of bibliometrics, most publications are expected to experience an exponential decline in citations over the years (also characterized as a "half-life", by analogy with the decay of radioactive elements). In contrast, open science publications "have the feature of keeping sustained and steady downloads for a long time". This sustained reception over a longer timeframe may be partially caused by recurrent episodes of "unexpected access", in which old publications suddenly attract a new wave of readers due to newfound relevance.
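
For reference, the "half-life" analogy can be written as the standard exponential-decay relation, a sketch assuming a simple first-order decay of yearly downloads or citations (the bibliometric literature cited here does not prescribe this exact functional form):

```latex
% c(t): yearly citations or downloads t years after publication,
% c_0: initial yearly count, \lambda: decay rate, t_{1/2}: half-life.
\[
  c(t) = c_0\, e^{-\lambda t},
  \qquad
  t_{1/2} = \frac{\ln 2}{\lambda}.
\]
```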

Reuse of data and software

In contrast with publications, open scientific data and software frequently require a higher level of technical skills: "access is not enough to guarantee that Open Data can be reused effectively because reuse requires not only access, but other resources such as skills, money and computing power". Even firms and organizations may lack the "necessary skills such as information literacy to fully benefit from open resources".

Yet, recent developments like the growth of data analytics services across a large variety of economic sectors have created further needs for research data: "There are many other values (…) that are promoted through the longterm stewardship and open availability of research data. The rapidly expanding area of artificial intelligence (AI) relies to a great extent on saved data." In 2019, the combined data market of the 27 countries of the European Union and the United Kingdom was estimated at 400 billion euros, with sustained growth of 7.6% per year. Although no estimate was given of the specific value of research data, research institutions were identified as important stakeholders in the emerging ecosystem of "data commons".
