Tuesday, April 21, 2026

Recurrent neural network

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Recurrent_neural_network

In artificial neural networks, recurrent neural networks (RNNs) are designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.

The fundamental building block of an RNN is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation.

However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative.

In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.

History

Before modern

One origin of RNNs was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. During the 1940s, multiple people proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943), which proposed the McCulloch-Pitts neuron model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past. They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia. Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences.

A close-loop cross-coupled perceptron network

Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change according to a Hebbian learning rule. Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, carried out theoretical and experimental studies of Hebbian learning in these networks, and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.

Similar networks were published by Kaoru Nakano in 1971, Shun'ichi Amari in 1972, and William A. Little [de] in 1974, who was acknowledged by Hopfield in his 1982 paper.

Another origin of RNN was statistical mechanics. The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.

The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.

Modern

Modern RNNs are mainly based on two architectures: LSTM and BRNN.

At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets". Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains. They became the default choice for RNN architecture.

Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions. These two are often combined, giving the bidirectional LSTM architecture.

Around 2006, bidirectional LSTMs started to revolutionize speech recognition, outperforming traditional models in certain speech applications. They also improved large-vocabulary speech recognition and text-to-speech synthesis and were used in Google voice search and dictation on Android devices. They broke records for machine translation, language modeling, and multilingual language processing. LSTM combined with convolutional neural networks (CNNs) also improved automatic image captioning.

The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the origin of seq2seq are two papers from 2014. A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.

Configurations

An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.

Standard

Compressed (left) and unfolded (right) basic recurrent neural network

RNNs come in many variants. Abstractly speaking, an RNN is a function f_θ of type (x_t, h_t) ↦ (y_t, h_{t+1}), where

  • x_t: input vector;
  • h_t: hidden vector;
  • y_t: output vector;
  • θ: neural network parameters.

In words, it is a neural network that maps an input x_t into an output y_t, with the hidden vector h_t playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms the input into an output, and modifies its "memory" to help it better perform future processing.

The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appear to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers.
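As a concrete illustration, the abstract map (x_t, h_t) ↦ (y_t, h_{t+1}) can be sketched in a few lines of Python. This is a minimal sketch rather than any particular published architecture: the tanh update, the parameter names (Wx, Wh, Wy, b_h, b_y) and the dimensions are illustrative assumptions.

import numpy as np

def rnn_step(x_t, h_t, params):
    """One application of the abstract map (x_t, h_t) -> (y_t, h_{t+1})."""
    Wx, Wh, b_h, Wy, b_y = params
    h_next = np.tanh(Wx @ x_t + Wh @ h_t + b_h)   # update the "memory"
    y_t = Wy @ h_next + b_y                        # emit an output for this step
    return y_t, h_next

def unroll(xs, h0, params):
    """Unfold the recurrence over a whole input sequence."""
    h, ys = h0, []
    for x_t in xs:
        y_t, h = rnn_step(x_t, h, params)
        ys.append(y_t)
    return ys, h

# Tiny usage example with random parameters (dimensions are arbitrary).
rng = np.random.default_rng(0)
d_in, d_h, d_out = 3, 5, 2
params = (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)),
          np.zeros(d_h), rng.normal(size=(d_out, d_h)), np.zeros(d_out))
xs = [rng.normal(size=d_in) for _ in range(4)]
ys, h_final = unroll(xs, np.zeros(d_h), params)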

Stacked RNN

Stacked RNN

A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows

  1. Layer 1 has hidden vector h^(1)_t, parameters θ_1, and maps f_{θ_1}: (x_t, h^(1)_t) ↦ (y^(1)_t, h^(1)_{t+1}).
  2. Layer 2 has hidden vector h^(2)_t, parameters θ_2, and maps f_{θ_2}: (y^(1)_t, h^(2)_t) ↦ (y^(2)_t, h^(2)_{t+1}).
  3. ...
  4. Layer n has hidden vector h^(n)_t, parameters θ_n, and maps f_{θ_n}: (y^(n−1)_t, h^(n)_t) ↦ (y^(n)_t, h^(n)_{t+1}).

Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of a stacked RNN.
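A stacked RNN can be sketched by reusing the hypothetical unroll helper from the sketch above: each layer consumes the output sequence of the layer below. This assumes each layer's input dimension matches the previous layer's output dimension.

def stacked_unroll(xs, h0s, layer_params):
    """Run several RNN layers; each layer's output sequence feeds the next layer.
    `h0s` and `layer_params` hold one initial hidden state and one parameter set per layer."""
    seq = xs
    final_states = []
    for h0, params in zip(h0s, layer_params):
        seq, h_last = unroll(seq, h0, params)   # layer i consumes layer i-1's outputs
        final_states.append(h_last)
    return seq, final_states                    # top-layer outputs and all final hidden states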

Bidirectional

Bidirectional RNN

A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:

  • The forward RNN processes the sequence in one direction: f_θ(x_0, h_0) = (y_0, h_1), f_θ(x_1, h_1) = (y_1, h_2), ...
  • The backward RNN processes it in the opposite direction: f'_{θ'}(x_N, h'_N) = (y'_N, h'_{N−1}), f'_{θ'}(x_{N−1}, h'_{N−1}) = (y'_{N−1}, h'_{N−2}), ...

The two output sequences are then concatenated to give the total output: ((y_0, y'_0), (y_1, y'_1), ..., (y_N, y'_N)).

Bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can build increasingly contextual representations of each token. The ELMo model (2018) is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.
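A bidirectional wrapper can likewise be sketched on top of the same hypothetical unroll helper: one RNN reads the sequence forward, the other reads it reversed, and their outputs are concatenated position by position.

def bidirectional_unroll(xs, h0_fwd, h0_bwd, params_fwd, params_bwd):
    """Run one RNN forward and another backward over the same sequence,
    then concatenate the two outputs at each position."""
    ys_fwd, _ = unroll(xs, h0_fwd, params_fwd)
    ys_bwd, _ = unroll(xs[::-1], h0_bwd, params_bwd)
    ys_bwd = ys_bwd[::-1]                       # re-align with the original order
    return [np.concatenate([f, b]) for f, b in zip(ys_fwd, ys_bwd)]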

Encoder-decoder

A decoder without an encoder
Encoder-decoder RNN without attention mechanism
Encoder-decoder RNN with attention mechanism

Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes that sequence of hidden vectors into an output sequence, with an optional attention mechanism. This was used to construct state-of-the-art neural machine translators during the 2014–2017 period, and was an instrumental step towards the development of transformers.

PixelRNN

An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions. For example, the row-by-row direction processes an n × n grid of vectors x_{i,j} in the following order: x_{1,1}, x_{1,2}, ..., x_{1,n}, x_{2,1}, x_{2,2}, ..., x_{n,n}. The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x_{i,j} depending on its hidden state and cell state from the top and from the left side: (h_{i−1,j}, c_{i−1,j}) and (h_{i,j−1}, c_{i,j−1}). The other processes it from the top-right corner to the bottom-left.

Architectures

Fully recurrent

A fully connected RNN with 4 neurons

Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.

A simple Elman network

Hopfield

The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.

Elman networks and Jordan networks

The Elman network

An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units with connections fixed at a weight of one. At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence-prediction that are beyond the power of a standard multilayer perceptron.

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.

Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).

Elman network

  h_t = σ_h(W_h x_t + U_h h_{t−1} + b_h)
  y_t = σ_y(W_y h_t + b_y)

Jordan network

  h_t = σ_h(W_h x_t + U_h s_{t−1} + b_h)
  y_t = σ_y(W_y h_t + b_y)
  s_t = σ_s(W_s y_t + U_s s_{t−1} + b_s)

Variables and functions

  • x_t: input vector
  • h_t: hidden layer vector
  • s_t: "state" vector
  • y_t: output vector
  • W, U and b: parameter matrices and vector
  • σ_h, σ_y and σ_s: activation functions
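The Elman equations above translate directly into code. The following is a minimal NumPy sketch; the default choices σ_h = tanh and σ_y = identity are illustrative, and a Jordan-style step would differ only in feeding the state vector, rather than h_{t−1}, back into the hidden update.

import numpy as np

def elman_step(x_t, h_prev, W_h, U_h, b_h, W_y, b_y,
               sigma_h=np.tanh, sigma_y=lambda v: v):
    """One Elman step:
       h_t = sigma_h(W_h x_t + U_h h_{t-1} + b_h)
       y_t = sigma_y(W_y h_t + b_y)"""
    h_t = sigma_h(W_h @ x_t + U_h @ h_prev + b_h)
    y_t = sigma_y(W_y @ h_t + b_y)
    return y_t, h_t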

Long short-term memory

Long short-term memory unit

Long short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components.

Many applications use stacks of LSTMs, for which the term "deep LSTM" is used. LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts.
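A single LSTM step, with forget, input, and output gates acting on a separate cell state, can be sketched as follows. The parameter naming (a dict p with keys 'Wf', 'Uf', 'bf', and so on) is an assumption made for readability, not a reference implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step with forget (f), input (i), and output (o) gates."""
    f = sigmoid(p['Wf'] @ x + p['Uf'] @ h + p['bf'])    # how much of the old cell state to keep
    i = sigmoid(p['Wi'] @ x + p['Ui'] @ h + p['bi'])    # how much of the candidate to write
    o = sigmoid(p['Wo'] @ x + p['Uo'] @ h + p['bo'])    # how much of the cell state to expose
    c_tilde = np.tanh(p['Wc'] @ x + p['Uc'] @ h + p['bc'])
    c_new = f * c + i * c_tilde                          # additive cell-state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new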

Gated recurrent unit

Gated recurrent unit

Gated recurrent units (GRUs), introduced in 2014, were designed as a simplification of LSTM. They are used in the full form and in several further simplified variants. They have fewer parameters than LSTM, as they lack an output gate.

Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. There does not appear to be a particular performance difference between LSTM and GRU.
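For comparison, a GRU step can be sketched in the same style, reusing the sigmoid helper from the LSTM sketch above. Note the absence of a separate cell state and output gate; the update gate z interpolates directly between the old hidden state and the candidate (one common convention; some presentations swap the roles of z and 1 − z).

def gru_step(x, h, p):
    """One GRU step with update gate z and reset gate r; no separate cell state or output gate."""
    z = sigmoid(p['Wz'] @ x + p['Uz'] @ h + p['bz'])             # how much to update
    r = sigmoid(p['Wr'] @ x + p['Ur'] @ h + p['br'])             # how much of the past to expose
    h_tilde = np.tanh(p['Wh'] @ x + p['Uh'] @ (r * h) + p['bh'])
    return (1.0 - z) * h + z * h_tilde                           # interpolate old state and candidate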

Bidirectional associative memory

Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping have been optimized for increased network stability and relevance to real-world applications.

A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.

Echo state

Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine.

Recursive

A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The recursive neural tensor network uses a tensor-based composition function for all nodes in the tree.

Neural Turing machines

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.

Differentiable neural computers (DNCs) are an extension of neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.

Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context free grammars (CFGs).

Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.

Training

Teacher forcing

Encoder-decoder RNN without attention mechanism. Teacher forcing is shown in red.

An RNN can be trained into a conditionally generative model of sequences, aka autoregression.

Concretely, let us consider the problem of machine translation, that is, given a sequence (x_1, x_2, ..., x_n) of English words, the model is to produce a sequence (y_1, y_2, ..., y_m) of French words. It is to be solved by a seq2seq model.

Now, during training, the encoder half of the model would first ingest (x_1, x_2, ..., x_n), then the decoder half would start generating a sequence (ŷ_1, ŷ_2, ...). The problem is that if the model makes a mistake early on, say at ŷ_2, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift ŷ_2 towards y_2, but not the others.

Teacher forcing makes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So for example, it would see (y_1, ..., y_k) in order to generate ŷ_{k+1}.
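The difference between teacher-forced training and free-running generation can be sketched as two decoding loops. Here decoder_step is a hypothetical stand-in for one step of the decoder RNN, returning output logits and the next hidden state; the greedy argmax choice is likewise just an illustrative decoding rule.

import numpy as np

def decode_with_teacher_forcing(decoder_step, h, targets, start_token):
    """Train-time decoding: the *ground-truth* previous token is fed back at each step,
    so an early mistake cannot derail the rest of the sequence.
    `targets` is the list of correct output tokens (y_1, ..., y_m)."""
    logits_seq = []
    for prev_true in [start_token] + list(targets[:-1]):
        logits, h = decoder_step(prev_true, h)
        logits_seq.append(logits)              # compared against `targets` by the loss
    return logits_seq

def decode_free_running(decoder_step, h, length, start_token):
    """Inference-time decoding: the model's own previous prediction is fed back."""
    tokens, prev = [], start_token
    for _ in range(length):
        logits, h = decoder_step(prev, h)
        prev = int(np.argmax(logits))          # greedy choice of the next token
        tokens.append(prev)
    return tokens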

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable.

The standard method for training RNN by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.

In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.

For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time.
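A minimal BPTT sketch for a toy RNN h_t = tanh(W x_t + U h_{t−1}) with a squared-error loss on the hidden states makes the storage cost explicit: the forward pass keeps every h_t so the backward pass can revisit them. The loss choice and the omission of biases and an output layer are simplifying assumptions for illustration.

import numpy as np

def bptt(xs, targets, W, U):
    """Backpropagation through time for h_t = tanh(W x_t + U h_{t-1})
    with loss L = 0.5 * sum_t ||h_t - target_t||^2. Returns (loss, dW, dU)."""
    T, d_h = len(xs), U.shape[0]
    hs = [np.zeros(d_h)]                        # h_0 = 0
    for x in xs:                                # forward pass, storing every h_t
        hs.append(np.tanh(W @ x + U @ hs[-1]))
    loss = 0.5 * sum(np.sum((hs[t + 1] - targets[t]) ** 2) for t in range(T))

    dW, dU = np.zeros_like(W), np.zeros_like(U)
    dh_next = np.zeros(d_h)                     # gradient flowing in from future time steps
    for t in reversed(range(T)):                # backward pass through time
        dh = (hs[t + 1] - targets[t]) + dh_next
        da = dh * (1.0 - hs[t + 1] ** 2)        # backprop through tanh
        dW += np.outer(da, xs[t])
        dU += np.outer(da, hs[t])
        dh_next = U.T @ da                      # pass gradient to the previous step
    return loss, dW, dU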

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state and the cross-neuron information can then be explored in the following layers. Memories of different ranges including long-term memory can be learned without the gradient vanishing and exploding problems.

The online algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.

One approach to computing gradient information in RNNs with arbitrary architectures is based on a diagrammatic derivation using signal-flow graphs. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.

Connectionist temporal classification

The connectionist temporal classification (CTC) is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.

Global optimization methods

Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.

The most common global optimization method for training RNNs is the genetic algorithm, especially in unstructured networks.

Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:

  • Each weight encoded in the chromosome is assigned to the respective weight link of the network.
  • The training set is presented to the network which propagates the input signals forward.
  • The mean-squared error is returned to the fitness function.
  • This function drives the genetic selection process.

Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme can be:

  • When the neural network has learned a certain percentage of the training data.
  • When the minimum value of the mean-squared-error is satisfied.
  • When the maximum number of training generations has been reached.

The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
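The selection loop described above can be sketched as follows. Here evaluate_mse is a hypothetical callback that loads a flat weight vector into the network, runs the training set, and returns the mean-squared error; the fitness-proportional selection, single-point crossover, and Gaussian mutation shown are just one simple choice among many.

import numpy as np

def evolve_weights(evaluate_mse, n_weights, pop_size=50, generations=200,
                   mutation_scale=0.1, target_mse=1e-3, rng=None):
    """Toy genetic algorithm over a flat weight vector (one gene per weight link).
    Fitness is the reciprocal of the mean-squared error returned by `evaluate_mse`."""
    rng = rng or np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, n_weights))           # each row is one chromosome
    for _ in range(generations):
        mse = np.array([evaluate_mse(w) for w in pop])
        if mse.min() <= target_mse:                         # stopping criterion
            break
        fitness = 1.0 / (mse + 1e-12)                       # maximize reciprocal of error
        probs = fitness / fitness.sum()                     # fitness-proportional selection
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        cut = rng.integers(1, n_weights, size=pop_size)     # single-point crossover positions
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            children[i, cut[i]:], children[i + 1, cut[i]:] = (
                parents[i + 1, cut[i]:].copy(), parents[i, cut[i]:].copy())
        pop = children + mutation_scale * rng.normal(size=children.shape)  # Gaussian mutation
    return pop[np.argmin([evaluate_mse(w) for w in pop])]   # best chromosome found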

Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.

Other architectures

Independently RNN (IndRNN)

The independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections.

Neural history compressor

The neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.

The system effectively minimizes the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.

A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.

Second order RNNs

Second-order RNNs use higher-order weights w_{ijk} instead of the standard w_{ij} weights, and states can be a product. This allows a direct mapping to a finite-state machine, both in training and in representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability.

Hierarchical recurrent neural network

Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms. Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models.

Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods.

Recurrent multilayer perceptron network

Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.

Multiple timescales model

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of this type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.

Memristive networks

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices. The memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of more interesting non-linear behavior. From this point of view, engineering analog memristive networks amounts to a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology. The evolution of these networks can be studied analytically using variations of the Caravelli-Traversa-Di Ventra equation.

Continuous-time

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time.

For a neuron i in the network with activation y_i, the rate of change of activation is given by:

  τ_i · dy_i/dt = −y_i + Σ_j w_{ji} σ(y_j − Θ_j) + I_i(t)

Where:

  • τ_i : time constant of postsynaptic node
  • y_i : activation of postsynaptic node
  • dy_i/dt : rate of change of activation of postsynaptic node
  • w_{ji} : weight of connection from pre- to postsynaptic node
  • σ(x) : sigmoid of x, e.g. σ(x) = 1/(1 + e^{−x})
  • y_j : activation of presynaptic node
  • Θ_j : bias of presynaptic node
  • I_i(t) : input (if any) to node
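The differential equation above can be integrated numerically. A single forward-Euler step in NumPy might look like the following sketch, where the convention W[i, j] = w_ji (weight from presynaptic node j to postsynaptic node i) is an assumption of this example.

import numpy as np

def ctrnn_euler_step(y, tau, W, theta, I, dt=0.01):
    """One forward-Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i.
    `y`, `tau`, `theta`, `I` are length-n vectors; `W[i, j]` holds w_ji."""
    sigma = 1.0 / (1.0 + np.exp(-(y - theta)))   # sigmoid of shifted presynaptic activations
    dydt = (-y + W @ sigma + I) / tau
    return y + dt * dydt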

CTRNNs have been applied to evolutionary robotics where they have been used to address vision, co-operation, and minimal cognitive behaviour.

Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions have been low-pass filtered but prior to sampling.

Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters, and also as a nonlinear autoregressive exogenous model (NARX). An RNN has an infinite impulse response, whereas a convolutional neural network has a finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.

The effect of memory-based learning for the recognition of sequences can also be implemented by a more biological-based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency spiking activity.

Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called Feedback Neural Network (FNN).

Communication studies

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Communication_studies

Communication studies (or communication science) is an academic discipline that deals with processes of human communication and behavior, patterns of communication in interpersonal relationships, social interactions, and communication in different cultures. Communication is commonly defined as giving, receiving or exchanging ideas, information, signals or messages through appropriate media, enabling individuals or groups to persuade, to seek information, to give information or to express emotions effectively. Communication studies is a social science that uses various methods of empirical investigation and critical analysis to develop a body of knowledge that encompasses a range of topics, from face-to-face conversation at a level of individual agency and interaction to social and cultural communication systems at a macro level.

Scholarly communication theorists focus primarily on refining the theoretical understanding of communication, examining statistics to help substantiate claims. The range of social scientific methods to study communication has been expanding. Communication researchers draw upon a variety of qualitative and quantitative techniques. The linguistic and cultural turns of the mid-20th century led to increasingly interpretative, hermeneutic, and philosophic approaches towards the analysis of communication. Conversely, the end of the 1990s and the beginning of the 2000s have seen the rise of new analytically, mathematically, and computationally focused techniques.

As a field of study, communication is applied to journalism, business, mass media, public relations, marketing, news and television broadcasting, interpersonal and intercultural communication, education, public administration, the problem of media-adequacy—and beyond. As all spheres of human activity and conveyance are affected by the interplay between social communication structure and individual agency, communication studies has gradually expanded its focus to other domains, such as health, medicine, economy, military and penal institutions, the Internet, social capital, and the role of communicative activity in the development of scientific knowledge.

History

Origins

Communication, a natural human behavior, became a topic of study in the 20th century. As communication technologies developed, so did the serious study of communication. During this time, a renewed interest in the study of rhetoric, including persuasion and public address, emerged, ultimately laying the foundation for several forms of communication studies we know today. The focus of communication studies developed further in the 20th century, eventually including means of communication such as mass communication, interpersonal communication, and oral interpretation. When World War I ended, the interest in studying communication intensified. The methods of communication used during the war challenged many people's beliefs about the limits of war that existed before these events. During this period, innovations appeared that no one had ever seen before, such as aircraft, telephones, and throat microphones. New ways of communicating, especially the use of Morse code through portable Morse code machines, helped troops to communicate at a much more rapid pace than ever before. This sparked ideas for even more advanced means of communication to be developed later.

The social sciences were fully recognized as legitimate disciplines after World War II. Before being established as its own discipline, communication studies was formed from three other major fields: psychology, sociology, and political science. Communication studies focuses on communication as central to the human experience, which involves understanding how people behave in creating, exchanging, and interpreting messages. Today, the accepted discipline also encompasses more modern forms of communication studies, such as gender and communication, intercultural communication, political communication, health communication, and organizational communication.

Foundations of the academic discipline

The institutionalization of communication studies in U.S. higher education and research has often been traced to Columbia University, the University of Chicago, and the University of Illinois Urbana-Champaign, where early pioneers of the field worked after the Second World War.

Wilbur Schramm is considered the founder of the field of communication studies in the United States. Schramm was hugely influential in establishing communication as a field of study and in forming departments of communication studies across universities in the United States. He was the first individual to identify himself as a communication scholar; he created the first academic degree-granting programs with communication in their name; and he trained the first generation of communication scholars. Schramm had a background in English literature and developed communication studies partly by merging existing programs in speech communication, rhetoric, and journalism. He also edited a textbook, The Process and Effects of Mass Communication (1954), that helped define the field, partly by claiming Paul Lazarsfeld, Harold Lasswell, Carl Hovland, and Kurt Lewin as its founding fathers.

Schramm established three important communication institutes: the Institute of Communications Research (University of Illinois at Urbana-Champaign), the Institute for Communication Research (Stanford University), and the East-West Communication Institute (Honolulu). The patterns of scholarly work in communication studies that were set in motion at these institutes continue to this day. Many of Schramm's students, such as Everett Rogers and David Berlo went on to make important contributions of their own.

The first college of communication was founded at Michigan State University in 1958, led by scholars from Schramm's original ICR and dedicated to studying communication scientifically using a quantitative approach.[22][26] MSU was soon followed by important departments of communication at Purdue University, University of Texas-Austin, Stanford University, University of Iowa, University of Illinois, University of Pennsylvania, The University of Southern California, and Northwestern University.

Associations related to Communication Studies were founded or expanded during the 1950s. The National Society for the Study of Communication (NSSC) was founded in 1950 to encourage scholars to pursue communication research as a social science. This Association launched the Journal of Communication in the same year as its founding. Like many communication associations founded in this decade, the association's name changed as the field evolved. In 1968 the name changed to the International Communication Association (ICA).

In the United States

Undergraduate curricula aim to prepare students to interrogate the nature of communication in society and the development of communication as a specific field.

The National Communication Association (NCA) recognizes several distinct but often overlapping specializations within the broader communication discipline, including: technology, critical-cultural, health, intercultural, interpersonal-small group, mass communication, organizational, political, rhetorical, and environmental communication. Students take courses in these subject areas. Other programs and courses often integrated in communication programs include journalism, rhetoric, film criticism, theatre, public relations, political science (e.g., political campaign strategies, public speaking, effects of media on elections), as well as radio, television, computer-mediated communication, film production, and new media.

Many colleges in the United States offer a variety of majors within communication studies, including programs of study in the areas mentioned above. Communication studies is often perceived by many in society as primarily centered on the media arts; however, graduates of communication studies can pursue careers in areas ranging from media arts to public advocacy, marketing, and non-profit organizations.

In Canada

With the early influence of federal institutional inquiries, notably the 1951 Massey Commission, which "investigated the overall state of culture in Canada", the study of communication in Canada has frequently focused on the development of a cohesive national culture, and on infrastructural empires of social and material circulation. Although influenced by the American Communication tradition and British Cultural Studies, Communication studies in Canada has been more directly oriented toward the state and the policy apparatus, for example the Canadian Radio-television and Telecommunications Commission. Influential thinkers from the Canadian communication tradition include Harold Innis, Marshall McLuhan, Florian Sauvageau, Gertrude Robinson, Marc Raboy, Dallas Smythe, James R. Taylor, François Cooren, Gail Guthrie Valaskakis and George Grant.

Communication studies within Canada is a relatively new discipline; however, there are programs and departments to support and teach this topic in about 13 Canadian universities and many colleges as well. Communication et information from Laval, and the Canadian Journal of Communication from McGill University in Montréal, are two journals that exist in Canada. There are also organizations and associations, both national and in Québec, that appeal to the specific interests of these academics. These associations include representatives from the communication industry, the government, and members of the public as a whole.

Scope and topics

Communication studies integrates aspects of both social sciences and the humanities. As a social science, the discipline overlaps with sociology, psychology, anthropology, biology, political science, economics, and public policy. From a humanities perspective, communication is concerned with rhetoric and persuasion (traditional graduate programs in communication studies trace their history to the rhetoricians of Ancient Greece). Humanities approaches to communication often overlap with history, philosophy, English, and cultural studies.

Communication research informs politicians and policy makers, educators, strategists, legislators, business magnates, managers, social workers, non-governmental organizations, non-profit organizations, and people interested in resolving communication issues in general. There is often a great deal of crossover between social research, cultural research, market research, and other statistical fields.

Recent critiques have focused on the homogeneity of communication scholarship. For example, Chakravartty, et al. (2018) find that white scholars comprise the vast majority of publications, citations, and editorial positions. From a post-colonial perspective, this state is problematic because communication studies engage with a wide range of social justice concerns.

Business

Business communication emerged as a field of study in the late 20th century, due to the centrality of communication within business relationships. The scope of the field is difficult to define because communication is used in various ways among employers, employees, consumers, and brands. Because of this, the focus of the field is usually placed on the demands of employers, as reflected in the revision of the American Assembly of Collegiate Schools of Business standards to emphasize written and oral communication as an important characteristic of the curriculum. Business communication studies, therefore, revolve around the ever-changing aspects of written and oral communication directly related to the field of business. Implementation of modern business communication curricula is enhancing the study of business communication as a whole, while better preparing students to communicate effectively in the business community.

Healthcare

Health communication is a multidisciplinary field that applies "communication evidence, strategy, theory, and creativity" to advance the well-being of people and populations. The term was first coined in 1975 by the International Communication Association and, in 1997, Health communication was officially recognized in the broader fields of Public Health Education and Health Promotion by the American Public Health Association. The discipline integrates components of various theories and models, with a focus on social marketing. It uses marketing to develop "activities and interventions designed to positively change behaviors." This emergence affected several dynamics of the healthcare system. It raised awareness of various avenues, including promotional activities and communication with health professionals' employees, patients, and constituents. "Efforts to create marketing-oriented organizations called for the widespread dissemination of information", putting a spotlight on theories of "communication, the communication process, and the techniques that were being utilized to communicate in other settings." Now, health care organizations of all types are using things like social media. "Uses include communicating with the community and patients; enhancing organizational visibility; marketing products and services; establishing a venue for acquiring news about activities, promotions, and fund-raising; providing a channel for patient resources and education; and providing customer service and support."

Digital library

From Wikipedia, the free encyclopedia
 
The Biodiversity Heritage Library website, an example of a digital library

A digital library, also referred to as an online library, an internet library, a digital repository, a library without walls, or a digital collection, is an online database of digital resources (text, still images, audio, video, digital documents, or other digital media formats), or a library accessible through the internet. Objects can consist of digitized content, such as print or photographs, as well as originally produced digital content, including word processor files or social media posts. In addition to storing content, digital libraries provide mechanisms for organizing, searching, and retrieving content from the collection. Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations. The digital content may be stored locally, or accessed remotely via computer networks. These information retrieval systems are able to exchange information with each other through interoperability and sustainability.

History

The early history of digital libraries is not well documented, but several key thinkers are connected to the emergence of the concept. Predecessors include Paul Otlet and Henri La Fontaine's Mundaneum, an attempt begun in 1895 to gather and systematically catalogue the world's knowledge, with the hope of bringing about world peace. The visions of the digital library were largely realized a century later during the great expansion of the Internet.[5]

Vannevar Bush and J. C. R. Licklider were two pioneers who advanced this idea into contemporary technology. Bush had supported research that led to the bomb that was dropped on Hiroshima. After seeing the city's destruction, he wanted to create a machine that would show how technology can lead to understanding instead of destruction. This machine would include a desk with two screens, switches and buttons, and a keyboard. He named this the "Memex". This way individuals would be able to access stored books and files at a rapid speed. In 1956, the Ford Foundation funded Licklider to analyze how libraries could be improved with technology. Almost a decade later, his book Libraries of the Future included his vision. He wanted to create a system that would use computers and networks, thereby ensuring the accessibility of human knowledge for human needs and ensuring automatic feedback for machine purposes. This system contained three components: the corpus of knowledge, the question, and the answer. Licklider called it a procognitive system.

In 1980, the role of the library in an electronic society was the focus of a clinic on library applications of data processing. Participants included Frederick Wilfrid Lancaster, Derek De Solla Price, Gerard Salton, and Michael Gorman.

Early projects centered on the creation of an electronic card catalogue known as Online Public Access Catalog (OPAC). By the 1980s, the success of these endeavors resulted in OPAC replacing the traditional card catalog in many academic, public and special libraries. This permitted libraries to undertake additional rewarding co-operative efforts to support resource sharing and expand access to library materials beyond an individual library.

An early example of a digital library is the Education Resources Information Center (ERIC), a database of education citations, abstracts and texts that was created in 1964 and made available online through DIALOG in 1969.

In 1994, digital libraries became widely visible in the research community due to a $24.4 million NSF managed program supported jointly by DARPA's Intelligent Integration of Information (I3) program, NASA, and NSF itself. Successful research proposals came from six U.S. universities. The universities included Carnegie Mellon University, University of California-Berkeley, University of Michigan, University of Illinois, University of California-Santa Barbara, and Stanford University. Articles from the projects summarized their progress at their halfway point in May 1996. Stanford research, by Sergey Brin and Larry Page, led to the founding of Google.

Early attempts at creating a model for digital libraries included the DELOS Digital Library Reference Model and the 5S Framework.

Terminology

The term digital library was first popularized by the NSF/DARPA/NASA Digital Libraries Initiative in 1994. With the availability of computer networks, information resources are expected to stay distributed and be accessed as needed, whereas in Vannevar Bush's essay As We May Think (1945) they were to be collected and kept within the researcher's Memex.

The term virtual library was initially used interchangeably with digital library, but is now primarily used for libraries that are virtual in other senses (such as libraries which aggregate distributed content). In the early days of digital libraries, there was discussion of the similarities and differences among the terms digital, virtual, and electronic.

A distinction is often made between content that was created in a digital format, known as born-digital, and information that has been converted from a physical medium, e.g. paper, through digitization. Not all electronic content is in digital data format. The term hybrid library is sometimes used for libraries that have both physical collections and electronic collections. For example, American Memory is a digital library within the Library of Congress.

Some important digital libraries also serve as long term archives, such as arXiv and the Internet Archive. Others, such as the Digital Public Library of America, seek to make digital information from various institutions widely accessible online.

Types of digital libraries

Institutional repositories

Many academic libraries are actively involved in building repositories of their institution's books, papers, theses, and other works that can be digitized or were 'born digital'. Many of these repositories are made available to the general public with few restrictions, in accordance with the goals of open access, in contrast to the publication of research in commercial journals, where the publishers usually limit access rights. Irrespective of access rights, institutional, truly free, and corporate repositories can be referred to as digital libraries. Institutional repository software is designed for archiving, organizing, and searching a library's content. Popular open-source solutions include DSpace, Greenstone Digital Library (GSDL), EPrints, Digital Commons, and the Fedora Commons-based systems Islandora and Samvera.

National library collections

Legal deposit is often covered by copyright legislation and sometimes by laws specific to legal deposit, and requires that one or more copies of all material published in a country should be submitted for preservation in an institution, typically the national library. Since the advent of electronic documents, legislation has had to be amended to cover the new formats, such as the 2016 amendment to the Copyright Act 1968 in Australia.

Since then various types of electronic depositories have been built. The British Library's Publisher Submission Portal and the German model at the Deutsche Nationalbibliothek have one deposit point for a network of libraries, but public access is only available in the reading rooms in the libraries. The Australian National edeposit system has the same features, but also allows for remote access by the general public for most of the content.

Digital archives

Physical archives differ from physical libraries in several ways. Traditionally, archives are defined as:

  1. Containing primary sources of information (typically letters and papers directly produced by an individual or organization) rather than the secondary sources found in a library (books, periodicals, etc.).
  2. Having their contents organized in groups rather than individual items.
  3. Having unique contents.

The technology used to create digital libraries is even more revolutionary for archives since it breaks down the second and third of these general rules. In other words, "digital archives" or "online archives" will still generally contain primary sources, but they are likely to be described individually rather than (or in addition to) in groups or collections. Further, because they are digital, their contents are easily reproducible and may indeed have been reproduced from elsewhere. The Oxford Text Archive is generally considered to be the oldest digital archive of academic physical primary source materials.

Archives differ from libraries in the nature of the materials held. Libraries collect individual published books and serials, or bounded sets of individual items. The books and journals held by libraries are not unique, since multiple copies exist and any given copy will generally prove as satisfactory as any other copy. The material in archives and manuscript libraries is "the unique records of corporate bodies and the papers of individuals and families".

A fundamental characteristic of archives is that they have to keep the context in which their records have been created and the network of relationships between them in order to preserve their informative content and provide understandable and useful information over time. The fundamental characteristic of archives resides in their hierarchical organization expressing the context by means of the archival bond.

Archival descriptions are the fundamental means to describe, understand, retrieve and access archival material. At the digital level, archival descriptions are usually encoded by means of the Encoded Archival Description XML format. The EAD is a standardized electronic representation of archival description which makes it possible to provide union access to detailed archival descriptions and resources in repositories distributed throughout the world.

Given the importance of archives, a dedicated formal model, called NEsted SeTs for Object Hierarchies (NESTOR), built around their peculiar constituents, has been defined. NESTOR is based on the idea of expressing the hierarchical relationships between objects through the inclusion property between sets, in contrast to the binary relation between nodes exploited by the tree. NESTOR has been used to formally extend the 5S model to define a digital archive as a specific case of digital library able to take into consideration the peculiar features of archives.

CAD library

A computer-aided design library or CAD library is a cloud-based repository of 3D models or parts for computer-aided design (CAD), computer-aided engineering (CAE), computer-aided manufacturing (CAM), or building information modeling (BIM). The models can be free and open source, or proprietary, requiring a paid subscription for access to the CAD library's 3D models. Generative AI CAD libraries are being developed using linked open data of schematics and diagrams.

Assets

CAD libraries can have assets such as 3D models, materials/textures, bump maps, trees/plants, HDRIs, and different Computer graphics lighting sources to be rendered.


2D graphics repository of digital art

A 2D graphics repository or library contains vector graphics or raster graphics images and icons that can be free to use or proprietary.

In-game book libraries

In-game book libraries are virtual collections of written works that players can read, share, or interact with inside a video game. Unlike programming libraries, these libraries function as narrative or educational spaces, often mirroring real-world libraries and sometimes providing access to texts that are censored or unavailable in the player's region. The most prominent example is The Uncensored Library, a Minecraft map that distributes banned books worldwide. Another example is NaNa-Library, which is known for its novels and is designed to give exposure to lesser-known writers.

Features of digital libraries

The advantages of digital libraries as a means of easily and rapidly accessing books, archives and images of various types are now widely recognized by commercial interests and public bodies alike.

Traditional libraries are limited by storage space; digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it. As such, the cost of maintaining a digital library can be much lower than that of a traditional library. A physical library must spend large sums of money paying for staff, book maintenance, rent, and additional books. Digital libraries may reduce or, in some instances, do away with these fees. Both types of library require cataloging input to allow users to locate and retrieve material. Digital libraries may be more willing to adopt innovations in technology, providing users with improvements in electronic and audio book technology as well as presenting new forms of communication such as wikis and blogs; conventional libraries may consider that providing online access to their OPAC catalog is sufficient. An important advantage of digital conversion is increased accessibility to users. Digital libraries also increase availability to individuals who may not be traditional patrons of a library, whether because of geographic location or organizational affiliation.

  • No physical boundary: The user of a digital library need not go to the library physically; people from all over the world can gain access to the same information, as long as an Internet connection is available.
  • Round the clock availability: A major advantage of digital libraries is that people can gain access 24/7 to the information.
  • Multiple access: The same resources can be used simultaneously by a number of institutions and patrons. This may not be the case for copyrighted material: a library may have a license for "lending out" only one copy at a time; this is achieved with a system of digital rights management where a resource can become inaccessible after expiration of the lending period or after the lender chooses to make it inaccessible (equivalent to returning the resource).
  • Information retrieval: The user is able to use any search term (word, phrase, title, name, subject) to search the entire collection. Digital libraries can provide very user-friendly interfaces, giving clickable access to their resources.
  • Preservation and conservation: Digitization is not a long-term preservation solution for physical collections, but does succeed in providing access copies for materials that would otherwise fall to degradation from repeated use. Digitized collections and born-digital objects pose many preservation and conservation concerns that analog materials do not. See § Digital preservation for examples.
  • Space: Whereas traditional libraries are limited by storage space, digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it and media storage technologies are more affordable than ever before.
  • Added value: Certain characteristics of objects, primarily the quality of images, may be improved. Digitization can enhance legibility and remove visible flaws such as stains and discoloration.

Software

A variety of software packages are used to build and run digital libraries. Institutional repository software focuses primarily on the ingest, preservation, and access of locally produced documents, particularly locally produced academic outputs. This software may be proprietary, as is the case with the Library of Congress, which uses Digiboard and CTS to manage digital content.

Digital libraries are designed and implemented so that computer systems and software can make use of the information when it is exchanged; such systems are referred to as semantic digital libraries. Semantic libraries are also used to interact with different communities across a mass of social networks. DjDL is one type of semantic digital library, and its two main types of search are keyword-based and semantic search. Within the semantic search, a tool creates groups for the augmentation and refinement of keyword-based searches. The conceptual knowledge used in DjDL is centered on two forms: the subject ontology and the set of concept search patterns based on the ontology. Three types of ontologies are associated with this search: bibliographic ontologies, community-aware ontologies, and subject ontologies.
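
A minimal, hypothetical sketch of ontology-based query augmentation of the kind described above follows: a keyword query is expanded with narrower terms taken from a tiny hand-made subject ontology before matching records. The ontology and records below are invented for illustration and are not part of DjDL itself.

    # Expand a query term via a toy subject ontology, then match record titles.
    subject_ontology = {
        "music": ["opera", "jazz", "folk song"],
        "opera": ["libretto"],
    }

    records = [
        {"id": 1, "title": "A history of Italian opera"},
        {"id": 2, "title": "Jazz in the twentieth century"},
        {"id": 3, "title": "Medieval manuscripts"},
    ]

    def expand(term):
        """Return the term plus all narrower terms reachable in the ontology."""
        terms, stack = set(), [term]
        while stack:
            t = stack.pop()
            if t not in terms:
                terms.add(t)
                stack.extend(subject_ontology.get(t, []))
        return terms

    def semantic_search(term):
        keywords = expand(term)
        return [r for r in records
                if any(k in r["title"].lower() for k in keywords)]

    print(semantic_search("music"))   # matches the opera and jazz records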

Metadata

In traditional libraries, the ability to find works of interest is directly related to how well they were cataloged. While cataloging electronic works digitized from a library's existing holdings may be as simple as copying or moving a record from the print to the electronic form, complex and born-digital works require substantially more effort. To handle the growing volume of electronic publications, new tools and technologies have to be designed to allow effective automated semantic classification and searching. While full-text search can be used for some items, there are many common catalog searches which cannot be performed using full text (a brief sketch of metadata-assisted matching follows the list), including:

  • finding texts which are translations of other texts
  • differentiating between editions/volumes of a text/periodical
  • inconsistent descriptors (especially subject headings)
  • missing, deficient or poor quality taxonomy practices
  • linking texts published under pseudonyms to the real authors (Samuel Clemens and Mark Twain, for example)
  • differentiating non-fiction from parody (The Onion from The New York Times)
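
The following is a hypothetical sketch of why structured metadata helps with cases like the last one above: an authority field links works published under a pseudonym to the real author, something full-text search over the works themselves cannot reliably do. The field names and catalog records are invented for illustration.

    # A toy catalog in which an authority field ties pseudonymous works together.
    catalog = [
        {"title": "Adventures of Huckleberry Finn",
         "author_display": "Mark Twain", "author_authority": "Clemens, Samuel"},
        {"title": "Life on the Mississippi",
         "author_display": "Mark Twain", "author_authority": "Clemens, Samuel"},
        {"title": "The Innocents Abroad",
         "author_display": "Samuel Clemens", "author_authority": "Clemens, Samuel"},
    ]

    def works_by_authority(name):
        """Return all titles linked to one authority record, whatever name is printed."""
        return [r["title"] for r in catalog if r["author_authority"] == name]

    # Finds all three works regardless of which name appears on the title page.
    print(works_by_authority("Clemens, Samuel"))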

Searching

Most digital libraries provide a search interface which allows resources to be found. These resources are typically deep web (or invisible web) resources, since they frequently cannot be located by search engine crawlers. Some digital libraries create special pages or sitemaps to allow search engines to find all their resources. Digital libraries frequently use the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to expose their metadata to other digital libraries, and search engines like Google Scholar, Yahoo! and Scirus can also use OAI-PMH to find these deep web resources. As with physical libraries, relatively little is known about how users actually select books.
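
As a minimal sketch of what OAI-PMH harvesting looks like in practice: the protocol carries verbs such as ListRecords as query parameters on a plain HTTP GET against a repository's base URL, which returns an XML response. The base URL below is a placeholder; a real repository's OAI-PMH endpoint would be substituted.

    # Request one page of Dublin Core records from an OAI-PMH endpoint.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    BASE_URL = "https://example.org/oai"   # placeholder endpoint

    def list_records(metadata_prefix="oai_dc"):
        """Fetch one page of records in the requested metadata format."""
        query = urlencode({"verb": "ListRecords", "metadataPrefix": metadata_prefix})
        with urlopen(f"{BASE_URL}?{query}") as response:
            return response.read().decode("utf-8")   # XML to be parsed downstream

    # print(list_records()[:500])   # first part of the XML response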

There are two general strategies for searching a federation of digital libraries: distributed searching and searching previously harvested metadata.

Distributed searching typically involves a client sending multiple search requests in parallel to a number of servers in the federation. The results are gathered, duplicates are eliminated or clustered, and the remaining items are sorted and presented back to the client. Protocols like Z39.50 are frequently used in distributed searching. A benefit of this approach is that the resource-intensive tasks of indexing and storage are left to the respective servers in the federation. A drawback is that the search mechanism is limited by the differing indexing and ranking capabilities of each database, making it difficult to assemble a combined result consisting of the most relevant items found.
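
A conceptual sketch of this flow follows, under the assumption that each federation member exposes some search call; real deployments would use protocols such as Z39.50 or SRU, whereas here the members are simulated with plain functions, and the library names, scores, and merging policy are invented for illustration.

    # Query several simulated federation members in parallel, then merge results.
    from concurrent.futures import ThreadPoolExecutor

    def library_a(query):
        return [{"id": "doc1", "title": "Digital preservation basics", "score": 0.9}]

    def library_b(query):
        return [{"id": "doc1", "title": "Digital preservation basics", "score": 0.7},
                {"id": "doc2", "title": "Metadata harvesting", "score": 0.6}]

    def federated_search(query, servers):
        # Send the query to every server in parallel.
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(lambda search: search(query), servers))
        # Merge, dropping duplicates by identifier and keeping the best score.
        merged = {}
        for results in result_lists:
            for r in results:
                if r["id"] not in merged or r["score"] > merged[r["id"]]["score"]:
                    merged[r["id"]] = r
        # Ranking is limited by whatever scores each server chose to return.
        return sorted(merged.values(), key=lambda r: r["score"], reverse=True)

    print(federated_search("digital preservation", [library_a, library_b]))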

Searching over previously harvested metadata involves searching a locally stored index of information that has previously been collected from the libraries in the federation. When a search is performed, the search mechanism does not need to make connections with the digital libraries it is searching—it already has a local representation of the information. This approach requires the creation of an indexing and harvesting mechanism which operates regularly, connecting to all the digital libraries and querying the whole collection in order to discover new and updated resources. OAI-PMH is frequently used by digital libraries for allowing metadata to be harvested. A benefit to this approach is that the search mechanism has full control over indexing and ranking algorithms, possibly allowing more consistent results. A drawback is that harvesting and indexing systems are more resource-intensive and therefore expensive.
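
The following is a minimal sketch of the harvested-metadata approach: records collected earlier (for example via OAI-PMH) are indexed locally, so queries never touch the remote libraries at search time. The records here are invented for illustration.

    # Build a local inverted index over harvested records and query it offline.
    from collections import defaultdict

    harvested = [
        {"id": "oai:lib-a:1", "title": "Digital preservation basics"},
        {"id": "oai:lib-b:7", "title": "Metadata harvesting in practice"},
    ]

    # Map each word to the set of record identifiers whose titles contain it.
    index = defaultdict(set)
    for record in harvested:
        for word in record["title"].lower().split():
            index[word].add(record["id"])

    def local_search(query):
        """Intersect the posting lists of all query words."""
        postings = [index.get(w, set()) for w in query.lower().split()]
        return set.intersection(*postings) if postings else set()

    print(local_search("digital preservation"))   # {"oai:lib-a:1"}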

Digital preservation

Digital preservation aims to ensure that digital media and information systems are still interpretable into the indefinite future. Each necessary component of this must be migrated, preserved or emulated. Typically lower levels of systems (floppy disks for example) are emulated, bit-streams (the actual files stored in the disks) are preserved and operating systems are emulated as a virtual machine. Only where the meaning and content of digital media and information systems are well understood is migration possible, as is the case for office documents. However, at least one organization, the Wider Net Project, has created an offline digital library, the eGranary, by reproducing materials on a 6 TB hard drive. Instead of a bit-stream environment, the digital library contains a built-in proxy server and search engine so the digital materials can be accessed using a web browser. Also, the materials are not preserved for the future. The eGranary is intended for use in places or situations where Internet connectivity is very slow, non-existent, unreliable, unsuitable or too expensive.
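
One common ingredient of the bit-stream preservation mentioned above is fixity checking: record a checksum when a file enters the repository and verify it later to detect silent corruption. The sketch below shows this with Python's standard library; the file path in the comments is a placeholder.

    # Compute a fixity (checksum) value for a preserved bit stream.
    import hashlib

    def fixity(path):
        """Return the SHA-256 digest of a file's bit stream."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # On ingest:        stored_checksum = fixity("masters/page001.tiff")
    # On a later audit: assert fixity("masters/page001.tiff") == stored_checksum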

In the past few years, procedures for digitizing books at high speed and comparatively low cost have improved considerably, with the result that it is now possible to digitize millions of books per year. The Google book-scanning project is also working with libraries to offer digitized books, pushing the digitization of books forward.

Digital libraries are hampered by copyright law because, unlike with traditional printed works, the laws of digital copyright are still being formed. The republication of material on the web by libraries may require permission from rights holders, and there is a conflict of interest between libraries and the publishers who may wish to create online versions of their acquired content for commercial purposes. In 2010, it was estimated that twenty-three percent of books in existence were created before 1923 and thus out of copyright. Of those printed after this date, only five percent were still in print as of 2010. Thus, approximately seventy-two percent of books were not available to the public.

There is a dilution of responsibility that occurs as a result of the distributed nature of digital resources. Complex intellectual property matters may become involved, since digital material is not always owned by a library. In many cases the content is public domain or self-generated content only. Some digital libraries, such as Project Gutenberg, work to digitize out-of-copyright works and make them freely available to the public. An estimate of the number of distinct books still in existence in library catalogues from 2000 BC to 1960 has been made.

The Fair Use Provisions (17 USC § 107) under the Copyright Act of 1976 provide specific guidelines for the circumstances under which libraries are allowed to copy digital resources. Four factors that constitute fair use are "Purpose of the use, Nature of the work, Amount or substantiality used and Market impact".

Some digital libraries acquire a license to lend their resources. This may involve the restriction of lending out only one copy at a time for each license, and applying a system of digital rights management for this purpose.
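
A hypothetical sketch of this licensing constraint follows: one concurrent loan per licensed copy, with loans that expire automatically. The class and field names are invented for illustration and do not reflect any particular digital rights management system.

    # Model a single licensed copy that can only be on loan to one patron at a time.
    from datetime import datetime, timedelta

    class LicensedCopy:
        def __init__(self, loan_days=14):
            self.loan_days = loan_days
            self.due = None                    # None means the copy is available

        def available(self, now=None):
            now = now or datetime.now()
            return self.due is None or now >= self.due   # expired loans free the copy

        def lend(self, now=None):
            now = now or datetime.now()
            if not self.available(now):
                raise RuntimeError("copy already on loan")
            self.due = now + timedelta(days=self.loan_days)
            return self.due

    copy = LicensedCopy()
    print(copy.lend())        # first patron borrows the copy
    print(copy.available())   # False until the loan period ends or the copy is returned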

The Digital Millennium Copyright Act of 1998 was an act created in the United States to attempt to deal with the introduction of digital works. This Act incorporates two treaties from the year 1996. It criminalizes attempts to circumvent measures that limit access to copyrighted materials, including the act of circumventing access controls. The act provides an exemption for nonprofit libraries and archives which allows up to three copies to be made, one of which may be digital. These copies may not be made public or distributed on the web, however. Further, it allows libraries and archives to copy a work if its format becomes obsolete.

Copyright issues persist. As such, proposals have been put forward suggesting that digital libraries be exempt from copyright law. Although this would be very beneficial to the public, it may have a negative economic effect and authors may be less inclined to create new works.

Another issue that complicates matters is the desire of some publishing houses to restrict the use of digital materials such as e-books purchased by libraries. Whereas with printed books the library owns the book until it can no longer be circulated, publishers want to limit the number of times an e-book can be checked out before the library must repurchase that book. "[HarperCollins] began licensing use of each e-book copy for a maximum of 26 loans. This affects only the most popular titles and has no practical effect on others. After the limit is reached, the library can repurchase access rights at a lower cost than the original price." While from a publishing perspective this may sound like a good balance between library lending and protection against a feared decrease in book sales, libraries are not set up to monitor their collections in this way. They acknowledge the increased demand for digital materials available to patrons and the desire of digital libraries to expand their offerings to include best sellers, but publisher licensing may hinder the process.

Recommendation systems

Many digital libraries offer recommender systems to reduce information overload and help their users discover relevant literature. Examples of digital libraries offering recommender systems include IEEE Xplore, Europeana, and GESIS Sowiport. These recommender systems work mostly on the basis of content-based filtering, but other approaches such as collaborative filtering and citation-based recommendations are also used. Beel et al. report that there are more than 90 different recommendation approaches for digital libraries, presented in more than 200 research articles.

Typically, digital libraries develop and maintain their own recommender systems based on existing search and recommendation frameworks such as Apache Lucene or Apache Mahout.
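
A minimal sketch of content-based filtering of the kind mentioned above follows: items whose descriptions are most similar (cosine similarity over simple term counts) to the item a user is reading are recommended. The paper titles are invented for illustration; production systems typically build on frameworks such as Apache Lucene rather than hand-rolled code like this.

    # Recommend items by cosine similarity over term-count vectors of titles.
    from collections import Counter
    from math import sqrt

    papers = {
        "p1": "metadata harvesting with the OAI-PMH protocol",
        "p2": "copyright and fair use in digital libraries",
        "p3": "harvesting and indexing metadata at scale",
    }

    def vector(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def recommend(current_id, k=2):
        current = vector(papers[current_id])
        scores = {pid: cosine(current, vector(text))
                  for pid, text in papers.items() if pid != current_id}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("p1"))   # p3 ranks first: it shares "metadata" and "harvesting"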

Drawbacks of digital libraries

Digital libraries, or at least their digital collections, have also brought their own problems and challenges in a number of areas.

There are many large scale digitisation projects that perpetuate these problems.

Future development

Large scale digitization projects are underway at Google, the Million Book Project, and Internet Archive. With continued improvements in book handling and presentation technologies such as optical character recognition, and the development of alternative depositories and business models, digital libraries are rapidly growing in popularity. Just as libraries have ventured into audio and video collections, so have digital libraries such as the Internet Archive. In 2016, the Google Books project received a court victory allowing it to proceed with its book-scanning project, which had been halted by the Authors Guild. This helped open the road for libraries to work with Google to better reach patrons who are accustomed to computerized information.

According to Larry Lannom, Director of Information Management Technology at the nonprofit Corporation for National Research Initiatives (CNRI), "all the problems associated with digital libraries are wrapped up in archiving". He goes on to state, "If in 100 years people can still read your article, we'll have solved the problem." Daniel Akst, author of The Webster Chronicle, proposes that "the future of libraries—and of information—is digital". Peter Lyman and Hal Varian, information scientists at the University of California, Berkeley, estimate that "the world's total yearly production of print, film, optical, and magnetic content would require roughly 1.5 billion gigabytes of storage". Therefore, they believe that "soon it will be technologically possible for an average person to access virtually all recorded information".

Digital archives are an evolving medium, and they develop under various circumstances. Alongside large-scale repositories, other digital archiving projects have evolved in response to needs in research and research communication at various institutional levels. For example, during the COVID-19 pandemic, libraries and higher education institutions launched digital archiving projects to document life during the pandemic, creating a digital, cultural record of collective memories from the period. Researchers have also used digital archiving to create specialized research databases. These databases compile digital records for use on international and interdisciplinary levels. COVID CORPUS, launched in October 2020, is an example of such a database, built in response to scientific communication needs in light of the pandemic. Beyond academia, digital collections have also recently been developed to appeal to a more general audience, as is the case with the Selected General Audience Content of the Internet-First University Press developed by Cornell University. This general-audience database contains specialized research information but is digitally organized for accessibility. The establishment of these archives has facilitated specialized forms of digital recordkeeping that fulfill various niches in online, research-based communication.

Interplanetary Internet

From Wikipedia, the free encyclopedia

The speed of light, illustrated here by a beam of light traveling ...