In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.
Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference,
and memory ownership, and combinations thereof. Garbage collection may
take a significant proportion of a program's total processing time, and
affect performance as a result.
Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors). Some such methods de-allocate memory also.
Overview
Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be garbage-collected languages. Other languages, such as C and C++,
were designed for use with manual memory management, but have
garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D,
are garbage-collected but allow the user to manually delete objects or
even disable garbage collection entirely when speed is required.
Although many languages integrate GC into their compiler and runtime system, post-hoc GC systems also exist, such as Automatic Reference Counting (ARC). Some of these post-hoc GC systems do not require recompilation. Post-hoc GC is sometimes called litter collection, to distinguish it from ordinary GC.
Advantages
GC frees the programmer from manually de-allocating memory. This helps avoid some kinds of errors:
Dangling pointers, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is dereferenced. By then the memory may have been reassigned to another use, with unpredictable results.
Double free bugs, which occur when the program tries to free a
region of memory that has already been freed, and perhaps already been
allocated again.
Certain kinds of memory leaks, in which a program fails to free memory occupied by objects that have become unreachable, which can lead to memory exhaustion.
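For illustration, a minimal C++ sketch of the three error classes above (the hazardous lines are commented out; under garbage collection the free calls simply disappear):

```cpp
#include <cstdlib>

int main() {
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    int* q = p;        // second pointer to the same block
    std::free(p);
    // *q = 42;        // dangling pointer: q still refers to freed memory
    // std::free(q);   // double free: the block was already released

    int* leak = static_cast<int*>(std::malloc(sizeof(int)));
    leak = nullptr;    // memory leak: the block is now unreachable but never freed
    return 0;
}
```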
Disadvantages
GC
uses computing resources to decide which memory to free. Therefore, the
penalty for the convenience of not annotating object lifetime manually
in the source code is overhead, which can impair program performance.
A peer-reviewed paper from 2005 concluded that GC needs five times the
memory to compensate for this overhead and to perform as fast as the
same program using idealised explicit memory management. However, the comparison is made to a program whose deallocation calls were inserted using an oracle, implemented by collecting traces from programs run under a profiler; such a program is only correct for one particular execution.
Interaction with memory hierarchy effects can make this overhead
intolerable in circumstances that are hard to predict or to detect in
routine testing. The impact on performance was given by Apple as a
reason for not adopting garbage collection in iOS, despite it being the most desired feature.
The moment when the garbage is actually collected can be
unpredictable, resulting in stalls (pauses to shift/free memory)
scattered throughout a session. Unpredictable stalls can be unacceptable
in real-time environments, in transaction processing,
or in interactive programs. Incremental, concurrent, and real-time
garbage collectors address these problems, with varying trade-offs.
Tracing garbage collection
is the most common type of garbage collection, so much so that "garbage
collection" often refers to tracing garbage collection, rather than
other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are reachable
by a chain of references from certain root objects, and considering the
rest as garbage and collecting them. However, there are a large number
of algorithms used in implementation, with widely varying complexity and
performance characteristics.
Reference counting garbage collection is where each object has a
count of the number of references to it. Garbage is identified by having
a reference count of zero. An object's reference count is incremented
when a reference to it is created, and decremented when a reference is
destroyed. When the count reaches zero, the object's memory is
reclaimed.
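A minimal sketch of this mechanism in C++, using an intrusive count (the names acquire and release are illustrative; the sketch is not thread-safe, see "Requires atomicity" below):

```cpp
#include <cstddef>

struct RefCounted {
    std::size_t refs = 0;          // number of references to this object
    virtual ~RefCounted() = default;
};

// Called when a reference is created.
inline void acquire(RefCounted* o) { if (o) ++o->refs; }

// Called when a reference is destroyed; reclaims the object at count zero.
inline void release(RefCounted* o) {
    if (o && --o->refs == 0) delete o;
}

int main() {
    RefCounted* o = new RefCounted;
    acquire(o);   // first reference: count is 1
    acquire(o);   // second reference: count is 2
    release(o);   // count back to 1
    release(o);   // count reaches 0: o is deleted
}
```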
As with manual memory management, and unlike tracing garbage
collection, reference counting guarantees that objects are destroyed as
soon as their last reference is destroyed, and usually only accesses
memory which is either in CPU caches,
in objects to be freed, or directly pointed to by those, and thus tends
to not have significant negative side effects on CPU cache and virtual memory operation.
There are a number of disadvantages to reference counting; these
can generally be solved or mitigated by more sophisticated algorithms:
Cycles
If two or more objects refer to each other, they can create a cycle
whereby neither will be collected as their mutual references never let
their reference counts become zero. Some garbage collection systems
using reference counting (like the one in CPython) use specific cycle-detecting algorithms to deal with this issue. Another strategy is to use weak references
for the "backpointers" which create cycles. Under reference counting, a
weak reference is similar to a weak reference under a tracing garbage
collector. It is a special reference object whose existence does not
increment the reference count of the referent object. Furthermore, a
weak reference is safe in that when the referent object becomes garbage,
any weak reference to it lapses, rather than being permitted to remain dangling, meaning that it turns into a predictable value, such as a null reference.
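In C++, for example, the same pattern can be expressed with std::shared_ptr and std::weak_ptr (the Parent/Child names are illustrative):

```cpp
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;   // owning reference
};

struct Child {
    std::weak_ptr<Parent> parent;   // weak backpointer: no count increment
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;           // a shared_ptr here would form a leaking cycle;
                                    // the weak_ptr breaks it

    std::weak_ptr<Parent> w = p;
    p.reset();                      // destroys Parent and, transitively, Child
    bool gone = w.expired();        // true: the weak reference lapsed predictably
    (void)gone;
}
```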
Space overhead (reference count)
Reference counting requires space to be allocated for each object to
store its reference count. The count may be stored adjacent to the
object's memory or in a side table somewhere else, but in either case,
every single reference-counted object requires additional storage for
its reference count. Memory space with the size of an unsigned pointer
is commonly used for this task, meaning that 32 or 64 bits of reference
count storage must be allocated for each object. On some systems, it may
be possible to mitigate this overhead by using a tagged pointer
to store the reference count in unused areas of the object's memory.
Often, an architecture does not actually allow programs to access the
full range of memory addresses that could be stored in its native
pointer size; a certain number of high bits in the address are either
ignored or required to be zero. If an object reliably has a pointer at a
certain location, the reference count can be stored in the unused bits
of the pointer. For example, each object in Objective-C has a pointer to its class at the beginning of its memory; on the ARM64 architecture using iOS 7, 19 unused bits of this class pointer are used to store the object's reference count.
Speed overhead (increment/decrement)
In naive implementations, each assignment of a reference and each
reference falling out of scope often require modifications of one or
more reference counters. However, in a common case when a reference is
copied from an outer scope variable into an inner scope variable, such
that the lifetime of the inner variable is bounded by the lifetime of
the outer one, the reference incrementing can be eliminated. The outer
variable "owns" the reference. In the programming language C++, this
technique is readily implemented and demonstrated with the use of const references. Reference counting in C++ is usually implemented using "smart pointers"
whose constructors, destructors and assignment operators manage the
references. A smart pointer can be passed by reference to a function,
which avoids the need to copy-construct a new smart pointer (which would
increase the reference count on entry into the function and decrease it
on exit). Instead the function receives a reference to the smart
pointer which is produced inexpensively. The Deutsch-Bobrow method of
reference counting capitalizes on the fact that most reference count
updates are in fact generated by references stored in local variables.
It ignores these references, only counting references in the heap, but
before an object with reference count zero can be deleted, the system
must verify with a scan of the stack and registers that no other
reference to it still exists. A further substantial decrease in the
overhead on counter updates can be obtained by update coalescing
introduced by Levanoni and Petrank. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object O1, then to an object O2, and so forth until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++.
But most of these updates are redundant. In order to have the reference
count properly evaluated at the end of the interval it is enough to
perform rc(O1)-- and rc(On)++. Levanoni and Petrank measured an elimination of more than 99% of the counter updates in typical Java benchmarks.
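The const-reference technique mentioned above can be sketched as follows (Widget and the function names are illustrative):

```cpp
#include <memory>

struct Widget { int value = 0; };

// Pass by const reference: the outer scope "owns" the reference,
// so no count update occurs on entry or exit.
int read(const std::shared_ptr<Widget>& w) { return w->value; }

// Pass by value: copy-construction increments the count on entry
// and the destructor decrements it on exit.
int readCopy(std::shared_ptr<Widget> w) { return w->value; }

int main() {
    auto w = std::make_shared<Widget>();
    return read(w) + readCopy(w);
}
```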
Requires atomicity
When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic operations such as compare-and-swap,
at least for any objects which are shared, or potentially shared among
multiple threads. Atomic operations are expensive on a multiprocessor,
and even more expensive if they have to be emulated with software
algorithms. It is possible to avoid this issue by adding per-thread or
per-CPU reference counts and only accessing the global reference count
when the local reference counts become zero or cease to be zero (or,
alternatively, using a binary tree of reference counts, or even giving
up deterministic destruction in exchange for not having a global
reference count at all), but this adds significant memory overhead and
thus tends to be only useful in special cases (it is used, for example,
in the reference counting of Linux kernel modules). Update coalescing by
Levanoni and Petrank can be used to eliminate all atomic operations from the write-barrier.
Counters are never updated by the program threads in the course of
program execution. They are only modified by the collector which
executes as a single additional thread with no synchronization. This
method can be used as a stop-the-world mechanism for parallel programs,
and also with a concurrent reference counting collector.
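A sketch of what atomic counting looks like in C++; the memory orderings below follow common practice in shared_ptr-style implementations, but exact choices vary:

```cpp
#include <atomic>
#include <cstddef>

struct SharedCount {
    std::atomic<std::size_t> refs{1};   // starts owned by one reference
};

inline void acquire(SharedCount* c) {
    // The incrementing thread already holds a reference,
    // so the increment itself can be relaxed.
    c->refs.fetch_add(1, std::memory_order_relaxed);
}

inline void release(SharedCount* c) {
    // The decrement must order earlier writes before reclamation.
    if (c->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete c;                        // last reference gone: reclaim
}
```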
Not real-time
Naive implementations of reference counting do not generally provide
real-time behavior, because any pointer assignment can potentially
cause a number of objects bounded only by total allocated memory size to
be recursively freed while the thread is unable to perform other work.
It is possible to avoid this issue by delegating the freeing of
unreferenced objects to other threads, at the cost of extra overhead.
Escape analysis is a compile-time technique that can convert heap allocations to stack allocations,
thereby reducing the amount of garbage collection to be done. This
analysis determines whether an object allocated inside a function is
accessible outside of it. If a function-local allocation is found to be
accessible to another function or thread, the allocation is said to
"escape" and cannot be done on the stack. Otherwise, the object may be
allocated directly on the stack and released when the function returns,
bypassing the heap and associated memory management costs.
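An illustration of the distinction in C++ (whether a given compiler actually performs the stack conversion or elides the allocation is implementation-dependent):

```cpp
#include <memory>

int noEscape() {
    // Used only inside this function: the allocation does not escape,
    // so a compiler may place the object on the stack or elide the
    // heap allocation entirely.
    auto tmp = std::make_unique<int>(41);
    return *tmp + 1;
}

int* escapes() {
    int* p = new int(7);
    return p;   // returned to the caller: the allocation "escapes"
                // and must remain on the heap
}
```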
Availability
Generally speaking, higher-level programming languages
are more likely to have garbage collection as a standard feature. In
some languages lacking built in garbage collection, it can be added
through a library, as with the Boehm garbage collector for C and C++.
BASIC and Logo
have often used garbage collection for variable-length data types, such
as strings and lists, so as not to burden programmers with memory
management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly the Applesoft BASIC
interpreter's garbage collection algorithm repeatedly scans the string
descriptors for the string having the highest address in order to
compact it toward high memory, resulting in poor performance, with pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.System, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster.
Objective-C
While Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector.
However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counting (ARC), which was introduced with OS X 10.7. Furthermore, since May 2015 Apple has forbidden the use of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced due to problems with application responsiveness and performance; instead, iOS uses ARC.
Limited environments
Garbage collection is rarely used on embedded
or real-time systems because of the usual need for very tight control
over the use of limited resources. However, garbage collectors
compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.
Java
Garbage collectors available in Java JDKs include the serial collector, the parallel collector, Concurrent Mark Sweep (CMS), Garbage-First (G1), ZGC, and Shenandoah.
Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman.
In Baker's algorithm, the allocation is done in either half of a
single region of memory. When it becomes half full, a garbage collection
is performed which moves the live objects into the other half and the
remaining objects are implicitly deallocated. The running program (the
'mutator') has to check that any object it references is in the correct
half, and if not move it across, while a background task is finding all
of the objects.
Generational garbage collection
schemes are based on the empirical observation that most objects die
young. In generational garbage collection, two or more allocation regions
(generations) are maintained, separated based on the object's age.
New objects are created in the "young" generation that is regularly
collected, and when a generation is full, the objects that are still
referenced from older regions are copied into the next oldest
generation. Occasionally a full scan is performed.
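A toy C++ sketch of a minor (young-generation) collection, under stated simplifications: it marks and promotes in place rather than copying, and scans the whole old generation in place of a remembered set; all names are illustrative:

```cpp
#include <vector>

struct Object {
    std::vector<Object*> refs;   // outgoing references
    bool marked = false;
};

struct Heap {
    std::vector<Object*> young, old, roots;

    Object* allocate() {
        Object* o = new Object();
        young.push_back(o);      // new objects start in the young generation
        return o;
    }

    static void mark(Object* o) {
        if (o->marked) return;
        o->marked = true;
        for (Object* r : o->refs) mark(r);
    }

    // Minor collection: only the young generation is examined; survivors
    // are promoted to the old generation, the rest are reclaimed.
    void minorCollect() {
        for (Object* r : roots) mark(r);
        for (Object* o : old) mark(o);         // stands in for a remembered set
        for (Object* o : young) {
            if (o->marked) old.push_back(o);   // survivor: promote
            else delete o;                     // garbage: reclaim
        }
        young.clear();
        for (Object* o : old) o->marked = false;  // reset marks for the next cycle
    }
};
```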
Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system.
Computer data storage
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy,
which puts fast but expensive and small storage options close to the
CPU and slower but less expensive and larger options further away.
Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's
Analytical Machine, clearly distinguished between processing and memory
(Babbage stored numbers as rotations of gears, while Ludgate stored
numbers as displacements of rods in shuttles). This distinction was
extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without
a significant amount of memory, a computer would merely be able to
perform fixed operations and immediately output the result. It would
have to be reconfigured to change its behavior. This is acceptable for
devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data.
Such computers are more versatile in that they do not need to have
their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte,
equal to 8 bits. A piece of information can be handled by any computer
or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
By adding bits to each encoded unit, redundancy allows the
computer to both detect errors in coded data and correct them based on
mathematical algorithms. Errors generally occur with low probability, caused by random bit flips, by "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value of 0 or 1), or by errors in inter- or intra-computer communication. A
random bit flip (e.g. due to random radiation)
is typically corrected upon detection. A bit or a group of
malfunctioning physical bits (the specific defective bit is not always
known; group definition depends on the specific storage device) is
typically automatically fenced out, taken out of use by the device, and
replaced with another functioning equivalent group in the device, where
the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
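As an illustration, the widely used CRC-32 (the IEEE polynomial in its reflected form, 0xEDB88320) can be computed bit by bit:

```cpp
#include <cstddef>
#include <cstdint>

// Bitwise CRC-32 (IEEE 802.3, reflected polynomial 0xEDB88320).
std::uint32_t crc32(const std::uint8_t* data, std::size_t len) {
    std::uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}
```

Comparing the received data's CRC against a freshly computed one detects the error, after which the transfer or read is retried, as described above; the standard check value for the nine ASCII bytes "123456789" is 0xCBF43926.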
Data compression
methods allow in many cases (such as a database) a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed ("decompress") when needed. This uses substantially less storage (often by tens of percent) for many types of data, at the cost of more computation (compressing and decompressing when needed). The trade-off between the storage cost saving and the cost of the related computations and possible delays in data availability is analyzed before deciding whether to keep certain data compressed or not.
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Generally, the lower a storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory,
is the only one directly accessible to the CPU. The CPU continuously
reads instructions stored there and executes them as required. Any data
actively operated on is also stored there in a uniform manner.
Modern primary storage is random-access memory (RAM), which is small-sized and light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, RAM serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching as long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
Traditionally there are two more sub-layers of primary storage besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit
to perform various calculations or other operations on this data (or
with the help of it). Registers are the fastest of all forms of computer
data storage.
Processor cache
is an intermediate stage between ultra-fast registers and much slower
main memory. It was introduced solely to improve the performance of
computers. Most actively used information in the main memory is just
duplicated in the cache memory, which is faster, but of much lesser
capacity. On the other hand, main memory is much slower, but has a much
greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses: an address bus and a data bus. The CPU first sends a number called the memory address through the address bus, which indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
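A sketch of the recalculation an MMU performs, assuming 4 KiB pages (PageTable and translate are illustrative names):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

constexpr std::uint64_t kPageSize = 4096;   // assumed 4 KiB pages

struct PageTable {
    // virtual page number -> physical frame number
    std::unordered_map<std::uint64_t, std::uint64_t> frames;

    // Split the virtual address into page number and offset, look up the
    // frame, and recombine; an empty result models a page fault.
    std::optional<std::uint64_t> translate(std::uint64_t vaddr) const {
        std::uint64_t page = vaddr / kPageSize, offset = vaddr % kPageSize;
        auto it = frames.find(page);
        if (it == frames.end()) return std::nullopt;   // page fault
        return it->second * kPageSize + offset;
    }
};
```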
As the RAM types used for primary storage are volatile
(uninitialized at start up), a computer containing only such storage
would not have a source to read instructions from, in order to start the
computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as
updates to them are possible; however it is slow and memory must be
erased in large portions before it can be re-written. Some embedded systems
run programs directly from ROM (or similar), because such programs are
rarely changed. Standard computers do not store non-rudimentary programs in ROM, but rather use large capacities of secondary storage, which is also non-volatile and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary storage
Secondary storage (also known as external memory or auxiliary storage)
differs from primary storage in that it is not directly accessible by
the CPU. The computer usually uses its input/output channels to access
secondary storage and transfer the desired data to primary storage.
Secondary storage is non-volatile (retaining data when its power is shut
off). Modern computer systems typically have two orders of magnitude
more secondary storage than primary storage because secondary storage is
less expensive.
Once the disk read/write head
on HDDs reaches the proper placement and the data, subsequent data on
the track are very fast to access. To reduce the seek time and
rotational latency, data are transferred to and from disks in large
contiguous blocks. Sequential or block access on disks is orders of
magnitude faster than random access, and many sophisticated paradigms
have been developed to design efficient algorithms based upon sequential
and block access. Another way to reduce the I/O bottleneck is to use
multiple disks in parallel in order to increase the bandwidth between
primary and secondary memory.[5]
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory,
allowing utilization of more primary storage capacity than is
physically available in the system. As the primary memory fills up, the
system moves the least-used chunks (pages)
to a swap file or page file on secondary storage, retrieving them later
when needed. If a lot of pages are moved to slower secondary storage,
the system performance is degraded.
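A toy sketch of one common "least-used" policy, least-recently-used (LRU) replacement; the class and method names are illustrative, and real systems use cheaper approximations:

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

class LruPager {
    std::size_t capacity_;                  // number of physical frames
    std::list<std::uint64_t> order_;        // most recently used at the front
    std::unordered_map<std::uint64_t,
                       std::list<std::uint64_t>::iterator> where_;
public:
    explicit LruPager(std::size_t frames) : capacity_(frames) {}

    // Record an access to a page; returns the page evicted to the
    // swap file, or -1 if no eviction was needed.
    std::int64_t touch(std::uint64_t page) {
        std::int64_t evicted = -1;
        auto it = where_.find(page);
        if (it != where_.end()) {
            order_.erase(it->second);       // hit: re-insert at the front
        } else if (order_.size() == capacity_) {
            evicted = static_cast<std::int64_t>(order_.back());
            where_.erase(order_.back());    // memory full: evict the LRU page
            order_.pop_back();
        }
        order_.push_front(page);
        where_[page] = order_.begin();
        return evicted;
    }
};
```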
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount
removable mass storage media into a storage device according to the
system's demands; such data are often copied to secondary storage before
use. It is primarily used for archiving rarely accessed information
since it is much slower than secondary storage (e.g. 5–60 seconds vs.
1–10 milliseconds). This is primarily useful for extraordinarily large
data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm
to fetch the medium and place it in a drive. When the computer has
finished reading the information, the robotic arm will return the medium
to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
Online storage is immediately available for I/O.
Nearline storage is not immediately available, but can be made online quickly without human intervention.
Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage,
while spinning drives that spin down automatically, such as in massive
arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit.
The medium is recorded, usually in a secondary or tertiary storage
device, and then physically removed or disconnected. It must be inserted
or connected by a human operator before a computer can access it again.
Unlike tertiary storage, it cannot be accessed without human
interaction.
Off-line storage is used to transfer information,
since the detached medium can easily be physically transported.
Additionally, it is useful for cases of disaster, where, for example, if a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security,
since it is physically inaccessible from a computer, and data
confidentiality or integrity cannot be affected by computer-based attack
techniques. Also, if the information stored for archival purposes is
rarely accessed, off-line storage is less expensive than tertiary
storage.
In modern personal computers, most secondary and tertiary storage
media are also used for off-line storage. Optical discs and flash
memory devices are the most popular, and to a much lesser extent
removable hard disk drives. In enterprise uses, magnetic tape is
predominant. Older examples are floppy disks, Zip disks, or punched
cards.
Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be
differentiated by evaluating certain core characteristics as well as
measuring characteristics specific to a particular implementation. These
core characteristics are volatility, mutability, accessibility, and
addressability. For any particular implementation of any storage
technology, the characteristics worth measuring are capacity and
performance.
Volatility
Non-volatile memory
retains the stored information even if not constantly supplied with
electric power. It is suitable for long-term storage of information. Volatile memory
requires constant power to maintain the stored information. The fastest
memory technologies are volatile ones, although that is not a universal
rule. Since the primary storage is required to be very fast, it
predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory
is a form of volatile memory similar to DRAM with the exception that it
never needs to be refreshed as long as power is applied; it loses its
content when the power supply is lost.
An uninterruptible power supply
(UPS) can be used to give a computer a brief window of time to move
information from primary volatile storage into non-volatile storage
before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Mutability
Read/write storage or mutable storage
Allows information to be overwritten at any time. A computer without
some amount of read/write storage for primary storage purposes would be
useless for many tasks. Modern computers typically use read/write
storage also for secondary storage.
Slow write, fast read storage
Read/write storage which allows information to be overwritten
multiple times, but with the write operation being much slower than the
read operation. Examples include CD-RW and SSD.
Accessibility
Random access
Any location in storage can be accessed at any moment in
approximately the same amount of time. Such characteristic is well
suited for primary and secondary storage. Most semiconductor memories, flash memories and hard disk drives provide random access, though both semiconductor and flash memories have minimal latency when compared to hard disk drives, as no mechanical parts need to be moved.
Sequential access
The accessing of pieces of information will be in a serial order,
one after the other; therefore the time to access a particular piece of
information depends upon which piece of information was last accessed.
Such characteristic is typical of off-line storage.
Addressability
Location-addressable
Each individually accessible unit of information in storage is selected with its numerical memory address.
In modern computers, location-addressable storage is usually limited to
primary storage, accessed internally by computer programs, since
location-addressability is very efficient, but burdensome for humans.
File addressable
Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable
Each individually accessible unit of information is selected based on (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content addressable memory is often used in a
computer's CPU cache.
Capacity
Raw capacity
The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Density
The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
Latency
The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage, and second
for tertiary storage. It may make sense to separate read latency and
write latency (especially for non-volatile memory) and in case of
sequential access storage, minimum, maximum and average latency.
Throughput
The rate at which information can be read from or written to the
storage. In computer data storage, throughput is usually expressed in
terms of megabytes per second (MB/s), though bit rate
may also be used. As with latency, read rate and write rate may need to
be differentiated. Also accessing media sequentially, as opposed to
randomly, typically yields maximum throughput.
Granularity
The size of the largest "chunk" of data that can be efficiently
accessed as a single unit, e.g. without introducing additional latency.
Reliability
The probability of spontaneous bit value change under various conditions, or overall failure rate.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
Energy use
Storage devices that reduce fan usage or automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by 90 percent.
2.5-inch hard disk drives often consume less power than larger ones. Low capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.
Security
Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page granular memory encryption with multiple keys (MKTME),[17][18] and in the SPARC M7 generation since October 2015.
Vulnerability and reliability
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors,
of which high counts signify deteriorating and/or low-quality media.
Too many consecutive minor errors can lead to data corruption. Not all
vendors and models of optical drives support error scanning.
Storage media
As of 2011, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory
has steadily gained share as off-line storage for home computers.
Non-volatile semiconductor memory is also used for secondary storage in
various advanced electronic devices and specialized computers that are
designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile.
The information is accessed using one or more read/write heads which
may contain one or more recording transducers. A read/write head only
covers a part of the surface so that the head or medium or both must be
moved relative to another in order to access data. In modern computers,
magnetic storage takes these forms:
Hard disk drives, used for secondary storage;
Magnetic tape, used for tertiary and off-line storage;
Floppy disks, formerly used for off-line storage.
In early computers, magnetic storage was also used as:
Tertiary (e.g. NCR CRAM) or off-line storage in the form of magnetic cards.
Magnetic tape was then often used for secondary storage.
Magnetic storage does not have a definite limit of rewriting cycles
like flash storage and re-writeable optical media, as altering magnetic
fields causes no physical wear. Rather, its life span is limited by
mechanical parts.
Optical
Optical storage, the typical optical disc,
stores information in deformities on the surface of a circular disc and
reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile.
The deformities may be permanent (read only media), formed once (write
once media) or reversible (recordable or read/write media). The
following forms are currently in common use:
CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic
surface stores information. The information is read optically and
written by combining magnetic and optical methods. Magneto-optical disc
storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
Light induced magnetization melting in magnetic photoconductors
has also been proposed for high-speed low-energy consumption
magneto-optical storage.
Paper
Paper data storage, typically in the form of paper tape or punched cards,
has long been used to store information for automatic processing,
particularly before general-purpose computers existed. Information was
recorded by punching holes into the paper or cardboard medium and was
read mechanically (or later optically) to determine whether a particular
location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Other storage media or substrates
Vacuum-tube memory
A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube
to store information. These primary storage devices were short-lived in
the market, since the Williams tube was unreliable, and the Selectron
tube was expensive.
Electro-acoustic memory
Delay-line memory used sound waves in a substance such as mercury
to store information. Delay-line memory was dynamic volatile, cycle
sequential read/write storage, and was used for primary storage.
Optical tape
Optical tape is a medium for optical storage, generally consisting of a long and
narrow strip of plastic, onto which patterns can be written and from
which the patterns can be read back. It shares some technologies with
cinema film stock and optical discs, but is compatible with neither. The
motivation behind developing this technology was the possibility of far
greater storage capacities than either magnetic tape or optical discs.
Phase-change memory
Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance
of the material. Phase-change memory would be non-volatile,
random-access read/write storage, and might be used for primary,
secondary and off-line storage. Most rewritable and many write-once
optical disks already use phase-change material to store information.
Holographic data storage
Holographic data storage stores information optically inside crystals or photopolymers.
Holographic storage can utilize the whole volume of the storage medium,
unlike optical disc storage, which is limited to a small number of
surface layers. Holographic storage would be non-volatile,
sequential-access, and either write-once or read/write storage. It might
be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory
Molecular memory stores information in polymer
that can store electric charge. Molecular memory might be especially
suited for primary storage. The theoretical storage capacity of
molecular memory is 10 terabits per square inch (16 Gbit/mm2).
Magnetic photoconductors
store magnetic information, which can be modified by low-light illumination.
DNA storage
DNA storage stores information in DNA nucleotides.
It was first done in 2012, when researchers achieved a ratio of
1.28 petabytes per gram of DNA. In March 2017 scientists reported that a
new algorithm called a DNA fountain achieved 85% of the theoretical
limit, at 215 petabytes per gram of DNA.
Redundancy
While a malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), a storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring (replication)
– A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously, with some overhead and possibly some delays. The upside is possible concurrent reading of the same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational and is used to generate a new copy on another device (usually drawn from a pool of stand-by devices kept operational for this purpose).
Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of n devices to fail and be replaced, with its content restored (device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage compared with n=2, at the cost of more processing during both regular operation (often with reduced performance) and defective device replacement.
Device mirroring and typical RAID are designed to handle a single
device failure in the RAID group of devices. However, if a second
failure occurs before the RAID group is completely repaired from the
first failure, then data can be lost. The probability of a single
failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the
probability squared, i.e., multiplied by itself). If a database cannot
tolerate even such smaller probability of data loss, then the RAID group
itself is replicated (mirrored). In many cases such mirroring is done
geographically remotely, in a different storage array, to handle also
recovery from disasters (see disaster recovery above).
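A back-of-envelope illustration of the "probability squared" remark, with an assumed per-device failure probability:

```cpp
#include <cstdio>

int main() {
    // Assumed probability that one device fails within a repair window.
    double p = 1e-3;
    // Two roughly independent failures in the same mirrored pair:
    double pairLoss = p * p;   // 1e-6, i.e. the probability squared
    std::printf("single device: %.0e, mirrored pair: %.0e\n", p, pairLoss);
    return 0;
}
```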
Network connectivity
A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to the primary storage, which is shared between multiple processors to a lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any network; it is still the most popular approach. The retronym was coined recently, together with NAS and SAN.
Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or, in the case of online file storage, over the Internet.
Storage area network
(SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that
NAS presents and manages file systems to client computers, while SAN
provides access at block-addressing (raw) level, leaving it to attaching
systems to manage data or file systems within the provided capacity.
SAN is commonly associated with Fibre Channel networks.
Robotic storage
Large
quantities of individual magnetic tapes, and optical or magneto-optical
discs may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes (or optical disk libraries, by analogy). The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each
holding individual media, and usually one or more picking robots that
traverse the slots and load media to built-in drives. The arrangement of
the slots and picking devices affects performance. Important
characteristics of such storage are possible expansion options: adding
slots, modules, drives, robots. Tape libraries may have from 10 to more
than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy, automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.