Solid-state drive

From Wikipedia, the free encyclopedia
 
Solid-state drive
[Image: a 2.5-inch Serial ATA solid-state drive]
Usage of flash memory
  Introduced by: SanDisk
  Introduction date: 1991 (32 years ago)
  Capacity: 20 MB (2.5-inch form factor)
Original concept
  By: Storage Technology Corporation
  Conceived: 1978 (45 years ago)
  Capacity: 45 MB
As of 2023
  Capacity: up to 100 TB
[Images: an Intel mSATA SSD; a Samsung M.2 NVMe SSD]

A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies to store data persistently, typically using flash memory, and functions as secondary storage in the hierarchy of computer storage. It is also sometimes called a semiconductor storage device, a solid-state device or a solid-state disk, even though SSDs lack the physical spinning disks and movable read–write heads used in hard disk drives (HDDs) and floppy disks. SSDs also feature rich internal parallelism for data processing.

In comparison to hard disk drives and similar electromechanical media, which use moving parts, SSDs are typically more resistant to physical shock, run silently, and have higher input/output rates and lower latency. SSDs store data in semiconductor cells. As of 2019, cells can contain between 1 and 4 bits of data. SSD storage devices vary in their properties according to the number of bits stored in each cell: single-bit cells ("single-level cells", SLC) are generally the most reliable, durable, fast, and expensive type; 2- and 3-bit cells ("multi-level cells", MLC, and "triple-level cells", TLC) are intermediate; and quad-bit cells (QLC), the cheapest per gigabyte (GB) of the four, are used for consumer devices that do not require such extreme properties. In addition, 3D XPoint memory (sold by Intel under the Optane brand) stores data by changing the electrical resistance of cells instead of storing electrical charges in them, and SSDs made from RAM can be used for high speed when data persistence after power loss is not required, or may use battery power to retain data when the usual power source is unavailable. Hybrid drives or solid-state hybrid drives (SSHDs), such as Intel's Hystor and Apple's Fusion Drive, combine flash memory and spinning magnetic disks in the same unit to improve the performance of frequently accessed data. Bcache achieves a similar effect purely in software, using combinations of dedicated regular SSDs and HDDs.

SSDs based on NAND flash will slowly leak charge over time if left for long periods without power. This causes worn-out drives (that have exceeded their endurance rating) to start losing data typically after one year (if stored at 30 °C) to two years (at 25 °C) in storage; for new drives it takes longer. Therefore, SSDs are not suitable for archival storage. 3D XPoint is a possible exception to this rule; it is a relatively new technology with unknown long-term data-retention characteristics.

SSDs can use traditional HDD interfaces and form factors, or newer interfaces and form factors that exploit specific advantages of the flash memory in SSDs. Traditional interfaces (e.g. SATA and SAS) and standard HDD form factors allow such SSDs to be used as drop-in replacements for HDDs in computers and other devices. Newer form factors such as mSATA, M.2, U.2, NF1/M.3/NGSFF, XFM Express (Crossover Flash Memory, form factor XT2) and EDSFF (formerly known as Ruler SSD) and higher speed interfaces such as NVM Express (NVMe) over PCI Express (PCIe) can further increase performance over HDD performance. SSDs have a limited lifetime number of writes, and also slow down as they reach their full storage capacity.

Development and history

Early SSDs using RAM and similar technology

An early—if not the first—semiconductor storage device compatible with a hard drive interface (i.e., an SSD as defined above) was the 1978 StorageTek STC 4305, a plug-compatible replacement for the IBM 2305 fixed-head disk drive. It initially used charge-coupled devices (CCDs) for storage (later switching to DRAM) and was reported to be seven times faster than the IBM product at about half the price ($400,000 for 45 MB of capacity). Before the StorageTek SSD there were many DRAM and core (e.g., DATARAM BULK Core, 1976) products sold as alternatives to HDDs, but they typically had memory interfaces and were not SSDs as defined.

In the late 1980s, Zitel offered a family of DRAM-based SSD products under the trade name "RAMDisk", for use on systems by UNIVAC and Perkin-Elmer, among others.

Flash-based SSDs

SSD evolution
Capacity
  Started with: 20 MB (SanDisk, 1991)
  Developed to: 100 TB (Nimbus Data DC100 enterprise SSD, 2018); as of 2023, up to 15.3 TB available for consumers
  Improvement: 5-million-to-one (consumer: 400,000-to-one)
Sequential read speed
  Started with: 49.3 MB/s (Samsung MCAQE32G5APP-0XA, 2007)
  Developed to: 15 GB/s (Gigabyte demonstration, 2019); as of 2020, up to 6.795 GB/s available for consumers
  Improvement: 304.25-to-one (consumer: 138-to-one)
Sequential write speed
  Started with: 80 MB/s (Samsung enterprise SSD, 2008)
  Developed to: 15.2 GB/s (Gigabyte demonstration, 2019); as of 2020, up to 4.397 GB/s available for consumers
  Improvement: 190-to-one (consumer: 55-to-one)
IOPS
  Started with: 79 (Samsung MCAQE32G5APP-0XA, 2007)
  Developed to: 2,500,000 (Micron X100 enterprise SSD, 2019); as of 2020, up to 736,270 read IOPS and 702,210 write IOPS available for consumers
  Improvement: 31,645.56-to-one (consumer: 9,319.87-to-one read, 8,888.73-to-one write)
Access time
  Started with: 0.5 ms (Samsung MCAQE32G5APP-0XA, 2007)
  Developed to: 0.045 ms read, 0.013 ms write (lowest values, WD Black SN850 1 TB, 2020)
  Improvement: 11-to-one read, 38-to-one write
Price
  Started with: US$50,000 per gigabyte (SanDisk, 1991)
  Developed to: US$0.10 per gigabyte (Crucial MX500, July 2020)
  Improvement: 555,555-to-one

The basis for flash-based SSDs, flash memory, was invented by Fujio Masuoka at Toshiba in 1980 and commercialized by Toshiba in 1987. SanDisk Corporation (then known as SunDisk) founders Eli Harari and Sanjay Mehrotra, along with Robert D. Norman, saw the potential of flash memory as an alternative to existing hard drives, and filed a patent for a flash-based SSD in 1989. The first commercial flash-based SSD was shipped by SanDisk in 1991: a 20 MB SSD in a PCMCIA configuration that sold to OEMs for around $1,000 and was used by IBM in a ThinkPad laptop. In 1998, SanDisk introduced SSDs in 2.5-inch and 3.5-inch form factors with PATA interfaces.

In 1995, STEC, Inc. entered the flash memory business for consumer electronic devices.

In 1995, M-Systems introduced flash-based solid-state drives as HDD replacements for the military and aerospace industries, as well as for other mission-critical applications. These applications require the SSD's ability to withstand extreme shock, vibration, and temperature ranges.

In 1999, BiTMICRO made a number of introductions and announcements about flash-based SSDs, including an 18 GB 3.5-inch SSD. In 2007, Fusion-io announced a PCIe-based solid-state drive with 100,000 input/output operations per second (IOPS) of performance in a single card, and capacities up to 320 GB.

At Cebit 2009, OCZ Technology demonstrated a 1 TB flash SSD using a PCI Express ×8 interface. It achieved a maximum write speed of 0.654 gigabytes per second (GB/s) and maximum read speed of 0.712 GB/s. In December 2009, Micron Technology announced an SSD using a 6 gigabits per second (Gbit/s) SATA interface.

In 2016, Seagate demonstrated 10 GB/s sequential read and write speeds from a 16-lane PCIe 3.0 SSD, and a 60 TB SSD in a 3.5-inch form factor. Samsung also launched to market a 15.36 TB SSD with a price tag of US$10,000 using a SAS interface, using a 2.5-inch form factor but with the thickness of 3.5-inch drives. This was the first time a commercially available SSD had more capacity than the largest currently available HDD.

In 2018, both Samsung and Toshiba launched 30.72 TB SSDs using the same 2.5-inch form factor but with 3.5-inch drive thickness using a SAS interface. Nimbus Data announced and reportedly shipped 100 TB drives using a SATA interface, a capacity HDDs are not expected to reach until 2025. Samsung introduced an M.2 NVMe SSD with read speeds of 3.5 GB/s and write speeds of 3.3 GB/s. A new version of the 100 TB SSD was launched in 2020 at a price of US$40,000, with the 50 TB version costing US$12,500.

In 2019, Gigabyte Technology demonstrated an 8 TB 16-lane PCIe 4.0 SSD with 15.0 GB/s sequential read and 15.2 GB/s sequential write speeds at Computex 2019. It included a fan, as new, high-speed SSDs run at high temperatures. Also in 2019, NVMe M.2 SSDs using the PCIe 4.0 interface were launched. These SSDs have read speeds of up to 5.0 GB/s and write speeds of up to 4.4 GB/s. Due to their high-speed operation, these SSDs use large heatsinks and, without sufficient cooling airflow, will typically thermally throttle after roughly 15 minutes of continuous operation at full speed. Samsung also introduced SSDs capable of 8 GB/s sequential read and write speeds and 1.5 million IOPS, able to move data from damaged chips to undamaged chips so that the SSD can continue working normally, albeit at a lower capacity.

Enterprise flash drives

Top and bottom views of a 2.5-inch 100 GB SATA 3.0 (6 Gbit/s) model of the Intel DC S3700 series

Enterprise flash drives (EFDs) are designed for applications requiring high I/O performance (IOPS), reliability, energy efficiency and, more recently, consistent performance. In most cases, an EFD is an SSD with a higher set of specifications compared with SSDs that would typically be used in notebook computers. The term was first used by EMC in January 2008 to identify SSD manufacturers who would provide products meeting these higher standards. No standards body controls the definition of EFDs, so any SSD manufacturer may claim to produce EFDs even when the product does not meet any particular requirements.

An example is the Intel DC S3700 series of drives, introduced in the fourth quarter of 2012, which focuses on achieving consistent performance, an area that had not previously received much attention but which Intel claimed was important for the enterprise market. In particular, Intel claims that, at a steady state, the S3700 drives would not vary their IOPS by more than 10–15%, and that 99.9% of all 4 KB random I/Os are serviced in less than 500 µs.

Another example is the Toshiba PX02SS enterprise SSD series, announced in 2016, optimized for use in server and storage platforms requiring high endurance from write-intensive applications such as write caching, I/O acceleration, and online transaction processing (OLTP). The PX02SS series uses a 12 Gbit/s SAS interface, features MLC NAND flash memory, and achieves random write speeds of up to 42,000 IOPS, random read speeds of up to 130,000 IOPS, and an endurance rating of 30 drive writes per day (DWPD).

Drives using other persistent memory technologies

In 2017, the first products with 3D XPoint memory were released under Intel's Optane brand. 3D XPoint is entirely different from NAND flash and stores data using different principles. SSDs based on 3D XPoint have higher IOPS (up to 2.5 million) but lower sequential read/write speeds than their NAND-flash counterparts.

Architecture and function

The key components of an SSD are the controller and the memory that stores the data. The primary memory component in an SSD was traditionally DRAM volatile memory, but since 2009 it has more commonly been NAND flash non-volatile memory.

Controller

Every SSD includes a controller that incorporates the electronics that bridge the NAND memory components to the host computer. The controller is an embedded processor that executes firmware-level code and is one of the most important factors of SSD performance. Functions performed by the controller include bad-block mapping, read and write caching, error detection and correction (ECC), garbage collection, wear leveling, and encryption.

The performance of an SSD can scale with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow, due to the narrow (8/16 bit) asynchronous I/O interface, and additional high latency of basic I/O operations (typical for SLC NAND, ~25 μs to fetch a 4 KiB page from the array to the I/O buffer on a read, ~250 μs to commit a 4 KiB page from the IO buffer to the array on a write, ~2 ms to erase a 256 KiB block). When multiple NAND devices operate in parallel inside an SSD, the bandwidth scales, and the high latencies can be hidden, as long as enough outstanding operations are pending and the load is evenly distributed between devices.
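
As a rough illustration of this scaling, the following sketch (in Python, using the SLC read latency quoted above) models the peak read throughput of multiple NAND dies operating in parallel; it ignores bus contention and controller overhead, so it is an upper bound, not a measurement of any real device.

    # Rough model: aggregate read throughput of N NAND dies in parallel.
    # Assumes ~25 us to fetch a 4 KiB page (SLC figure quoted above) and
    # that every die always has a pending read; overheads are ignored.
    PAGE_BYTES = 4 * 1024        # 4 KiB page
    READ_LATENCY_S = 25e-6       # ~25 us per page read

    def read_throughput_mb_s(num_dies: int) -> float:
        pages_per_second = num_dies / READ_LATENCY_S
        return pages_per_second * PAGE_BYTES / 1e6

    for dies in (1, 4, 8, 16):
        print(f"{dies:2d} dies: ~{read_throughput_mb_s(dies):7.1f} MB/s")
    # A single die tops out near 164 MB/s; 16 dies scale toward ~2.6 GB/s.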

Micron and Intel initially made faster SSDs by implementing data striping (similar to RAID 0) and interleaving in their architecture. This enabled the creation of SSDs with 250 MB/s effective read/write speeds with the SATA 3 Gbit/s interface in 2009. Two years later, SandForce continued to leverage this parallel flash connectivity, releasing consumer-grade SATA 6 Gbit/s SSD controllers which supported 500 MB/s read/write speeds. SandForce controllers compress the data before sending it to the flash memory. This process may result in less writing and higher logical throughput, depending on the compressibility of the data.
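
A minimal sketch of the striping idea (similar to RAID 0) follows; the chip count and page size are arbitrary assumptions for illustration, not the layout of any particular controller.

    # Minimal data-striping sketch: logical pages are assigned round-robin
    # across flash chips, so sequential transfers engage all chips at once.
    NUM_CHIPS = 4
    PAGE_BYTES = 4096

    def stripe(data: bytes):
        """Split data into pages and assign each page to a chip."""
        pages = [data[i:i + PAGE_BYTES] for i in range(0, len(data), PAGE_BYTES)]
        layout = [[] for _ in range(NUM_CHIPS)]
        for page_no, page in enumerate(pages):
            layout[page_no % NUM_CHIPS].append((page_no, page))
        return layout

    layout = stripe(bytes(10 * PAGE_BYTES))
    for chip, pages in enumerate(layout):
        print(f"chip {chip}: logical pages {[n for n, _ in pages]}")
    # chip 0 holds pages 0, 4, 8; chip 1 holds 1, 5, 9; and so on.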

Wear leveling

If a particular block is programmed and erased repeatedly without writing to any other blocks, that block will wear out before all the other blocks—thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD. In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time.

The process to evenly distribute writes requires data previously written and not changing (cold data) to be moved, so that data that is changing more frequently (hot data) can be written into those blocks. Relocating data increases write amplification and adds to the wear of flash memory so a balance must be struck between these performance considerations and wear leveling effectiveness.
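
The toy sketch below illustrates the remapping idea: a logical-to-physical map lets the controller steer every overwrite of a hot logical block onto the least-worn free physical block. All structures and sizes here are invented for illustration; real controllers use far more elaborate policies.

    # Toy wear-leveling sketch: overwrites of a few hot logical blocks are
    # spread across all physical blocks via a logical-to-physical map.
    import random

    NUM_BLOCKS = 8
    l2p = {}                           # logical block -> physical block
    erase_counts = [0] * NUM_BLOCKS
    free_blocks = set(range(NUM_BLOCKS))

    def write_logical(lba: int) -> None:
        if lba in l2p:
            old = l2p[lba]
            erase_counts[old] += 1     # old copy's block is erased for reuse
            free_blocks.add(old)
        target = min(free_blocks, key=lambda b: erase_counts[b])
        free_blocks.discard(target)    # data lands on the least-worn block
        l2p[lba] = target

    random.seed(0)
    for _ in range(1000):
        write_logical(random.choice([0, 1, 2]))   # only 3 hot blocks
    print("erase counts:", erase_counts)          # wear spreads over all 8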

Memory

Flash memory

Comparison of architectures
Characteristic            MLC : SLC    NAND : NOR
Persistence ratio         1 : 10       1 : 10
Sequential write ratio    1 : 3        1 : 4
Sequential read ratio     1 : 1        1 : 5
Price ratio               1 : 1.3      1 : 0.7

Most SSD manufacturers use non-volatile NAND flash memory in the construction of their SSDs because of the lower cost compared with DRAM and the ability to retain the data without a constant power supply, ensuring data persistence through sudden power outages. Flash memory SSDs were initially slower than DRAM solutions, and some early designs were even slower than HDDs after continued use. This problem was resolved by controllers that came out in 2009 and later.

Flash-based SSDs store data in metal–oxide–semiconductor (MOS) integrated circuit chips which contain non-volatile floating-gate memory cells. Flash memory-based solutions are typically packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch), but also in smaller more compact form factors, such as the M.2 form factor, made possible by the small size of flash memory.

Lower-priced drives usually use quad-level cell (QLC), triple-level cell (TLC) or multi-level cell (MLC) flash memory, which is slower and less reliable than single-level cell (SLC) flash memory. This can be mitigated or even reversed by the internal design structure of the SSD, such as interleaving, changes to writing algorithms, and higher over-provisioning (more excess capacity) with which the wear-leveling algorithms can work.

Solid-state drives that rely on V-NAND technology, in which layers of cells are stacked vertically, have been introduced.

DRAM

SSDs based on volatile memory such as DRAM are characterized by very fast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of flash SSDs or traditional HDDs.

DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation (similar to the hibernate function used in modern operating systems).

SSDs of this type are usually fitted with DRAM modules of the same type used in regular PCs and servers, which can be swapped out and replaced by larger modules, as in the i-RAM, HyperOs HyperDrive, DDRdrive X1, etc. Some manufacturers of DRAM SSDs solder the DRAM chips directly to the drive and do not intend the chips to be swapped out, as in the ZeusRAM, Aeon Drive, etc.

A remote, indirect memory-access disk (RIndMA disk) uses a secondary computer with a fast network or (direct) InfiniBand connection to act like a RAM-based SSD, but the faster flash-based SSDs already available in 2009 made this option less cost-effective.

While the price of DRAM continues to fall, the price of flash memory falls even faster. The "flash becomes cheaper than DRAM" crossover point occurred around 2004.

3D XPoint

In 2015, Intel and Micron announced 3D XPoint as a new non-volatile memory technology. Intel released the first 3D XPoint-based drive (branded as Intel Optane SSD) in March 2017, starting with a data center product, the Intel Optane SSD DC P4800X Series, and following with the client version, the Intel Optane SSD 900P Series, in October 2017. Both products operate faster and with higher endurance than NAND-based SSDs, while the areal density is comparable, at 128 gigabits per chip. In terms of price per bit, 3D XPoint is more expensive than NAND, but cheaper than DRAM.

Other

Some SSDs, called NVDIMM or Hyper DIMM devices, use both DRAM and flash memory. When the power goes down, the SSD copies all the data from its DRAM to flash; when the power comes back up, the SSD copies all the data from its flash to its DRAM. In a somewhat similar way, some SSDs use form factors and buses actually designed for DIMM modules, while using only flash memory and making it appear as if it were DRAM. Such SSDs are usually known as ULLtraDIMM devices.

Drives known as hybrid drives or solid-state hybrid drives (SSHDs) use a hybrid of spinning disks and flash memory. Some SSDs use magnetoresistive random-access memory (MRAM) for storing data.

Cache or buffer

A flash-based SSD typically uses a small amount of DRAM as a volatile cache, similar to the buffers in hard disk drives. A directory of block placement and wear leveling data is also kept in the cache while the drive is operating. One SSD controller manufacturer, SandForce, does not use an external DRAM cache on their designs but still achieves high performance. Such an elimination of the external DRAM reduces the power consumption and enables further size reduction of SSDs.

Battery or supercapacitor

Another component in higher-performing SSDs is a capacitor or some form of battery, which are necessary to maintain data integrity so the data in the cache can be flushed to the drive when power is lost; some may even hold power long enough to maintain data in the cache until power is resumed. In the case of MLC flash memory, a problem called lower page corruption can occur when MLC flash memory loses power while programming an upper page. The result is that data written previously and presumed safe can be corrupted if the memory is not supported by a supercapacitor in the event of a sudden power loss. This problem does not exist with SLC flash memory.

Many consumer-class SSDs have built-in capacitors to save at least the FTL mapping table on unexpected power loss; among the examples are the Crucial M500 and MX100 series, the Intel 320 series, and the more expensive Intel 710 and 730 series. Enterprise-class SSDs, such as the Intel DC S3700 series, usually have built-in batteries or supercapacitors.

Host interface

An M.2 (2242) solid-state drive (SSD) plugged into a USB 3.0 adapter connected to a computer.
An SSD with 1.2 TB of MLC NAND, using PCI Express as the host interface

The host interface is physically a connector with the signalling managed by the SSD's controller. It is most often one of the interfaces found in HDDs. They include:

  • Serial attached SCSI (SAS-3, 12.0 Gbit/s) – generally found on servers
  • Serial ATA and mSATA variant (SATA 3.0, 6.0 Gbit/s)
  • PCI Express (PCIe 3.0 ×4, 31.5 Gbit/s)
  • M.2 (6.0 Gbit/s for SATA 3.0 logical device interface, 31.5 Gbit/s for PCIe 3.0 ×4)
  • U.2 (PCIe 3.0 ×4)
  • Fibre Channel (128 Gbit/s) – almost exclusively found on servers
  • USB (10 Gbit/s)
  • Parallel ATA (UDMA, 1064 Mbit/s) – mostly replaced by SATA
  • (Parallel) SCSI (40–2,560 Mbit/s) – generally found on servers, mostly replaced by SAS; the last SCSI-based SSD was introduced in 2004

SSDs support various logical device interfaces, such as Advanced Host Controller Interface (AHCI) and NVMe. Logical device interfaces define the command sets used by operating systems to communicate with SSDs and host bus adapters (HBAs).

Configurations

The size and shape of any device are largely driven by the size and shape of the components used to make that device. Traditional HDDs and optical drives are designed around the rotating platter(s) or optical disc along with the spindle motor inside. Since an SSD is made up of various interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to the shape of rotating media drives. Some solid-state storage solutions come in a larger chassis that may even be a rack-mount form factor with numerous SSDs inside. They would all connect to a common bus inside the chassis and connect outside the box with a single connector.

For general computer use, the 2.5-inch form factor (typically found in laptops) is the most popular. For desktop computers with 3.5-inch hard disk drive slots, a simple adapter plate can be used to make such a drive fit. Other types of form factors are more common in enterprise applications. An SSD can also be completely integrated in the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 model). As of 2014, mSATA and M.2 form factors also gained popularity, primarily in laptops.

Standard HDD form factors

An SSD with a 2.5-inch HDD form factor, opened to show solid-state electronics. Empty spaces next to the NAND chips are for additional NAND chips, allowing the same circuit board design to be used on several drive models with different capacities; other drives may instead use a circuit board whose size increases along with drive capacity, leaving the rest of the drive empty

The benefit of using a current HDD form factor would be to take advantage of the extensive infrastructure already in place to mount and connect the drives to the host system. These traditional form factors are known by the size of the rotating media (i.e., 5.25-inch, 3.5-inch, 2.5-inch or 1.8-inch) and not the dimensions of the drive casing.

Standard card form factors

For applications where space is at a premium, like for ultrabooks or tablet computers, a few compact form factors were standardized for flash-based SSDs.

There is the mSATA form factor, which uses the PCI Express Mini Card physical layout. It remains electrically compatible with the PCI Express Mini Card interface specification while requiring an additional connection to the SATA host controller through the same connector.

The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a natural transition from mSATA and the physical layout it used to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 was designed to maximize usage of the card space while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules.

Some high-performance, high-capacity drives use the standard PCI Express add-in card form factor to house additional memory chips, permit the use of higher power levels, and allow the use of a large heat sink. There are also adapter boards that convert other form factors, especially M.2 drives with a PCIe interface, into regular add-in cards.

Disk-on-a-module form factors

A 2 GB disk-on-a-module with PATA interface

A disk-on-a-module (DOM) is a flash drive with either 40/44-pin Parallel ATA (PATA) or SATA interface, intended to be plugged directly into the motherboard and used as a computer hard disk drive (HDD). DOM devices emulate a traditional hard disk drive, resulting in no need for special drivers or other specific operating system support. DOMs are usually used in embedded systems, which are often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of small size, low power consumption, and silent operation.

As of 2016, storage capacities range from 4 MB to 128 GB with different variations in physical layouts, including vertical or horizontal orientation.

Box form factors

Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data along with the backup power supplies requires a larger space than traditional HDD form factors.

Bare-board form factors

Form factors which were more common to memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and many more. The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay. At least one manufacturer, Innodisk, has produced a drive that sits directly on the SATA connector (SATADOM) on the motherboard without any need for a power cable. Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers or a PCIe-to-SATA bridge device which then connects to SATA flash controllers.

Ball grid array form factors

In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip and Silicon Storage Technology's NANDrive (now produced by Greenliant Systems), and Memoright's M1000 for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock.

Such embedded drives often adhere to the eMMC and eUFS standards.

Comparison with other technologies

Hard disk drives

SSD benchmark, showing about 230 MB/s read speed (blue), 210 MB/s write speed (red) and about 0.1 ms seek time (green), all independent of the accessed disk location.

Making a comparison between SSDs and ordinary (spinning) HDDs is difficult. Traditional HDD benchmarks tend to focus on performance characteristics that are poor with HDDs, such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they may prove vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. SSD testing must start from the (in-use) full drive, as a new and empty (fresh, out-of-the-box) drive may show much better write performance than it would after only weeks of use.

Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness. On the other hand, hard disk drives offer significantly higher capacity for their price.

Some field failure-rate data indicate that SSDs are significantly more reliable than HDDs, but other data do not. However, SSDs are uniquely sensitive to sudden power interruption, which can result in aborted writes or even complete loss of the drive. The reliability of both HDDs and SSDs varies greatly among models.

As with HDDs, there is a tradeoff between cost and performance of different SSDs. Single-level cell (SLC) SSDs, while significantly more expensive than multi-level cell (MLC) SSDs, offer a significant speed advantage. At the same time, DRAM-based solid-state storage is currently considered the fastest and most costly, with average response times of 10 microseconds instead of the average 100 microseconds of other SSDs. Enterprise flash devices (EFDs) are designed to handle the demands of tier-1 applications with performance and response times similar to less-expensive SSDs.

In traditional HDDs, a rewritten file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively; as a result, one major cause of data loss in SSDs is firmware bugs.

The following table shows a detailed overview of the advantages and disadvantages of both technologies. Comparisons reflect typical characteristics, and may not hold for a specific device.

Comparison of NAND-based SSD and HDD

Price per capacity
  SSD: Generally more expensive than HDDs, and expected to remain so into the 2020s. SSD prices as of the first quarter of 2018 were around 30 US cents per gigabyte, based on 4 TB models. Prices have generally declined annually and as of 2018 were expected to continue to do so.
  HDD: Around 2 to 3 US cents per gigabyte as of the first quarter of 2018, based on 1 TB models. Prices have generally declined annually and as of 2018 were expected to continue to do so.

Storage capacity
  SSD: In 2018, SSDs were available in sizes up to 100 TB, but less costly 120 to 512 GB models were more common.
  HDD: In 2018, HDDs of up to 16 TB were available.

Reliability – data retention
  SSD: If left without power, worn-out SSDs typically start to lose data after about one to two years in storage, depending on temperature. New drives are supposed to retain data for about ten years. MLC- and TLC-based devices tend to lose data earlier than SLC-based devices. SSDs are not suited for archival use.
  HDD: If kept in a dry environment at low temperatures, HDDs can retain their data for a very long period of time even without power. However, the mechanical parts tend to become clotted over time and the drive fails to spin up after a few years in storage.

Reliability – longevity
  SSD: SSDs have no moving parts to fail mechanically, so in theory they should be more reliable than HDDs. However, in practice this is unclear. Each block of a flash-based SSD can be erased (and therefore written) only a limited number of times before it fails. The controllers manage this limitation so that drives can last for many years under normal use. SSDs based on DRAM do not have a limited number of writes. However, the failure of a controller can make an SSD unusable. Reliability varies significantly across different SSD manufacturers and models, with return rates reaching 40% for specific drives. Many SSDs critically fail on power outages; a December 2013 survey of many SSDs found that only some of them are able to survive multiple power outages. A Facebook study found that sparse data layout across an SSD's physical address space (e.g., non-contiguously allocated data), dense data layout (e.g., contiguous data) and higher operating temperature (which correlates with the power used to transmit data) each lead to increased failure rates among SSDs. However, SSDs have undergone many revisions that have made them more reliable and long-lasting. As of 2018, SSDs on the market use power-loss protection circuits, wear-leveling techniques and thermal throttling to ensure longevity.
  HDD: HDDs have moving parts and are subject to potential mechanical failures from the resulting wear and tear, so in theory they should be less reliable than SSDs. However, in practice this is unclear. The storage medium itself (the magnetic platter) does not essentially degrade from read and write operations. According to a study performed by Carnegie Mellon University on both consumer- and enterprise-grade HDDs, their average failure rate is 6 years, and life expectancy is 9–11 years. However, the risk of a sudden, catastrophic data loss can be lower for HDDs. When stored offline (unpowered on the shelf) long term, the magnetic medium of an HDD retains data significantly longer than the flash memory used in SSDs.

Start-up time
  SSD: Almost instantaneous; no mechanical components to prepare. May need a few milliseconds to come out of an automatic power-saving mode.
  HDD: Drive spin-up may take several seconds. A system with many drives may need to stagger spin-up to limit peak power drawn, which is briefly high when an HDD is first started.

Sequential access performance
  SSD: In consumer products the maximum transfer rate typically ranges from about 200 MB/s to 3500 MB/s, depending on the drive. Enterprise SSDs can have multi-gigabyte-per-second throughput.
  HDD: Once the head is positioned, when reading or writing a continuous track, a modern HDD can transfer data at about 200 MB/s. Data transfer rate depends also upon rotational speed, which can range from 3,600 to 15,000 rpm, and upon the track (reading from the outer tracks is faster). Data transfer speed can be up to 480 MB/s (experimental).

Random access performance
  SSD: Random access time typically under 0.1 ms. As data can be retrieved directly from various locations of the flash memory, access time is usually not a big performance bottleneck. Read performance does not change based on where data is stored. In applications where hard disk drive seeks are the limiting factor, this results in faster boot and application launch times (see Amdahl's law). SSD technology can deliver rather consistent read/write speed, but when many individual smaller blocks are accessed, performance is reduced. Flash memory must be erased before it can be rewritten to. This requires an excess number of write operations over and above those intended (a phenomenon known as write amplification), which negatively impacts performance. SSDs typically exhibit a small, steady reduction in write performance over their lifetime, although the average write speed of some drives can improve with age.
  HDD: Read latency is much higher than on SSDs. Random access time ranges from 2.9 ms (high-end server drive) to 12 ms (laptop HDD) due to the need to move the heads and wait for the data to rotate under the magnetic head. Read time is different for every seek, since the location of the data and the location of the head are likely different. If data from different areas of the platter must be accessed, as with fragmented files, response times will be increased by the need to seek each fragment.

Impact of file system fragmentation
  SSD: There is limited benefit to reading data sequentially (beyond typical FS block sizes, say 4 KiB), making fragmentation negligible for SSDs. Defragmentation would cause wear by making additional writes of the NAND flash cells, which have a limited cycle life. However, even with SSDs there is a practical limit on how much fragmentation certain file systems can sustain; once that limit is reached, subsequent file allocations fail. Consequently, defragmentation may still be necessary, although to a lesser degree.
  HDD: Some file systems, like NTFS, become fragmented over time if frequently written; periodic defragmentation is required to maintain optimum performance. This is usually not an issue in modern file systems like ext4, as they implement techniques such as allocate-on-flush to reduce file fragmentation as long as sufficient disk space is left free.

Acoustic noise
  SSD: SSDs have no moving parts and therefore are silent, although on some SSDs high-pitched noise from the high-voltage generator (for erasing blocks) may occur.
  HDD: HDDs have moving parts (heads, actuator, and spindle motor) and make characteristic sounds of whirring and clicking; noise levels vary depending on the RPM, but can be significant (while often much lower than the sound from the cooling fans). Laptop hard drives are relatively quiet.

Temperature control
  SSD: A Facebook study found that at operating temperatures above 40 °C (104 °F), the failure rate among SSDs increases with temperature. However, this was not the case with newer drives that employ thermal throttling, albeit at a potential cost to performance. In practice, SSDs usually do not require any special cooling and can tolerate higher temperatures than HDDs. Some SSDs, including high-end enterprise models installed as add-in cards or 2.5-inch bay devices, may ship with heat sinks to dissipate generated heat, requiring certain volumes of airflow to operate.
  HDD: Ambient temperatures above 35 °C (95 °F) can shorten the life of a hard disk, and reliability will be compromised at drive temperatures above 55 °C (131 °F). Fan cooling may be required if temperatures would otherwise exceed these values. In practice, modern HDDs may be used with no special arrangements for cooling.

Lowest operating temperature
  SSD: SSDs can operate at −55 °C (−67 °F).
  HDD: Most modern HDDs can operate at 0 °C (32 °F).

Highest altitude when operating
  SSD: SSDs have no issues with altitude.
  HDD: HDDs can operate safely at an altitude of at most 3,000 meters (10,000 ft) and will fail to operate at altitudes above 12,000 meters (40,000 ft). With the introduction of helium-filled (sealed) HDDs, this is expected to be less of an issue.

Moving from a cold environment to a warmer environment
  SSD: SSDs have no issues with this; thermal throttling also protects them from temperature extremes.
  HDD: A certain amount of acclimation time may be needed when moving some HDDs from a cold environment to a warmer one before operating them; depending upon humidity, condensation could occur on the heads and/or disks, and operating the drive immediately will damage those components. Modern helium HDDs are sealed and do not have such a problem.

Breather hole
  SSD: SSDs do not require a breather hole.
  HDD: Most modern HDDs require a breather hole in order to function properly. Helium-filled devices are sealed and do not have a hole.

Susceptibility to environmental factors
  SSD: No moving parts; very resistant to shock, vibration, movement, and contamination.
  HDD: Heads flying above rapidly rotating platters are susceptible to shock, vibration, movement, and contamination, which could damage the medium.

Installation and mounting
  SSD: Not sensitive to orientation, vibration, or shock. Usually no exposed circuitry. Circuitry may be exposed in a card-form device, and it must not be short-circuited by conductive materials.
  HDD: Circuitry may be exposed, and it must not be short-circuited by conductive materials (such as the metal chassis of a computer). Should be mounted to protect against vibration and shock. Some HDDs should not be installed in a tilted position.

Susceptibility to magnetic fields
  SSD: Low impact on flash memory, although an electromagnetic pulse will damage any electrical system, especially integrated circuits.
  HDD: In general, magnets or magnetic surges may result in data corruption or mechanical damage to the drive internals. The drive's metal case provides a low level of shielding for the magnetic platters.

Weight and size
  SSD: SSDs, essentially semiconductor memory devices mounted on a circuit board, are small and lightweight. They often follow the same form factors as HDDs (2.5-inch or 1.8-inch) or are bare PCBs (M.2 and mSATA). The enclosures on most mainstream models, if any, are made mostly of plastic or lightweight metal. High-performance models often have heatsinks attached, or bulky cases that serve as heatsinks, increasing their weight.
  HDD: HDDs are generally heavier than SSDs, as their enclosures are made mostly of metal and they contain heavy objects such as motors and large magnets. 3.5-inch drives typically weigh around 700 grams (1.5 lb).

Secure writing limitations
  SSD: NAND flash memory cannot be overwritten, but has to be rewritten to previously erased blocks. If a software encryption program encrypts data already on the SSD, the overwritten data is still unsecured, unencrypted, and accessible (drive-based hardware encryption does not have this problem). Also, data cannot be securely erased by overwriting the original file without special "Secure Erase" procedures built into the drive.
  HDD: HDDs can overwrite data directly on the drive in any particular sector. However, the drive's firmware may exchange damaged blocks with spare areas, so bits and pieces may still be present. Some manufacturers' HDDs fill the entire drive with zeroes, including relocated sectors, upon an ATA Secure Erase Enhanced Erase command.

Read/write performance symmetry
  SSD: Less expensive SSDs typically have write speeds significantly lower than their read speeds. Higher-performing SSDs have similar read and write speeds.
  HDD: HDDs generally have slightly longer (worse) seek times for writing than for reading.

Free block availability and TRIM
  SSD: SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free blocks cause slower performance.
  HDD: HDDs are not affected by free blocks and may not benefit from TRIM.

Power consumption
  SSD: High-performance flash-based SSDs generally require half to a third of the power of HDDs. High-performance DRAM SSDs generally require as much power as HDDs, and must be connected to power even when the rest of the system is shut down. Emerging technologies like DevSlp can minimize the power requirements of idle drives.
  HDD: The lowest-power HDDs (1.8-inch size) can use as little as 0.35 watts when idle. 2.5-inch drives typically use 2 to 5 watts. The highest-performance 3.5-inch drives can use up to about 20 watts.

Maximum areal storage density (terabits per square inch)
  SSD: 2.8
  HDD: 1.2

Memory cards

CompactFlash card used as an SSD

While both memory cards and most SSDs use flash memory, they serve very different markets and purposes. Each has a number of different attributes which are optimized and adjusted to best meet the needs of particular users. Some of these characteristics include power consumption, performance, size, and reliability.

SSDs were originally designed for use in a computer system. The first units were intended to replace or augment hard disk drives, so the operating system recognized them as a hard drive. Originally, solid state drives were even shaped and mounted in the computer like hard drives. Later SSDs became smaller and more compact, eventually developing their own unique form factors such as the M.2 form factor. The SSD was designed to be installed permanently inside a computer.

In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF), and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly.

SSD failure

SSDs have very different failure modes from traditional magnetic hard drives. Because solid-state drives contain no moving parts, they are generally not subject to mechanical failures. Instead, other kinds of failure are possible (for example, incomplete or failed writes due to sudden power failure can be more of a problem than with HDDs, and if a chip fails then all the data on it is lost, a scenario not applicable to magnetic drives). On the whole, however, studies have shown that SSDs are generally highly reliable, and often continue working far beyond the expected lifetime as stated by their manufacturer.

The endurance of an SSD should be provided on its datasheet in one of two forms:

  • either n DW/D (n drive writes per day)
  • or m TBW (maximum terabytes written).

For example, a 1 TB Samsung 970 EVO NVMe M.2 SSD (2018) has an endurance rating of 600 TBW.
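
The two ratings are interconvertible given the drive capacity and warranty period. A small sketch of the arithmetic follows, using the 600 TBW figure above; the five-year warranty length is an assumption made for illustration.

    # Convert an endurance rating between TBW and DWPD:
    #   DWPD = TBW / (capacity in TB * warranty period in days)
    def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float) -> float:
        return tbw / (capacity_tb * warranty_years * 365)

    # 1 TB drive rated 600 TBW, assuming a five-year warranty:
    print(f"{tbw_to_dwpd(600, 1.0, 5):.2f} drive writes per day")  # ~0.33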

SSD reliability and failure modes

An early investigation by Techreport.com that ran from 2013 to 2015 involved a number of flash-based SSDs being tested to destruction to identify how and at what point they failed. The website found that all of the drives "surpassed their official endurance specifications by writing hundreds of terabytes without issue"—volumes of that order being in excess of typical consumer needs. The first SSD to fail was TLC-based, with the drive succeeding in writing over 800 TB. Three SSDs in the test wrote three times that amount (almost 2.5 PB) before they too failed. The test demonstrated the remarkable reliability of even consumer-market SSDs.

A 2016 field study based on data collected over six years in Google's data centres and spanning "millions" of drive days found that the proportion of flash-based SSDs requiring replacement in their first four years of use ranged from 4% to 10% depending on the model. The authors concluded that SSDs fail at a significantly lower rate than hard disk drives. (In contrast, a 2016 evaluation of 71,940 HDDs found failure rates comparable to those of Google's SSDs: the HDDs had on average an annualized failure rate of 1.95%.) The study also showed, on the down-side, that SSDs experience significantly higher rates of uncorrectable errors (which cause data loss) than do HDDs. It also led to some unexpected results and implications:

  • In the real world, MLC-based designs – believed less reliable than SLC designs – are often as reliable as SLC. (The findings state that "SLC [is] not generally more reliable than MLC".) The write endurance, however, is generally quoted as follows:
    • SLC NAND: 100,000 erases per block
    • MLC NAND: 5,000 to 10,000 erases per block for medium-capacity applications, and 1,000 to 3,000 for high-capacity applications
    • TLC NAND: 1,000 erases per block
  • Device age, measured by days in use, is the main factor in SSD reliability and not amount of data read or written, which are measured by terabytes written or drive writes per day. This suggests that other aging mechanisms, such as "silicon aging", are at play. The correlation is significant (around 0.2–0.4).
  • Raw bit error rates (RBER) grow slowly with wear-out—and not exponentially as is often assumed. RBER is not a good predictor of other errors or SSD failure.
  • The uncorrectable bit error rate (UBER) is widely used but is not a good predictor of failure either. However, SSD UBER rates are higher than those for HDDs, so although they do not predict failure, they can lead to data loss due to unreadable blocks being more common on SSDs than HDDs. The study concluded that although SSDs are more reliable overall, the rate of uncorrectable errors that can impact a user is higher.
  • "Bad blocks in new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to Flash die or chip failure. 30–80% of SSDs develop at least one bad block and 2–7% develop at least one bad chip in the first four years of deployment."
  • There is no sharp increase in errors after the expected lifetime is reached.
  • Most SSDs develop no more than a few bad blocks, perhaps 2–4. SSDs that develop many bad blocks often go on to develop far more (perhaps hundreds), and may be prone to failure. However most drives (99%+) are shipped with bad blocks from manufacture. The finding overall was that bad blocks are common and 30–80% of drives will develop at least one in use, but even a few bad blocks (2–4) is a predictor of up to hundreds of bad blocks at a later time. The bad block count at manufacture correlates with later development of further bad blocks. The report conclusion added that SSDs tended to either have "less than a handful" of bad blocks or "a large number", and suggested that this might be a basis for predicting eventual failure.
  • Around 2–7% of SSDs will develop bad chips in their first four years of use. Over two thirds of these chips will have breached their manufacturers' tolerances and specifications, which typically guarantee that no more than 2% of blocks on a chip will fail within its expected write lifetime.
  • 96% of those SSDs that need repair (warranty servicing), need repair only once in their life. Days between repair vary from "a couple of thousand days" to "nearly 15,000 days" depending on the model.

Data recovery and secure deletion

Solid-state drives have set new challenges for data recovery companies, as the method of storing data is non-linear and much more complex than that of hard disk drives. The strategy by which the drive operates internally can vary considerably between manufacturers, and the TRIM command zeroes the whole range of a deleted file. Wear leveling also means that the physical address of the data and the address exposed to the operating system are different.

As for secure deletion of data, the ATA Secure Erase command can be used; a program such as hdparm can issue it.
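
As a hedged illustration only, the sketch below drives hdparm's security commands from Python. The device path and password are placeholders, the drive must not be in the "security frozen" state, root privileges are required, and the command irreversibly destroys all data on the target.

    # Sketch: ATA Secure Erase via hdparm (Linux). DESTROYS ALL DATA.
    # "/dev/sdX" and the password are placeholders for illustration.
    import subprocess

    DEVICE = "/dev/sdX"
    PASSWORD = "p"   # temporary password; cleared when the erase completes

    # Enable the security feature set by setting a user password...
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", PASSWORD, DEVICE], check=True)
    # ...then issue the erase itself.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", PASSWORD, DEVICE], check=True)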

Reliability metrics

The JEDEC Solid State Technology Association (JEDEC) has published standards for reliability metrics:

  • Unrecoverable Bit Error Ratio (UBER) – the number of data errors per quantity of bits read
  • Terabytes Written (TBW) – the number of terabytes that can be written to a drive within its warranty
  • Drive Writes Per Day (DWPD) – the number of times the total capacity of the drive may be written to per day within its warranty

Applications

Until 2009, due to their generally prohibitive cost compared with HDDs, SSDs were mainly used in mission-critical applications where the speed of the storage system needed to be as high as possible. Since flash memory has become a common component of SSDs, falling prices and increased densities have made SSDs cost-effective for many other applications. For instance, in distributed computing environments, SSDs can be used as the building block for a distributed cache layer that temporarily absorbs the large volume of user requests to the slower HDD-based backend storage system. This layer provides much higher bandwidth and lower latency than the storage system, and can be managed in a number of forms, such as a distributed key-value database or a distributed file system. On supercomputers, this layer is typically referred to as a burst buffer. With this fast layer, users often experience shorter system response times. Organizations that can benefit from faster access to system data include equity trading companies, telecommunication corporations, and streaming media and video editing firms. The list of applications which could benefit from faster storage is vast.
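
As a toy illustration of such a cache layer, the sketch below fronts a slow backing store with a small least-recently-used (LRU) cache; the dictionary-backed "devices", names, and sizes are all invented for illustration.

    # Toy fast-tier cache: a small LRU dict (the "SSD") absorbs hot reads
    # so the large backing dict (the "HDD") only serves misses.
    from collections import OrderedDict

    class CachedStore:
        def __init__(self, backend: dict, cache_size: int = 4):
            self.backend = backend          # slow, large tier
            self.cache = OrderedDict()      # fast, small tier (LRU order)
            self.cache_size = cache_size

        def read(self, key):
            if key in self.cache:           # hit: served from the fast tier
                self.cache.move_to_end(key)
                return self.cache[key]
            value = self.backend[key]       # miss: slow backend read
            self.cache[key] = value
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)   # evict least-recently used
            return value

    store = CachedStore({f"blk{i}": i for i in range(100)})
    for k in ["blk1", "blk2", "blk1", "blk3"]:
        store.read(k)
    print(list(store.cache))   # hot blocks now sit in the fast tier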

Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.

SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or other writable medium to maintain persistence, an OS booted from a write-locked SD card is robust, rugged, reliable, and impervious to permanent corruption. If the running OS degrades, simply turning the machine off and then on returns it to its initial uncorrupted state, making it particularly solid. The OS installed on the write-locked SD card does not require the removal of corrupted components, though any writable media may need to be restored.

Hard-drive cache

In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional, magnetic hard disk drive. A similar technology is available on HighPoint's RocketHybrid PCIe card.

Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory on board of a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently from the magnetic storage by the host using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster.

Dual-drive hybrid systems combine the use of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user or by the computer's operating system software. Examples of this type of system are bcache and dm-cache on Linux, and Apple's Fusion Drive.

File-system support for SSDs

Typically the same file systems used on hard disk drives can also be used on solid state drives. It is usually expected for the file system to support the TRIM command which helps the SSD to recycle discarded data (support for TRIM arrived some years after SSDs themselves but is now nearly universal). This means that the file system does not need to manage wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some log-structured file systems (e.g. F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file-system metadata.

While not a native feature of file systems, operating systems should also aim to align partitions correctly, which avoids excessive read-modify-write cycles. A typical practice for personal computers is to have each partition aligned to start at a 1 MiB (= 1,048,576 bytes) mark, which covers all common SSD page and block size scenarios, as it is divisible by all commonly used sizes: 1 MiB, 512 KiB, 128 KiB, 4 KiB, and 512 B. Modern operating system installation software and disk tools handle this automatically.
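
A quick way to sanity-check alignment is to confirm that a partition's starting byte offset is a multiple of 1 MiB. The sketch below does this for starting offsets expressed in 512-byte sectors, the unit Linux reports in /sys/block/<disk>/<partition>/start; the sample sector numbers are typical values, not readings from a real system.

    # Check whether a partition start is aligned to a 1 MiB boundary.
    SECTOR_BYTES = 512            # unit used by the kernel's "start" counter
    ALIGNMENT = 1024 * 1024       # 1 MiB

    def is_aligned(start_sector: int) -> bool:
        return (start_sector * SECTOR_BYTES) % ALIGNMENT == 0

    print(is_aligned(2048))   # True: common modern default (exactly 1 MiB)
    print(is_aligned(63))     # False: old CHS-era offset, misaligned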

Linux

Initial support for the TRIM command was added in version 2.6.28 of the Linux kernel mainline.

The ext4, Btrfs, XFS, JFS, and F2FS file systems include support for the discard (TRIM or UNMAP) function.

Full kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on 24 February 2010. To make use of it, a file system must be mounted with the discard option. By default, Linux swap partitions perform discard operations when the underlying drive supports TRIM, with the possibility of turning them off or selecting between one-time and continuous discard operations. Support for queued TRIM, a SATA 3.1 feature that keeps TRIM commands from disrupting the command queues, was introduced in Linux kernel 3.12, released on November 2, 2013.
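
For illustration, an /etc/fstab entry mounting a file system with the discard option might look like the following line; the device, mount point, and file system type are placeholders:

    /dev/sda1  /  ext4  defaults,discard  0  1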

An alternative to the kernel-level TRIM operation is to use the user-space utility fstrim, which goes through all of the unused blocks in a file system and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task. As of November 2013, it was used by the Ubuntu Linux distribution, in which it was enabled only for Intel and Samsung solid-state drives for reliability reasons; the vendor check can be disabled by editing the file /etc/cron.weekly/fstrim using the instructions contained within the file itself.
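
A minimal scheduled-trim sketch in Python follows; it simply invokes the fstrim binary on a list of mount points, much as a cron job would. The mount points are placeholders, and root privileges plus a TRIM-capable drive are required.

    # Sketch: run fstrim on selected mount points, as a cron job might.
    import subprocess

    MOUNT_POINTS = ["/", "/home"]    # illustrative; adjust per system

    for mp in MOUNT_POINTS:
        # -v makes fstrim report how many bytes were trimmed.
        result = subprocess.run(["fstrim", "-v", mp],
                                capture_output=True, text=True)
        print(result.stdout.strip() or result.stderr.strip())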

Since 2010, standard Linux drive utilities have taken care of appropriate partition alignment by default.

Linux performance considerations

An SSD that uses NVM Express as the logical device interface, in the form of a PCI Express 3.0 ×4 expansion card

During installation, Linux distributions usually do not configure the installed system to use TRIM and thus the /etc/fstab file requires manual modifications. This is because of the notion that the current Linux TRIM command implementation might not be optimal. It has been proven to cause a performance degradation instead of a performance increase under certain circumstances. As of January 2014, Linux sends an individual TRIM command to each sector, instead of a vectorized list defining a TRIM range as recommended by the TRIM specification.

For performance reasons, it is recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and seek optimization, thus many of those I/O scheduling efforts are wasted when used with SSDs. As part of their designs, SSDs offer much bigger levels of parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic – especially for high-end SSDs.

A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. It leverages the performance offered by SSDs and NVMe by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 4.0 of the Linux kernel, released on 12 April 2015, the VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), the device mapper framework, the loop device driver, the unsorted block images (UBI) driver (which implements an erase block management layer for flash memory devices) and the RBD driver (which exports Ceph RADOS objects as block devices) had been modified to use this new interface; other drivers were to be ported in following releases.
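As a sketch for kernels of that era, where blk-mq was still optional for the SCSI/SATA path (device name illustrative):

    # blk-mq devices expose an "mq" directory in sysfs
    test -d /sys/block/sda/mq && echo "sda is using blk-mq"

    # To opt the SCSI layer into blk-mq at boot, a kernel command-line
    # parameter could be added (e.g. to GRUB_CMDLINE_LINUX in /etc/default/grub):
    #   scsi_mod.use_blk_mq=1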

macOS

Versions since Mac OS X 10.6.8 (Snow Leopard) support TRIM, but only when used with an Apple-purchased SSD. TRIM is not automatically enabled for third-party drives, although it can be enabled with third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or with the system_profiler command-line tool.

Versions since OS X 10.10.4 (Yosemite) include sudo trimforce enable as a Terminal command that enables TRIM on non-Apple SSDs. There is also a technique to enable TRIM in versions earlier than Mac OS X 10.6.8, although it remains uncertain whether TRIM is actually utilized properly in those cases.
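A short sketch of checking and enabling TRIM from the macOS Terminal (the grep filter is illustrative):

    # Report TRIM support for SATA devices
    system_profiler SPSerialATADataType | grep "TRIM Support"

    # Enable TRIM for third-party SSDs (OS X 10.10.4 and later;
    # prompts for confirmation and reboots the machine)
    sudo trimforce enable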

Microsoft Windows

Prior to version 7, Microsoft Windows did not take any specific measures to support solid-state drives. From Windows 7 onward, the standard NTFS file system provides support for the TRIM command. (Other file systems on Windows 7 do not support TRIM.)

By default, Windows 7 and newer versions execute TRIM commands automatically if the device is detected to be a solid-state drive. However, because TRIM irreversibly resets all freed space, it may be desirable to disable support where enabling data recovery is preferred over wear leveling. To change the behavior, in the Registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, the value DisableDeleteNotification can be set to 1. This prevents the mass storage driver from issuing the TRIM command.
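The same setting can be reached without editing the Registry directly; a sketch using the built-in fsutil tool from an elevated Command Prompt:

    rem Query delete notifications (TRIM); DisableDeleteNotify = 0 means TRIM is enabled
    fsutil behavior query DisableDeleteNotify

    rem Disable TRIM (equivalent to setting DisableDeleteNotification to 1)
    fsutil behavior set DisableDeleteNotify 1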

Windows implements the TRIM command for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file-system commands relating to truncation and compression, and with the System Restore (also known as Volume Snapshot) feature.

Windows Vista

Windows Vista generally expects hard disk drives rather than SSDs. Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. Most SSDs are typically divided into 4 KiB sectors, while earlier systems may be based on 512-byte sectors with their default partition setups unaligned to the 4 KiB boundaries.

Defragmentation

Defragmentation should be disabled on solid-state drives because the location of file components on an SSD does not significantly impact performance, while moving files to make them contiguous with the Windows Defrag routine causes unnecessary write wear on the SSD's limited number of P/E cycles. The SuperFetch feature will not materially improve performance and causes additional overhead on the system and SSD. Windows Vista does not send the TRIM command to solid-state drives, but some third-party utilities such as SSD Doctor will periodically scan the drive and TRIM the appropriate entries.

Windows 7

Windows 7 and later versions have native support for SSDs. The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows 7 disables ReadyBoost and automatic defragmentation. However, despite an initial statement by Steven Sinofsky before the release of Windows 7, defragmentation is not disabled entirely, even though its behavior on SSDs differs. One reason is the low performance of the Volume Shadow Copy Service on fragmented SSDs. A second reason is to avoid reaching the practical maximum number of file fragments that a volume can handle: if this maximum is reached, subsequent attempts to write to the drive fail with an error message.

Windows 7 also includes support for the TRIM command to reduce garbage collection of data that the operating system has already determined is no longer valid. Without TRIM support, the SSD would be unaware that this data is invalid and would unnecessarily continue to rewrite it during garbage collection, causing further wear on the SSD. It is beneficial to make some changes that prevent SSDs from being treated like HDDs, for example disabling defragmentation, not filling them to more than about 75% of capacity, not storing frequently written files such as log and temporary files on them if a hard drive is available, and enabling the TRIM process.

Windows 8.1 and later

Windows 8.1 and later Windows systems also support automatic TRIM for PCI Express SSDs based on NVMe. For Windows 7, the KB2990941 update is required for this functionality and needs to be integrated into Windows Setup using DISM if Windows 7 is to be installed on an NVMe SSD. Windows 8/8.1 also support the SCSI UNMAP command for USB-attached SSDs and SATA-to-USB enclosures; SCSI UNMAP is a full analog of the SATA TRIM command. It is also supported over the USB Attached SCSI Protocol (UASP).

The graphical Windows Disk Defragmenter in Windows 8.1 also recognizes SSDs distinctly from hard disk drives in a separate Media Type column. While Windows 7 supported automatic TRIM for internal SATA SSDs, Windows 8.1 and Windows 10 support manual TRIM (via an "Optimize" function in Disk Defragmenter) as well as automatic TRIM for SATA, NVMe and USB-attached SSDs. Disk Defragmenter in Windows 10 and 11 may execute TRIM to optimize an SSD.
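A sketch of triggering a manual retrim from an elevated Command Prompt (drive letter illustrative):

    rem Send TRIM (retrim) for all free space on drive C:
    defrag C: /L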

ZFS

Solaris as of version 10 Update 6 (released in October 2008), and recent versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD can all use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG; it is used every time a synchronous write to the pool occurs. An SSD (not necessarily low-latency) may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which caches data for reading. When used either alone or in combination, large increases in performance are generally seen.
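A minimal sketch with illustrative pool and device names, showing how an SSD is attached as a SLOG or an L2ARC device:

    # Add a low-latency SSD as a separate intent log (SLOG) for pool "tank"
    zpool add tank log /dev/nvme0n1

    # Add an SSD as an L2ARC read cache for the same pool
    zpool add tank cache /dev/nvme1n1

    # Verify the resulting pool layout
    zpool status tank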

FreeBSD

ZFS for FreeBSD introduced support for TRIM on September 23, 2012. The code builds a map of regions of data that have been freed; on every write, the code consults the map and removes ranges that were freed earlier but are now being overwritten. A low-priority thread TRIMs the remaining ranges when the time comes.

The Unix File System (UFS) also supports the TRIM command.
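On FreeBSD, TRIM is a UFS flag toggled with tunefs while the file system is unmounted; a sketch with an illustrative device name:

    # Enable TRIM on a UFS file system (file system must be unmounted)
    tunefs -t enable /dev/ada0p2

    # Confirm the flag among the printed tunable parameters
    tunefs -p /dev/ada0p2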

Swap partitions

  • According to Microsoft's former Windows division president Steven Sinofsky, "there are few files better than the pagefile to place on an SSD". According to collected telemetry data, Microsoft found pagefile.sys to be an ideal match for SSD storage.
  • Linux swap partitions perform TRIM operations by default when the underlying block device supports TRIM; this can be turned off, or configured to select between one-time and continuous TRIM operations (see the fstab sketch after this list).
  • If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.
  • DragonFly BSD allows SSD-configured swap to also be used as file-system cache. This can be used to boost performance on both desktop and server workloads. The bcache, dm-cache, and Flashcache projects provide a similar concept for the Linux kernel.
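A minimal /etc/fstab sketch for the Linux swap case mentioned above (device name illustrative):

    # Swap partition with continuous discard
    /dev/sda2  none  swap  sw,discard  0  0

    # Alternatively, a single discard at swapon time only
    /dev/sda2  none  swap  sw,discard=once  0  0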

Standardization organizations

The following are noted standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices). The list below also includes organizations which promote the use of solid-state drives. It is not necessarily exhaustive.

  • INCITS: Coordinates technical standards activity between ANSI in the US and joint ISO/IEC committees worldwide
      • T10 (subcommittee of INCITS): SCSI
      • T11 (subcommittee of INCITS): Fibre Channel (FC)
      • T13 (subcommittee of INCITS): ATA
  • JEDEC: Develops open standards and publications for the microelectronics industry
      • JC-64.8 (subcommittee of JEDEC): Focuses on solid-state drive standards and publications
  • NVMHCI: Provides standard software and hardware programming interfaces for nonvolatile memory subsystems
  • SATA-IO: Provides the industry with guidance and support for implementing the SATA specification
  • SFF Committee: Works on storage industry standards needing attention when not addressed by other standards committees
  • SNIA: Develops and promotes standards, technologies, and educational services in the management of information
      • SSSI (subcommittee of SNIA): Fosters the growth and success of solid-state storage

Commercialization

Availability

Solid-state drive technology has been marketed to the military and niche industrial markets since the mid-1990s.

Along with the emerging enterprise market, SSDs have been appearing in ultra-mobile PCs and a few lightweight laptop systems, adding significantly to the price of the laptop, depending on the capacity, form factor and transfer speeds. For low-end applications, a USB flash drive may be obtainable for anywhere from $10 to $100 or so, depending on capacity and speed; alternatively, a CompactFlash card may be paired with a CF-to-IDE or CF-to-SATA converter at a similar cost. Either of these requires that write-cycle endurance issues be managed, either by refraining from storing frequently written files on the drive or by using a flash file system. Standard CompactFlash cards usually have write speeds of 7 to 15 MB/s while the more expensive upmarket cards claim speeds of up to 60 MB/s.

The first flash-memory SSD-based PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006; it began shipping in Japan on 3 July 2006 with a 16 GB flash-memory drive. In late September 2006, Sony upgraded the SSD in the Vaio UX90 to 32 GB.

One of the first mainstream releases of SSDs was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. These machines use 1,024 MiB of SLC NAND flash as primary storage, which is considered more suitable for the harsher-than-normal conditions in which they are expected to be used. Dell began shipping ultra-portable laptops with SanDisk SSDs on April 26, 2007. Asus released the Eee PC netbook on October 16, 2007, with 2, 4 or 8 gigabytes of flash memory. In 2008, two manufacturers released ultrathin laptops with SSD options instead of the uncommon 1.8-inch HDD: the MacBook Air, released by Apple on January 31 with an optional 64 GB SSD (at the Apple Store, this option cost $999 more than the 80 GB 4200 RPM HDD), and the Lenovo ThinkPad X300 with a similar 64 GB SSD, announced in February 2008. The SSD option was upgraded to 128 GB on August 26, 2008, with the release of the ThinkPad X301 model (an upgrade which added approximately US$200).

In 2008, low-end netbooks appeared with SSDs. In 2009, SSDs began to appear in laptops.

On January 14, 2008, EMC Corporation (EMC) became the first enterprise storage vendor to ship flash-based SSDs as part of its product portfolio, when it announced that it had selected STEC, Inc.'s Zeus-IOPS SSDs for its Symmetrix DMX systems. In 2008, Sun released the Sun Storage 7000 Unified Storage Systems (codenamed Amber Road), which use both solid-state drives and conventional hard drives to take advantage of the speed offered by SSDs and the economy and capacity offered by conventional HDDs.

Dell began to offer optional 256 GB solid state drives on select notebook models in January 2009. In May 2009, Toshiba launched a laptop with a 512 GB SSD.

Since October 2010, Apple's MacBook Air line has used a solid-state drive as standard. In December 2010, the OCZ RevoDrive X2 PCIe SSD was available in 100 GB to 960 GB capacities, delivering sequential speeds over 740 MB/s and random small-file writes of up to 120,000 IOPS. In November 2010, Fusion-io released its highest-performing SSD, the ioDrive Octal, using a PCI Express x16 Gen 2.0 interface, with 5.12 TB of storage space, a read speed of 6.0 GB/s, a write speed of 4.4 GB/s, and a low latency of 30 microseconds. It delivers 1.19 million 512-byte read IOPS and 1.18 million 512-byte write IOPS.

In 2011, computers based on Intel's Ultrabook specifications became available. These specifications dictate that Ultrabooks use an SSD. These are consumer-level devices (unlike many previous flash offerings aimed at enterprise users), and represent the first widely available consumer computers using SSDs aside from the MacBook Air. At CES 2012, OCZ Technology demonstrated the R4 CloudServ PCIe SSDs, capable of reaching transfer speeds of 6.5 GB/s and 1.4 million IOPS. Also announced was the Z-Drive R5, available in capacities up to 12 TB and capable of reaching transfer speeds of 7.2 GB/s and 2.52 million IOPS using a PCI Express x16 Gen 3.0 interface.

In December 2013, Samsung introduced and launched the industry's first 1 TB mSATA SSD. In August 2015, Samsung announced a 16 TB SSD, at the time the world's highest-capacity single storage device of any type.

While a number of companies offered SSD devices as of 2018, only five of those companies actually manufacture the NAND flash devices that are the storage element in SSDs.

Quality and performance

In general, performance of any particular device can vary significantly in different operating conditions. For example, the number of parallel threads accessing the storage device, the I/O block size, and the amount of free space remaining can all dramatically change the performance (i.e. transfer rates) of the device.

SSD technology has been developing rapidly. Most of the performance measurements used on disk drives with rotating media are also used on SSDs. Performance of flash-based SSDs is difficult to benchmark because of the wide range of possible conditions. In a test performed in 2010 by Xssist, using IOmeter with 4 kB random 70% read/30% write at queue depth 4, the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000, dropped sharply to 4,000 after 8 minutes, and continued to decrease gradually for the next 42 minutes. IOPS then varied between 3,000 and 4,000 from around 50 minutes onwards for the rest of the 8+ hour test run.

Designers of enterprise-grade flash drives try to extend longevity by increasing over-provisioning and by employing wear leveling.

Sales

SSD shipments were 11 million units in 2009, 17.3 million units in 2011 (for a total of US$5 billion), and 39 million units in 2012, and were expected to rise to 83 million units in 2013, 201.4 million units in 2016, and 227 million units in 2017.

Revenues for the SSD market (including low-cost PC solutions) worldwide totalled $585 million in 2008, rising over 100% from $259 million in 2007.

Fugitive slaves in the United States

From Wikipedia, the free encyclopedia
Eastman Johnson's A Ride for Liberty – The Fugitive Slaves, 1863, Brooklyn Museum

In the United States, fugitive slaves or runaway slaves were terms used in the 18th and 19th centuries to describe people who fled slavery. The term also refers to the federal Fugitive Slave Acts of 1793 and 1850. Such people are also called freedom seekers to avoid implying that the enslaved person had committed a crime and that the slaveholder was the injured party.

Generally, they tried to reach states or territories where slavery was banned, including Canada, or, until 1821, Spanish Florida. Most slave laws tried to control slave travel by requiring enslaved people to carry official passes if traveling without an enslaver.

Passage of the Fugitive Slave Act of 1850 increased penalties against runaway slaves and those who aided them. Because of this, some freedom seekers left the United States altogether, traveling to Canada or Mexico. Approximately 100,000 enslaved Americans escaped to freedom.

Beginning in 1643, slave laws were enacted in Colonial America, initially among the New England Confederation and then by several of the original Thirteen Colonies. In 1705, the Province of New York passed a measure to keep bondspeople from escaping north into Canada.

An animation showing the free/slave status of U.S. states and territories, 1789–1861 (see separate yearly maps below). The American Civil War began in 1861. The 13th Amendment, effective December 1865, abolished slavery in the U.S.

Over time, the states began to divide into slave states and free states. Maryland and Virginia passed laws to reward people who captured and returned enslaved people to their enslavers. Slavery was abolished in five states by the time of the Constitutional Convention in 1787. At that time, New Hampshire, Vermont, Massachusetts, Connecticut and Rhode Island had become free states.

Constitution

Legislators from the Southern United States were concerned that free states would protect people who fled slavery. The United States Constitution, ratified in 1788, never uses the words "slave" or "slavery" but recognized its existence in the so-called fugitive slave clause (Article IV, Section 2, Clause 3), the three-fifths clause, and the prohibition on prohibiting the importation of "such Persons as any of the States now existing shall think proper to admit" (Article I, Section 9).

Fugitive Slave Act of 1793

The Fugitive Slave Act of 1793 was the first of two federal laws that allowed for runaway slaves to be captured and returned to their enslavers. Congress passed the measure in 1793 to enable agents of enslavers and state governments, including those in free states, to track and capture bondspeople. The law also allowed a $500 fine (equivalent to $10,940 in 2022) to be levied on individuals who assisted slaves in their escape. Slave hunters were obligated to obtain a court-approved affidavit in order to apprehend an enslaved individual. Resistance to the law gave rise to an intricate network of safe houses commonly known as the Underground Railroad.

Fugitive Slave Act of 1850

The Fugitive Slave Act of 1850, part of the Compromise of 1850, was a federal law that declared that all fugitive slaves must be returned to their enslavers. Because the slave states agreed to let California enter the Union as a free state, the free states agreed to the passage of the Fugitive Slave Act of 1850. Congress passed the act on September 18, 1850, and repealed it on June 28, 1864. The act strengthened the federal government's authority to capture fugitive slaves, authorizing federal marshals to require bystanders in free states to aid in the capture of runaways. Many free-state citizens perceived the legislation as federal overreach, because it could be used to force them to act against their abolitionist beliefs. Many free states eventually passed "personal liberty laws", which prevented the kidnapping of alleged runaway slaves; however, in the court case Prigg v. Pennsylvania, such personal liberty laws were ruled unconstitutional on the grounds that capturing fugitive slaves was a federal matter in which states did not have the power to interfere.

Many free state citizens were outraged at the criminalization of actions by Underground Railroad operators and abolitionists who helped people escape slavery. It is considered one of the causes of the American Civil War (1861–1865). Congress repealed the Fugitive Acts of 1793 and 1850 on June 28, 1864.

State laws

Many states tried to nullify the acts or prevent the capture of escaped enslaved people by setting up laws to protect their rights. The most notable is the Massachusetts Liberty Act. This act was passed to keep escaped slaves from being returned to their enslavers through abduction by federal marshals or bounty hunters. Wisconsin and Vermont also enacted legislation to bypass the federal law. Abolitionists became more involved in Underground Railroad operations.

Pursuit

Advertisements and rewards

Runaway slave poster

Enslavers were outraged when an enslaved person was found missing; many of them believed that slavery was good for the enslaved and that escapes were the work of abolitionists, with one enslaver arguing, "They are indeed happy, and if let alone would still remain so." (A new name, drapetomania, was invented for the supposed mental illness that made an enslaved person want to run away.) Enslavers would put up flyers, place advertisements in newspapers, offer rewards, and send out posses. Under the Fugitive Slave Act, enslavers could send federal marshals into free states to kidnap escapees. The law also brought bounty hunters into the business of returning enslaved people to their enslavers; a formerly enslaved person without freedom papers could be brought back into a slave state and sold back into slavery. In 1851, there was a case of a black coffeehouse waiter whom federal marshals kidnapped on behalf of John Debree, who claimed to be the man's enslaver.

Capture

Fugitive slave Gordon during his 1863 medical examination in a U.S. Army camp.

Enslavers often harshly punished those they successfully recaptured, such as by amputating limbs, whipping, branding, and hobbling.

Individuals who aided fugitive slaves were charged and punished under this law. In the case of Ableman v. Booth, Booth was charged with aiding Joshua Glover's escape in Wisconsin by preventing his capture by federal marshals. The Wisconsin Supreme Court ruled that the Fugitive Slave Act of 1850 was unconstitutional because it required states to violate their own laws. The federal government appealed Ableman v. Booth to the US Supreme Court, which upheld the act's constitutionality.

The Underground Railroad

The Underground Railroad was a network of black and white abolitionists between the late 18th century and the end of the American Civil War who helped fugitive slaves escape to freedom. Members of the Religious Society of Friends (Quakers), African Methodist Episcopal Church, Baptists, Methodists, and other religious sects helped in operating the Underground Railroad.

In 1786, George Washington complained that a Quaker tried to free one of his slaves. In the early 1800s, Isaac T. Hopper, a Quaker from Philadelphia, and a group of people from North Carolina established a network of stations in their local area. In 1831, when Tice Davids was captured going into Ohio from Kentucky, his enslaver blamed an "Underground Railroad" that had helped in the escape. Eight years later, while being tortured for his escape, a man named Jim said he was going north along the "underground railroad to Boston."

Fellow enslaved people often helped those who had run away. They gave signals, such as the lighting of a particular number of lamps, or the singing of a particular song on Sunday, to let escaping people know if it was safe to be in the area or if there were slave hunters nearby. If the freedom seeker stayed in a slave cabin, they would likely get food and learn good hiding places in the woods as they made their way north.

Hiding places called "stations" were set up in private homes, churches, and schoolhouses in border states between slave and free states. John Brown had a secret room in his tannery to give escaped enslaved people places to stay on their way. People who maintained the stations provided food, clothing, shelter, and instructions about reaching the next "station". Often, enslaved people had to make their way through southern slave states on their own to reach them.

The network extended throughout the United States (including Spanish Florida, Indian Territory, and the Western United States) and into Canada and Mexico. The Underground Railroad was initially an escape route that assisted fugitive enslaved African Americans in reaching the Northern states; however, with the passage of the Fugitive Slave Act of 1850, as well as other laws aiding the Southern states in the capture of runaway slaves, it became a mechanism for reaching Canada. Canada was a haven for enslaved African Americans because it had already abolished slavery by 1783. Black Canadians were also provided equal protection under the law. The well-known Underground Railroad "conductor" Harriet Tubman is said to have led approximately 300 enslaved people to Canada. In some cases, freedom seekers emigrated to Europe and the Caribbean islands.

Harriet Tubman

One of the most notable runaway slaves in American history, and a conductor of the Underground Railroad, was Harriet Tubman. Born into slavery in Dorchester County, Maryland, around 1822, Tubman escaped from her enslaver's plantation in 1849 as a young adult. Between 1850 and 1860, she returned to the South numerous times to lead parties of other enslaved people to freedom, guiding them through lands she knew well. She aided hundreds of people, including her parents, in their escape from slavery. Tubman followed north–south flowing rivers and the North Star to make her way north. She preferred to set out with runaway slaves on Saturdays because newspapers were not published on Sundays, giving her a one-day head start before runaway advertisements would appear. She preferred winter, when the longer nights were the safest time to travel. Tubman wore disguises. She sang songs in different tempos, such as "Go Down Moses" and "Bound for the Promised Land", to indicate whether it was safe for freedom seekers to come out of hiding. Many people called her the "Moses of her people". During the American Civil War, Tubman also worked as a spy, a cook, and a nurse.
