
Wednesday, October 9, 2019

Units of information

From Wikipedia, the free encyclopedia
 
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the entropy of random variables and information contained in messages.

The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).

Primary units

[Figure] Comparison of units of information: bit, trit, nat, ban. The quantity of information is the height of each bar; the dark green level marks the nat.
 
In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1948: the information that can be stored in a system is proportional to the logarithm of the number N of its possible states, denoted log_b(N). Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c(N) = (log_c b) × log_b(N). Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit). A system with 8 possible states, for example, can store up to log₂(8) = 3 bits of information. Other units that have been named include:
  • Base b = 3: the unit is called the "trit", and is equal to log₂(3) (≈ 1.585) bits.
  • Base b = 10: the unit is called the "ban" (also hartley or dit), and is equal to log₂(10) (≈ 3.322) bits.
  • Base b = e (Euler's number): the unit is called the "nat", and is equal to log₂(e) (≈ 1.443) bits.
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
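
For concreteness, here is a minimal Python sketch (our illustration, not part of the article; the function name is invented) that applies the change-of-base rule above to express one system's capacity in bits, trits, nats, and bans:

    import math

    def information_content(num_states: int, base: float = 2) -> float:
        """Information stored by a system with num_states equally likely
        states, in the unit implied by base: 2 = bits (shannons),
        3 = trits, math.e = nats, 10 = bans (hartleys)."""
        return math.log(num_states, base)

    print(information_content(8, 2))       # 3.0 bits
    print(information_content(8, 3))       # ~1.893 trits
    print(information_content(8, math.e))  # ~2.079 nats
    print(information_content(8, 10))      # ~0.903 bans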

Units derived from bit

Several conventional names are used for collections or groups of bits.

Byte

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits – that is, an octet. A byte can represent 256 (2⁸) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but allows "B" in English, which is what is actually used). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
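
A small Python sketch (ours, for illustration) of the 2⁸ = 256 values of a byte under the two interpretations just mentioned, unsigned and two's-complement signed:

    BITS = 8
    num_values = 2 ** BITS           # 256 distinct bit patterns

    def as_signed(byte_value: int) -> int:
        """Read an unsigned byte (0..255) as a two's-complement signed value."""
        return byte_value - 256 if byte_value >= 128 else byte_value

    print(num_values)      # 256
    print(as_signed(127))  # 127
    print(as_signed(128))  # -128
    print(as_signed(255))  # -1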

Nibble

A group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit.
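
A short Python illustration (ours) of the one-to-one correspondence between nibbles and hexadecimal digits:

    byte_value = 0xAB                 # 171 in decimal
    high_nibble = byte_value >> 4     # 0xA = 10
    low_nibble = byte_value & 0x0F    # 0xB = 11
    print(f"{byte_value:02X} -> {high_nibble:X}{low_nibble:X}")  # AB -> AB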

Crumb

A group of two bits, or a quarter byte, was called a crumb, a term used mainly in early computing; it is now largely defunct.

Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures use word sizes of 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 26, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72, or 80 bits, among others.

Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad"). 
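
As an illustrative sketch (using Python's standard struct module), the same value stored as a 16-bit word, a 32-bit double word, and a 64-bit quad word:

    import struct

    value = 1000
    word = struct.pack("<H", value)    # 2 bytes, little-endian
    dword = struct.pack("<I", value)   # 4 bytes
    qword = struct.pack("<Q", value)   # 8 bytes
    print(len(word), len(dword), len(qword))  # 2 4 8
    print(word.hex(), dword.hex(), qword.hex())
    # e803 e8030000 e803000000000000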

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks or, in CPU caches, cache lines.

Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.

Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10³ = 1000 (as in kilobit or kbit), mega = 10⁶ = 1,000,000 (as in megabit or Mbit) and giga = 10⁹ = 1,000,000,000 (as in gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8000 bits), megabyte (1 MB = 8,000,000 bits), and gigabyte (1 GB = 8,000,000,000 bits).

However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2²⁸ = 268,435,456 bytes. To avoid such unwieldy numbers, people have often repurposed the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2¹⁰ = 1024, mega for 2²⁰ = 1,048,576, and giga for 2³⁰ = 1,073,741,824, and so on. For example, a random access memory chip with a capacity of 2²⁸ bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences.

Multiples of bits

Decimal (SI)
  Value             Symbol  Name
  1000  = 10³       kbit    kilobit
  1000² = 10⁶       Mbit    megabit
  1000³ = 10⁹       Gbit    gigabit
  1000⁴ = 10¹²      Tbit    terabit
  1000⁵ = 10¹⁵      Pbit    petabit
  1000⁶ = 10¹⁸      Ebit    exabit
  1000⁷ = 10²¹      Zbit    zettabit
  1000⁸ = 10²⁴      Ybit    yottabit

Binary
  Value             IEC              JEDEC
  1024  = 2¹⁰       Kibit kibibit    Kbit kilobit
  1024² = 2²⁰       Mibit mebibit    Mbit megabit
  1024³ = 2³⁰       Gibit gibibit    Gbit gigabit
  1024⁴ = 2⁴⁰       Tibit tebibit    -
  1024⁵ = 2⁵⁰       Pibit pebibit    -
  1024⁶ = 2⁶⁰       Eibit exbibit    -
  1024⁷ = 2⁷⁰       Zibit zebibit    -
  1024⁸ = 2⁸⁰       Yibit yobibit    -

Symbol  Prefix  SI meaning       Binary meaning    Size difference
k       kilo    10³  = 1000¹     2¹⁰ = 1024¹       2.40%
M       mega    10⁶  = 1000²     2²⁰ = 1024²       4.86%
G       giga    10⁹  = 1000³     2³⁰ = 1024³       7.37%
T       tera    10¹² = 1000⁴     2⁴⁰ = 1024⁴       9.95%
P       peta    10¹⁵ = 1000⁵     2⁵⁰ = 1024⁵       12.59%
E       exa     10¹⁸ = 1000⁶     2⁶⁰ = 1024⁶       15.29%
Z       zetta   10²¹ = 1000⁷     2⁷⁰ = 1024⁷       18.06%
Y       yotta   10²⁴ = 1000⁸     2⁸⁰ = 1024⁸       20.89%

In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied.

On the other hand, for external storage systems (such as optical discs), the SI prefixes were commonly used with their decimal values (powers of 10). There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission (IEC) issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix:

Multiples of bytes

Decimal (SI)
  Value             Symbol  Name
  1000  = 10³       kB      kilobyte
  1000² = 10⁶       MB      megabyte
  1000³ = 10⁹       GB      gigabyte
  1000⁴ = 10¹²      TB      terabyte
  1000⁵ = 10¹⁵      PB      petabyte
  1000⁶ = 10¹⁸      EB      exabyte
  1000⁷ = 10²¹      ZB      zettabyte
  1000⁸ = 10²⁴      YB      yottabyte

Binary
  Value             IEC             JEDEC
  1024  = 2¹⁰       KiB kibibyte    KB kilobyte
  1024² = 2²⁰       MiB mebibyte    MB megabyte
  1024³ = 2³⁰       GiB gibibyte    GB gigabyte
  1024⁴ = 2⁴⁰       TiB tebibyte    -
  1024⁵ = 2⁵⁰       PiB pebibyte    -
  1024⁶ = 2⁶⁰       EiB exbibyte    -
  1024⁷ = 2⁷⁰       ZiB zebibyte    -
  1024⁸ = 2⁸⁰       YiB yobibyte    -
Symbol  Prefix               Example
Ki      kibi, binary kilo    1 kibibyte (KiB) = 2¹⁰ bytes = 1024 B
Mi      mebi, binary mega    1 mebibyte (MiB) = 2²⁰ bytes = 1024 KiB
Gi      gibi, binary giga    1 gibibyte (GiB) = 2³⁰ bytes = 1024 MiB
Ti      tebi, binary tera    1 tebibyte (TiB) = 2⁴⁰ bytes = 1024 GiB
Pi      pebi, binary peta    1 pebibyte (PiB) = 2⁵⁰ bytes = 1024 TiB
Ei      exbi, binary exa     1 exbibyte (EiB) = 2⁶⁰ bytes = 1024 PiB

The JEDEC memory standards, however, define uppercase K, M, and G for the binary powers 2¹⁰, 2²⁰, and 2³⁰ to reflect common usage.
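
The sketch below (a Python illustration of ours; format_size is an invented helper, not a library function) contrasts the decimal (SI) and binary (IEC) readings of the same byte count, reproducing the 256-megabyte chip example above:

    SI_PREFIXES = ["", "k", "M", "G", "T", "P", "E", "Z", "Y"]
    IEC_PREFIXES = ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"]

    def format_size(num_bytes: int, binary: bool = False) -> str:
        base = 1024 if binary else 1000
        prefixes = IEC_PREFIXES if binary else SI_PREFIXES
        value = float(num_bytes)
        for prefix in prefixes:
            if value < base:
                return f"{value:.2f} {prefix}B"
            value /= base
        return f"{value * base:.2f} {prefixes[-1]}B"

    chip = 2 ** 28                         # 268,435,456 bytes
    print(format_size(chip))               # 268.44 MB (decimal reading)
    print(format_size(chip, binary=True))  # 256.00 MiB (binary reading)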

Size examples

  • 1 bit – answer to a yes/no question.
  • 1 byte – a number from 0 to 255.
  • 90 bytes: enough to store a typical line of text from a book.
  • 512 bytes = ½ KiB: the typical sector of a hard disk.
  • 1024 bytes = 1 KiB: the classical block size in UNIX filesystems.
  • 2048 bytes = 2 KiB: a CD-ROM sector.
  • 4096 bytes = 4 KiB: a memory page in x86 (since Intel 80386).
  • 4 kB: about one page of text from a novel.
  • 120 kB: the text of a typical pocket book.
  • 1 MiB – a 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth).
  • 3 MB – a three-minute song (133 kbit/s).
  • 650–900 MB – a CD-ROM.
  • 1 GB – about 95 minutes of uncompressed CD-quality audio at 1.4 Mbit/s.
  • 8/16 GB – two common sizes of USB flash drives.
  • 4 TB – the size of a $100 hard disk (as of early 2018).
  • 12 TB – the largest hard disk drive (as of early 2018).
  • 16 TB – the largest commercially available solid-state drive (as of early 2018).
  • 100 TB – the largest solid-state drive constructed (as of early 2018).
  • 1.3 ZB – prediction of the volume of the whole internet in 2016.

Obsolete and unusual units

Several other units of information storage have been named:
  • 1 bit: unibit, sniff.
  • 2 bits: dibit, crumb, quad, quarter, taste, tayste, tidbit, tydbit, lick, lyck, semi-nibble.
  • 3 bits: tribit, triad, triade, tribble.
  • 5 bits: pentad, pentade, nickel, nyckle.
  • 6 bits: byte (in early IBM machines using BCD alphamerics), hexad, hexade, sextet.
  • 7 bits: heptad, heptade.
  • 8 bits: octet, now usually called a byte.
  • 10 bits: declet, decle, deckle, dyme.
  • 12 bits: slab.
  • 15 bits: parcel (on CDC 6600 and CDC 7600).
  • 16 bits: doublet, wyde, parcel (on Cray-1), plate, playte, chomp, chawmp (on a 32-bit machine).
  • 18 bits: chomp, chawmp (on a 36-bit machine).
  • 32 bits: quadlet, tetra, dinner, dynner, gawble (on a 32-bit machine).
  • 48 bits: gobble, gawble (under circumstances that remain obscure).
  • 64 bits: octlet, octa.
  • 96 bits: bentobox (in ITRON OS)
  • 128 bits: hexlet.
  • 16 bytes: paragraph (on Intel x86 processors).
  • 6 trits: tryte.
  • combit, comword.
Some of these names are jargon, obsolete, or used only in very restricted contexts.

Data type

From Wikipedia, the free encyclopedia
 
[Figure] Python 3: the standard type hierarchy.
 
In computer science and computer programming, a data type or simply type is an attribute of data which tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support common data types of real, integer, and Boolean. A data type constrains the values that an expression, such as a variable or a function, might take. The data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored. In other words, a data type provides a set of values from which an expression may take its value.

Concept

Data types are used within type systems, which offer various ways of defining, implementing and using them. Different type systems ensure varying degrees of type safety.

Almost all programming languages explicitly include the notion of data type, though different languages may use different terminology. Common data types include:
  • integers
  • Booleans
  • characters
  • floating-point numbers
  • alphanumeric strings
For example, in the Java programming language, the type int represents the set of 32-bit integers ranging in value from −2,147,483,648 to 2,147,483,647, as well as the operations that can be performed on integers, such as addition, subtraction, and multiplication. A color, on the other hand, might be represented by three bytes denoting the amounts each of red, green, and blue, and one string representing that color's name; allowable operations include addition and subtraction, but not multiplication.

Most programming languages also allow the programmer to define additional data types, usually by combining multiple elements of other types and defining the valid operations of the new data type. For example, a programmer might create a new data type named "complex number" that would include real and imaginary parts. A data type also represents a constraint placed upon the interpretation of data in a type system, describing representation, interpretation and structure of values or objects stored in computer memory. The type system uses data type information to check correctness of computer programs that access or manipulate the data. 
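
A minimal Python sketch of that example (Python already has a built-in complex type; this hypothetical Complex class simply re-creates the idea of a user-defined type bundling data with its valid operations):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Complex:
        real: float
        imag: float

        def __add__(self, other: "Complex") -> "Complex":
            return Complex(self.real + other.real, self.imag + other.imag)

        def __mul__(self, other: "Complex") -> "Complex":
            return Complex(self.real * other.real - self.imag * other.imag,
                           self.real * other.imag + self.imag * other.real)

    a = Complex(1.0, 2.0)
    b = Complex(3.0, -1.0)
    print(a + b)  # Complex(real=4.0, imag=1.0)
    print(a * b)  # Complex(real=5.0, imag=5.0)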

Most data types in statistics have comparable types in computer programming, and vice versa, as shown in the following table: 

Statistics                           Programming
real-valued (interval scale)         floating-point
real-valued (ratio scale)            floating-point
count data (usually non-negative)    integer
binary data                          Boolean
categorical data                     enumerated type
random vector                        list or array
random matrix                        two-dimensional array
random tree                          tree

Definition

Parnas, Shore & Weiss (1976) identified five definitions of a "type" that were used, sometimes implicitly, in the literature. Types including behavior align more closely with object-oriented models, whereas a structured programming model would tend not to include code; such types are called plain old data structures.

The five types are:
Syntactic
A type is a purely syntactic label associated with a variable when it is declared. Such definitions of "type" do not give any semantic meaning to types.
Representation
A type is defined in terms of its composition of more primitive types—often machine types.
Representation and behaviour
A type is defined as its representation and a set of operators manipulating these representations.
Value space
A type is a set of possible values which a variable can possess. Such definitions make it possible to speak about (disjoint) unions or Cartesian products of types.
Value space and behaviour
A type is a set of values which a variable can possess and a set of functions that one can apply to these values.
The definition in terms of a representation was often done in imperative languages such as ALGOL and Pascal, while the definition in terms of a value space and behaviour was used in higher-level languages such as Simula and CLU.

Classes of data types

Primitive data types

Primitive data types are typically types that are built-in or basic to a language implementation.

Machine data types

All data in computers based on digital electronics is represented as bits (alternatives 0 and 1) at the lowest level. The smallest addressable unit of data is usually a group of bits called a byte (usually an octet, which is 8 bits). The unit processed by machine code instructions is called a word (as of 2011, typically 32 or 64 bits). Most instructions interpret the word as a binary number, such that a 32-bit word can represent unsigned integer values from 0 to 4,294,967,295 or signed integer values from −2,147,483,648 to 2,147,483,647. Because of two's complement, the machine and its machine language usually do not need to distinguish between these unsigned and signed data types.
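
A small Python sketch (ours) of the unsigned and two's-complement value ranges of an n-bit word:

    def word_ranges(bits: int):
        """Return the (min, max) unsigned and signed ranges of an n-bit word."""
        unsigned = (0, 2 ** bits - 1)
        signed = (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return unsigned, signed

    print(word_ranges(32))
    # ((0, 4294967295), (-2147483648, 2147483647))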

Floating-point numbers used for floating-point arithmetic use a different interpretation of the bits in a word; see https://en.wikipedia.org/wiki/Floating-point_arithmetic for details.

Machine data types need to be exposed or made available in systems programming or low-level programming languages, allowing fine-grained control over hardware. The C programming language, for instance, supplies integer types of various widths, such as short and long. If a corresponding native type does not exist on the target platform, the compiler will break them down into code using types that do exist. For instance, if a 32-bit integer is requested on a 16-bit platform, the compiler will tacitly treat it as an array of two 16-bit integers.

In higher level programming, machine data types are often hidden or abstracted as an implementation detail that would render code less portable if exposed. For instance, a generic numeric type might be supplied instead of integers of some specific bit-width.

Boolean type

The Boolean type represents the values true and false. Although only two values are possible, they are rarely implemented as a single binary digit, for efficiency reasons. Many programming languages do not have an explicit Boolean type, instead interpreting (for instance) 0 as false and other values as true. In such languages, a Boolean 0 corresponds to logical false, and any non-zero value, canonically 1, corresponds to logical true.
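
Python, for example, follows this zero-is-false convention when values are tested for truth:

    for value in (0, 1, -1, 42, 0.0, ""):
        print(repr(value), "->", bool(value))
    # 0 -> False, 1 -> True, -1 -> True, 42 -> True,
    # 0.0 -> False, '' -> False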

Numeric types

Such as:
  • The integer data types, or "non-fractional numbers". May be sub-typed according to their ability to contain negative values (e.g. unsigned in C and C++). May also have a small number of predefined subtypes (such as short and long in C/C++); or allow users to freely define subranges such as 1..12 (e.g. Pascal/Ada).
  • Floating-point data types usually represent values as high-precision fractional values (rational numbers, mathematically), but are sometimes misleadingly called reals (evocative of mathematical real numbers). They usually have predefined limits on both their maximum values and their precision. Typically they are stored internally in the form a × 2ᵇ (where a and b are integers), but displayed in familiar decimal form; see the sketch after this list.
  • Fixed point data types are convenient for representing monetary values. They are often implemented internally as integers, leading to predefined limits.
  • Bignum or arbitrary precision numeric types lack predefined limits. They are not primitive types, and are used sparingly for efficiency reasons.
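
As promised above, a brief Python sketch of the internal a × 2ᵇ form of a floating-point value, using the standard math.frexp and float.hex facilities:

    import math

    x = 0.75
    m, e = math.frexp(x)      # x == m * 2**e, with 0.5 <= m < 1
    print(m, e)               # 0.75 0
    print(math.ldexp(m, e))   # 0.75, reconstructed from (m, e)

    # float.hex shows the same structure: 0x1.8p-1 means 1.5 * 2**-1
    print((0.75).hex())       # 0x1.8000000000000p-1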

Composite types

Composite types are derived from more than one primitive type. This can be done in a number of ways. The ways they are combined are called data structures. Composing a primitive type into a compound type generally results in a new type, e.g. array-of-integer is a different type to integer.
  • An array (also called vector, list, or sequence) stores a number of elements and provide random access to individual elements. The elements of an array are typically (but not in all contexts) required to be of the same type. Arrays may be fixed-length or expandable. Indices into an array are typically required to be integers (if not, one may stress this relaxation by speaking about an associative array) from a specific range (if not all indices in that range correspond to elements, it may be a sparse array).
  • Record (also called tuple or struct). Records are among the simplest data structures. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members.
  • Union. A union type definition will specify which of a number of permitted primitive types may be stored in its instances, e.g. "float or long integer". Contrast with a record, which could be defined to contain a float and an integer; whereas, in a union, there is only one type allowed at a time.
    • A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type for enhanced type safety.
  • A set is an abstract data structure that can store certain values, without any particular order, and no repeated values. Values themselves are not retrieved from sets, rather one tests a value for membership to obtain a boolean "in" or "not in".
  • An object contains a number of data fields, like a record, and also a number of subroutines for accessing or modifying them, called methods.
Many others are possible, but they tend to be further variations and compounds of the above. For example, a linked list can store the same data as an array, but provides sequential rather than random access and is built up of records in dynamic memory; though arguably a data structure rather than a type per se, it is also common and distinct enough that including it in a discussion of composite types can be justified.
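
By way of illustration, here is a Python sketch (the class names are ours) of two of the composite types above: a record with fixed, named fields, and a tagged union whose tag field records the currently stored type:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Point:            # a record: a fixed number of named fields
        x: float
        y: float

    @dataclass
    class Tagged:           # a tagged union: `kind` discriminates the value
        kind: str           # "int" or "float"
        value: Union[int, float]

    p = Point(1.0, 2.0)
    v = Tagged("int", 42)
    print(p.x, p.y)         # 1.0 2.0
    print(v.kind, v.value)  # int 42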

Enumerations

The enumerated type has distinct values, which can be compared and assigned, but which do not necessarily have any particular concrete representation in the computer's memory; compilers and interpreters can represent them arbitrarily. For example, the four suits in a deck of playing cards may be four enumerators named CLUB, DIAMOND, HEART, SPADE, belonging to an enumerated type named suit. If a variable V is declared having suit as its data type, one can assign any of those four values to it. Some implementations allow programmers to assign integer values to the enumeration values, or even treat them as type-equivalent to integers.
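
The playing-card example, rendered as a Python enumeration (an illustrative sketch; the integer values below are arbitrary, as the text notes):

    from enum import Enum

    class Suit(Enum):
        CLUB = 1
        DIAMOND = 2
        HEART = 3
        SPADE = 4

    v: Suit = Suit.HEART             # a variable with suit as its data type
    print(v.name)                    # HEART
    print(Suit.CLUB == Suit.SPADE)   # False: values can be compared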

String and text types

Such as:
  • Alphanumeric character. A letter of the alphabet, digit, blank space, punctuation mark, etc.
  • Alphanumeric strings, a sequence of characters. They are typically used to represent words and text.
Character and string types can store sequences of characters from a character set such as ASCII. Since most character sets include the digits, it is possible to have a numeric string, such as "1234". However, many languages treat these as belonging to a different type from the numeric value 1234.
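
A short Python demonstration of that distinction between a numeric string and a number:

    s, n = "1234", 1234
    print(s + "5")     # '12345' (string concatenation)
    print(n + 5)       # 1239 (integer addition)
    print(int(s) + 5)  # 1239 (explicit conversion between the types)
    # s + 5 would raise TypeError: the two types are distinct.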

Character and string types can have different subtypes according to the required character "width". The original 7-bit-wide ASCII was found to be limited, and was superseded by 8- and 16-bit sets, which can encode a wide variety of non-Latin alphabets (such as Hebrew and Chinese) and other symbols. Strings may be either stretch-to-fit or of fixed size, even in the same programming language. They may also be subtyped by their maximum size.

Note: strings are not a primitive type in all languages; in C, for instance, they are composed from an array of characters.

Other types

Types can be based on, or derived from, the basic types explained above. In some languages, such as C, functions have a type derived from the type of their return value.

Pointers and references

The main non-composite, derived type is the pointer, a data type whose value refers directly to (or "points to") another value stored elsewhere in the computer memory using its address. It is a primitive kind of reference. (In everyday terms, a page number in a book could be considered a piece of data that refers to another one). Pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" a pointer whose value was never a valid memory address would cause a program to crash. To ameliorate this potential problem, pointers are considered a separate type to the type of data they point to, even if the underlying representation is the same.
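
Python has no raw pointers, but its standard ctypes module can sketch the idea (an illustration only): a pointer's value is an address referring to data stored elsewhere, and the data is read or written "through" the pointer:

    import ctypes

    x = ctypes.c_int(42)
    p = ctypes.pointer(x)       # p points to x
    print(p.contents.value)     # 42, read through the pointer
    p.contents.value = 7        # write through the pointer
    print(x.value)              # 7
    print(ctypes.addressof(x))  # the underlying address, as an integer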

Abstract data types

Any type that does not specify an implementation is an abstract data type. For instance, a stack (which is an abstract type) can be implemented as an array (a contiguous block of memory containing multiple values), or as a linked list (a set of non-contiguous memory blocks linked by pointers). 
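
A compact Python sketch (class names ours) of the same abstract stack interface backed by the two concrete representations just mentioned:

    class ArrayStack:
        def __init__(self):
            self._items = []                 # contiguous storage
        def push(self, v): self._items.append(v)
        def pop(self): return self._items.pop()

    class LinkedStack:
        def __init__(self):
            self._head = None                # chain of (value, next) nodes
        def push(self, v): self._head = (v, self._head)
        def pop(self):
            v, self._head = self._head
            return v

    for stack in (ArrayStack(), LinkedStack()):
        stack.push(1); stack.push(2)
        print(stack.pop(), stack.pop())      # 2 1 from both implementations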

Abstract types can be handled by code that does not know or "care" what underlying types are contained in them. Programming that is agnostic about concrete data types is called generic programming. Arrays and records can also contain underlying types, but are considered concrete because they specify how their contents or elements are laid out in memory.

Examples include stacks, queues, lists, sets, maps (associative arrays), and trees.

Utility types

For convenience, high-level languages may supply ready-made "real world" data types, for instance times, dates, monetary values, and memory, even where the language allows them to be built from primitive types.

Type systems

A type system associates types with computed values. By examining the flow of these values, a type system attempts to prove that no type errors can occur. The type system in question determines what constitutes a type error, but a type system generally seeks to guarantee that operations expecting a certain kind of value are not used with values for which that operation does not make sense. 

A compiler may use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the float data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-precision floating point numbers. They will thus use floating-point-specific microprocessor operations on those values (floating-point addition, multiplication, etc.).
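
For instance, Python's struct module can show a value stored in the 32-bit IEEE single-precision layout that such compilers typically use for float (a sketch of ours):

    import struct

    raw = struct.pack("<f", 0.15625)   # little-endian 32-bit float
    print(raw.hex())                   # 0000203e (0x3E200000 big-endian)
    (roundtrip,) = struct.unpack("<f", raw)
    print(roundtrip)                   # 0.15625, exactly representable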

The depth of type constraints and the manner of their evaluation affect the typing of the language. A programming language may further associate an operation with varying concrete algorithms on each type in the case of type polymorphism. Type theory is the study of type systems, although the concrete type systems of programming languages originate from practical issues of computer architecture, compiler implementation, and language design.

Type systems may be variously static or dynamic, strong or weak typing, and so forth.

Byte

From Wikipedia, the free encyclopedia

byte
Unit system: units derived from bit
Unit of: digital information, data size
Symbol: B, or o (when referring to exactly 8 bits)

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.

The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often used implementation in early encoding systems and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 24, 36, 48, or 60 bits, corresponding to 2, 4, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables, before the term byte became common.

The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, since 2⁸ = 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the eight-bit size. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes.

The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE) in contrast to the bit, whose IEEE symbol is a lower-case b. Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the ambiguity of the byte.

History

The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It is a deliberate respelling of bite to avoid accidental mutation to bit.

Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31.

Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. 

In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit µ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. 

The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit. 

The term octet is used to unambiguously specify a size of eight bits. It is used extensively in protocol definitions.

Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.

Unit symbol

Prefixes for multiples of bits (bit) or bytes (B)

Decimal (SI)
  Value             Prefix
  1000  = 10³       k  kilo
  1000² = 10⁶       M  mega
  1000³ = 10⁹       G  giga
  1000⁴ = 10¹²      T  tera
  1000⁵ = 10¹⁵      P  peta
  1000⁶ = 10¹⁸      E  exa
  1000⁷ = 10²¹      Z  zetta
  1000⁸ = 10²⁴      Y  yotta

Binary
  Value             IEC       JEDEC
  1024  = 2¹⁰       Ki kibi   K kilo
  1024² = 2²⁰       Mi mebi   M mega
  1024³ = 2³⁰       Gi gibi   G giga
  1024⁴ = 2⁴⁰       Ti tebi   -
  1024⁵ = 2⁵⁰       Pi pebi   -
  1024⁶ = 2⁶⁰       Ei exbi   -
  1024⁷ = 2⁷⁰       Zi zebi   -
  1024⁸ = 2⁸⁰       Yi yobi   -

The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In contrast, IEEE 1541 specifies the lower-case character b as the symbol for the bit, but IEC 80000-13 and the Metric Interchange Format specify the symbol as bit, providing disambiguation from B for byte.

In the International System of Quantities (ISQ), B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.

The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo.

The usage of the term octad(e) for eight bits is no longer common.

Unit multiples

[Figure] The percentage difference between decimal and binary interpretations of the unit prefixes grows with increasing storage size.
 
Despite standardization efforts, ambiguity still exists in the meanings of the SI (or metric) prefixes used with the unit byte, especially concerning the prefixes kilo (k or K), mega (M), and giga (G). Computer memory has a binary architecture in which multiples are expressed in powers of 2. In some fields of the software and computer hardware industries a binary prefix is used for bytes and bits, while producers of computer storage devices practice adherence to decimal SI multiples. For example, a computer disk drive capacity of 100 gigabytes is specified when the disk contains 100 billion bytes (93 gibibytes) of storage space. 

While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for the prefix yotta. The linear–log graph above illustrates the difference versus storage size up to an exabyte.

Common uses

Many programming languages define a byte data type.

The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no "gaps" between two bytes. This means every bit in memory is part of a byte.

Java's primitive byte data type is always defined as consisting of 8 bits and being a signed data type, holding values from −128 to 127. 

.NET programming languages, such as C#, define both an unsigned byte and a signed sbyte, holding values from 0 to 255, and −128 to 127, respectively. 

In data transmission systems, the byte is defined as a contiguous sequence of bits in a serial data stream representing the smallest distinguished unit of data. A transmission unit might include start bits, stop bits, or parity bits, and thus could vary from 7 to 12 bits to contain a single 7-bit ASCII code.
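
A hypothetical Python sketch of such a transmission unit (the 10-bit layout below, one start bit, seven data bits, an even-parity bit, and one stop bit, is one common convention, not a statement about any particular system):

    def frame_ascii(ch: str) -> list[int]:
        """Frame a 7-bit ASCII character: start, data (LSB first), parity, stop."""
        code = ord(ch)
        assert code < 128, "7-bit ASCII only"
        data = [(code >> i) & 1 for i in range(7)]  # LSB first
        parity = sum(data) % 2                      # even parity bit
        return [0] + data + [parity] + [1]          # 10 bits in total

    print(frame_ascii("A"))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]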
