Humans have been storing, retrieving, manipulating, and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC. However, the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt
and Thomas L. Whisler commented that "the new technology does not yet
have a single established name. We shall call it information technology
(IT)." Their definition consists of three categories: techniques for
processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.
The term is commonly used as a synonym for computers and computer
networks, but it also encompasses other information distribution
technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce.
Based on the storage and processing technologies employed, it is
possible to distinguish four distinct phases of IT development:
pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940), and electronic (1940–present). This article focuses on the most recent period (electronic).
Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.
Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable,
it was not general-purpose, being designed to perform only a single
task. It also lacked the ability to store its program in memory;
programming was carried out using plugs and switches to alter the
internal wiring. The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948.
The development of transistors in the late 1940s at Bell Laboratories
allowed a new generation of computers to be designed with greatly
reduced power consumption. The first commercially available
stored-program computer, the Ferranti Mark I,
contained 4050 valves and had a power consumption of 25 kilowatts. By
comparison, the first transistorized computer developed at the
University of Manchester and operational by November 1953, consumed only
150 watts in its final version.
Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube,
but the information stored in it and delay line memory was volatile in
that it had to be continuously refreshed, and thus was lost once power
was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.
IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices,
but that year digital storage capacity exceeded analog for the first
time. As of 2007, almost 94% of the data stored worldwide was held
digitally:
52% on hard disks, 28% on optical devices, and 11% on digital magnetic
tape. It has been estimated that the worldwide capacity to store
information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
Databases
Database Management Systems (DMS) emerged in the 1960s to address the
problem of storing and retrieving large amounts of data accurately and
quickly. An early such system was IBM's Information Management System (IMS), which is still widely deployed more than 50 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle.
All DMS comprise components that allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A trait all databases share is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.
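To make the relational concepts concrete, the following is a minimal sketch in C using the SQLite library; the database file people.db, the person table, and its columns are illustrative assumptions rather than part of any historical system. Note how the schema (the CREATE TABLE statement) is defined separately from the rows later inserted into it.

#include <stdio.h>
#include <sqlite3.h>

/* Callback invoked by SQLite once per result row; column values
   arrive as strings. */
static int print_row(void *unused, int ncols, char **values, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s%s", names[i], values[i] ? values[i] : "NULL",
               i + 1 < ncols ? ", " : "\n");
    return 0;
}

int main(void) {
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("people.db", &db) != SQLITE_OK)
        return 1;

    /* The schema is stored by the database separately from the data. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS person(id INTEGER PRIMARY KEY, name TEXT);"
        "INSERT INTO person(name) VALUES ('Ada'), ('Grace');",
        NULL, NULL, &err);

    /* Rows and columns are addressed by name, not by physical location. */
    sqlite3_exec(db, "SELECT id, name FROM person;", print_row, NULL, &err);

    sqlite3_close(db);
    return 0;
}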
In recent years, the extensible markup language (XML) has become a popular format for data representation. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort". As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine and human-readable.
The terms "data" and "information" are not synonymous. Anything
stored is data, but it only becomes information when it is organized and
presented meaningfully. Most of the world's digital data is unstructured, and stored in a variety of different physical formats even within a single organization. Data warehouses
began to be developed in the 1980s to integrate these disparate stores.
They typically contain data extracted from various sources, including
external sources such as the Internet, organized in such a way as to
facilitate decision support systems (DSS).
Data transmission
An IBM card storage warehouse in Alexandria, Virginia, in 1959, where the United States government kept its punched cards.
Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.
XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing "data-in-transit rather than ... data-at-rest".
Data manipulation
Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law):
machines' application-specific capacity to compute information per
capita roughly doubled every 14 months between 1986 and 2007; the per
capita capacity of the world's general-purpose computers doubled every
18 months during the same two decades; the global telecommunication
capacity per capita doubled every 34 months; the world's storage
capacity per capita required roughly 40 months to double (every 3
years); and per capita broadcast information has doubled every 12.3
years.
Massive amounts of data are stored worldwide every day, but
unless it can be analyzed and presented effectively it essentially
resides in what have been called data tombs: "data archives that are
seldom visited". To address that issue, the field of data mining – "the process of discovering interesting patterns and knowledge from large amounts of data" – emerged in the late 1980s.
Perspectives
Academic perspective
In an academic context, the Association for Computing Machinery
defines IT as "undergraduate degree programs that prepare students to
meet the computer technology needs of business, government, healthcare,
schools, and other kinds of organizations .... IT specialists assume
responsibility for selecting hardware and software products appropriate
for an organization, integrating those products with organizational
needs and infrastructure, and installing, customizing, and maintaining
those applications for the organization’s computer users."
Undergraduate degrees in IT (B.S., A.S.) are similar to other
computer science degrees; in fact, they often share the same
foundational courses. Computer science (CS) programs tend to
focus more on theory and design, whereas Information Technology programs
are structured to equip the graduate with expertise in the practical
application of technology solutions to support modern business and user
needs.
Commercial and employment perspective
Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry".
These titles can be misleading at times and should not be confused with
“tech companies”, which are generally large-scale, for-profit
corporations that sell consumer technology and software. It is also
worth noting that from a business perspective, Information Technology
departments are a “cost center” the majority of the time. A cost center
is a department or staff which incurs expenses, or “costs”, within a
company rather than generating profits or revenue streams. Modern
businesses rely heavily on technology for their day-to-day operations,
so the expenses incurred to cover technology that facilitates business
in a more efficient manner are usually seen as “just the cost of doing
business”. IT departments are allocated funds by senior leadership and
must attempt to achieve the desired deliverables while staying within
that budget. Government and the private sector might have different
funding mechanisms, but the principles are more or less the same. This
is an often-overlooked reason for the rapid growth of interest in automation and
artificial intelligence: the constant pressure to do more with less
is opening the door for automation to take over at least some
minor operations in large companies.
Many companies now have IT departments for managing the
computers, networks, and other technical areas of their businesses.
Companies have also sought to integrate IT with business outcomes and
decision-making through a BizOps or business operations department.
In a business context, the Information Technology Association of America
has defined information technology as "the study, design, development,
application, implementation, support or management of computer-based
information systems".
The responsibilities of those working in the field include network
administration, software development and installation, and the planning
and management of an organization's technology life cycle, by which
hardware and software are maintained, upgraded and replaced.
Information services
Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies, as well as data brokers.
Accompanying charts show U.S. employment distribution (2011), employment levels (1990–2011), and projected occupational growth, wages, employment, and output (2010–2020) in computer systems design and related services.
Ethical perspectives
The field of information ethics was established by mathematician Norbert Wiener in the 1940s. Some of the ethical issues associated with the use of information technology include:
Breaches of copyright by those downloading files stored without the permission of the copyright holders
Employers monitoring their employees' emails and other Internet usage
In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g. memory management),
making the process of developing a program simpler and more
understandable than when using a lower-level language. The amount of
abstraction provided defines how "high-level" a programming language is.
In the 1960s, high-level programming languages using a compiler were commonly called autocodes.
Examples of autocodes are COBOL and Fortran.
The first high-level programming language designed for computers was Plankalkül, created by Konrad Zuse.
However, it was not implemented in his time, and his original
contributions were largely isolated from other developments due to World War II, aside from the language's influence on the "Superplan" language by Heinz Rutishauser and also to some degree Algol. The first significantly widespread high-level language was Fortran, a machine-independent development of IBM's earlier Autocode systems. Algol, defined in 1958 and 1960 by committees of European and American computer scientists, introduced recursion as well as nested functions under lexical scope. It was also the first language with a clear distinction between value and name parameters and their corresponding semantics. Algol also introduced several structured programming concepts, such as the while-do and if-then-else constructs, and its syntax was the first to be described in formal notation – "Backus–Naur form" (BNF). During roughly the same period, Cobol introduced records (also called structs) and Lisp introduced a fully general lambda abstraction in a programming language for the first time.
Features
"High-level language" refers to the higher level of abstraction from machine language. Rather than dealing with registers, memory addresses, and call stacks, high-level languages deal with variables, arrays, objects, complex arithmetic or boolean expressions, subroutines and functions, loops, threads, locks, and other abstract computer science concepts, with a focus on usability over optimal program efficiency. Unlike low-level assembly languages, high-level languages have few, if any, language elements that translate directly into a machine's native opcodes.
Other features, such as string handling routines, object-oriented
language features, and file input/output, may also be present. Notably,
high-level languages allow the programmer to be detached and separated
from the machine: unlike low-level languages like assembly or machine
language, high-level programming can amplify the programmer's
instructions and trigger many data movements in the background without
the programmer's knowledge. The responsibility and power of executing
instructions are handed over to the machine.
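As a brief illustration (a sketch, not drawn from any particular source), the C function below manipulates an array with a loop and an accumulator; the comments note some of the machine-level work the compiler performs invisibly.

#include <stddef.h>

/* Sum an array. The programmer thinks in terms of an array, an index,
   and an accumulator; the compiler chooses which registers hold them,
   computes each element's address, and emits the compare-and-branch
   instructions that implement the loop. */
long sum(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];   /* one abstract step, several machine operations */
    return total;
}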
Abstraction penalty
High-level
languages intend to provide features which standardize common tasks,
permit rich debugging, and maintain architectural agnosticism; while
low-level languages often produce more efficient code through optimization for a specific system architecture. Abstraction penalty
is the cost that high-level programming techniques pay for being unable
to optimize performance or use certain hardware because they don't take
advantage of certain low-level architectural resources. High-level
programming exhibits features like more generic data structures and
operations, run-time interpretation, and intermediate code files, which
often result in execution of far more operations than necessary, higher
memory consumption, and larger binary program size.
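For illustration, here is a hedged C sketch of this penalty: the standard library's generic qsort sorts elements of any type through a function-pointer comparison, paying an indirect call per comparison, while a loop hard-wired to one element type gives the compiler full visibility.

#include <stdlib.h>

/* Generic path: qsort works for any element type, but pays for that
   generality with an indirect function call per comparison. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

void sort_generic(int *a, size_t n) {
    qsort(a, n, sizeof *a, cmp_int);
}

/* Specialized path: an insertion sort hard-wired to int. The compiler
   sees every operation directly and can keep values in registers. */
void sort_specialized(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

The generic version trades speed for reuse; the specialized one is the kind of code a lower-level approach, or a hand-specialized routine, buys back.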
For this reason, code which needs to run particularly quickly and
efficiently may require the use of a lower-level language, even if a
higher-level language would make the coding easier. In many cases,
critical portions of a program that is mostly written in a high-level language can be
hand-coded in assembly language, leading to a much faster, more efficient, or simply more reliably functioning optimised program.
However, with the growing complexity of modern microprocessor
architectures, well-designed compilers for high-level languages
frequently produce code comparable in efficiency to what most low-level
programmers can produce by hand, and the higher abstraction may allow
for more powerful techniques providing better overall results than their
low-level counterparts in particular settings.
High-level languages are designed independent of a specific computing
system architecture. This facilitates executing a program written in
such a language on any computing system with compatible support for the
interpreted or JIT-compiled
program. High-level languages can be improved as their designers
develop improvements. In other cases, new high-level languages evolve
from one or more others with the goal of aggregating the most popular
constructs with new or improved features. An example of this is Scala which maintains backward compatibility with Java
which means that programs and libraries written in Java will continue
to be usable even if a programming shop switches to Scala; this makes
the transition easier and the lifespan of such high-level coding
indefinite. In contrast, low-level programs rarely survive beyond the
system architecture which they were written for without major revision.
This is the engineering 'trade-off' for the 'Abstraction Penalty'.
The terms high-level and low-level are inherently relative. Some decades ago, the C language and similar languages were most often considered "high-level", as they supported concepts such as expression evaluation, parameterised recursive functions, and data types and structures, while assembly language was considered "low-level". Today, many programmers might refer to C as low-level, as it lacks a large runtime-system
(no garbage collection, etc.), basically supports only scalar
operations, and provides direct memory addressing. It, therefore,
readily blends with assembly language and the machine level of CPUs and microcontrollers.
Assembly language may itself be regarded as a higher level (but often still one-to-one if used without macros) representation of machine code, as it supports concepts such as constants and (limited) expressions, sometimes even variables, procedures, and data structures. Machine code, in its turn, is inherently at a slightly higher level than the microcode or micro-operations used internally in many processors.
Execution modes
There are three general modes of execution for modern high-level languages:
Interpreted
When code written in a language is interpreted, its syntax is read and then executed directly, with no compilation stage. A program called an interpreter
reads each program statement, following the program flow, then decides
what to do, and does it. A hybrid of an interpreter and a compiler will
compile the statement into machine code and execute that; the machine
code is then discarded, to be interpreted anew if the line is executed
again. Interpreters are commonly the simplest implementations of the
behavior of a language, compared to the other two variants listed here.
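As a minimal sketch of that read-decide-execute loop (assuming a toy language of single-digit operands joined by + and -, invented here for illustration):

#include <stdio.h>

/* Interpret expressions like "1+2+3-4": read each token, decide what
   it means, and act on it immediately -- no compilation step. */
int interpret(const char *src) {
    int acc = *src++ - '0';            /* first operand */
    while (*src) {
        char op = *src++;              /* read the operator */
        int val = *src++ - '0';        /* read the next operand */
        if (op == '+') acc += val;     /* decide and execute */
        else if (op == '-') acc -= val;
    }
    return acc;
}

int main(void) {
    printf("%d\n", interpret("1+2+3-4"));   /* prints 2 */
    return 0;
}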
Compiled
When code written in a language is compiled, its syntax is transformed into an executable form before running. There are two types of compilation:
Machine code generation
Some compilers compile source code directly into machine code.
This is the original mode of compilation, and languages that are
directly and completely transformed to machine-native code in this way
may be called truly compiled languages. See assembly language.
Intermediate representations
When code written in a language is compiled to an intermediate representation,
that representation can be optimized or saved for later execution
without the need to re-read the source file. When the intermediate
representation is saved, it may be in a form such as bytecode. The intermediate representation must then be interpreted or further compiled to execute it. Virtual machines
that execute bytecode directly or transform it further into machine
code have blurred the once clear distinction between intermediate
representations and truly compiled languages.
Source-to-source translated or transcompiled
Code written in a language may be translated into terms of a
lower-level language for which native code compilers are already common.
JavaScript and the language C are common targets for such translators. See CoffeeScript, Chicken Scheme, and Eiffel as examples. Specifically, the generated C and C++ code can be seen (as generated from the Eiffel language when using the EiffelStudio IDE) in the EIFGENs directory of any compiled Eiffel project. In Eiffel, the translation process is referred to as transcompiling or transcompiled, and the Eiffel compiler as a transcompiler or source-to-source compiler.
Note that languages are not strictly interpreted languages or compiled languages. Rather, implementations of language behavior use interpreting or compiling. For example, ALGOL 60 and Fortran
have both been interpreted (even though they were more typically
compiled). Similarly, Java shows the difficulty of trying to apply these
labels to languages, rather than to implementations; Java is compiled
to bytecode which is then executed by either interpreting (in a Java virtual machine (JVM)) or compiling (typically with a just-in-time compiler such as HotSpot,
again in a JVM). Moreover, compiling, transcompiling, and interpreting
are not strictly limited to a description of the compiler artifact
(binary executable or IL assembly).
High-level language computer architecture
Alternatively,
it is possible for a high-level language to be directly implemented by a
computer – the computer directly executes the HLL code. This is known
as a high-level language computer architecture – the computer architecture itself is designed to be targeted by a specific high-level language. The Burroughs large systems were target machines for ALGOL 60, for example.
In computer programming, assembly language (or assembler language), sometimes abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions.
Because assembly depends on the machine code instructions, every
assembly language is designed for exactly one specific computer
architecture. Assembly language may also be called symbolic machine code.
Assembly code is converted into executable machine code by a utility program referred to as an assembler. The conversion process is referred to as assembly, as in assembling the source code. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of program and memory locations, and macros are generally also supported.
Each assembly language is specific to a particular computer architecture and sometimes to an operating system. However, some assembly languages do not provide specific syntax
for operating system calls, and most assembly languages can be used
universally with any operating system, as the language provides access
to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, a much more complicated task than assembling.
The computational step when an assembler is processing a program is called assembly time.
Assembly language syntax
Assembly language uses a mnemonic to represent each low-level machine instruction or opcode, typically also each architectural register, flag, etc. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions
for operands. Thus, programmers are freed from tedious repetitive
calculations and assembler programs are much more readable than machine
code. Depending on the architecture, these elements may also be combined
for specific instructions or addressing modes using offsets
or other data as well as fixed addresses. Many assemblers offer
additional mechanisms to facilitate program development, to control the
assembly process, and to aid debugging.
Terminology
A macro assembler includes a macroinstruction
facility so that (parameterized) assembly language text can be
represented by a name, and that name can be used to insert the expanded
text into other code.
A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system).
Cross-assembling facilitates the development of programs for systems
that do not have the resources to support software development, such as
an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer
(when the read-only memory is integrated in the device, as in
microcontrollers), or a data link using either an exact bit-by-bit copy
of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
A high-level assembler
is a program that provides language abstractions more often associated
with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
A microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer.
A meta-assembler is "a program that accepts the syntactic and
semantic description of an assembly language, and generates an
assembler for that language". "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware.
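For illustration, a short hedged sketch of inline assembler in the GCC/Clang extended-asm dialect for x86: the asm statement drops to machine level for a single RDTSC instruction, for which C itself has no construct, while everything around it stays in C.

#include <stdint.h>
#include <stdio.h>

/* Read the x86 time-stamp counter with the RDTSC instruction, which
   returns its 64-bit result split across the EDX:EAX register pair. */
static inline uint64_t read_tsc(void) {
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    uint64_t start = read_tsc();
    /* ... work being timed ... */
    uint64_t end = read_tsc();
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}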
Key concepts
Assembler
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities.
The use of symbolic references is a key feature of assemblers, saving
tedious calculations and manual address updates after program
modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
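A minimal sketch in C of that translation step, using a toy two-instruction opcode table; the mnemonic names are invented for illustration, though the opcode values match the x86 encodings for MOV AL,imm8 (B0) and ADD AL,imm8 (04) discussed later in this article.

#include <stdio.h>
#include <string.h>

/* Toy opcode table: each mnemonic maps to a one-byte opcode that is
   followed by a one-byte immediate operand. */
struct opdef { const char *mnemonic; unsigned char opcode; };

static const struct opdef table[] = {
    { "LDAL", 0xB0 },   /* load AL with immediate (x86: MOV AL, imm8) */
    { "ADAL", 0x04 },   /* add immediate to AL    (x86: ADD AL, imm8) */
};

/* Assemble one "MNEMONIC value" statement into two bytes of machine
   code; returns the number of bytes emitted, or 0 for an unknown
   mnemonic. */
static int assemble(const char *mnemonic, unsigned value, unsigned char out[2]) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].mnemonic, mnemonic) == 0) {
            out[0] = table[i].opcode;
            out[1] = (unsigned char)value;
            return 2;
        }
    }
    return 0;
}

int main(void) {
    unsigned char code[2];
    if (assemble("LDAL", 0x61, code))            /* like MOV AL, 61h */
        printf("%02X %02X\n", code[0], code[1]); /* prints: B0 61 */
    return 0;
}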
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing,
most of them are able to perform jump-instruction replacements (long
jumps replaced by short or relative jumps) in any number of passes, on
request. Others may even do simple rearrangement or insertion of
instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code.
A single assembler may also have different modes in order to support
variations in syntactic forms as well as their exact semantic
interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
Number of passes
There
are two types of assemblers based on how many passes through the source
are needed (how many times the assembler reads the source) to produce
the object file.
One-pass assemblers go through the source code once. Any symbol used before it is defined will require "errata" at the end of the object code (or, at least, no earlier than the point where the symbol is defined) telling the linker or the loader to "go back" and overwrite a placeholder which had been left where the as yet undefined symbol was used.
Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of
each instruction on the initial passes in order to calculate the
addresses of subsequent symbols. This means that if the size of an
operation referring to an operand defined later depends on the type or
distance of the operand, the assembler will make a pessimistic estimate
when first encountering the operation, and if necessary, pad it with one
or more
"no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization,
addresses may be recalculated between passes to allow replacing
pessimistic code with code tailored to the exact distance from the
target.
The original reason for the use of one-pass assemblers was memory
size and speed of assembly – often a second pass would require storing
the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape.
Later computers with much larger memories (especially disc storage),
had the space to perform all necessary processing without such
re-reading. The advantage of the multi-pass assembler is that the
absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD
may be undefined. A two-pass assembler would determine both addresses
in pass 1, so they would be known when generating code in pass 2.
S1 B FWD
...
FWD EQU *
...
BKWD EQU *
...
S2 B BKWD
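A hedged C sketch of the two-pass idea applied to the snippet above: pass 1 assigns an address to every statement and records the labels FWD and BKWD in a symbol table; pass 2 then resolves both branches. The fixed four-byte statement size and the in-memory program representation are simplifying assumptions.

#include <stdio.h>
#include <string.h>

#define MAXSYMS 16

struct sym { char name[8]; unsigned addr; };
static struct sym symtab[MAXSYMS];
static int nsyms;

static void define(const char *name, unsigned addr) {
    if (nsyms < MAXSYMS) {
        strncpy(symtab[nsyms].name, name, 7);
        symtab[nsyms++].addr = addr;
    }
}

static int lookup(const char *name, unsigned *addr) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(symtab[i].name, name) == 0) { *addr = symtab[i].addr; return 1; }
    return 0;
}

int main(void) {
    /* Simplified program: each entry is a label (or "") and, for a
       branch, the symbol it references. Every statement is 4 bytes. */
    struct { const char *label, *ref; } prog[] = {
        { "",     "FWD"  },   /* S1   B FWD   -- forward reference  */
        { "FWD",  NULL   },   /* FWD  EQU *                         */
        { "BKWD", NULL   },   /* BKWD EQU *                         */
        { "",     "BKWD" },   /* S2   B BKWD  -- backward reference */
    };
    int n = 4;

    /* Pass 1: assign addresses and collect every label. */
    for (int i = 0; i < n; i++)
        if (prog[i].label[0]) define(prog[i].label, 4u * i);

    /* Pass 2: all symbols are now known, so both branches resolve. */
    for (int i = 0; i < n; i++) {
        unsigned target;
        if (prog[i].ref && lookup(prog[i].ref, &target))
            printf("addr %2u: branch to %s at %u\n", 4u * i, prog[i].ref, target);
    }
    return 0;
}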
High-level assemblers typically provide:
High-level procedure/function declarations and invocations
Advanced control structures (IF/THEN/ELSE, SWITCH)
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
A program written in assembly language consists of a series of mnemonic
processor instructions and meta-statements (known variously as
directives, pseudo-instructions, and pseudo-ops), comments and data.
Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters.
Some instructions may be "implied," which means the data upon which
the instruction operates is implicitly defined by the instruction
itself—such an instruction does not take an operand. The resulting
statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
10110000 01100001
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
B0 61
Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move)
for instructions such as this, so the machine code above can be written
as follows in assembly language, complete with an explanatory comment
if required, after the semicolon. This is much easier to read and to
remember.
MOV AL, 61h       ; Load AL with 97 decimal (61 hex)
In some assembly languages (including this one) the same mnemonic,
such as MOV, may be used for a family of related instructions for
loading, copying and moving data, whether these are immediate values,
values in registers, or memory locations pointed to by values in
registers or by immediate (a.k.a. direct) addresses. Other assemblers
may use separate opcode mnemonics such as L for "move memory to
register", ST for "move register to memory", LR for "move register to
register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that
means that the mnemonic corresponds to several different binary
instruction codes, excluding data (e.g. the 61h in this
example), depending on the operands that follow the mnemonic. For
example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:
88 E0
The first byte, 88h, identifies a move between a byte-sized register
and either another register or memory, and the second byte, E0h, is
encoded (with three bit-fields) to specify that both operands are
registers, the source is AH, and the destination is AL.
In a case like this where the same mnemonic can represent more
than one binary instruction, the assembler determines which instruction
to generate by examining the operands. In the first example, the
operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
Assembly languages are always designed so that this sort of
unambiguousness is universally enforced by their syntax. For example,
in the Intel x86 assembly language, a hexadecimal constant must start
with a numeral digit, so that the hexadecimal number 'A' (equal to
decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
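A small sketch in C of that disambiguation rule (assuming tokens arrive already upper-cased): 'AH' fails the leading-digit test, so it can only be a register name, while '0AH' passes it.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* An x86-style hexadecimal constant must begin with a decimal digit
   and end in 'H'; everything in between must be a hex digit. */
static int is_hex_constant(const char *tok) {
    size_t n = strlen(tok);
    if (n < 2 || !isdigit((unsigned char)tok[0]) || tok[n - 1] != 'H')
        return 0;
    for (size_t i = 1; i + 1 < n; i++)
        if (!isxdigit((unsigned char)tok[i]))
            return 0;
    return 1;
}

int main(void) {
    printf("AH    -> %s\n", is_hex_constant("AH")    ? "constant" : "not a constant");
    printf("0AH   -> %s\n", is_hex_constant("0AH")   ? "constant" : "not a constant");
    printf("BEACH -> %s\n", is_hex_constant("BEACH") ? "constant" : "not a constant");
    return 0;
}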
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.
MOV AL, 1h        ; Load AL with immediate value 1
MOV CL, 2h        ; Load CL with immediate value 2
MOV DL, 3h        ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX]    ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX        ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the
opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the
programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an
assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions
(essentially macros) which expand into several machine language
instructions to provide commonly needed functionality. For example, for a
machine that lacks a "branch if greater or equal" instruction, an
assembler may provide a pseudoinstruction that expands to the machine's
"set if less than" and "branch if zero (on the result of the set
instruction)". Most full-featured assemblers also provide a rich macro
language (discussed below) which is used by vendors and programmers to
generate more complex code and data sequences. Since the information
about pseudoinstructions and macros defined in the assembler environment
is not present in the object program, a disassembler cannot reconstruct
the macro and pseudoinstruction invocations but can only disassemble
the actual machine instructions that the assembler generated from those
abstract assembly-language entities. Likewise, since comments in the
assembly language source file are ignored by the assembler and have no
effect on the object code it generates, a disassembler is always
completely unable to recover source comments.
Each computer architecture
has its own machine language. Computers differ in the number and type
of operations they support, in the different sizes and numbers of
registers, and in the representations of data in storage. While most
general-purpose computers are able to carry out essentially the same
functionality, the ways they do so differ; the corresponding assembly
languages reflect these differences.
Multiple sets of mnemonics
or assembly-language syntax may exist for a single instruction set,
typically instantiated in different assembler programs. In these cases,
the most popular one is usually that supplied by the CPU manufacturer
and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics
are the Intel 8080 family and the Intel 8086/8088. Because Intel
claimed copyright on its assembly language mnemonics (on each page of
their documentation published in the 1970s and early 1980s, at least),
some companies that independently produced CPUs compatible with Intel
instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A,
supports all the 8080A instructions plus many more; Zilog invented an
entirely new assembly language, not only for the new instructions but
also for all of the 8080A instructions. For example, where Intel uses
the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30
CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like
Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and
8088 instructions, to avoid accusations of infringement of Intel's
copyright. (It is questionable whether such copyrights can be valid,
and later CPU companies such as AMD and Cyrix
republished Intel's x86/IA-32 instruction mnemonics exactly with
neither permission nor legal penalty.) It is doubtful whether in
practice many people who programmed the V20 and V30 actually wrote in
NEC's assembly language rather than Intel's; since any two assembly
languages for the same instruction set architecture are isomorphic
(somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
Language design
Basic elements
There
is a large degree of diversity in the way the authors of assemblers
categorize statements and in the nomenclature that they use. In
particular, some describe anything other than a machine mnemonic or
extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly
language consists of three types of instruction statements that are used to
define program operations: opcode mnemonics, data definitions, and assembly directives.
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode),
and there is at least one opcode mnemonic defined for each machine
language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands.
Most instructions refer to a single value or a pair of values.
Operands can be immediate (value coded in the instruction itself),
registers specified in the instruction or implied, or the addresses of
data located elsewhere in storage. This is determined by the underlying
processor architecture: the assembler merely reflects how this
architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
Extended mnemonics are often used to support specialized
uses of instructions, often for purposes not obvious from the
instruction name. For example, many CPUs do not have an explicit NOP
instruction, but do have instructions that can be used for the purpose.
In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
Some assemblers also support simple built-in macro-instructions
that generate two or more machine instructions. For instance, with some
Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
Data directives
There
are instructions used to define data elements to hold data and
variables. They define the type of data, the length and the alignment
of data. These instructions can also define whether the data is
available to outside programs (programs assembled separately) or only to
the program in which the data section is defined. Some assemblers
classify these as pseudo-ops.
Assembly directives
Assembly
directives, also called pseudo-opcodes, pseudo-operations or
pseudo-ops, are commands given to an assembler "directing it to perform
operations other than assembling instructions".
Directives affect how the assembler operates and "may affect the object
code, the symbol table, the listing file, and the values of internal
assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish
them from machine instructions. Pseudo-ops can make the assembly of the
program dependent on parameters input by a programmer, so that one
program can be assembled in different ways, perhaps for different
applications. Or, a pseudo-op can be used to manipulate presentation of a
program to make it easier to read and maintain. Another common use of
pseudo-ops is to reserve storage areas for run-time data and optionally
initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names (labels or symbols)
with memory locations and various constants. Usually, every constant
and variable is given a name so instructions can reference those
locations by name, thus promoting self-documenting code.
In executable code, the name of each subroutine is associated with its
entry point, so any calls to a subroutine can use its name. Inside
subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures,
and assign labels that refer to literal values or the result of simple
computations performed by the assembler. Labels can also be used to
initialize constants and variables with relocatable addresses.
Assembly languages, like most other computer languages, allow comments to be added to program source code
that will be ignored during assembly. Judicious commenting is essential
in assembly language programs, as the meaning and purpose of a sequence
of binary machine instructions can be difficult to determine. The "raw"
(uncommented) assembly language generated by compilers or disassemblers
is quite difficult to read when changes must be made.
Macros
Many assemblers support predefined macros, and others support programmer-defined
(and repeatedly re-definable) macros involving sequences of text lines
in which variables and constants are embedded. The macro definition is
most commonly
a mixture of assembler statements, e.g., directives, symbolic machine
instructions, and templates for assembler statements. This sequence of
text lines may include opcodes or directives. Once a macro has been
defined its name may be used in place of a mnemonic. When the assembler
processes such a statement, it replaces the statement with the text
lines associated with that macro, then processes them as if they existed
in the source code file (including, in some assemblers, expansion of
any macros existing in the replacement text). Macros in this sense date
to IBM autocoders of the 1950s.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language,
where its #define directive typically is used to create short single
line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
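For comparison, a brief sketch of the C style mentioned above, in which #define typically creates short single-line expansions (the names are made up for illustration):

#include <stdio.h>

/* Typical single-line C macros: a constant and a parameterized
   expression, both expanded textually before compilation. */
#define BUFFER_SIZE 1024
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void) {
    printf("%d\n", BUFFER_SIZE);   /* expands to 1024 */
    printf("%d\n", MAX(3, 7));     /* expands to ((3) > (7) ? (3) : (7)) */
    return 0;
}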
Since macros can have 'short' names but expand to several or
indeed many lines of code, they can be used to make assembly language
programs appear to be far shorter, requiring fewer lines of source code,
as with higher level languages. They can also be used to add higher
levels of structure to assembly programs, optionally introduce embedded
debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters.
Some assemblers include quite sophisticated macro languages,
incorporating such high-level language elements as optional parameters,
symbolic variables, conditionals, string manipulation, and arithmetic
operations, all usable during the execution of a given macro, and
allowing macros to save context or exchange information. Thus a macro
might generate numerous assembly language instructions or data
definitions, based on the macro arguments. This could be used to
generate record-style data structures or "unrolled"
loops, for example, or could generate entire algorithms based on
complex parameters. For instance, a "sort" macro could accept the
specification of a complex sort key and generate code crafted for that
specific key, not needing the run-time tests that would be required for a
general procedure interpreting the specification. An organization using
assembly language that has been heavily extended using such a macro
suite can be considered to be working in a higher-level language since
such programmers are not working with a computer's lowest-level
conceptual elements. Underlining this point, macros were used to
implement an early virtual machine in SNOBOL4
(1967), which was written in the SNOBOL Implementation Language (SIL),
an assembly language for a virtual machine. The target machine would
translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
Macros were used to customize large scale software systems for
specific customers in the mainframe era and were also used by customer
personnel to satisfy their employers' needs by making specific versions
of manufacturer operating systems. This was done, for example, by
systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities
of an assembler to generate code written in completely different
languages, for example, to generate a version of a program in COBOL
using a pure macro assembler program containing lines of COBOL code
inside assembly time operators instructing the assembler to generate
arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of
"macro processing" is independent of the concept of "assembly", the
former being in modern terms more word processing, text processing, than
generating object code. The concept of macro processing appeared, and
appears, in the C programming language, which supports "preprocessor
instructions" to set variables, and make conditional tests on their
values. Unlike certain previous macro processors inside assemblers, the C
preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
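A short illustration of such preprocessor instructions in C (the VERBOSE flag is a made-up example): a value is set with #define and tested with #if, selecting code at compile time rather than looping.

#include <stdio.h>

#define VERBOSE 1              /* set a preprocessor variable */

int main(void) {
#if VERBOSE                    /* conditional test on its value */
    printf("verbose build\n");
#else
    printf("quiet build\n");
#endif
    return 0;
}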
Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro
processing time, the value of a parameter is textually substituted for
its name. The most famous class of bugs resulting was the use of a
parameter that itself was an expression and not a simple name when the
macro writer expected a name. In the macro:
foo: macro a
load a*b
the intention was that the caller would provide the name of a
variable, and the "global" variable or constant b would be used to
multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b
occurs. To avoid any possible ambiguity, users of macro processors can
parenthesize formal parameters inside macro definitions, or callers can
parenthesize the input parameters.
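The same by-name pitfall exists in the C preprocessor, so a C analog of the foo macro (names invented for illustration) may make the failure concrete; SCALE plays the role of foo and b is the "global" factor.

#include <stdio.h>

int b = 10;

/* Unparenthesized: the parameter is substituted textually, exactly
   as in the assembler macro above. */
#define SCALE(a) (a * b)

/* Parenthesizing the formal parameter restores the intended meaning. */
#define SCALE_SAFE(a) ((a) * b)

int main(void) {
    int x = 5, c = 2;
    printf("%d\n", SCALE(x - c));      /* expands to (x - c * b) = -15 */
    printf("%d\n", SCALE_SAFE(x - c)); /* expands to ((x - c) * b) = 30 */
    return 0;
}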
Support for structured programming
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills
(March 1970), and implemented by Marvin Kessler at IBM's Federal
Systems Division, which provided IF/ELSE/ENDIF and similar control flow
blocks for OS/360 assembler programs. This was a way to reduce or
eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code
in assembly language. This approach was widely accepted in the early
1980s (the latter days of large-scale assembly language use). IBM's High
Level Assembler Toolkit includes such a macro package.
A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers,
and memory references; but it incorporated an expression syntax to
indicate execution order. Parentheses and other special symbols, along
with block-oriented structured programming constructs, controlled the
sequence of the generated instructions. A-natural was built as the
object language of a C compiler, rather than for hand-coding, but its
logical syntax won some fans.
There has been little apparent demand for more sophisticated
assemblers since the decline of large-scale assembly language
development.
In spite of that, they are still being developed and applied in cases
where resource constraints or peculiarities in the target system's
architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured
programming via macros, such as the switch macro provided with the
Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc    ; use the Masm32 library

.code
demomain:
  REPEAT 20
    switch rv(nrandom, 9)   ; generate a number between 0 and 8
    mov ecx, 7
    case 0
        print "case 0"
    case ecx                ; in contrast to most other programming languages,
        print "case 7"      ; the Masm32 switch allows "variable cases"
    case 1 .. 3
        .if eax==1
            print "case 1"
        .elseif eax==2
            print "case 2"
        .else
            print "cases 1 to 3: other"
        .endif
    case 4, 6, 8
        print "cases 4, 6 or 8"
    default
        mov ebx, 19         ; print 20 stars
        .Repeat
            print "*"
            dec ebx
        .Until Sign?        ; loop until the sign flag is set
    endsw
    print chr$(13, 10)
  ENDM
exit

end demomain
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation
programming needed with the earliest computers, freeing programmers
from tedium such as remembering numeric codes and calculating addresses.
Assembly languages were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by higher-level languages, in the search for improved programming productivity.
Today, assembly language is still used for direct hardware
manipulation, access to specialized processor instructions, or to
address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Historically, numerous programs have been written entirely in assembly language. The Burroughs MCP (1961) was the first operating system that was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language
(ESPOL), an Algol dialect. Many commercial applications were written in
assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN
and some PL/I eventually displaced much of this work, although a number
of large organizations retained assembly-language application
infrastructures well into the 1990s.
Most early microcomputers relied on hand-coded assembly language,
including most operating systems and large applications. This was
because these systems had severe resource constraints, imposed
idiosyncratic memory and display architectures, and provided limited,
buggy system services. Perhaps more important was the lack of
first-class high-level language compilers suitable for microcomputer
use. A psychological factor may have also played a role: the first
generation of microcomputer programmers retained a hobbyist, "wires and
pliers" attitude.
In a more commercial context, the biggest reasons for using
assembly language were minimal bloat (size), minimal overhead, greater
speed, and reliability.
Typical examples of large assembly language programs from this time are IBM PC DOS operating systems, the Turbo Pascal compiler and early applications such as the spreadsheet program Lotus 1-2-3. Assembly language was used to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for. The 1993 arcade game NBA Jam is another example.
Assembly language was long the primary development language for many popular home computers of the 1980s and 1990s (such as the MSX, Sinclair ZX Spectrum, Commodore 64, Commodore Amiga, and Atari ST). This was in large part because interpreted
BASIC dialects on these systems offered insufficient execution speed,
as well as insufficient facilities to take full advantage of the
available hardware. Some of these systems even had an integrated development environment (IDE) with advanced debugging and macro facilities. Some compilers available for the Radio Shack TRS-80 and its successors could combine inline assembly source
with high-level program statements. Upon compilation, a built-in
assembler produced inline machine code.
Current usage
There have always been debates over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
As of July 2017, the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed
to render high-level languages into code that can run as fast as
hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes
effective optimization increasingly difficult for compilers, as well as
for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
Writing code for systems with older processors that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these 1970s and 1980s computers are often written in the context of demoscene or retrogaming subcultures.
In an embedded processor or DSP, high-repetition interrupts require the fewest possible cycles per interrupt, for example an interrupt that occurs 1000 or 10000 times a second.
Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition (a sketch of such a rotation wrapper follows this list).
A compact, stand-alone executable is required that must execute without recourse to the run-time components or libraries
associated with a high-level language. Examples have included firmware
for telephones, automobile fuel and ignition systems, air-conditioning
control systems, security systems, and sensors.
Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language, for example linear algebra with BLAS or the discrete cosine transform (e.g. the SIMD assembly version from x264).
Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions that map directly to SIMD mnemonics, but which nevertheless result in a one-to-one assembly conversion specific to the given vector processor (see the intrinsics sketch after this list).
Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire
system, telemetry must be interpreted and acted upon within strict time
constraints. Such systems must eliminate sources of unpredictable
delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking.
Because some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays, choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.
Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks (a constant-time sketch follows this list).
Modifying and extending legacy code written for IBM mainframe computers.
Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
Reverse engineering and modifying program files, such as existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or when cracking copy protection of proprietary software; and video games (also termed ROM hacking), where the most widely employed of several possible methods is altering program code at the assembly language level.
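To make the processor-specific-instruction case concrete, the following sketch wraps the x86 ROL (rotate left) instruction, for which standard C has no operator. It assumes GCC- or Clang-style extended inline assembly on an x86 target; the rotl32 name is hypothetical, and a portable fallback is included for other compilers.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical wrapper for the x86 ROL instruction (a sketch; assumes
   GCC/Clang extended asm on an x86 target). */
static uint32_t rotl32(uint32_t value, unsigned count)
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    __asm__("roll %%cl, %0"      /* ROL: rotate left by the count in CL */
            : "+r"(value)        /* value is read and written in a register */
            : "c"(count));       /* the count must be placed in ECX/CL */
    return value;
#else
    count &= 31;                 /* portable C fallback */
    return (value << count) | (value >> ((32 - count) & 31));
#endif
}

int main(void)
{
    printf("%08x\n", rotl32(0x80000001u, 4));  /* prints 00000018 */
    return 0;
}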
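For the vectorized-function case, compiler intrinsics let C code name SIMD instructions almost one-to-one. The following is a minimal sketch, assuming an x86 target with SSE and its standard xmmintrin.h header; the add4 helper is hypothetical, and _mm_add_ps corresponds to the ADDPS instruction.

#include <xmmintrin.h>   /* SSE intrinsics; assumes an x86 target */
#include <stdio.h>

/* Hypothetical example: add four pairs of floats in one SIMD operation.
   Each intrinsic maps closely to a single SSE instruction. */
static void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);              /* MOVUPS: unaligned 4-float load */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));   /* ADDPS, then MOVUPS store */
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, r[4];
    add4(a, b, r);
    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);  /* 11 22 33 44 */
    return 0;
}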
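For the timing-attack case, the usual idiom examines every byte with no early exit, so that running time does not depend on where, or whether, the inputs differ. The C sketch below (the ct_equal name is hypothetical) shows the shape of such a routine; production cryptographic code often writes or audits it in assembly, because an optimizing compiler is free to transform C in timing-relevant ways.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical constant-time comparison: accumulate all byte differences
   instead of returning at the first mismatch. */
static int ct_equal(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;    /* 1 if the buffers are identical, else 0 */
}

int main(void)
{
    unsigned char k1[4] = {1, 2, 3, 4}, k2[4] = {1, 2, 3, 5};
    printf("%d %d\n", ct_equal(k1, k1, 4), ct_equal(k1, k2, 4));  /* 1 0 */
    return 0;
}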
Assembly language is still taught in most computer science and electronic engineering
programs. Although few programmers today regularly work with assembly
language as a tool, the underlying concepts remain important. Such
fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler
design would be hard to study in detail without a grasp of how a
computer operates at the hardware level. Since a computer's behavior is
fundamentally defined by its instruction set, the logical way to learn
such concepts is to study an assembly language. Most modern computers
have similar instruction sets, so studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
Typical applications
Assembly language is typically used in a system's boot
code, the low-level code that initializes and tests the system hardware
prior to booting the operating system; it is often stored in ROM (the BIOS of IBM-compatible PC systems and of CP/M is an example).
Assembly language is often used for low-level code, for instance for operating system kernels,
which cannot rely on the availability of pre-existing system calls and
must themselves implement them for the particular processor architecture on
which the system will be running.
Some compilers translate high-level languages into assembly first
before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
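As a sketch of this workflow with a mainstream compiler such as GCC or Clang, the -S flag stops compilation after the assembly-generation stage, leaving a readable .s file. The function below is hypothetical, and the assembly in the comment is typical x86-64 output at -O2 rather than guaranteed output.

/* add.c -- compile with `gcc -O2 -S add.c` to obtain add.s for inspection. */
int add(int a, int b)
{
    return a + b;
}

/* Typical (not guaranteed) x86-64 System V output at -O2:
 *
 *   add:
 *       leal    (%rdi,%rsi), %eax   # compute a + b in a single instruction
 *       ret
 */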
Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly).
Programs using such facilities can then construct abstractions using
different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
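A minimal sketch of such inline assembly, assuming the extended-asm syntax of GCC and Clang on x86-64 (the read_tsc helper is hypothetical): the RDTSC instruction reads the processor's time-stamp counter, a facility with no portable C equivalent.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: read the x86 time-stamp counter via inline assembly
   (assumes GCC/Clang extended asm on an x86-64 target). */
static uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));  /* result in EDX:EAX */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = read_tsc();
    /* ... code to be measured would run here ... */
    printf("elapsed cycles (approx.): %llu\n",
           (unsigned long long)(read_tsc() - start));
    return 0;
}

On other compilers the syntax differs (MSVC, for instance, provides its own intrinsics), which is why such fragments are normally isolated per platform behind the kind of uniform interface described above.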
Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form which is straightforward to translate into assembly language by a disassembler, but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler
make extensive use of disassembly for such a purpose. This technique is
used by hackers to crack commercial software, and by competitors to produce software with similar results.
Assembly language was used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
Assemblers can be used to generate blocks of data, with no
high-level language overhead, from formatted and commented source code,
to be used by other code.