
Friday, July 27, 2018

Computer architecture

From Wikipedia, the free encyclopedia
A pipelined implementation of the MIPS architecture.
Pipelining is a key concept in computer architecture.
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.

History

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. When building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e. the stored-program concept.[3][4] Two other early and important examples are:
  • John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements;[5]
  • Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also 1945, which cited von Neumann's paper.[6]
The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Frederick P. Brooks, Jr., and Mohammad Usman Khan, all members of the Machine Organization department in IBM’s main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture” – a term that seemed more useful than “machine organization.”[7]

Subsequently, Brooks, a Stretch designer, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing,[8]
Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.
Brooks went on to help develop the IBM System/360 (now called the IBM zSeries) line of computers, in which “architecture” became a noun defining “what the user needs to know”.[9] Later, computer users came to use the term in many less-explicit ways.[10]

The earliest computer architectures were designed on paper and then directly built into the final hardware form.[11] Later, computer architecture prototypes were physically built in the form of a transistor–transistor logic (TTL) computer—such as the prototypes of the 6800 and the PA-RISC—tested, and tweaked, before committing to the final hardware form. As of the 1990s, new computer architectures are typically "built", tested, and tweaked—inside some other computer architecture in a computer architecture simulator; or inside an FPGA as a soft microprocessor; or both—before committing to the final hardware form.[12]

Subcategories

The discipline of computer architecture has three main subcategories:[13]
  1. Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory addressing modes, processor registers, and data types.
  2. Microarchitecture, or computer organization, describes how a particular processor will implement the ISA.[14] The size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA.
  3. System Design includes all of the other hardware components within a computing system. These include:
    1. Data processing other than the CPU, such as direct memory access (DMA)
    2. Other issues such as virtualization, multiprocessing, and software features.
There are other types of computer architecture. The following types are used in larger companies such as Intel and account for about 1% of all computer architecture work:
  • Macroarchitecture: architectural layers more abstract than microarchitecture
  • Assembly Instruction Set Architecture (ISA): A smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
  • Programmer Visible Macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract to programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures. For example, the C, C++, and Java standards each define a different programmer-visible macroarchitecture.
  • UISA (Microcode Instruction Set Architecture)—a group of machines with different hardware level microarchitectures may share a common microcode architecture, and hence a UISA.[citation needed]
  • Pin Architecture: The hardware functions that a microprocessor should provide to a hardware platform, e.g., the x86 pins A20M, FERR/IGNNE or FLUSH. Also, messages that the processor should emit so that external caches can be invalidated (emptied). Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

Roles

Definition

The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the expected performance, and is also very reliable. For this, many aspects must be considered, including instruction set design, functional organization, logic design, and implementation. The implementation involves integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.[15]

Instruction set architecture

An instruction set architecture (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not directly understand high-level programming languages such as Java or C++. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand.

Besides instructions, the ISA defines items in the computer that are available to a program—e.g. data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes (or names) and memory addressing modes.

The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. It may also define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers, which translate in the opposite direction, are also widely available, usually within debuggers and other software tools used to isolate and correct malfunctions in binary computer programs.
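To make the assembler's job concrete, here is a minimal sketch in Python of a toy assembler and disassembler. The instruction format, mnemonics, and register names are invented for illustration and do not correspond to any real ISA.

# Toy assembler for a hypothetical 8-bit instruction word (invented for illustration):
# 2-bit opcode | 3-bit destination register | 3-bit source register.

OPCODES = {"ADD": 0b01, "MOV": 0b10}          # hypothetical opcodes
REGISTERS = {f"R{i}": i for i in range(8)}    # register names R0..R7 map to 0..7

def assemble(line: str) -> int:
    """Translate one human-readable instruction into its binary encoding."""
    mnemonic, operands = line.split(maxsplit=1)
    dst, src = (op.strip() for op in operands.split(","))
    return (OPCODES[mnemonic] << 6) | (REGISTERS[dst] << 3) | REGISTERS[src]

def disassemble(word: int) -> str:
    """Reverse the encoding back into mnemonic form."""
    opcode = {v: k for k, v in OPCODES.items()}[(word >> 6) & 0b11]
    dst, src = (word >> 3) & 0b111, word & 0b111
    return f"{opcode} R{dst}, R{src}"

machine_word = assemble("ADD R1, R2")
print(f"{machine_word:08b}")        # -> 01001010
print(disassemble(machine_word))    # -> ADD R1, R2

Real assemblers handle labels, addressing modes, and variable-length encodings, but the translation step itself is the same idea.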

ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). Memory organization defines how instructions interact with the memory, and how memory interacts with itself.

During design, emulation software (emulators) can run programs written in a proposed instruction set. Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals.
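In the same spirit, the sketch below shows how a very small software emulator for a proposed instruction set might be structured, and how it can report rough code-size and instruction-count figures. The accumulator-style instruction set here is entirely hypothetical.

# Minimal emulator for a hypothetical accumulator machine, used to gauge how many
# instructions a program needs statically (code size) and dynamically (speed proxy).

def run(program):
    """Execute a list of (opcode, operand) pairs and count executed instructions."""
    acc, pc, executed = 0, 0, 0
    while pc < len(program):
        op, arg = program[pc]
        executed += 1
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":            # jump to instruction 'arg' if accumulator is non-zero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc, executed

# Count down from 3 to 0: three instructions of static code, seven executed dynamically.
program = [("LOAD", 3), ("ADD", -1), ("JNZ", 1)]
result, dynamic_count = run(program)
print(f"static size: {len(program)} instructions, executed: {dynamic_count}, result: {result}")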

Computer organization

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way.

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.

Implementation

Once an instruction set and micro-architecture are designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps:
  • Logic Implementation designs the circuits required at a logic gate level
  • Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches etc.) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at the logic gate level, or even at the physical level if the design calls for it.
  • Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are created.
  • Design Validation tests the computer as a whole to see if it works in all situations and at all timings. Once the design validation process starts, the design at the logic level is tested using logic emulators. However, this is usually too slow to run realistic tests. So, after making corrections based on the first test, prototypes are constructed using field-programmable gate arrays (FPGAs). Most hobby projects stop at this stage. The final step is to test prototype integrated circuits. Integrated circuits may require several redesigns to fix problems.
For CPUs, the entire implementation process is organized differently and is often referred to as CPU design.

Design goals

The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (the amount of time it takes for information to travel from one node to another) and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors.

The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance.

Performance

Modern computer performance is often described in IPC (instructions per cycle). This measures the efficiency of the architecture at any clock frequency. Since a faster rate can make a faster computer, this is a useful measurement. Older computers had IPC counts as low as 0.1 instructions per cycle. Simple modern processors easily reach near 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle.
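A rough back-of-the-envelope calculation shows how IPC and clock frequency combine to determine execution time. The figures below are assumed purely for illustration.

# Execution time = instruction count / (IPC * clock frequency).
# All numbers here are illustrative assumptions, not measurements.

instructions = 1_000_000_000       # dynamic instruction count of a program
clock_hz = 3.0e9                   # 3 GHz clock

for label, ipc in [("simple scalar", 1.0), ("superscalar", 4.0)]:
    seconds = instructions / (ipc * clock_hz)
    print(f"{label}: IPC={ipc}, time={seconds * 1000:.1f} ms")   # 333.3 ms vs 83.3 ms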

Counting machine language instructions would be misleading because they can do varying amounts of work in different ISAs. The "instruction" in the standard measurements is not a count of the ISA's actual machine language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture.

Many people used to measure a computer's speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance.

Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs.

There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data).

Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse, but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed or else failure of the brake will occur.
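A small numerical sketch, with made-up stage timings, illustrates the pipelining trade-off mentioned above: per-instruction latency gets slightly worse, while throughput improves.

# Compare an unpipelined datapath with a 5-stage pipeline. Timings are invented.

work_ns = 5.0            # total logic delay through the datapath, in nanoseconds
stages = 5
latch_overhead_ns = 0.2  # extra delay added by each pipeline register

# Unpipelined: one instruction finishes every work_ns.
unpipelined_latency = work_ns
unpipelined_throughput = 1.0 / work_ns                  # instructions per ns

# Pipelined: the cycle time is one stage plus latch overhead; latency spans all
# stages, but a new instruction can complete every cycle once the pipeline is full.
cycle_ns = work_ns / stages + latch_overhead_ns
pipelined_latency = cycle_ns * stages
pipelined_throughput = 1.0 / cycle_ns

print(f"unpipelined: latency {unpipelined_latency:.1f} ns, throughput {unpipelined_throughput:.2f} instr/ns")
print(f"pipelined:   latency {pipelined_latency:.1f} ns, throughput {pipelined_throughput:.2f} instr/ns")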

Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it shouldn't be how you choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but don't offer similar advantages to general tasks.
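A minimal benchmarking harness in this spirit simply times a set of test programs and reports the best of several runs; the two workloads below are placeholders rather than a real benchmark suite.

import time

# Placeholder "test programs": real suites use representative workloads.
def integer_workload():
    return sum(i * i for i in range(200_000))

def string_workload():
    return len(",".join(str(i) for i in range(100_000)))

def benchmark(workloads, repeats=5):
    """Report the best wall-clock time over several runs of each workload."""
    for fn in workloads:
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        print(f"{fn.__name__}: best of {repeats} runs = {min(times) * 1000:.2f} ms")

benchmark([integer_workload, string_workload])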

Power efficiency

Power efficiency is another important measurement in modern computers. A higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
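For example, under assumed figures, the MIPS/W metric works out as follows.

# Power efficiency in MIPS/W (millions of instructions per second per watt).
# The figures are assumed for illustration only.

instructions_per_second = 50e9   # 50,000 MIPS
power_watts = 25.0

mips_per_watt = (instructions_per_second / 1e6) / power_watts
print(f"{mips_per_watt:.0f} MIPS/W")   # -> 2000 MIPS/W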

Modern circuits require less power per transistor as the number of transistors per chip grows.[16] Even so, each transistor placed on a new chip needs its own power delivery and new pathways to supply it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis, putting more focus on power efficiency rather than cramming as many transistors as possible into a single chip.[17] In the world of embedded computers, power efficiency has long been an important goal, next to throughput and latency.

Shifts in market demand

Publicly released clock rates have increased only slowly over the past few years compared with the vast leaps in demand for reduced power consumption and miniaturization. The growth of mobile technology has created a new demand for longer battery life and smaller devices. This change in focus from higher clock rates to power consumption and miniaturization can be seen in the significant reductions in power consumption, as much as 50%, that Intel reported with the release of the Haswell microarchitecture, where the power consumption benchmark dropped from 30-40 watts down to 10-20 watts.[18] Comparing this with the processing speed increase from 3 GHz to 4 GHz (2002 to 2006),[19] it can be seen that the focus of research and development is shifting away from clock rates and towards consuming less power and taking up less space.

IBM researchers use analog memory to train deep neural networks faster and more efficiently

New approach allows deep neural networks to run hundreds of times faster than with GPUs, using hundreds of times less energy
June 15, 2018
Original link:  http://www.kurzweilai.net/ibm-researchers-use-analog-memory-to-train-deep-neural-networks-faster-and-more-efficiently
Crossbar arrays of non-volatile memories can accelerate the training of neural networks by performing computation at the actual location of the data. (credit: IBM Research)

Imagine advanced artificial intelligence (AI) running on your smartphone — instantly presenting the information that’s relevant to you in real time. Or a supercomputer that requires hundreds of times less energy.

The IBM Research AI team has demonstrated a new approach that they believe is a major step toward those scenarios.

Deep neural networks normally require fast, powerful graphical processing unit (GPU) hardware accelerators to support the needed high speed and computational accuracy — such as the GPU devices used in the just-announced Summit supercomputer. But GPUs are highly energy-intensive, making their use expensive and limiting their future growth, the researchers explain in a recent paper published in Nature.

Analog memory replaces software, overcoming the “von Neumann bottleneck”


Instead, the IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power — without sacrificing the ability to create deep learning systems.*

The trick was to replace conventional von Neumann architecture, which is “constrained by the time and energy spent moving data back and forth between the memory and the processor (the ‘von Neumann bottleneck’),” the researchers explain in the paper. “By contrast, in a non-von Neumann scheme, computing is done at the location of the data [in memory], with the strengths of the synaptic connections (the ‘weights’) stored and adjusted directly in memory.

“Delivering the future of AI will require vastly expanding the scale of AI calculations,” they note. “Instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all of the computation inside the analog memory chip. We believe this is a major step on the path to the kind of hardware accelerators necessary for the next AI breakthroughs.”**

Given these encouraging results, the IBM researchers have already started exploring the design of prototype hardware accelerator chips, as part of an IBM Research Frontiers Institute project, they said.

Ref.: Nature. Source: IBM Research

 * “From these early design efforts, we were able to provide, as part of our Nature paper, initial estimates for the potential of such [non-volatile memory]-based chips for training fully-connected layers, in terms of the computational energy efficiency (28,065 GOP/sec/W) and throughput-per-area (3.6 TOP/sec/mm2). These values exceed the specifications of today’s GPUs by two orders of magnitude. Furthermore, fully-connected layers are a type of neural network layer for which actual GPU performance frequently falls well below the rated specifications. … Analog non-volatile memories can efficiently accelerate the algorithms at the heart of many recent AI advances. These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other.”

** “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional complementary metal-oxide semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices.  It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches for such novel devices frequently fail to capture the full range of device-to-device variability they can exhibit.”
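The first footnote's description of passing small currents through resistors and letting them build up on shared wires is, in essence, Ohm's law and Kirchhoff's current law performing a multiply-accumulate. A highly idealized simulation of such a crossbar, ignoring the device non-idealities the researchers had to engineer around and using made-up values, might look like this:

# Idealized crossbar multiply-accumulate: each weight is stored as a conductance G,
# each input is applied as a voltage V, and each column current is the sum of G * V
# (Ohm's law per device, Kirchhoff's current law per column). Values are invented.

weights_as_conductance = [     # rows correspond to inputs, columns to outputs
    [0.10, 0.30],
    [0.20, 0.05],
    [0.40, 0.25],
]
input_voltages = [1.0, 0.5, -0.2]

column_currents = [
    sum(row[col] * v for row, v in zip(weights_as_conductance, input_voltages))
    for col in range(len(weights_as_conductance[0]))
]
print(column_currents)   # the analog equivalent of a vector-matrix multiply

In the hardware, all of these products and sums happen simultaneously as physics, which is where the claimed speed and energy advantages come from.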

By 2030, this is what computers will be able to do

Ugo Dumont, a volunteer for the Genworth R70i Aging Experience, participates in a demonstration at the Liberty Science Center in Jersey City, New Jersey, April 5, 2016. Image: REUTERS/Shannon Stapleton
Computing in 2030: medical nanobots and autonomous vehicles. But will they bring people together?

Developments in computing are driving the transformation of entire systems of production, management, and governance. In this interview Justine Cassell, Associate Dean, Technology, Strategy and Impact, at the School of Computer Science, Carnegie Mellon University, and co-chair of the Global Future Council on Computing, says we must ensure that these developments benefit all society, not just the wealthy or those participating in the “new economy”.

Why should the world care about the future of computing?

Today computers are in virtually everything we touch, all day long. We still have an image of computers as being rectangular objects either on a desk, or these days in our pockets; but computers are in our cars, they’re in our thermostats, they’re in our refrigerators. In fact, increasingly computers are no longer objects at all, but they suffuse fabric and virtually every other material. Because of that, we really do need to care about what the future of computing holds because it is going to impact our lives all day long.

Tell me about the technological breakthroughs we have already seen, and what you expect to see in the coming years?

Some of the exciting breakthroughs have to do with the internet of things. In the same way we have a tendency to think of computers as rectangular boxes, we have a tendency to think of the internet as being some kind of ether that floats around us. But quite recently researchers have made enormous breakthroughs in creating a way for all objects to communicate; so your phone might communicate to your refrigerator, which might communicate to the light bulb. In fact, in the near future, the light bulb will itself become a computer, projecting information instead of light.
Similarly, biological computing addresses how the body itself can compute, how we can think about genetic material as computing. You can think of biological computing as a way of computing RNA or DNA and understanding biotechnology as a kind of computer. One of my colleagues here at Carnegie Mellon, Adam Feinberg, has been 3D-printing heart tissue. He’s been designing parts of the body on a computer using very fine-grained models that are based on the human body, and then using engineering techniques to create living organisms. That’s a very radical difference in what we consider the digital infrastructure and that shift is supporting a radical shift in the way we work, and live, and who we are as humans.

And quantum computing allows us to imagine a future where great breakthroughs in science will be made by computers that are no longer tethered to simple binary 0s and 1s.

How is computing changing? What are the forces driving those changes?

Some of the ways that computing is changing now are that it is moving into the fabrics in our clothing and it’s moving into our very bodies. We are now in the process of refining prosthetics that not only help people reach for something but in reaching, those prosthetics now send a message back to the brain. The first prosthetics were able quite miraculously to take a message from the brain and use it to control the world. But imagine how astounding it is if that prosthetic also tells the brain that it has grasped something. That really changes the way we think of what it means to be human, if our very brains are impacted by the movement of a piece of metal at the edge of our hands.

How could developments in computing impact industry, governments and society?

First of all, there’s really a disruption of all industry sectors. Everything from the information and entertainment sectors, where machine learning makes it possible to imagine ads that understand your emotions when you look at them, to manufacturing, where the robots on a production line can learn in real time as a function of what they perceive. You can imagine a robot arm in a factory that automatically remanufactures itself when the object that it is putting into boxes changes shape. Every sector is changing, and even the lines between industry sectors are becoming blurred, as 3D-printing and machine learning come together, for example, or as manufacturing and information, or manufacturing and the body, come together.

What needs to be done to ensure that their benefits are maximized and the associated risks kept under control?

If you think about the future of computing as a convergence of the biological, the physical and the digital (and the post-digital quantum), using as examples 3D-printing, biotechnology, robotics for prosthetics, the internet of things, autonomous vehicles, other kinds of artificial intelligence, you can see the extent of how life will change. We need to make sure that these developments benefit all of society, not just the most wealthy members of society who might want these prosthetics, but every person who needs them.

One of our first questions in the Council is going to be, how do we establish governance for equitable innovation? How do we foster the equitable benefits of these technologies for every nation and every person in every nation? And, is top-down governance the right model for controlling the use of these technologies, or is bottom-up ethical education of those that engage in the development of the technologies and their distribution, a better way to think about how to ensure equitable use?

I believe that all technologists need to keep in mind a multi-level, multi-part model of technology that takes into account the technological but also the social, the cultural, the legal, all of these aspects of development. All technologists need to be trained in the human as well as the technological so that they understand uses to which their technology could be put and reflect on the uses they want it to be put to.

What will computing look like in 2030?

We have no idea yet because change is happening so quickly. We know that quantum computing – the introduction of physics into the field of computer science – is going to be extremely important; that computers are going to become really, very tiny, the size of an atom. That’s going to make a huge difference; nano-computing, very small computers that you might swallow inside a pill and that will then learn about your illness and set about curing it; that brings together biological computing as well, where we can print parts of the body. So I think we’re going to see the increasing infusing of computing into all aspects of our lives. If our Council has its way, we’re going to see an increasing sense of responsibility on the part of technologists to ensure that those developments are for good.

What technology or gadget would you most like to see by 2030?

In my own work, I’m committed to ensuring that technology brings people together rather than separating them. There’s been some fear that having everybody stare at their cellphone all day long is separating us from one another; that we are no longer building bonds with other people. My own work goes towards ensuring that social bonds and the relationships amongst people, and even the relationship between us and our technology, supports a social infrastructure, so that we never forget those values that make us human.

To my mind it’s not a particular gadget that I want to see, it’s gadgets that ensure the bond between people is not only continued but strengthened, that the understanding amongst nations and amongst individuals is improved by virtue of the technologies that we encounter.

Computer science

From Wikipedia, the free encyclopedia

Images from the article: a large capital lambda; a plot of a quicksort algorithm; the Utah teapot, representing computer graphics; and a Microsoft Tastenmaus mouse, representing human–computer interaction.
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.

Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. It is the scientific and practical approach to computation and its applications and the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to, information. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems.

Its fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational problems, including which are intractable), are highly abstract, while fields such as computer graphics emphasize real-world visual applications. Still other fields focus on the challenges of implementing computation. For example, programming language theory considers various approaches to the description of computation, while the study of computer programming itself investigates various aspects of the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.

History

Charles Babbage is sometimes referred to as the "father of computing".
Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.

Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623.[4] In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[5] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment.  Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[6] He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".[7] "A crucial step was the adoption of a punched card system derived from the Jacquard loom"[7] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[8] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business[9] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[10]

During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[11] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[12][13] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[14] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.

Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[15][16] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[17] and later the IBM 709[18] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating […] if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[15] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[16]

Time has seen significant improvements in the usability and effectiveness of computing technology.[19] Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use—in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.

Contributions

The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[20]

Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society—in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the information revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750–1850 CE) and the Agricultural Revolution (8000–5000 BC).

These contributions include:
  • The start of the "digital revolution", which includes the current Information Age and the Internet.
  • A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.
  • The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.
  • In cryptography, breaking the Enigma machine was an important factor contributing to the Allied victory in World War II.

Etymology

Although first proposed in 1956,[16] the term "computer science" appears in a 1959 article in Communications of the ACM,[28] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[29] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[28] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[30] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[31] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[32] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.

Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[33] Three months later in the same journal, comptologist was suggested, followed the next year by hypologist.[34] The term computics has also been suggested.[35] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, meaning informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[36] "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."[37]

A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[12] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[16]

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[38] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[39]

The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally, if not across all research.

Philosophy

A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[40] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[41] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[42]

Areas of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[43][44] CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)[45]—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[43]

Theoretical computer science

Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. All studies related to mathematical, logical, and formal concepts and methods could be considered theoretical computer science, provided that the motivation is clearly drawn from the field of computing.

Data structures and algorithms

Data structures and algorithms is the study of commonly used computational methods and their computational efficiency.
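For instance, the difference between a linear-time and a logarithmic-time search is one of the basic efficiency questions the field studies. A short sketch, using an illustrative data set:

from bisect import bisect_left

def linear_search(items, target):
    """O(n): examine elements one by one."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search range (requires sorted input)."""
    i = bisect_left(sorted_items, target)
    return i if i < len(sorted_items) and sorted_items[i] == target else -1

data = list(range(0, 1_000_000, 2))       # sorted even numbers
print(linear_search(data, 999_998))       # ~500,000 comparisons to find index 499999
print(binary_search(data, 999_998))       # ~20 comparisons to find the same index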

Theory of computation

According to Peter Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[12] Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems,[46] is an open problem in the theory of computation.

Information and coding theory

Information theory is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[47] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
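A small example of the kind of quantity information theory works with is the Shannon entropy of a source, which bounds how far its output can be compressed. The probabilities below are illustrative.

from math import log2

def shannon_entropy(probabilities):
    """Entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin carries much less,
# which is why its output is more compressible.
print(shannon_entropy([0.5, 0.5]))    # -> 1.0
print(shannon_entropy([0.9, 0.1]))    # -> ~0.469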

Programming language theory

Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.

Formal methods

Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.

Computer systems

Computer architecture and computer engineering

Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[48] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

Computer performance analysis

Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[49]

Concurrent, parallel and distributed systems

Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory, and information is often exchanged among themselves to achieve a common goal.
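A minimal producer/consumer sketch in Python hints at the kind of interaction these models describe: several workers execute at once and coordinate through a shared queue and a lock. The workload is invented for illustration.

import threading
import queue

tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    """Consume task numbers until a None sentinel arrives."""
    while (item := tasks.get()) is not None:
        with results_lock:                 # protect shared state from races
            results.append(item * item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:                          # one sentinel per worker
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results))   # squares of 0..9, computed by concurrent workers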

Computer networks

This branch of computer science aims to manage networks between computers worldwide.

Computer security and cryptography

Computer security is a branch of computer technology whose objective includes protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encrypting) and recovering (decrypting) information. Modern cryptography is largely related to computer science, as many encryption and decryption algorithms are based on computational complexity.

Databases

A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
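A small example of the database-management-system idea, using the SQLite engine bundled with Python; the schema and rows here are invented for illustration.

import sqlite3

# In-memory database: the DBMS handles storage, indexing, and the query language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("Planning a Computer System", 1962), ("Structured Programming", 1972)],
)

# Declarative query: we state what we want, not how to retrieve it.
for title, year in conn.execute("SELECT title, year FROM books WHERE year < 1970"):
    print(title, year)

conn.close()

The query language (SQL here) is what lets applications search and maintain the data through the database model rather than by manipulating files directly.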

Computer applications

Computer graphics and visualization

Computer graphics is the study of digital visual contents, and involves synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.

Human–computer interaction

Research that develops theories, principles, and guidelines for user interface designers, so they can create satisfactory user experiences with desktop, laptop, and mobile devices.

Scientific computing

Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.

Artificial intelligence

Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting-point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.

Software engineering

Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it doesn't just deal with the creation or manufacture of new software, but its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers are projected to be among the fastest growing occupations from 2008 to 2018.

The great insights of computer science

The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:[50]
  • The representation insight: all the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
  • Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything".
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:
  • move left one location;
  • move right one location;
  • read symbol at current location;
  • print 0 at current location;
  • print 1 at current location.
  • Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones:
  • sequence: first do this, then do that;
  • selection: IF such-and-such is the case, THEN do this, ELSE do that;
  • repetition: WHILE such-and-such is the case DO this.
Note that the three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means that goto is more elementary than structured programming). A minimal code sketch of these ideas follows.
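Below is a tiny sketch, in Python, of a machine limited to exactly those five actions, combined only with the three rules of sequence, selection, and repetition. The tape contents and the bit-inverting task are chosen purely for illustration.

# A machine restricted to Turing's five actions -- move left, move right, read,
# print 0, print 1 -- combined only by sequence, selection (if/else) and
# repetition (while). This particular program inverts every bit on the tape.

from collections import defaultdict

tape = defaultdict(int, {0: 1, 1: 0, 2: 1, 3: 1})   # illustrative input: 1 0 1 1
head = 0

def move_left():         # action 1 (defined for completeness, unused below)
    global head
    head -= 1

def move_right():        # action 2
    global head
    head += 1

def read():              # action 3
    return tape[head]

def print_0():           # action 4
    tape[head] = 0

def print_1():           # action 5
    tape[head] = 1

cells_to_process = 4
while cells_to_process > 0:          # repetition
    if read() == 0:                  # selection
        print_1()
    else:
        print_0()
    move_right()                     # sequence
    cells_to_process -= 1

print([tape[i] for i in range(4)])   # -> [0, 1, 0, 0], the bitwise inverse of 1 0 1 1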

Academia

Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science the prestige of conference papers is greater than that of journal publications.[51][52] One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[53]

Education

Since computer science is a relatively new field, it is not as widely taught in schools and universities as other academic subjects. For example, in 2014, Code.org estimated that only 10 percent of high schools in the United States offered computer science education.[54] A 2010 report by the Association for Computing Machinery (ACM) and the Computer Science Teachers Association (CSTA) revealed that only 14 out of 50 states have adopted significant education standards for high school computer science.[55] However, computer science education is growing.[56] Some countries, such as Israel, New Zealand, and South Korea, have already included computer science in their national secondary education curricula.[57][58] Several countries are following suit.[59]

In most countries, there is a significant gender gap in computer science education. For example, in the US about 20% of computer science degrees in 2012 were conferred to women.[60] This gender gap also exists in other Western countries.[61] However, in some parts of the world, the gap is small or nonexistent. In 2011, approximately half of all computer science degrees in Malaysia were conferred to women.[62] In 2001, women made up 54.5% of computer science graduates in Guyana.
