
Saturday, July 28, 2018

Software architecture

From Wikipedia, the free encyclopedia
 
Software architecture refers to the high-level structures of a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations. The architecture of a software system is a metaphor, analogous to the architecture of a building. It functions as a blueprint for the system and the developing project, laying out the tasks to be executed by the design teams.

Software architecture is about making fundamental structural choices which are costly to change once implemented. Software architecture choices include specific structural options from possibilities in the design of software. For example, the systems that controlled the space shuttle launch vehicle had the requirement of being very fast and very reliable. Therefore, an appropriate real-time computing language would need to be chosen. Additionally, to satisfy the need for reliability the choice could be made to have multiple redundant and independently produced copies of the program, and to run these copies on independent hardware while cross-checking results.
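To illustrate the cross-checking idea in miniature, here is a minimal sketch (not from the article; the guidance functions, rounding tolerance, and voting rule are invented for the example) that runs several independently written implementations of the same calculation and accepts only a value the majority agrees on, failing safe when the versions disagree:

    # Minimal sketch of cross-checking redundant, independently produced
    # implementations by majority vote (all names and values are hypothetical).
    from collections import Counter

    def guidance_v1(t): return 2.0 * t        # independently written version 1
    def guidance_v2(t): return t + t          # independently written version 2
    def guidance_v3(t): return 2.0 * t + 0.0  # independently written version 3

    def voted_result(t, versions=(guidance_v1, guidance_v2, guidance_v3)):
        """Run every version, then accept the value the majority agrees on."""
        results = [round(v(t), 6) for v in versions]   # tolerance via rounding
        value, votes = Counter(results).most_common(1)[0]
        if votes <= len(versions) // 2:
            raise RuntimeError("no majority: versions disagree, fail safe")
        return value

    print(voted_result(3.0))  # -> 6.0

In a real launch system the copies would also run on independent hardware, as the text notes, so that a single hardware fault cannot take out every version at once.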

Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows reuse of design components between projects.

Scope

Opinions vary as to the scope of software architectures:[5]
  • Overall, macroscopic system structure;[6] this refers to architecture as a higher level abstraction of a software system that consists of a collection of computational components together with connectors that describe the interaction between these components.
  • The important stuff—whatever that is;[7] this refers to the fact that software architects should concern themselves with those decisions that have high impact on the system and its stakeholders.
  • "That which is fundamental to understanding a system in its environment"[8]
  • Things that people perceive as hard to change;[7] since designing the architecture takes place at the beginning of a software system's lifecycle, the architect should focus on decisions that "have to" be right the first time. Following this line of thought, architectural design issues may become non-architectural once their irreversibility can be overcome.
  • A set of architectural design decisions;[9] software architecture should not be considered merely a set of models or structures, but should include the decisions that lead to these particular structures, and the rationale behind them. This insight has led to substantial research into software architecture knowledge management.[10]
There is no sharp distinction between software architecture versus design and requirements engineering (see Related fields below). They are all part of a "chain of intentionality" from high-level intentions to low-level details.[11]:18

Characteristics

Software architecture exhibits the following:

Multitude of stakeholders: software systems have to cater to a variety of stakeholders such as business managers, owners, users, and operators. These stakeholders all have their own concerns with respect to the system. Balancing these concerns and demonstrating how they are addressed is part of designing the system.[4]:29–31 This implies that architecture involves dealing with a broad variety of concerns and stakeholders, and has a multidisciplinary nature.

Separation of concerns: the established way for architects to reduce complexity is to separate the concerns that drive the design. Architecture documentation shows that all stakeholder concerns are addressed by modeling and describing the architecture from separate points of view associated with the various stakeholder concerns.[12] These separate descriptions are called architectural views (see for example the 4+1 Architectural View Model).

Quality-driven: classic software design approaches (e.g. Jackson Structured Programming) were driven by required functionality and the flow of data through the system, but the current insight[4]:26–28 is that the architecture of a software system is more closely related to its quality attributes such as fault-tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability, and other such –ilities. Stakeholder concerns often translate into requirements on these quality attributes, which are variously called non-functional requirements, extra-functional requirements, behavioral requirements, or quality attribute requirements.

Recurring styles: like building architecture, the software architecture discipline has developed standard ways to address recurring concerns. These "standard ways" are called by various names at various levels of abstraction. Common terms for recurring solutions are architectural style,[11]:273–277 tactic,[4]:70–72 reference architecture[13][14] and architectural pattern.[4]:203–205

Conceptual integrity: a term introduced by Fred Brooks in The Mythical Man-Month to denote the idea that the architecture of a software system represents an overall vision of what it should do and how it should do it. This vision should be separated from its implementation. The architect assumes the role of "keeper of the vision", making sure that additions to the system are in line with the architecture, hence preserving conceptual integrity.[15]:41–50

Cognitive constraints: an observation first made in a 1967 paper by computer programmer Melvin Conway that organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations. As with conceptual integrity, it was Fred Brooks who introduced it to a wider audience when he cited the paper and the idea in his elegant classic The Mythical Man-Month, calling it "Conway's Law."

Motivation

Software architecture is an "intellectually graspable" abstraction of a complex system.[4]:5–6 This abstraction provides a number of benefits:
  • It gives a basis for analysis of software systems' behavior before the system has been built.[2] The ability to verify that a future software system fulfills its stakeholders' needs without actually having to build it represents substantial cost-saving and risk-mitigation.[16] A number of techniques have been developed to perform such analyses, such as ATAM.
  • It provides a basis for re-use of elements and decisions.[2][4]:35 A complete software architecture or parts of it, like individual architectural strategies and decisions, can be re-used across multiple systems whose stakeholders require similar quality attributes or functionality, saving design costs and mitigating the risk of design mistakes.
  • It supports early design decisions that impact a system's development, deployment, and maintenance life.[4]:31 Getting the early, high-impact decisions right is important to prevent schedule and budget overruns.
  • It facilitates communication with stakeholders, contributing to a system that better fulfills their needs.[4]:29–31 Communicating about complex systems from the point of view of stakeholders helps them understand the consequences of their stated requirements and the design decisions based on them. Architecture gives the ability to communicate about design decisions before the system is implemented, when they are still relatively easy to adapt.
  • It helps in risk management. Software architecture helps to reduce risks and the chance of failure.[11]:18
  • It enables cost reduction. Software architecture is a means to manage risk and costs in complex IT projects.[17]

History

The comparison between software design and (civil) architecture was first drawn in the late 1960s,[18] but the term software architecture became prevalent only in the beginning of the 1990s.[19] The field of computer science had encountered problems associated with complexity since its formation.[20] Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and by applying the concept of separation of concerns. Although the term "software architecture" is relatively new to the industry, the fundamental principles of the field have been applied sporadically by software engineering pioneers since the mid-1980s. Early attempts to capture and explain software architecture of a system were imprecise and disorganized, often characterized by a set of box-and-line diagrams.[21]

Software architecture as a concept has its origins in the research of Edsger Dijkstra in 1968 and David Parnas in the early 1970s. These scientists emphasized that the structure of a software system matters and getting the structure right is critical. During the 1990s there was a concerted effort to define and codify fundamental aspects of the discipline, with research work concentrating on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods.[22]

Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which promoted software architecture concepts such as components, connectors, and styles. The University of California, Irvine's Institute for Software Research's efforts in software architecture research are directed primarily at architectural styles, architecture description languages, and dynamic architectures.

IEEE 1471-2000, Recommended Practice for Architecture Description of Software-Intensive Systems, was the first formal standard in the area of software architecture. It was adopted in 2007 by ISO as ISO/IEC 42010:2007. In November 2011, IEEE 1471–2000 was superseded by ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description (jointly published by IEEE and ISO).[12]

While in IEEE 1471, software architecture was about the architecture of "software-intensive systems", defined as "any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole", the 2011 edition goes a step further by including the ISO/IEC 15288 and ISO/IEC 12207 definitions of a system, which embrace not only hardware and software, but also "humans, processes, procedures, facilities, materials and naturally occurring entities". This reflects the relationship between software architecture, enterprise architecture and solution architecture.

Architecture activities

There are many activities that a software architect performs. A software architect typically works with project managers, discusses architecturally significant requirements with stakeholders, designs a software architecture, evaluates a design, communicates with designers and stakeholders, documents the architectural design and more.[23] There are four core activities in software architecture design.[24] These core architecture activities are performed iteratively and at different stages of the initial software development life-cycle, as well as over the evolution of a system.

Architectural analysis is the process of understanding the environment in which a proposed system or systems will operate and determining the requirements for the system. The input or requirements to the analysis activity can come from any number of stakeholders and include items such as:
  • what the system will do when operational (the functional requirements)
  • how well the system will perform at runtime with respect to non-functional requirements such as reliability, operability, performance efficiency, security, and compatibility, as defined in the ISO/IEC 25010:2011 standard[25]
  • development-time non-functional requirements such as maintainability and transferability, also defined in the ISO/IEC 25010:2011 standard[25]
  • business requirements and environmental contexts of a system that may change over time, such as legal, social, financial, competitive, and technology concerns[26]
The outputs of the analysis activity are those requirements that have a measurable impact on a software system’s architecture, called architecturally significant requirements.[27]

Architectural synthesis or design is the process of creating an architecture. Given the architecturally significant requirements determined by the analysis, the current state of the design and the results of any evaluation activities, the design is created and improved.[24][4]:311–326

Architecture evaluation is the process of determining how well the current design or a portion of it satisfies the requirements derived during analysis. An evaluation can occur whenever an architect is considering a design decision; after some portion of the design has been completed; after the final design has been completed; or after the system has been constructed. Some of the available software architecture evaluation techniques include the Architecture Tradeoff Analysis Method (ATAM) and TARA.[28] Frameworks for comparing evaluation techniques are discussed in the SARA Report[16] and in Architecture Reviews: Practice and Experience.[29]

Architecture evolution is the process of maintaining and adapting an existing software architecture to meet changes in requirements and environment. As software architecture provides a fundamental structure of a software system, its evolution and maintenance would necessarily impact its fundamental structure. As such, architecture evolution is concerned with adding new functionality as well as maintaining existing functionality and system behavior.

Architecture requires critical supporting activities. These supporting activities take place throughout the core software architecture process. They include knowledge management and communication, design reasoning and decision making, and documentation.

Architecture supporting activities

Software architecture supporting activities are carried out during core software architecture activities. These supporting activities assist a software architect to carry out analysis, synthesis, evaluation, and evolution. For instance, an architect has to gather knowledge, make decisions and document during the analysis phase.
  • Knowledge management and communication is the act of exploring and managing knowledge that is essential to designing a software architecture. A software architect does not work in isolation. They get inputs (functional and non-functional requirements, and design contexts) from various stakeholders, and provide outputs to stakeholders. Software architecture knowledge is often tacit and is retained in the heads of stakeholders. Software architecture knowledge management activity is about finding, communicating, and retaining knowledge. As software architecture design issues are intricate and interdependent, a knowledge gap in design reasoning can lead to incorrect software architecture design.[23][30] Examples of knowledge management and communication activities include searching for design patterns, prototyping, asking experienced developers and architects, evaluating the designs of similar systems, sharing knowledge with other designers and stakeholders, and documenting experience in a wiki page.
  • Design reasoning and decision making is the activity of evaluating design decisions. This activity is fundamental to all three core software architecture activities.[9][31] It entails gathering and associating decision contexts, formulating design decision problems, finding solution options and evaluating tradeoffs before making decisions. This process occurs at different levels of decision granularity while evaluating architecturally significant requirements and software architecture decisions, and during software architecture analysis, synthesis, and evaluation. Examples of reasoning activities include understanding the impacts of a requirement or a design on quality attributes, questioning the issues that a design might cause, assessing possible solution options, and evaluating the tradeoffs between solutions.
  • Documentation is the act of recording the design generated during the software architecture process. A system design is described using several views that frequently include a static view showing the code structure of the system, a dynamic view showing the actions of the system during execution, and a deployment view showing how a system is placed on hardware for execution. Kruchten's 4+1 view model suggests a description of commonly used views for documenting software architecture;[32] Documenting Software Architectures: Views and Beyond has descriptions of the kinds of notations that could be used within the view description.[1] Examples of documentation activities are writing a specification, recording a system design model, documenting a design rationale, developing a viewpoint, and documenting views.

Software architecture topics

Software architecture description

Software architecture description involves the principles and practices of modeling and representing architectures, using mechanisms such as: architecture description languages, architecture viewpoints, and architecture frameworks.

Architecture description languages

An architecture description language (ADL) is any means of expression used to describe a software architecture (ISO/IEC/IEEE 42010). Many special-purpose ADLs have been developed since the 1990s, including AADL (SAE standard), Wright (developed by Carnegie Mellon), Acme (developed by Carnegie Mellon), xADL (developed by UCI), Darwin (developed by Imperial College London), DAOP-ADL (developed by University of Málaga), SBC-ADL (developed by National Sun Yat-Sen University), and ByADL (University of L'Aquila, Italy).

Architecture viewpoints


Software architecture descriptions are commonly organized into views, which are analogous to the different types of blueprints made in building architecture. Each view addresses a set of system concerns, following the conventions of its viewpoint, where a viewpoint is a specification that describes the notations, modeling, and analysis techniques to use in a view that expresses the architecture in question from the perspective of a given set of stakeholders and their concerns (ISO/IEC/IEEE 42010). The viewpoint specifies not only the concerns framed (i.e., to be addressed) but also the presentation, model kinds used, conventions used, and any consistency (correspondence) rules to keep a view consistent with other views.

Architecture frameworks

An architecture framework captures the "conventions, principles and practices for the description of architectures established within a specific domain of application and/or community of stakeholders" (ISO/IEC/IEEE 42010). A framework is usually implemented in terms of one or more viewpoints or ADLs.

Architectural styles and patterns

An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are often documented as software design patterns.
Following traditional building architecture, a "software architectural style" is a specific method of construction, characterized by the features that make it notable.

There are many recognized architectural patterns and styles. Some treat architectural patterns and architectural styles as the same,[35] while some treat styles as specializations of patterns. What they have in common is that both patterns and styles are idioms for architects to use; they "provide a common language"[35] or "vocabulary"[33] with which to describe classes of systems.

Software architecture and agile development

There are also concerns that software architecture leads to too much Big Design Up Front, especially among proponents of agile software development. A number of methods have been developed to balance the trade-offs of up-front design and agility,[36] including the agile method DSDM which mandates a "Foundations" phase during which "just enough" architectural foundations are laid. IEEE Software devoted a special issue[37] to the interaction between agility and architecture.

Software architecture erosion

Software architecture erosion (or "decay") refers to the gap observed between the planned and actual architecture of a software system as realized in its implementation.[38] Software architecture erosion occurs when implementation decisions either do not fully achieve the architecture-as-planned or otherwise violate constraints or principles of that architecture.[2] The gap between planned and actual architectures is sometimes understood in terms of the notion of technical debt.

As an example, consider a strictly layered system, where each layer can only use services provided by the layer immediately below it. Any source code component that does not observe this constraint represents an architecture violation. If not corrected, such violations can transform the architecture into a monolithic block, with adverse effects on understandability, maintainability, and evolvability.
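As a rough illustration of how such violations can be detected mechanically, here is a minimal sketch assuming each module has already been assigned to a layer; the module names, layer numbers, and dependency list are invented for the example:

    # Hypothetical check for a strictly layered architecture: a module in
    # layer N may only use services of the layer immediately below (N-1).
    LAYERS = {"ui": 3, "service": 2, "persistence": 1, "database": 0}

    DEPENDENCIES = [              # (from_module, to_module), e.g. from import scans
        ("ui", "service"),
        ("service", "persistence"),
        ("ui", "database"),       # skips two layers -> architecture violation
    ]

    def layering_violations(deps, layers):
        """Return dependencies that do not target the layer immediately below."""
        return [(src, dst) for src, dst in deps if layers[src] - layers[dst] != 1]

    print(layering_violations(DEPENDENCIES, LAYERS))  # -> [('ui', 'database')]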

Various approaches have been proposed to address erosion. "These approaches, which include tools, techniques, and processes, are primarily classified into three general categories that attempt to minimize, prevent and repair architecture erosion. Within these broad categories, each approach is further broken down reflecting the high-level strategies adopted to tackle erosion. These are process-oriented architecture conformance, architecture evolution management, architecture design enforcement, architecture to implementation linkage, self-adaptation and architecture restoration techniques consisting of recovery, discovery, and reconciliation."[39]

There are two major techniques to detect architectural violations: reflexion models and domain-specific languages. Reflexion model (RM) techniques compare a high-level model provided by the system's architects with the source code implementation. There are also domain-specific languages with a focus on specifying and checking architectural constraints.
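As a toy sketch of the reflexion-model idea (module names invented), the intended dependencies from the architects' model can be compared with dependencies extracted from the source, classifying each as convergent (planned and present), divergent (present but unplanned), or absent (planned but missing):

    # Toy reflexion-model comparison; real tools extract the second set
    # from source code automatically.
    intended  = {("ui", "service"), ("service", "persistence")}   # architects' model
    extracted = {("ui", "service"), ("ui", "persistence")}        # found in the code

    convergent = intended & extracted   # planned and present
    divergent  = extracted - intended   # in the code but not planned: violations
    absent     = intended - extracted   # planned but missing from the code

    print("convergent:", convergent)
    print("divergent: ", divergent)
    print("absent:    ", absent)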

Software architecture recovery

Software architecture recovery (or reconstruction, or reverse engineering) includes the methods, techniques, and processes to uncover a software system's architecture from available information, including its implementation and documentation. Architecture recovery is often necessary to make informed decisions in the face of obsolete or out-of-date documentation and architecture erosion: implementation and maintenance decisions diverging from the envisioned architecture.[40] Practices such as static program analysis exist to recover software architecture. This is part of the subjects covered by the Software Intelligence practice.

Related fields

Design

Architecture is design but not all design is architectural.[1] In practice, the architect is the one who draws the line between software architecture (architectural design) and detailed design (non-architectural design). There are no rules or guidelines that fit all cases, although there have been attempts to formalize the distinction. According to the Intension/Locality Hypothesis,[41] the distinction between architectural and detailed design is defined by the Locality Criterion,[41] according to which a statement about software design is non-local (architectural) if and only if a program that satisfies it can be expanded into a program that does not. For example, the client–server style is architectural (strategic) because a program that is built on this principle can be expanded into a program that is not client–server—for example, by adding peer-to-peer nodes.

Requirements engineering

Requirements engineering and software architecture can be seen as complementary approaches: while software architecture targets the 'solution space' or the 'how', requirements engineering addresses the 'problem space' or the 'what'.[42] Requirements engineering entails the elicitation, negotiation, specification, validation, documentation and management of requirements. Both requirements engineering and software architecture revolve around stakeholder concerns, needs and wishes.
There is considerable overlap between requirements engineering and software architecture, as evidenced for example by a study into five industrial software architecture methods that concludes that "the inputs (goals, constraints, etc.) are usually ill-defined, and only get discovered or better understood as the architecture starts to emerge" and that while "most architectural concerns are expressed as requirements on the system, they can also include mandated design decisions".[24] In short, the choice of required behavior given a particular problem impacts the architecture of the solution that addresses that problem, while at the same time the architectural design may impact the problem and introduce new requirements.[43] Approaches such as the Twin Peaks model[44] aim to exploit the synergistic relation between requirements and architecture.

Other types of 'architecture'

Computer architecture
Computer architecture targets the internal structure of a computer system, in terms of collaborating hardware components such as the CPU (or processor), the bus, and the memory.
Systems architecture
The term systems architecture was originally applied to the architecture of systems that consist of both hardware and software. The main concern addressed by the systems architecture is then the integration of software and hardware in a complete, correctly working device. In another common – much broader – meaning, the term applies to the architecture of any complex system, which may be of technical, sociotechnical or social nature.
Enterprise architecture
The goal of enterprise architecture is to "translate business vision and strategy into effective enterprise".[45] Enterprise architecture frameworks, such as TOGAF and the Zachman Framework, usually distinguish between different enterprise architecture layers. Although terminology differs from framework to framework, many include at least a distinction between a business layer, an application (or information) layer, and a technology layer. Enterprise architecture addresses among others the alignment between these layers, usually in a top-down approach.

Friday, July 27, 2018

Computer architecture

From Wikipedia, the free encyclopedia
A pipelined implementation of the MIPS architecture; pipelining is a key concept in computer architecture.
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.

History

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. When building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e. the stored-program concept.[3][4] Two other early and important examples are John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements, and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945.
The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Frederick P. Brooks, Jr., and Mohammad Usman Khan, all members of the Machine Organization department in IBM’s main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture” – a term that seemed more useful than “machine organization.”[7]

Subsequently, Brooks, a Stretch designer, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing,[8]
Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.
Brooks went on to help develop the IBM System/360 (now called the IBM zSeries) line of computers, in which “architecture” became a noun defining “what the user needs to know”.[9] Later, computer users came to use the term in many less-explicit ways.[10]

The earliest computer architectures were designed on paper and then directly built into the final hardware form.[11] Later, computer architecture prototypes were physically built in the form of a transistor–transistor logic (TTL) computer—such as the prototypes of the 6800 and the PA-RISC—tested, and tweaked, before committing to the final hardware form. As of the 1990s, new computer architectures are typically "built", tested, and tweaked—inside some other computer architecture in a computer architecture simulator; or inside an FPGA as a soft microprocessor; or both—before committing to the final hardware form.[12]

Subcategories

The discipline of computer architecture has three main subcategories:[13]
  1. Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory addressing modes, processor registers, and data types.
  2. Microarchitecture, or computer organization, describes how a particular processor will implement the ISA.[14] The size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA.
  3. System Design includes all of the other hardware components within a computing system. These include:
    1. Data processing other than the CPU, such as direct memory access (DMA)
    2. Other issues such as virtualization, multiprocessing, and software features.
There are other types of computer architecture. The following types are used in bigger companies like Intel, and are estimated to account for about 1% of all computer architecture work:
  • Macroarchitecture: architectural layers more abstract than microarchitecture
  • Assembly Instruction Set Architecture (ISA): A smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
  • Programmer Visible Macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract to programmers using them, abstracting differences between underlying ISA, UISA, and microarchitectures. E.g., the C, C++, or Java standards define different Programmer Visible Macroarchitectures.
  • UISA (Microcode Instruction Set Architecture)—a group of machines with different hardware level microarchitectures may share a common microcode architecture, and hence a UISA.[citation needed]
  • Pin Architecture: The hardware functions that a microprocessor should provide to a hardware platform, e.g., the x86 pins A20M, FERR/IGNNE or FLUSH. Also, messages that the processor should emit so that external caches can be invalidated (emptied). Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

Roles

Definition

The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the expected performance, and is also very reliable. For this, many aspects must be considered, including instruction set design, functional organization, logic design, and implementation. The implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.[15]

Instruction set architecture

An instruction set architecture (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java or C++ directly. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand.

Besides instructions, the ISA defines items in the computer that are available to a program—e.g. data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes (or names) and memory addressing modes.

The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. Also, it may define short (vaguely mnemonic) names for the instructions. The names can be recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers, which translate in the opposite direction, are also widely available, usually in debuggers and other software tools used to isolate and correct malfunctions in binary computer programs.
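To make the encoding idea concrete, here is a minimal sketch of an assembler and disassembler for a made-up 16-bit ISA; the opcode values, field layout, and mnemonics are invented for illustration and do not correspond to any real processor:

    # A made-up 16-bit ISA: 4-bit opcode, 4-bit register number, 8-bit immediate.
    OPCODES = {"LOADI": 0x1, "ADDI": 0x2}          # mnemonic -> opcode (invented)
    MNEMONICS = {v: k for k, v in OPCODES.items()}

    def assemble(mnemonic, reg, imm):
        """Translate a human-readable instruction into a 16-bit machine word."""
        return (OPCODES[mnemonic] << 12) | (reg << 8) | (imm & 0xFF)

    def disassemble(word):
        """Recover the mnemonic form from a 16-bit machine word."""
        return MNEMONICS[word >> 12], (word >> 8) & 0xF, word & 0xFF

    word = assemble("LOADI", reg=3, imm=42)
    print(hex(word))          # -> 0x132a
    print(disassemble(word))  # -> ('LOADI', 3, 42)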

ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). Memory organization defines how instructions interact with the memory, and how memory interacts with itself.

During design, emulation software (emulators) can run programs written in a proposed instruction set. Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals.

Computer organization

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way.

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.

Implementation

Once an instruction set and micro-architecture are designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps:
  • Logic Implementation designs the circuits required at a logic gate level
  • Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches etc.) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at the logic gate level, or even at the physical level if the design calls for it.
  • Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are created.
  • Design Validation tests the computer as a whole to see if it works in all situations and all timings. Once the design validation process starts, the design at the logic level is tested using logic emulators. However, this is usually too slow to run realistic tests. So, after making corrections based on the first test, prototypes are constructed using field-programmable gate arrays (FPGAs). Most hobby projects stop at this stage. The final step is to test prototype integrated circuits. Integrated circuits may require several redesigns to fix problems.
For CPUs, the entire implementation process is organized differently and is often referred to as CPU design.

Design goals

The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (the amount of time that it takes for information from one node to travel to the source), and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors.

The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance.

Performance

Modern computer performance is often described in IPC (instructions per cycle). This measures the efficiency of the architecture at any clock frequency. Since a faster rate can make a faster computer, this is a useful measurement. Older computers had IPC counts as low as 0.1 instructions per cycle. Simple modern processors easily reach near 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle.

Counting machine language instructions would be misleading because they can do varying amounts of work in different ISAs. The "instruction" in the standard measurements is not a count of the ISA's actual machine language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture.
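As a back-of-the-envelope sketch with invented numbers, IPC, execution time, and instruction throughput for a benchmark run relate as follows:

    # Illustrative performance arithmetic; every figure here is made up.
    instructions = 2_000_000_000      # instructions executed by the benchmark
    cycles       = 2_500_000_000      # clock cycles the run took
    clock_hz     = 3_000_000_000      # 3 GHz clock

    ipc  = instructions / cycles               # instructions per cycle
    time = cycles / clock_hz                   # seconds to run the benchmark
    mips = instructions / time / 1_000_000     # millions of instructions per second

    print(f"IPC = {ipc:.2f}, time = {time:.3f} s, MIPS = {mips:.0f}")
    # -> IPC = 0.80, time = 0.833 s, MIPS = 2400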

Many people used to measure a computer's speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance.

Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs.

There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data).

Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse, but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed, or else the brakes will fail.
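A small worked example of that latency/throughput trade-off, with invented stage times, is sketched below:

    # Invented numbers: a 4-stage pipeline whose stages each take 1 ns, versus
    # an unpipelined unit that finishes one instruction every 3.5 ns.
    stage_time_ns, stages = 1.0, 4
    unpipelined_ns = 3.5

    pipelined_latency_ns   = stages * stage_time_ns   # 4.0 ns per instruction (worse)
    pipelined_throughput   = 1.0 / stage_time_ns      # 1.0 instruction per ns (better)
    unpipelined_throughput = 1.0 / unpipelined_ns     # ~0.29 instructions per ns

    print(pipelined_latency_ns, unpipelined_ns)              # latency got worse
    print(pipelined_throughput, unpipelined_throughput)      # throughput got better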

Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the only way to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks.

Power efficiency

Power efficiency is another important measurement in modern computers. A higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
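A quick sketch of comparing two hypothetical designs by this metric (all figures invented):

    # MIPS per watt for two made-up designs.
    designs = {                      # name: (millions of instructions per second, watts)
        "design_a": (24_000, 95.0),  # fast but power-hungry
        "design_b": (9_000, 15.0),   # slower but frugal
    }
    for name, (mips, watts) in designs.items():
        print(name, round(mips / watts, 1), "MIPS/W")
    # design_b wins on efficiency despite far lower raw speed.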

Modern circuits have less power required per transistor as the number of transistors per chip grows.[16] This is because each transistor that is put in a new chip requires its own power supply and requires new pathways to be built to power it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible.[17] In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency.

Shifts in market demand

Increases in publicly released refresh rates have been slow over the past few years compared with the vast leaps in power consumption reduction and demand for miniaturization. This has led to a new demand for longer battery life and reductions in size, due to mobile technology being produced at a greater rate. This change in focus from greater refresh rates to power consumption and miniaturization can be seen in the significant reductions in power consumption, as much as 50%, that were reported by Intel in their release of the Haswell microarchitecture, where they dropped their power consumption benchmark from 30-40 watts down to 10-20 watts.[18] Comparing this to the processing speed increase from 3 GHz to 4 GHz (2002 to 2006),[19] it can be seen that the focus in research and development is shifting away from refresh rates and moving towards consuming less power and taking up less space.

IBM researchers use analog memory to train deep neural networks faster and more efficiently

New approach allows deep neural networks to run hundreds of times faster than with GPUs, using hundreds of times less energy
June 15, 2018
Original link:  http://www.kurzweilai.net/ibm-researchers-use-analog-memory-to-train-deep-neural-networks-faster-and-more-efficiently
Crossbar arrays of non-volatile memories can accelerate the training of neural networks by performing computation at the actual location of the data. (credit: IBM Research)

Imagine advanced artificial intelligence (AI) running on your smartphone — instantly presenting the information that’s relevant to you in real time. Or a supercomputer that requires hundreds of times less energy.

The IBM Research AI team has demonstrated a new approach that they believe is a major step toward those scenarios.

Deep neural networks normally require fast, powerful graphical processing unit (GPU) hardware accelerators to support the needed high speed and computational accuracy — such as the GPU devices used in the just-announced Summit supercomputer. But GPUs are highly energy-intensive, making their use expensive and limiting their future growth, the researchers explain in a recent paper published in Nature.

Analog memory replaces software, overcoming the “von Neumann bottleneck”


Instead, the IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power — without sacrificing the ability to create deep learning systems.*

The trick was to replace conventional von Neumann architecture, which is “constrained by the time and energy spent moving data back and forth between the memory and the processor (the ‘von Neumann bottleneck’),” the researchers explain in the paper. “By contrast, in a non-von Neumann scheme, computing is done at the location of the data [in memory], with the strengths of the synaptic connections (the ‘weights’) stored and adjusted directly in memory.”

“Delivering the future of AI will require vastly expanding the scale of AI calculations,” they note. “Instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all of the computation inside the analog memory chip. We believe this is a major step on the path to the kind of hardware accelerators necessary for the next AI breakthroughs.”**

Given these encouraging results, the IBM researchers have already started exploring the design of prototype hardware accelerator chips, as part of an IBM Research Frontiers Institute project, they said.

Ref.: Nature. Source: IBM Research

 * “From these early design efforts, we were able to provide, as part of our Nature paper, initial estimates for the potential of such [non-volatile memory]-based chips for training fully-connected layers, in terms of the computational energy efficiency (28,065 GOP/sec/W) and throughput-per-area (3.6 TOP/sec/mm2). These values exceed the specifications of today’s GPUs by two orders of magnitude. Furthermore, fully-connected layers are a type of neural network layer for which actual GPU performance frequently falls well below the rated specifications. … Analog non-volatile memories can efficiently accelerate the algorithms at the heart of many recent AI advances. These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other.”

** “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional complementary metal-oxide semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices.  It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches for such novel devices frequently fail to capture the full range of device-to-device variability they can exhibit.”
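As a toy numerical sketch of the current-summing idea described in the first footnote (all device values invented), conductances stand in for stored weights, applied voltages for inputs, and summing the currents on a shared wire performs the multiply-accumulate:

    # Toy model of an analog crossbar multiply-accumulate (Ohm's law plus
    # current summing); values are invented and purely illustrative.
    columns = [                        # each column holds one conductance per input row (siemens)
        [1e-6, 2e-6, 0.5e-6],
        [3e-6, 1e-6, 1e-6],
    ]
    input_voltages = [0.2, 0.1, 0.3]   # volts applied to the rows

    def column_currents(conductances, voltages):
        """Each device contributes I = G*V; currents on a shared wire add up."""
        return [sum(g * v for g, v in zip(col, voltages)) for col in conductances]

    print(column_currents(columns, input_voltages))
    # Each output current is effectively a dot product of stored weights and inputs.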

By 2030, this is what computers will be able to do

Ugo Dumont, a volunteer for the Genworth R70i Aging Experience, participates in a demonstration at the Liberty Science Center in Jersey City, New Jersey, April 5, 2016. (Image: REUTERS/Shannon Stapleton)
Computing in 2030: medical nanobots and autonomous vehicles. But will they bring people together?

Developments in computing are driving the transformation of entire systems of production, management, and governance. In this interview Justine Cassell, Associate Dean, Technology, Strategy and Impact, at the School of Computer Science, Carnegie Mellon University, and co-chair of the Global Future Council on Computing, says we must ensure that these developments benefit all society, not just the wealthy or those participating in the “new economy”.

Why should the world care about the future of computing?

Today computers are in virtually everything we touch, all day long. We still have an image of computers as being rectangular objects either on a desk, or these days in our pockets; but computers are in our cars, they’re in our thermostats, they’re in our refrigerators. In fact, increasingly computers are no longer objects at all, but they suffuse fabric and virtually every other material. Because of that, we really do need to care about what the future of computing holds because it is going to impact our lives all day long.

Tell me about the technological breakthroughs we have already seen, and what you expect to see in the coming years?

Some of the exciting breakthroughs have to do with the internet of things. In the same way we have a tendency to think of computers as rectangular boxes, we have a tendency to think of the internet as being some kind of ether that floats around us. But quite recently researchers have made enormous breakthroughs in creating a way for all objects to communicate; so your phone might communicate to your refrigerator, which might communicate to the light bulb. In fact, in a near future, the light bulb will itself become a computer, projecting information instead of light.
Similarly, biological computing addresses how the body itself can compute, how we can think about genetic material as computing. You can think of biological computing as a way of computing with RNA or DNA, and of understanding biotechnology as a kind of computer. One of my colleagues here at Carnegie Mellon, Adam Feinberg, has been 3D-printing heart tissue. He’s been designing parts of the body on a computer using very fine-grained models that are based on the human body, and then using engineering techniques to create living organisms. That’s a very radical difference in what we consider the digital infrastructure and that shift is supporting a radical shift in the way we work, and live, and who we are as humans.

And quantum computing allows us to imagine a future where great breakthroughs in science will be made by computers that are no longer tethered to simple binary 0s and 1s.

How is computing changing? What are the forces driving those changes?

Some of the ways that computing is changing now are that it is moving into the fabrics in our clothing and it’s moving into our very bodies. We are now in the process of refining prosthetics that not only help people reach for something but in reaching, those prosthetics now send a message back to the brain. The first prosthetics were able quite miraculously to take a message from the brain and use it to control the world. But imagine how astounding it is if that prosthetic also tells the brain that it has grasped something. That really changes the way we think of what it means to be human, if our very brains are impacted by the movement of a piece of metal at the edge of our hands.

How could developments in computing impact industry, governments and society?

First of all, there’s really a disruption of all industry sectors. Everything from the information and entertainment sectors, that can imagine ads that understand your emotions when you look at them using machine learning; to manufacturing, where the robots on a production line can learn in real time as a function of what they perceive. You can imagine a robot arm in a factory that automatically remanufactures itself when the object that it is putting into boxes changes shape. Every sector is changing and even the lines between industry sectors are becoming blurred, as 3D-printing and machine learning come together, for example, or as manufacturing and information, or manufacturing and the body, come together.

What needs to be done to ensure that their benefits are maximized and the associated risks kept under control?

If you think about the future of computing as a convergence of the biological, the physical and the digital (and the post-digital quantum), using as examples 3D-printing, biotechnology, robotics for prosthetics, the internet of things, autonomous vehicles, other kinds of artificial intelligence, you can see the extent of how life will change. We need to make sure that these developments benefit all of society, not just the most wealthy members of society who might want these prosthetics, but every person who needs them.

One of our first questions in the Council is going to be, how do we establish governance for equitable innovation? How do we foster the equitable benefits of these technologies for every nation and every person in every nation? And, is top-down governance the right model for controlling the use of these technologies, or is bottom-up ethical education of those that engage in the development of the technologies and their distribution, a better way to think about how to ensure equitable use?

I believe that all technologists need to keep in mind a multi-level, multi-part model of technology that takes into account the technological but also the social, the cultural, the legal, all of these aspects of development. All technologists need to be trained in the human as well as the technological so that they understand uses to which their technology could be put and reflect on the uses they want it to be put to.

What will computing look like in 2030?

We have no idea yet because change is happening so quickly. We know that quantum computing – the introduction of physics into the field of computer science – is going to be extremely important; that computers are going to become really, very tiny, the size of an atom. That’s going to make a huge difference; nano-computing, very small computers that you might swallow inside a pill and that will then learn about your illness and set about curing it; that brings together biological computing as well, where we can print parts of the body. So I think we’re going to see the increasing infusing of computing into all aspects of our lives. If our Council has its way, we’re going to see an increasing sense of responsibility on the part of technologists to ensure that those developments are for good.

What technology or gadget would you most like to see by 2030?

In my own work, I’m committed to ensuring that technology brings people together rather than separating them. There’s been some fear that having everybody stare at their cellphone all day long is separating us from one another; that we are no longer building bonds with other people. My own work goes towards ensuring that social bonds and the relationships amongst people, and even the relationship between us and our technology, supports a social infrastructure, so that we never forget those values that make us human.

To my mind it’s not a particular gadget that I want to see, it’s gadgets that ensure the bond between people is not only continued but strengthened, that the understanding amongst nations and amongst individuals is improved by virtue of the technologies that we encounter.

Anticancer gene

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Anticancer_gene   ...