Schematic of a set of molybdenum (Mo) end-contacted nanotube transistors (credit: Qing Cao et al./Science)
IBM Research has announced
a “major engineering breakthrough” that could lead to carbon nanotubes
replacing silicon transistors in future computing technologies.
As transistors shrink in size, electrical resistance increases within
the contacts, which impedes performance. So IBM researchers invented a
metallurgical process similar to microscopic welding that chemically
binds the contact’s metal (molybdenum) atoms to the carbon atoms at the
ends of nanotubes.
The new method promises to shrink transistor contacts without
reducing performance of carbon-nanotube devices, opening a pathway to
dramatically faster, smaller, and more powerful computer chips beyond
the capabilities of traditional silicon semiconductors.
“This is the kind of breakthrough that we’re committed to making at IBM Research via our $3 billion investment over 5 years in research and development programs aimed at pushing the limits of chip technology,” said Dario Gil,
VP, Science & Technology, IBM Research. “Our aim is to help IBM
produce high-performance systems capable of handling the extreme demands
of new data analytics and cognitive computing applications.”
The development was reported today in the October 2 issue of the journal Science.
Overcoming contact resistance
Schematic of carbon nanotube transistor contacts. Left: high-resistance side-bonded contact, where the single-wall nanotube (SWNT) (black tube) is partially covered by the metal molybdenum (Mo) (purple dots). Right: low-resistance end-bonded contact, where the SWNT is attached to the molybdenum electrode through carbide bonds, while the carbon atoms (black dots) from the originally covered portion of the SWNT uniformly diffuse out into the Mo electrode. (credit: Qing Cao et al./Science)
The new “end-bonded contact scheme” allows carbon-nanotube contacts to be shrunk to below 10 nanometers without deteriorating
performance. IBM says the scheme could overcome contact resistance
challenges all the way to the 1.8 nanometer node and replace silicon
with carbon nanotubes.
Silicon transistors have been made smaller year after year, but they
are approaching a point of physical limitation. With Moore’s Law running
out of steam, shrinking the size of the transistor — including the
channels and contacts — without compromising performance has been a
challenge for researchers for decades.
Single wall carbon nanotube (credit: IBM)
IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers, which is less than half the size of today’s leading silicon technology. Electrons in carbon nanotube transistors can move more easily than in silicon-based devices, and the devices use less power.
Carbon nanotubes are also flexible and transparent, making them
useful for flexible and stretchable electronics or sensors embedded in
wearables.
IBM acknowledges that several major manufacturing challenges still
stand in the way of commercial devices based on nanotube transistors.
Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making extensive use of the structured control flow constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines. It stands in contrast to the use of simple tests and jumps such as the goto statement, which can lead to "spaghetti code" that is difficult to follow and maintain.
It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming languages,[1]
with the latter including support for block structures. Contributing
factors to its popularity and widespread acceptance, at first in
academia and later among practitioners, include the discovery of what is
now known as the structured program theorem in 1966,[2] and the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming".[3]
Structured programming is most frequently used with deviations
that allow for clearer programs in some particular cases, such as when exception handling has to be performed.
"Sequence"; ordered statements or subroutines executed in sequence.
"Selection"; one or a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as if..then..else..endif.
"Iteration"; a statement or block is executed until the program
reaches a certain state, or operations have been applied to every
element of a collection. This is usually expressed with keywords such
as while, repeat, for or do..until.
Often it is recommended that each loop should have only one entry point (and in original structured programming, also only one exit point; a few languages enforce this).
"Recursion"; a statement is executed by repeatedly calling itself
until termination conditions are met. While similar in practice to
iterative loops, recursive loops may be more computationally efficient,
and are implemented differently as a cascading stack.
Subroutines; callable units such as procedures, functions, methods, or subprograms are used to allow a sequence to be referred to by a single statement (all of these constructs are sketched in the example below).
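For illustration, here is a minimal C++ sketch of these constructs; the task (summing the positive entries of a vector, plus a factorial) and all names are hypothetical, chosen only to show each construct in context:

#include <iostream>
#include <vector>

// Subroutine: a named unit invoked as a single statement.
int sumPositives(const std::vector<int>& values) {
    int total = 0;                // sequence: statements executed in order
    for (int v : values) {        // iteration: repeat for each element
        if (v > 0) {              // selection: branch on the program state
            total += v;
        }
    }
    return total;
}

// Recursion: the function calls itself until the termination condition n <= 1.
int factorial(int n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

int main() {
    std::vector<int> data{3, -1, 4, -1, 5};
    std::cout << sumPositives(data) << '\n';  // prints 12
    std::cout << factorial(5) << '\n';        // prints 120
    return 0;
}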
Blocks
Blocks are used to enable groups of statements to be treated as if they were one statement. Block-structured languages have a syntax for enclosing structures in some formal way, such as an if-statement bracketed by if..fi as in ALGOL 68, a code section bracketed by BEGIN..END as in PL/I and Pascal, whitespace indentation as in Python, or the curly braces {...} of C and many later languages.
Structured programming languages
It is possible to do structured programming in any programming language, though it is preferable to use something like a procedural programming language. Some of the languages initially used for structured programming include: ALGOL, Pascal, PL/I and Ada,
but most new procedural programming languages since that time have
included features to encourage structured programming, and sometimes
deliberately left out features – notably GOTO – in an effort to make unstructured programming more difficult.
Structured programming (sometimes known as modular programming)
enforces a logical structure on the program being written to make it
more efficient and easier to understand and modify.
History
Theoretical foundation
The structured program theorem
provides the theoretical basis of structured programming. It states
that three ways of combining programs—sequencing, selection, and
iteration—are sufficient to express any computable function. This observation did not originate with the structured programming movement; these structures are sufficient to describe the instruction cycle of a central processing unit, as well as the operation of a Turing machine.
Therefore, a processor is always executing a "structured program" in
this sense, even if the instructions it reads from memory are not part
of a structured program. However, authors usually credit the result to a
1966 paper by Böhm and Jacopini, possibly because Dijkstra cited this paper himself.[4]
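As a minimal illustration of the construction behind the theorem, the following hypothetical C++ sketch expresses a goto-style flow using only selection and a single loop: each labeled block (the labels L1 and L2 are invented for the example) becomes a case dispatched on a state variable.

#include <iostream>

int main() {
    int n = 10, sum = 0;
    enum State { L1, L2, DONE } state = L1;
    while (state != DONE) {          // the single iteration construct
        switch (state) {             // selection replaces the jumps
            case L1:                 // "L1: if n == 0 goto DONE; else goto L2"
                state = (n == 0) ? DONE : L2;
                break;
            case L2:                 // "L2: sum += n; n--; goto L1"
                sum += n;
                --n;
                state = L1;
                break;
            default:
                break;
        }
    }
    std::cout << sum << '\n';        // prints 55
    return 0;
}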
The structured program theorem does not address how to write and
analyze a usefully structured program. These issues were addressed
during the late 1960s and early 1970s, with major contributions by Dijkstra, Robert W. Floyd, Tony Hoare, Ole-Johan Dahl, and David Gries.
Debate
P. J. Plauger, an early adopter of structured programming, described his reaction to the structured program theorem:
Us converts waved this interesting bit of news under the noses
of the unreconstructed assembly-language programmers who kept trotting
forth twisty bits of logic and saying, 'I betcha can't structure this.'
Neither the proof by Böhm and Jacopini nor our repeated successes at
writing structured code brought them around one day sooner than they
were ready to convince themselves.[5]
Donald Knuth accepted the principle that programs must be written with provability in mind, but he disagreed with abolishing the GOTO statement. In his 1974 paper, "Structured Programming with Goto Statements",[6]
he gave examples where he believed that a direct jump leads to clearer
and more efficient code without sacrificing provability. Knuth proposed a
looser structural constraint: It should be possible to draw a program's
flow chart
with all forward branches on the left, all backward branches on the
right, and no branches crossing each other. Many of those knowledgeable in compilers and graph theory have advocated allowing only reducible flow graphs.
Structured programming theorists gained a major ally in the 1970s after IBM researcher Harlan Mills applied his interpretation of structured programming theory to the development of an indexing system for The New York Times
research file. The project was a great engineering success, and
managers at other companies cited it in support of adopting structured
programming, although Dijkstra criticized the ways that Mills's interpretation differed from the published work.
As late as 1987 it was still possible to raise the question of structured programming in a computer science journal. Frank Rubin did so in that year with an open letter titled "'GOTO Considered Harmful' Considered Harmful".[7]
Numerous objections followed, including a response from Dijkstra that
sharply criticized both Rubin and the concessions other writers made
when responding to him.
Outcome
By the
end of the 20th century nearly all computer scientists were convinced
that it is useful to learn and apply the concepts of structured
programming. High-level programming languages that originally lacked
programming structures, such as FORTRAN, COBOL, and BASIC, now have them.
Common deviations
While
goto has now largely been replaced by the structured constructs of
selection (if/then/else) and repetition (while and for), few languages
are purely structured. The most common deviation, found in many
languages, is the use of a return statement
for early exit from a subroutine. This results in multiple exit points,
instead of the single exit point required by structured programming.
There are other constructions to handle cases that are awkward in purely
structured programming.
Early exit
The
most common deviation from structured programming is early exit from a
function or loop. At the level of functions, this is a return statement. At the level of loops, this is a break statement (terminate the loop) or continue
statement (terminate the current iteration, proceed with next
iteration). In structured programming, these can be replicated by adding
additional branches or tests, but for returns from nested code this can
add significant complexity. C
is an early and prominent example of these constructs. Some newer
languages also have "labeled breaks", which allow breaking out of more
than just the innermost loop. Exceptions also allow early exit, but have
further consequences, and thus are treated below.
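The following sketch contrasts the two styles on a hypothetical search function: an early-return version, and a strictly single-exit equivalent in which an extra test replaces the early exit.

#include <cstddef>
#include <vector>

// Early-exit style: return leaves the function as soon as the answer is known.
int indexOf(const std::vector<int>& v, int target) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == target) return static_cast<int>(i);  // early exit
    }
    return -1;
}

// Single-exit equivalent: the loop condition absorbs the termination test,
// at the cost of some added bookkeeping.
int indexOfSingleExit(const std::vector<int>& v, int target) {
    int result = -1;
    std::size_t i = 0;
    while (i < v.size() && result == -1) {
        if (v[i] == target) {
            result = static_cast<int>(i);
        }
        ++i;
    }
    return result;                                       // the one exit point
}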
Multiple exits can arise for a variety of reasons, most often
either that the subroutine has no more work to do (if returning a value,
it has completed the calculation), or has encountered "exceptional"
circumstances that prevent it from continuing, hence needing exception
handling.
The most common problem in early exit is that cleanup or final
statements are not executed – for example, allocated memory is not
deallocated, or open files are not closed, causing memory leaks or resource leaks.
These must be done at each return site, which is brittle and can easily
result in bugs. For instance, in later development, a return statement
could be overlooked by a developer, and an action which should be
performed at the end of a subroutine (e.g., a trace statement) might not be performed in all cases. Languages without a return statement, such as standard Pascal, do not have this problem.
Most modern languages provide language-level support to prevent such leaks;[8] see detailed discussion at resource management.
Most commonly this is done via unwind protection, which ensures that
certain code is guaranteed to be run when execution exits a block; this
is a structured alternative to having a cleanup block and a goto. This is most often known as try...finally, and considered a part of exception handling. Various techniques exist to encapsulate resource management. An alternative approach, found primarily in C++, is Resource Acquisition Is Initialization,
which uses normal stack unwinding (variable deallocation) at function
exit to call destructors on local variables to deallocate resources.
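A minimal sketch of the RAII approach, assuming a hand-rolled wrapper around the C stdio API (the class name File and the helper firstByteIsZero are hypothetical):

#include <cstdio>

// RAII sketch: the destructor releases the resource on every exit path,
// so early returns (and exceptions) cannot leak the file handle.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }   // runs on normal and early exit
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

bool firstByteIsZero(const char* path) {
    File file(path);                        // resource acquired here
    if (!file.get()) return false;          // early exit: destructor still runs
    return std::fgetc(file.get()) == 0;     // file closed on every return path
}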
Kent Beck, Martin Fowler and co-authors have argued in their refactoring
books that nested conditionals may be harder to understand than a
certain type of flatter structure using multiple exits predicated by guard clauses.
Their 2009 book flatly states that "one exit point is really not a
useful rule. Clarity is the key principle: If the method is clearer with
one exit point, use one exit point; otherwise don’t". They offer a
cookbook solution for transforming a function consisting only of nested
conditionals into a sequence of guarded return (or throw) statements,
followed by a single unguarded block, which is intended to contain the
code for the common case, while the guarded statements are supposed to
deal with the less common ones (or with errors).[9] Herb Sutter and Andrei Alexandrescu also argue in their 2004 C++ tips book that the single-exit point is an obsolete requirement.[10]
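A sketch of that transformation, modeled loosely on Fowler's payAmount refactoring example but with hypothetical amounts, might look like this in C++:

// Nested-conditional form: the common case is buried three levels deep.
double payAmountNested(bool isDead, bool isSeparated, bool isRetired) {
    double result;
    if (isDead) {
        result = 0.0;
    } else {
        if (isSeparated) {
            result = 100.0;
        } else {
            if (isRetired) {
                result = 120.0;
            } else {
                result = 200.0;       // the common case
            }
        }
    }
    return result;                    // single exit point
}

// Guard-clause form: each special case exits immediately; the final
// unguarded statement handles the common case.
double payAmountGuarded(bool isDead, bool isSeparated, bool isRetired) {
    if (isDead) return 0.0;
    if (isSeparated) return 100.0;
    if (isRetired) return 120.0;
    return 200.0;                     // common case, single unguarded block
}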
In his 2004 textbook, David Watt writes that "single-entry multi-exit control flows are often desirable". Using Tennent's framework notion of sequencer,
Watt uniformly describes the control flow constructs found in
contemporary programming languages and attempts to explain why certain
types of sequencers are preferable to others in the context of
multi-exit control flows. Watt writes that unrestricted gotos (jump
sequencers) are bad because the destination of the jump is not
self-explanatory to the reader of a program until the reader finds and
examines the actual label or address that is the target of the jump. In
contrast, Watt argues that the conceptual intent of a return sequencer
is clear from its own context, without having to examine its
destination. Watt writes that a class of sequencers known as escape sequencers,
defined as a "sequencer that terminates execution of a textually
enclosing command or procedure", encompasses both breaks from loops
(including multi-level breaks) and return statements. Watt also notes
that while jump sequencers (gotos) have been somewhat restricted in languages like C, where the target must be inside the local block or an encompassing outer block, that restriction alone is not sufficient to
make the intent of gotos in C self-describing and so they can still
produce "spaghetti code".
Watt also examines how exception sequencers differ from escape and jump
sequencers; this is explained in the next section of this article.[11]
In contrast to the above, Bertrand Meyer wrote in his 2009 textbook that instructions like break and continue "are just the old goto in sheep's clothing" and strongly advised against their use.[12]
Exception handling
Based on the coding error from the Ariane 501 disaster,
software developer Jim Bonang argues that any exceptions thrown from a
function violate the single-exit paradigm, and proposes that all
inter-procedural exceptions should be forbidden. In C++ syntax, this is
done by declaring all function signatures as noexcept (since C++11) or throw().[13] Bonang proposes that all single-exit conforming C++ should be written along the lines of:
bool myCheck1() throw() {
    bool success = false;
    try {
        // do something that may throw exceptions
        if (myCheck2() == false) {
            throw SomeInternalException();
        }
        // other code similar to the above
        success = true;
    } catch (...) {
        // all exceptions caught and logged
    }
    return success;
}
Peter Ritchie also notes that, in principle, even a single throw right before the return
in a function constitutes a violation of the single-exit principle, but
argues that Dijkstra's rules were written in a time before exception
handling became a paradigm in programming languages, so he proposes to
allow any number of throw points in addition to a single return point.
He notes that solutions which wrap exceptions for the sake of creating a
single-exit have higher nesting depth and thus are more difficult to
comprehend, and even accuses those who propose to apply such solutions
to programming languages which support exceptions of engaging in cargo cult thinking.[14]
David Watt also analyzes exception handling in the framework of sequencers (introduced in this article in the previous section on early exits). Watt notes that an abnormal situation (generally exemplified
with arithmetic
overflows or input/output failures like file not found) is a kind of
error that "is detected in some low-level program unit, but [for which] a
handler is more naturally located in a high-level program unit". For
example, a program might contain several calls to read files, but the
action to perform when a file is not found depends on the meaning
(purpose) of the file in question to the program and thus a handling
routine for this abnormal situation cannot be located in low-level
system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even
(multi-exit) return sequencers would entail, results in a situation
where "the application code tends to get cluttered by tests of status
flags" and that "the programmer might forgetfully or lazily omit to test
a status flag. In fact, abnormal situations represented by status flags
are by default ignored!" He notes that in contrast to status flags
testing, exceptions have the opposite default behavior,
causing the program to terminate unless the programmer explicitly deals
with the exception in some way, possibly by adding code to willfully
ignore it. Based on these arguments, Watt concludes that jump sequencers
or escape sequencers (discussed in the previous section) aren't as
suitable as a dedicated exception sequencer with the semantics discussed
above.[15]
The textbook by Louden and Lambert emphasizes that exception handling differs from structured programming constructs like while
loops because the transfer of control "is set up at a different point
in the program than that where the actual transfer takes place. At the
point where the transfer actually occurs, there may be no syntactic
indication that control will in fact be transferred."[16]
Computer science professor Arvind Kumar Bansal also notes that in
languages which implement exception handling, even control structures
like for, which have the single-exit property in absence of
exceptions, no longer have it in presence of exceptions, because an
exception can prematurely cause an early exit in any part of the control
structure; for instance if init() throws an exception in for (init(); check(); increm()), then the usual exit point after check() is not reached.[17] Citing multiple prior studies by others (1999-2004) and their own results, Westley Weimer and George Necula
wrote that a significant problem with exceptions is that they "create
hidden control-flow paths that are difficult for programmers to reason
about".[18]:8:27
The necessity to limit code to single-exit points appears in some
contemporary programming environments focused on parallel computing,
such as OpenMP. The various parallel constructs from OpenMP, like parallel do,
do not allow early exits from inside to the outside of the parallel
construct; this restriction includes all manner of exits, from break to C++ exceptions, but all of these are permitted inside the parallel construct if the jump target is also inside it.[19]
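A sketch of the restriction (the loop body and counts are hypothetical; without OpenMP support the pragma is ignored and the code simply runs serially):

#include <cstdio>

int main() {
    int hits = 0;
    #pragma omp parallel for reduction(+:hits)
    for (int i = 0; i < 1000; ++i) {
        // A break or return here would jump out of the parallel construct
        // and is rejected by an OpenMP compiler. Jumps whose target stays
        // inside the construct, such as this continue, are allowed.
        if (i % 2 != 0) continue;
        ++hits;
    }
    std::printf("%d\n", hits);   // prints 500
    return 0;
}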
Multiple entry
More rarely, subprograms allow multiple entry. This is most commonly re-entry into a coroutine (or generator/semicoroutine),
where a subprogram yields control (and possibly a value), but can then
be resumed where it left off. There are a number of common uses of such programming, notably for streams
(particularly input/output), state machines, and concurrency. From a
code execution point of view, yielding from a coroutine is closer to
structured programming than returning from a subroutine, as the
subprogram has not actually terminated, and will continue when called
again – it is not an early exit. However, coroutines mean that multiple
subprograms have execution state – rather than a single call stack of
subroutines – and thus introduce a different form of complexity.
It is very rare for subprograms to allow entry to an arbitrary
position in the subprogram, as in this case the program state (such as
variable values) is uninitialized or ambiguous, and this is very similar
to a goto.
State machines
Some programs, particularly parsers and communications protocols, have a number of states
that follow each other in a way that is not easily reduced to the basic
structures, and some programmers implement the state-changes with a
jump to the new state. This type of state-switching is often used in the Linux kernel.
However, it is possible to structure these systems by making each
state-change a separate subprogram and using a variable to indicate the
active state (see trampoline). Alternatively, these can be implemented via coroutines, which dispense with the trampoline.
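A minimal sketch of the trampoline approach, with a hypothetical three-state protocol: each state is a separate subprogram that returns the next state, and a dispatch loop drives the state variable.

#include <iostream>

enum class State { Start, Running, Stop };

// Each state change is a separate subprogram returning the next state.
State onStart()   { std::cout << "start\n"; return State::Running; }
State onRunning() { std::cout << "run\n";   return State::Stop; }

int main() {
    State state = State::Start;
    while (state != State::Stop) {   // trampoline: dispatch on the state variable
        switch (state) {
            case State::Start:   state = onStart();   break;
            case State::Running: state = onRunning(); break;
            default:             state = State::Stop; break;
        }
    }
    return 0;
}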
Breakthrough could jumpstart further miniaturization of transistors, possibly extending Moore's law
June 7, 2018 Original link: http://www.kurzweilai.net/overcoming-transistor-miniaturization-limits-due-to-quantum-tunneling
An illustration of a single-molecule device that blocks leakage current in a transistor (yellow: gold transistor electrodes) (credit: Haixing Li/Columbia Engineering)
A team of researchers at Columbia Engineering and associates* have synthesized a molecule that could overcome a major physical limit to miniaturizing computer transistors at the nanometer scale (under about 3 nanometers) — caused by “leakage current.”
Leakage current between two metal transistor electrodes results when
the gap between the electrodes narrows to the point that electrons are
no longer contained by their barriers — a phenomenon known as quantum tunneling.
The researchers synthesized the first molecule** capable of
insulating (preventing electron flow) at the nanometer scale more
effectively than a vacuum barrier (the traditional approach). The
molecule bridges the nanometer gap between two metal electrodes.
Constructive interference (left) between two waves increases the resulting wave; destructive interference (right) decreases the resulting wave. (credit: Wikipedia)
The silicon-based molecule design uses “destructive quantum
interference,” which occurs when the peaks and valleys of two waves are
placed exactly out of phase, annulling oscillation.
“We’ve reached the point where it’s critical for researchers to
develop creative solutions for redesigning insulators. Our molecular
strategy represents a new design principle for classic devices, with the
potential to support continued miniaturization in the near term,” said
Columbia Engineering physicist Latha Venkataraman, Ph.D.
Software architecture refers to the high-level structures of a software system and the discipline of creating such structures and systems. Each structure
comprises software elements, relations among them, and properties of
both elements and relations. The architecture of a software system is a metaphor, analogous to the architecture of a building.
It functions as a blueprint for the system and the developing project,
laying out the tasks necessary to be executed by the design teams.
Software architecture is about making fundamental structural
choices which are costly to change once implemented. Software
architecture choices include specific structural options from
possibilities in the design of software. For example, the systems that
controlled the space shuttle launch vehicle had the requirement of being very fast and very reliable. Therefore, an appropriate real-time computing
language would need to be chosen. Additionally, to satisfy the need for
reliability the choice could be made to have multiple redundant and
independently produced copies of the program, and to run these copies on
independent hardware while cross-checking results.
Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows reuse of design components between projects.
Scope
Opinions vary as to the scope of software architectures:[5]
Overall, macroscopic system structure;[6] this refers to architecture as a higher level abstraction of a software system that consists of a collection of computational components together with connectors that describe the interaction between these components.
The important stuff—whatever that is;[7]
this refers to the fact that software architects should concern
themselves with those decisions that have high impact on the system and
its stakeholders.
"That which is fundamental to understanding a system in its environment"[8]
Things that people perceive as hard to change;[7]
since designing the architecture takes place at the beginning of a
software system's lifecycle, the architect should focus on decisions
that "have to" be right the first time. Following this line of thought,
architectural design issues may become non-architectural once their
irreversibility can be overcome.
A set of architectural design decisions;[9]
software architecture should not be considered merely a set of models
or structures, but should include the decisions that lead to these
particular structures, and the rationale behind them. This insight has
led to substantial research into software architecture knowledge management.[10]
There is no sharp distinction between software architecture versus design and requirements engineering (see Related fields below). They are all part of a "chain of intentionality" from high-level intentions to low-level details.[11]:18
Characteristics
Software architecture exhibits the following:
Multitude of stakeholders: software systems have to cater
to a variety of stakeholders such as business managers, owners, users,
and operators. These stakeholders all have their own concerns with
respect to the system. Balancing these concerns and demonstrating how
they are addressed is part of designing the system.[4]:29–31
This implies that architecture involves dealing with a broad variety of
concerns and stakeholders, and has a multidisciplinary nature.
Separation of concerns: the established way
for architects to reduce complexity is to separate the concerns that
drive the design. Architecture documentation shows that all stakeholder
concerns are addressed by modeling and describing the architecture from
separate points of view associated with the various stakeholder
concerns.[12] These separate descriptions are called architectural views (see for example the 4+1 Architectural View Model).
Recurring styles: like building architecture, the software
architecture discipline has developed standard ways to address
recurring concerns. These "standard ways" are called by various names at
various levels of abstraction. Common terms for recurring solutions are
architectural style,[11]:273–277 tactic,[4]:70–72 reference architecture[13][14] and architectural pattern.[4]:203–205
Conceptual integrity: a term introduced by Fred Brooks in The Mythical Man-Month
to denote the idea that the architecture of a software system
represents an overall vision of what it should do and how it should do
it. This vision should be separated from its implementation. The
architect assumes the role of "keeper of the vision", making sure that
additions to the system are in line with the architecture, hence
preserving conceptual integrity.[15]:41–50
Cognitive constraints: an observation first made in a 1967 paper by computer programmer Melvin Conway
that organizations which design systems are constrained to produce
designs which are copies of the communication structures of these
organizations. As with conceptual integrity, it was Fred Brooks who
introduced it to a wider audience when he cited the paper and the idea
in his elegant classic The Mythical Man-Month, calling it "Conway's Law."
Motivation
Software architecture is an "intellectually graspable" abstraction of a complex system.[4]:5–6 This abstraction provides a number of benefits:
It gives a basis for analysis of software systems' behavior before the system has been built.[2]
The ability to verify that a future software system fulfills its
stakeholders' needs without actually having to build it represents
substantial cost-saving and risk-mitigation.[16] A number of techniques have been developed to perform such analyses, such as ATAM.
It provides a basis for re-use of elements and decisions.[2][4]:35
A complete software architecture or parts of it, like individual
architectural strategies and decisions, can be re-used across multiple
systems whose stakeholders require similar quality attributes or
functionality, saving design costs and mitigating the risk of design
mistakes.
It supports early design decisions that impact a system's development, deployment, and maintenance life.[4]:31 Getting the early, high-impact decisions right is important to prevent schedule and budget overruns.
It facilitates communication with stakeholders, contributing to a system that better fulfills their needs.[4]:29–31
Communicating about complex systems from the point of view of
stakeholders helps them understand the consequences of their stated
requirements and the design decisions based on them. Architecture gives
the ability to communicate about design decisions before the system is
implemented, when they are still relatively easy to adapt.
It helps in risk management. Software architecture helps to reduce risks and chance of failure.[11]:18
It enables cost reduction. Software architecture is a means to manage risk and costs in complex IT projects.[17]
History
The comparison between software design and (civil) architecture was first drawn in the late 1960s,[18] but the term software architecture became prevalent only in the beginning of the 1990s.[19] The field of computer science had encountered problems associated with complexity since its formation.[20] Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and by applying the concept of separation of concerns.
Although the term "software architecture" is relatively new to the
industry, the fundamental principles of the field have been applied
sporadically by software engineering
pioneers since the mid-1980s. Early attempts to capture and explain
software architecture of a system were imprecise and disorganized, often
characterized by a set of box-and-line diagrams.[21]
Software architecture as a concept has its origins in the research of Edsger Dijkstra in 1968 and David Parnas
in the early 1970s. These scientists emphasized that the structure of a
software system matters and getting the structure right is critical.
During the 1990s there was a concerted effort to define and codify
fundamental aspects of the discipline, with research work concentrating
on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods.[22]
Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which promoted software architecture concepts such as components, connectors, and styles. The University of California, Irvine's
Institute for Software Research's efforts in software architecture
research is directed primarily in architectural styles, architecture
description languages, and dynamic architectures.
IEEE 1471-2000, Recommended Practice for Architecture Description of Software-Intensive Systems, was the first formal standard in the area of software architecture. It was adopted in 2007 by ISO as ISO/IEC 42010:2007. In November 2011, IEEE 1471–2000 was superseded by ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description (jointly published by IEEE and ISO).[12]
While in IEEE 1471,
software architecture was about the architecture of "software-intensive
systems", defined as "any system where software contributes essential
influences to the design, construction, deployment, and evolution of the
system as a whole", the 2011 edition goes a step further by including
the ISO/IEC 15288 and ISO/IEC 12207
definitions of a system, which embrace not only hardware and software,
but also "humans, processes, procedures, facilities, materials and
naturally occurring entities". This reflects the relationship between
software architecture, enterprise architecture and solution architecture.
Architecture activities
There
are many activities that a software architect performs. A software
architect typically works with project managers, discusses architecturally significant requirements
with stakeholders, designs a software architecture, evaluates a design,
communicates with designers and stakeholders, documents the
architectural design and more.[23] There are four core activities in software architecture design.[24]
These core architecture activities are performed iteratively and at
different stages of the initial software development life-cycle, as well
as over the evolution of a system.
Architectural analysis is the process of understanding the
environment in which a proposed system or systems will operate and
determining the requirements for the system. The input or requirements
to the analysis activity can come from any number of stakeholders and
include items such as:
what the system will do when operational (the functional requirements)
how well the system will perform with respect to runtime non-functional requirements such as reliability, operability, performance efficiency, security, and compatibility, as defined in the ISO/IEC 25010:2011 standard[25]
development-time non-functional requirements such as maintainability and transferability, as defined in the ISO/IEC 25010:2011 standard[25]
business requirements and environmental contexts of a system that
may change over time, such as legal, social, financial, competitive, and
technology concerns[26]
The outputs of the analysis activity are those requirements that have
a measurable impact on a software system’s architecture, called architecturally significant requirements.[27]
Architectural synthesis or design is the process of creating an architecture. Given the architecturally significant requirements
determined by the analysis, the current state of the design and the
results of any evaluation activities, the design is created and
improved.[24][4]:311–326
Architecture evaluation is the process of determining how
well the current design or a portion of it satisfies the requirements
derived during analysis. An evaluation can occur whenever an architect is considering a design decision; it can occur after some portion of the design has been completed, after the final design has been completed, or after the system has been constructed. Some
of the available software architecture evaluation techniques include Architecture Tradeoff Analysis Method (ATAM) and TARA.[28] Frameworks for comparing the techniques are discussed in frameworks such as SARA Report[16] and Architecture Reviews: Practice and Experience.[29]
Architecture evolution is the process of maintaining and
adapting an existing software architecture to meet changes in
requirements and environment. As software architecture provides a
fundamental structure of a software system, its evolution and
maintenance would necessarily impact its fundamental structure. As such,
architecture evolution is concerned with adding new functionality as
well as maintaining existing functionality and system behavior.
Architecture requires critical supporting activities. These
supporting activities take place throughout the core software
architecture process. They include knowledge management and
communication, design reasoning and decision making, and documentation.
Architecture supporting activities
Software
architecture supporting activities are carried out during core software
architecture activities. These supporting activities assist a software
architect to carry out analysis, synthesis, evaluation, and evolution.
For instance, an architect has to gather knowledge, make decisions and
document during the analysis phase.
Knowledge management and communication is the act of
exploring and managing knowledge that is essential to designing a
software architecture. A software architect does not work in isolation.
They get inputs (functional and non-functional requirements and design contexts) from various stakeholders and provide outputs to stakeholders. Software architecture knowledge is often tacit and is
retained in the heads of stakeholders. Software architecture knowledge
management activity is about finding, communicating, and retaining
knowledge. As software architecture design issues are intricate and
interdependent, a knowledge gap in design reasoning can lead to
incorrect software architecture design.[23][30]
Examples of knowledge management and communication activities include
searching for design patterns, prototyping, asking experienced
developers and architects, evaluating the designs of similar systems,
sharing knowledge with other designers and stakeholders, and documenting
experience in a wiki page.
Design reasoning and decision making is the activity of
evaluating design decisions. This activity is fundamental to all three
core software architecture activities.[9][31]
It entails gathering and associating decision contexts, formulating
design decision problems, finding solution options and evaluating
tradeoffs before making decisions. This process occurs at different
levels of decision granularity while evaluating significant
architectural requirements and software architecture decisions, and
software architecture analysis, synthesis, and evaluation. Examples of
reasoning activities include understanding the impacts of a requirement
or a design on quality attributes, questioning the issues that a design
might cause, assessing possible solution options, and evaluating the
tradeoffs between solutions.
Documentation is the act of recording the design generated
during the software architecture process. A system design is described
using several views that frequently include a static view showing the
code structure of the system, a dynamic view showing the actions of the
system during execution, and a deployment view showing how a system is
placed on hardware for execution. Kruchten's 4+1 view suggests a
description of commonly used views for documenting software
architecture;[32]
Documenting Software Architectures: Views and Beyond has descriptions
of the kinds of notations that could be used within the view
description.[1]
Examples of documentation activities are writing a specification,
recording a system design model, documenting a design rationale,
developing a viewpoint, and documenting views.
Software architecture topics
Software architecture description
Software architecture description involves the principles and
practices of modeling and representing architectures, using mechanisms
such as: architecture description languages, architecture viewpoints,
and architecture frameworks.
Architecture description languages
An architecture description language (ADL) is any means of expression used to describe a software architecture (ISO/IEC/IEEE 42010).
Many special-purpose ADLs have been developed since the 1990s, including AADL (SAE standard), Wright (developed by Carnegie Mellon), Acme (developed by Carnegie Mellon), xADL (developed by UCI), Darwin (developed by Imperial College London), DAOP-ADL (developed by University of Málaga), SBC-ADL (developed by National Sun Yat-Sen University), and ByADL (University of L'Aquila, Italy).
Software architecture descriptions are commonly organized into views, which are analogous to the different types of blueprints made in building architecture. Each view addresses a set of system concerns, following the conventions of its viewpoint,
where a viewpoint is a specification that describes the notations, modeling, and analysis techniques to use in a view that expresses the
architecture in question from the perspective of a given set of
stakeholders and their concerns (ISO/IEC/IEEE 42010).
The viewpoint specifies not only the concerns framed (i.e., to be
addressed) but the presentation, model kinds used, conventions used and
any consistency (correspondence) rules to keep a view consistent with
other views.
Architecture frameworks
An architecture framework captures the "conventions, principles and
practices for the description of architectures established within a
specific domain of application and/or community of stakeholders" (ISO/IEC/IEEE 42010). A framework is usually implemented in terms of one or more viewpoints or ADLs.
Architectural styles and patterns
An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context.
Architectural patterns are often documented as software design patterns.
Following traditional building architecture, a 'software architectural style' is a specific method of construction, characterized by the features that make it notable.
“An architectural style defines: a family of systems in terms of a pattern of structural organization; a vocabulary of components and connectors, with constraints on how they can be combined.”[33]
“Architectural styles are reusable 'packages' of design decisions and constraints that are applied to an architecture to induce chosen desirable qualities.”[34]
There are many recognized architectural patterns and styles.
Some treat architectural patterns and architectural styles as the same,[35]
some treat styles as specializations of patterns. What they have in
common is both patterns and styles are idioms for architects to use,
they "provide a common language"[35] or "vocabulary"[33] with which to describe classes of systems.
Software architecture and agile development
There are also concerns that software architecture leads to too much Big Design Up Front, especially among proponents of agile software development. A number of methods have been developed to balance the trade-offs of up-front design and agility,[36] including the agile method DSDM which mandates a "Foundations" phase during which "just enough" architectural foundations are laid. IEEE Software devoted a special issue[37] to the interaction between agility and architecture.
Software architecture erosion
Software
architecture erosion (or "decay") refers to the gap observed between
the planned and actual architecture of a software system as realized in
its implementation.[38]
Software architecture erosion occurs when implementation decisions
either do not fully achieve the architecture-as-planned or otherwise
violate constraints or principles of that architecture.[2] The gap between planned and actual architectures is sometimes understood in terms of the notion of technical debt.
As an example, consider a strictly layered
system, where each layer can only use services provided by the layer
immediately below it. Any source code component that does not observe
this constraint represents an architecture violation. If not corrected,
such violations can transform the architecture into a monolithic block,
with adverse effects on understandability, maintainability, and
evolvability.
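As a minimal sketch (the Presentation/Service/Data layering and all names are hypothetical), a violation of this constraint can be as small as one call that skips a layer:

#include <string>

namespace data {            // bottom layer
    std::string load(int id) { return "record-" + std::to_string(id); }
}

namespace service {         // middle layer: the only sanctioned client of data::
    std::string fetch(int id) { return data::load(id); }
}

namespace presentation {    // top layer: may only call the layer below it
    std::string render(int id) {
        // Conforming call: goes through the layer immediately below.
        std::string ok = service::fetch(id);
        // Architecture violation: skips the service layer entirely. If such
        // shortcuts accumulate, the layering erodes toward a monolith.
        std::string eroded = data::load(id);
        return ok + " / " + eroded;
    }
}

int main() {
    presentation::render(42);
    return 0;
}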
Various approaches have been proposed to address erosion.
"These approaches, which include tools, techniques, and processes, are
primarily classified into three general categories that attempt to
minimize, prevent and repair architecture erosion. Within these broad
categories, each approach is further broken down reflecting the
high-level strategies adopted to tackle erosion. These are
process-oriented architecture conformance, architecture evolution
management, architecture design enforcement, architecture to
implementation linkage, self-adaptation and architecture restoration
techniques consisting of recovery, discovery, and reconciliation."[39]
There are two major techniques to detect architectural
violations: reflexion models and domain-specific languages. Reflexion
model (RM) techniques compare a high-level model provided by the
system's architects with the source code implementation. There are also domain-specific languages with a focus on specifying and checking architectural constraints.
Software architecture recovery
Software architecture recovery (or reconstruction, or reverse engineering)
includes the methods, techniques, and processes to uncover a software
system's architecture from available information, including its
implementation and documentation. Architecture recovery is often
necessary to make informed decisions in the face of obsolete or
out-of-date documentation and
architecture erosion: implementation and maintenance decisions diverging from the envisioned architecture.[40] Practices to recover software architecture exist, such as static program analysis. This is part of the subjects covered by the Software Intelligence practice.
Related fields
Design
Architecture is design but not all design is architectural.[1]
In practice, the architect is the one who draws the line between
software architecture (architectural design) and detailed design
(non-architectural design). There are no rules or guidelines that fit
all cases, although there have been attempts to formalize the
distinction.
According to the Intension/Locality Hypothesis,[41] the distinction between architectural and detailed design is defined by the Locality Criterion,[41]
according to which a statement about software design is non-local
(architectural) if and only if a program that satisfies it can be
expanded into a program that does not. For example, the client–server
style is architectural (strategic) because a program that is built on
this principle can be expanded into a program that is not
client–server—for example, by adding peer-to-peer nodes.
Requirements engineering
Requirements engineering and software architecture can be seen as complementary approaches: while software architecture targets the 'solution space' or the 'how', requirements engineering addresses the 'problem space' or the 'what'.[42] Requirements engineering entails the elicitation, negotiation, specification, validation, documentation and management of requirements. Both requirements engineering and software architecture revolve around stakeholder concerns, needs and wishes.
There is considerable overlap between requirements engineering
and software architecture, as evidenced for example by a study into five
industrial software architecture methods that concludes that "the
inputs (goals, constraints, etc.) are usually ill-defined, and only get
discovered or better understood as the architecture starts to emerge" and that while "most architectural concerns are expressed as requirements on the system, they can also include mandated design decisions".[24]
In short, the choice of required behavior given a particular problem
impacts the architecture of the solution that addresses that problem,
while at the same time the architectural design may impact the problem
and introduce new requirements.[43] Approaches such as the Twin Peaks model[44] aim to exploit the synergistic relation between requirements and architecture.
Other types of 'architecture'
Computer architecture
Computer architecture targets the internal structure of a computer system, in terms of collaborating hardware components such as the CPU – or processor – the bus and the memory.
Systems architecture
The term systems architecture was originally applied to the architecture of systems that consist of both hardware and software.
The main concern addressed by the systems architecture is then the
integration of software and hardware in a complete, correctly working
device. In another common – much broader – meaning, the term applies to
the architecture of any complex system which may be of technical, sociotechnical or social nature.
Enterprise architecture
The goal of enterprise architecture is to "translate business vision and strategy into effective enterprise".[45] Enterprise architecture frameworks, such as TOGAF and the Zachman Framework,
usually distinguish between different enterprise architecture layers.
Although terminology differs from framework to framework, many include
at least a distinction between a business layer, an application (or information) layer, and a technology layer. Enterprise architecture addresses among others the alignment between these layers, usually in a top-down approach.