Monday, September 30, 2024

Anglo-Saxon model

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Anglo-Saxon_model

The Anglo-Saxon model (so called because it is practiced in Anglosphere countries such as the United Kingdom, the United States, Canada, New Zealand, Australia and Ireland) is a regulated market-based economic model that emerged in the 1970s based on the Chicago school of economics, spearheaded in the 1980s in the United States by the economics of then President Ronald Reagan (dubbed Reaganomics), and reinforced in the United Kingdom by then Prime Minister Margaret Thatcher (dubbed Thatcherism). However, its origins are said to date to the 18th century in the United Kingdom and the ideas of the classical economist Adam Smith.

Characteristics of this model include low levels of regulation and taxation, with the public sector providing minimal services. It also means strong private property rights, contract enforcement, and overall ease of doing business as well as low barriers to free trade.

Disagreements over meaning

Proponents of the term "Anglo-Saxon economy" argue that the economies of these countries currently are so closely related in their liberal and free market orientation that they can be regarded as sharing a specific macroeconomic model. However, those who disagree with the use of the term claim that the economies of these countries differ as much from each other as they do from the so-called "welfare capitalist" economies of northern and continental Europe.

The Anglo-Saxon model of capitalism is usually contrasted with the Continental model of capitalism, known as Rhine capitalism, the social market economy or the German model, but it is also contrasted with Northern-European models of capitalism found in the Nordic countries, called the Nordic model. The major difference between these economies and Anglo-Saxon economies is the scope of collective bargaining rights and corporatist policies. Differences among the Anglo-Saxon economies themselves are illustrated by taxation and the welfare state. The United Kingdom has a significantly higher level of taxation than the United States. Moreover, the United Kingdom spends far more than the United States on the welfare state as a percentage of GDP and also spends more than Spain, Portugal, or the Netherlands. This spending figure is still considerably lower than that of France or Germany.

In northern continental Europe, most countries use mixed economy models, called Rhine capitalism (a current term used especially for the macroeconomics of Germany, France, Belgium and the Netherlands), or its close relative the Nordic model (which refers to the macroeconomics of Denmark, Iceland, Norway, Sweden and Finland).

The debate amongst economists as to which economic model is better centres on perspectives involving poverty, job insecurity, social services and inequality. Generally speaking, advocates of the Anglo-Saxon model argue that more liberalized economies produce greater overall prosperity, while defenders of continental models counter that they produce less inequality and less poverty at the lowest margins.

The rise of China has brought into focus the relevance of an alternative economic model: the socialist market economy, a system based on what is called "socialism with Chinese characteristics", which has helped propel the economy of China for thirty years since its opening up in 1978. A confident China is increasingly offering it to emerging economies in Africa and Asia as an alternative development model to the Anglo-Saxon model.

History of Anglo-Saxon model

The Anglo-Saxon model emerged in the 1970s from the Chicago School of Economics. The return to economic liberalism in the Anglo-Saxon countries is explained by the failure of Keynesian economic management to control the stagflation of the 1970s and early 1980s. The Anglo-Saxon model was built from the ideas of Friedman and the Chicago School economists, together with the conventional wisdom of pre-Keynesian, liberal economic ideas, which held that success in fighting inflation depends on managing the money supply, and that efficiency in the utilization of resources is best achieved by unrestricted markets.

By the end of the 1970s the British post-war economic model was in trouble. After Labour failed to solve the problems, it was left to Margaret Thatcher's Conservatives to reverse Britain's economic decline. During Thatcher's second term in office the nature of the British economy and its society started to change. Marketization, privatization and the deliberate diminishing of the remnants of the post-war social-democratic model were all influenced by American ideas. The Thatcher era revived British social and economic thinking. It did not entail a wholesale import of American ideas and practices, so the British shift to the right did not cause any real convergence toward American socio-economic norms. However, with time the British view that European economies should be inspired by the success of the United States built an ideological proximity with the United States. After a process of policy transfer from the United States, it became apparent that a distinctive Anglo-Saxon economic model was forming.

Types of Anglo-Saxon economic models

According to some researchers, not all liberal economic models are created equal. There are different sub-types and variations among countries that practice the Anglo-Saxon model. One of these variations is the neo-classical economic liberalism exhibited in the American and British economies. The underlying assumption of this variation is that the inherent selfishness of individuals is translated by the self-regulating market into general economic well-being, known as the invisible hand. In neo-classical economic liberalism, competitive markets should function as equilibrating mechanisms, which deliver both economic welfare and distributive justice. One of the main aims of economic liberalism in the United States and the United Kingdom, which was significantly influenced by Friedrich Hayek's ideas, is that government should regulate economic activity, but the state should not get involved as an economic actor.

The other variation of economic liberalism is the "balanced model" or ordoliberalism (the term derives from "ordo", the Latin word for "order"). Ordoliberalism means an ideal economic system that would be better ordered than the laissez-faire economy supported by classical liberals. After the 1929 stock market crash and the Great Depression, the intellectuals of the German Freiburg School argued that, to ensure that the market functions effectively, government should undertake an active role, backed by a strong legal system and a suitable regulatory framework. They claimed that without strong government, private interests would undercut competition in a system characterized by differences in relative power. Ordoliberals thought that liberalism (the freedom of individuals to compete in markets) and laissez-faire (the freedom of markets from government intervention) should be separated. Walter Eucken, the founding father and one of the most influential representatives of the Freiburg School, condemned classical laissez-faire liberalism for its "naturalistic naivety." Eucken stated that the market and competition can only exist if economic order is created by a strong state. The power of government should be clearly delimited, but within the area in which the state plays a role, it has to be active and powerful. For ordoliberals, the right kind of government is the solution to the problem. Alexander Rüstow claimed that government should refrain from getting too engaged in markets. He was against protectionism, subsidies and cartels. However, he suggested that limited interventionism should be allowed as long as it went "in the direction of the market's laws." Another difference between the two variations is that ordoliberals saw the main enemy of a free society in monopolies rather than in the state.

It is hard to empirically show a direct influence of the history of ordoliberalism on Australia or Canada. However, economic liberalism in Australia and Canada resembles German ordoliberalism much more than the neo-classical liberalism of the US and UK. Differing interpretations of the Anglo-Saxon economic school of thought, and especially different justifications and perceptions of state intervention in the economy, led to policy differences within these countries. These policies then persisted and influenced the relationship between the public and private sectors. For example, in the United States the state enforces notably lower tax rates than in the United Kingdom. In addition, the government of the United Kingdom invests proportionately more money in welfare programs and social services than the government of the United States.

Bottom–up and top–down design

From Wikipedia, the free encyclopedia
Illustration of the bottom-up and top-down approaches to heap sort

Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.

A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or to be detailed enough to realistically validate the model. A top–down approach starts with the big picture and then breaks it down into smaller segments.

A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.

Product design and development

During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed.

Computer science

Software development

Part of this section is from the Perl Design Patterns Book.

In the software development process, the top–down and bottom–up approaches play a key role.

Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top–down approaches are implemented by attaching stubs in place of modules that have not yet been written. However, this delays testing of the ultimate functional units of a system until significant design is complete.
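
The sketch below is a minimal illustration of this style in Python (the function names and task are hypothetical): a top-level routine is written against stubs, so the overall design can be reviewed before any functional unit exists, at the cost of deferring their testing.

# Top-down development sketch: the high-level flow is designed first,
# with stubs standing in for modules that have not been written yet.

def load_orders(path):
    # Stub: real file parsing comes in a later refinement.
    raise NotImplementedError("load_orders")

def compute_totals(orders):
    # Stub: pricing rules to be designed later.
    raise NotImplementedError("compute_totals")

def print_report(totals):
    # Stub: report layout to be decided later.
    raise NotImplementedError("print_report")

def main():
    # The overall structure is complete and reviewable even though
    # no functional unit has been implemented yet.
    orders = load_orders("orders.csv")
    totals = compute_totals(orders)
    print_report(totals)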

Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach.
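
A minimal Python sketch of that workflow (the module is hypothetical): the lowest-level unit is coded and tested on its own before any of the code that will eventually call it exists.

def word_count(text):
    # First module of a larger, not-yet-designed system;
    # complete and testable on its own.
    return len(text.split())

# Early tests, runnable as soon as this first module is specified.
assert word_count("top down and bottom up") == 5
assert word_count("") == 0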

Top–down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that both top–down and bottom–up aspects of programming could be used.

Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor.

Programming

Building blocks are an example of bottom–up design because the parts are first created and then assembled without regard to how the parts will work in the assembly.

Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained.
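
A toy end state of that process might look like the following Python sketch (illustrative names): the main procedure names the major functions it needs, and each has been refined into actions simple enough to code concisely.

def read_numbers(line):
    # Refined subroutine: parse whitespace-separated numbers.
    return [float(tok) for tok in line.split()]

def average(numbers):
    # Refined subroutine: trivially codable after decomposition.
    return sum(numbers) / len(numbers) if numbers else 0.0

def report(avg):
    # Refined subroutine: present the result.
    print(f"average = {avg:.2f}")

def main():
    # Written first, naming the major functions it would need;
    # the bodies above were filled in by later refinement steps.
    report(average(read_numbers("3 5 8 13")))

main()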

In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top–level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as pieces that are not part of a whole and later add those pieces together to form assemblies, like building with Lego. Engineers call this "piece part design".
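
In code, the "piece part" idea might look like the following Python sketch (hypothetical classes): the parts are designed independently and only then combined into an assembly.

class Wheel:
    # Designed as a standalone piece, with no knowledge of the assembly.
    def __init__(self, diameter_cm):
        self.diameter_cm = diameter_cm

class Frame:
    # Another independent piece.
    def __init__(self, material):
        self.material = material

class Bicycle:
    # The assembly: existing pieces are added together, Lego-style.
    def __init__(self, frame, wheels):
        self.frame = frame
        self.wheels = wheels

bike = Bicycle(Frame("steel"), [Wheel(66), Wheel(66)])
print(bike.frame.material, len(bike.wheels))  # steel 2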

Parsing

Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.
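
Parsers themselves are commonly classified as top–down or bottom–up. A recursive descent parser is the classic top–down example; the Python sketch below (for a small hypothetical grammar of numbers joined by "+") starts from the highest-level rule and descends toward the individual tokens.

def parse_expr(tokens, pos=0):
    # expr := NUMBER ('+' NUMBER)*
    # Top-down: begin with the highest-level grammar rule.
    value, pos = parse_number(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_number(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_number(tokens, pos):
    # Lowest-level rule, reached by descending from parse_expr.
    if pos >= len(tokens) or not tokens[pos].isdigit():
        raise SyntaxError(f"expected a number at token {pos}")
    return int(tokens[pos]), pos + 1

value, _ = parse_expr("1 + 2 + 39".split())
print(value)  # 42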

Nanotechnology

Nanoparticle synthesis techniques

Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.

A top–down approach often uses the traditional workshop or microfabrication methods, where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top–down secondary approach to engineering nanostructures.

Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition. See also supramolecular chemistry. Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top–down methods, but they could potentially be overwhelmed as the size and complexity of the desired assembly increases.

Neuroscience and psychology

An example of top-down processing: Even though the second letter in each word is ambiguous, top–down processing allows for easy disambiguation based on the context.

These terms are also employed in neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing.[5] Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19).

According to college teaching notes written by Charles Ramskov, Irvin Rock, Ulric Neisser, and Richard Gregory claim that the top–down approach involves perception that is an active and constructive process. Additionally, in this view perception is not directly given by stimulus input, but is the result of interactions between the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, when a stimulus is presented briefly and its clarity is uncertain, giving a vague stimulus, perception becomes a top–down process.

Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom–up approach, Gibson, claims that visual perception is a process that draws on the information available from the proximal stimulus, which is produced by the distal stimulus. Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough."

Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom–up connections. Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top–down influence.

The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower are visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion; your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information.

In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.

Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily focuses on attention, such as task repetition (Schneider, 2015). Bottom–up processing focuses on item-based learning, such as finding the same object over and over again (Schneider, 2015). Schneider (2015) also discusses the implications for understanding attentional control of response selection in conflict situations.

These distinctions also apply to how information processing is structured neurologically, for instance in procedural learning. Top–down principles derived from such processes have proven effective in guiding interface design, but they are not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces (Zacks & Tversky, 2003).

Schooling

Undergraduate (or bachelor) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, working through the main stages of processing from a learning perspective. The key definition is that bottom–up processing is determined directly by environmental stimuli rather than by the individual's knowledge and expectations (Koch, 2022).

Management and organization

Information flow top-down and bottom-up in leadership

In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented.

A "top–down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff.

A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".

Positive aspects of top–down approaches include their efficiency and the superb overview they give of higher levels; external effects can also be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third combination approach to change.

Public health

Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom–up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.

Architecture

Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.

By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design).

Ecology

The energy pyramid represents the ecosystem and its layers; the symbols represent the various limiting factors

In ecology, top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influence lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is that of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by the productivity of the kelp but rather by a top predator. One can see the inverse effect that top–down control has in this example: when the population of otters decreased, the population of the urchins increased.

Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.

There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.

Philosophy and ethics

Top–down reasoning in ethics occurs when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top–down and bottom–up reasoning until both are in harmony: that is to say, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process involves cognitive dissonance: reasoners try to reconcile top–down with bottom–up reasoning, adjusting one or the other, until they are satisfied they have found the best combination of principles and situational judgements.

Sunday, September 29, 2024

Programmer

From Wikipedia, the free encyclopedia

Betty Jennings and Fran Bilas, part of the first ENIAC programming team
 
Occupation
Names: Computer programmer
Occupation type: Profession
Activity sectors: Information technology, software industry
Description
Competencies: Writing and debugging computer code
Education required: Varies from apprenticeship to bachelor's degree, or self-taught

A programmer, computer programmer or coder is an author of computer source code – someone with skill in computer programming.

The professional titles software developer and software engineer are used for jobs that require a programmer.

Generally, a programmer writes code in a computer language and with an intent to build software that achieves some goal.

Identification

Sometimes a programmer or job position is identified by the language used or target platform. For example, assembly programmer, web developer.

Job title

The job titles that include programming tasks have differing connotations across the computer industry and to different individuals. The following are notable descriptions.

A software developer primarily implements software based on specifications and fixes bugs. Other duties may include reviewing code changes and testing. To achieve the required skills for the job, they might obtain a computer science or associate degree, attend a programming boot camp or be self-taught.

A software engineer usually is responsible for the same tasks as a developer plus broader responsibilities of software engineering including architecting and designing new features and applications, targeting new platforms, managing the software development lifecycle (design, implementation, testing, and deployment), leading a team of programmers, communicating with customers, managers and other engineers, considering system stability and quality, and exploring software development methodologies.

Sometimes, a software engineer is required to have a degree in software engineering, computer engineering, or computer science. Some countries legally require an engineering degree for the title of engineer.

History

Ada Lovelace is considered by many to be the first computer programmer.

British countess and mathematician Ada Lovelace is often considered to be the first computer programmer. She authored an algorithm, which was published in October 1842, for calculating Bernoulli numbers on Charles Babbage's Analytical Engine. Because the machine was never completed in her lifetime, she never experienced the algorithm in action.

In 1941, German civil engineer Konrad Zuse was the first person to execute a program on a working, program-controlled computer. From 1943 to 1945, according to computer scientist Wolfgang K. Giloi and AI professor Raúl Rojas et al., Zuse created the first high-level programming language, Plankalkül.

Members of the 1945 ENIAC programming team of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman have since been credited as the first professional computer programmers.

The software industry

The first company founded specifically to provide software products and services was the Computer Usage Company in 1955. Before that time, computers were programmed either by customers or the few commercial computer manufacturers of the time, such as Sperry Rand and IBM.

The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, governments, and businesses created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed between users of a particular machine for no charge, while others were sold on a commercial basis. Other firms, such as Computer Sciences Corporation (founded in 1959), also started to grow. Computer manufacturers soon started bundling operating systems, system software and programming environments with their machines; the IBM 1620 came with the 1620 Symbolic Programming System and FORTRAN.

The industry expanded greatly with the rise of the personal computer (PC) in the mid-1970s, which brought computing to the average office worker. In the following years, the PC also helped create a constantly growing market for games, applications and utility software. This resulted in increased demand for software developers for that period of time.

Nature of the work

Computer programmers write, test, debug, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.

Programmers work in many settings, including corporate information technology (IT) departments, big software companies, small service firms and government entities of all sizes. Many professional programmers also work for consulting companies at client sites as contractors. Licensing is not typically required to work as a programmer, although professional certifications are commonly held by programmers. Programming is considered a profession.

Programmers' work varies widely depending on the type of business for which they are writing programs. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Simple programs can be written in a few hours. More complex ones may require more than a year of work, while others are never considered 'complete' but rather are continuously improved as long as they stay in use. In most cases, several programmers work together as a team under a senior programmer's supervision.

Types of software

Programming editors, also known as source code editors, are text editors specifically designed for programmers or developers to write the source code of an application or a program. Most of these editors include features useful for programmers, such as color syntax highlighting, auto-indentation, auto-completion, bracket matching, syntax checking, and plug-in support. These features aid users during coding, debugging and testing.

Globalization

Market changes in the UK

According to BBC News, 17% of computer science students could not find work in their field six months after graduation in 2009, the highest rate among the university subjects surveyed, while 0% of medical students were unemployed in the same survey.

Market changes in the US

After the crash of the dot-com bubble (1999–2001) and the Great Recession (2008), many U.S. programmers were left without work or with lower wages. In addition, enrollment in computer-related degrees and other STEM degrees (STEM attrition) in the US has been dropping for years, especially for women, which, according to Beaubouef and Mason, could be attributed to a lack of general interest in science and mathematics and also to an apparent fear that programming will be subject to the same pressures as manufacturing and agriculture careers. For programmers, the U.S. Bureau of Labor Statistics (BLS) Occupational Outlook originally predicted growth of 12 percent from 2010 to 2020, followed by a decline of 7 percent from 2016 to 2026, a further decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, and then a decline of 11 percent from 2022 to 2032. Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. However, for software developers the BLS projects, for 2019 to 2029, a 22% increase in employment, from 1,469,200 to 1,785,200 jobs, with a median base salary of $110,000 per year. This prediction is lower than the earlier 2010 to 2020 predicted increase of 30% for software developers. Though the distinction is somewhat ambiguous, software developers engage in a wider array of aspects of application development and are generally higher skilled than programmers, making outsourcing less of a risk. Another reason for the decline for programmers is that their skills are being merged with those of other professions, such as developers, as employers increase the requirements for a position over time. There is also the additional concern that recent advances in artificial intelligence might reduce the demand for future generations of software professionals.

Software development

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Software_development

Software development is the process used to create software. Programming and maintaining the source code is the central step of this process, but it also includes conceiving the project, evaluating its feasibility, analyzing the business requirements, designing the software, and testing it through to release. Software engineering, in addition to development, also includes project management, employee management, and other overhead functions. Software development may be sequential, in which each step is complete before the next begins, but iterative development methods, where multiple steps can be executed at once and earlier steps can be revisited, have also been devised to improve flexibility, efficiency, and scheduling.

Software development involves professionals from various fields, not just software programmers but also individuals specialized in testing, documentation writing, graphic design, user support, marketing, and fundraising. A number of tools and models are commonly used in software development, such as integrated development environment (IDE), version control, computer-aided software engineering, and software documentation.

Methodologies

Flowchart of the evolutionary prototyping model, an iterative development model

Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.

  • The simplest methodology is the "code and fix", typically used by a single programmer working on a small project. After briefly considering the purpose of the program, the programmer codes it and runs it to see if it works. When they are done, the product is released. This methodology is useful for prototypes but cannot be used for more elaborate programs.
  • In the top-down waterfall model, feasibility, analysis, design, development, quality assurance, and implementation occur sequentially in that order. This model requires one step to be complete before the next begins, which causes delays and makes it impossible to revise previous steps if necessary.
  • With iterative processes these steps are interleaved with each other for improved flexibility, efficiency, and more realistic scheduling. Instead of completing the project all at once, one might go through most of the steps with one component at a time. Iterative development also lets developers prioritize the most important features, enabling lower priority ones to be dropped later on if necessary. Agile is one popular method, originally intended for small or medium sized projects, that focuses on giving developers more control over the features that they work on to reduce the risk of time or cost overruns. Derivatives of agile include extreme programming and Scrum. Open-source software development typically uses agile methodology with concurrent design, coding, and testing, due to reliance on a distributed network of volunteer contributors.
  • Beyond agile, some companies integrate information technology (IT) operations with software development, which is called DevOps, or DevSecOps when it includes computer security. DevOps includes continuous development, testing, integration of new code in the version control system, deployment of the new code, and sometimes delivery of the code to clients. The purpose of this integration is to deliver IT services more quickly and efficiently.

Another focus in many programming methodologies is the idea of trying to catch issues such as security vulnerabilities and bugs as early as possible (shift-left testing) to reduce the cost of tracking and fixing them.

In 2009, it was estimated that 32 percent of software projects were delivered on time and budget, and with the full functionality. An additional 44 percent were delivered, but missing at least one of these features. The remaining 24 percent were cancelled prior to release.

Steps

Software development life cycle refers to the systematic process of developing applications.

Feasibility

The sources of ideas for software products are plentiful. These ideas can come from market research including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, fit with existing channels of distribution, possible effects on existing product lines, required features, and fit with the company's marketing objectives. In the marketing evaluation phase, the cost and time assumptions are evaluated. The feasibility analysis estimates the project's return on investment, its development cost and timeframe. Based on this analysis, the company can make a business decision to invest in further development. After deciding to develop the software, the company is focused on delivering the product at or below the estimated cost and time, and with a high standard of quality (i.e., lack of bugs) and the desired functionality. Nevertheless, most software projects run late and sometimes compromises are made in features or quality to meet a deadline.

Analysis

Software analysis begins with a requirements analysis to capture the business needs of the software. Challenges for the identification of needs are that current or potential users may have different and incompatible needs, may not understand their own needs, and may change their needs during the process of software development. Ultimately, the result of analysis is a detailed specification for the product that developers can work from. Software analysts often decompose the project into smaller objects, components that can be reused for increased cost-effectiveness, efficiency, and reliability. Decomposing the project may enable a multi-threaded implementation that runs significantly faster on multiprocessor computers.

During the analysis and design phases of software development, structured analysis is often used to break down the customer's requirements into pieces that can be implemented by software programmers. The underlying logic of the program may be represented in data-flow diagrams, data dictionaries, pseudocode, state transition diagrams, and/or entity relationship diagrams. If the project incorporates a piece of legacy software that has not been modeled, this software may be modeled to help ensure it is correctly incorporated with the newer software.

Design

Design involves choices about the implementation of the software, such as which programming languages and database software to use, or how the hardware and network communications will be organized. Design may be iterative, with users consulted about their needs in a process of trial and error. Design often involves people expert in aspects such as database design, screen architecture, and the performance of servers and other hardware. Designers often attempt to find patterns in the software's functionality to spin off distinct modules that can be reused with object-oriented programming. An example of this is the model–view–controller, an interface between a graphical user interface and the backend.
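
As a compressed sketch of that pattern (illustrative Python, far simpler than a real framework), the model holds the data, the view renders it, and the controller mediates between the two:

class Model:
    # Backend data, with no knowledge of how it is displayed.
    def __init__(self):
        self.items = []

class View:
    # Presentation only: renders whatever it is given.
    def render(self, items):
        for i, item in enumerate(items, 1):
            print(f"{i}. {item}")

class Controller:
    # Mediates between the user-facing view and the backend model.
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.items.append(item)
        self.view.render(self.model.items)

app = Controller(Model(), View())
app.add_item("design the schema")
app.add_item("build the screens")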

Programming

The central feature of software development is creating and understanding the software that implements the desired functionality. There are various strategies for writing the code. Cohesive software has components that are self-contained and independent from one another. Coupling is the interrelation of different software components, which is viewed as undesirable because it increases the difficulty of maintenance. Often, software programmers do not follow industry best practices, resulting in code that is inefficient, difficult to understand, or lacking documentation on its functionality. These standards are especially likely to break down in the presence of deadlines. As a result, testing, debugging, and revising the code become much more difficult. Code refactoring, for example adding more comments to the code, is a solution to improve the understandability of code.
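
To make the coupling distinction concrete, here is an illustrative Python contrast (hypothetical functions): the first version silently depends on a global, while the second receives everything it needs as parameters.

# Tightly coupled: the function depends on shared state it does not own,
# so it cannot be reused or tested without that hidden global.
TAX_RATE = 0.2

def total_with_tax_coupled(price):
    return price * (1 + TAX_RATE)

# Loosely coupled: every dependency is an explicit parameter,
# which keeps the unit self-contained and easier to maintain.
def total_with_tax(price, tax_rate):
    return price * (1 + tax_rate)

print(total_with_tax(100.0, 0.2))  # 120.0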

Testing

Testing is the process of ensuring that the code executes correctly and without errors. Debugging is performed by each software developer on their own code to confirm that the code does what it is intended to. In particular, it is crucial that the software executes on all inputs, even if the result is incorrect. Code reviews by other developers are often used to scrutinize new code added to the project, and according to some estimates dramatically reduce the number of bugs persisting after testing is complete. Once the code has been submitted, quality assurance (a separate department of non-programmers at most large companies) tests the accuracy of the entire software product. Acceptance tests derived from the original software requirements are a popular tool for this. Quality testing also often includes stress and load checking (whether the software is robust to heavy levels of input or usage), integration testing (to ensure that the software is adequately integrated with other software), and compatibility testing (measuring the software's performance across different operating systems or browsers). When tests are written before the code, this is called test-driven development.
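
A minimal Python sketch of test-driven development (the function is hypothetical): the tests are written first, then just enough code is written to make them pass.

import unittest

def leap_year(year):
    # Written after the tests below, just enough to make them pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Under test-driven development these tests exist before leap_year does.
    def test_examples(self):
        self.assertTrue(leap_year(2000))
        self.assertFalse(leap_year(1900))
        self.assertTrue(leap_year(2024))
        self.assertFalse(leap_year(2023))

if __name__ == "__main__":
    unittest.main()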

Production

Production is the phase in which software is deployed to the end user. During production, the developer may create technical support resources for users or a process for fixing bugs and errors that were not caught earlier. There might also be a return to earlier development phases if user needs changed or were misunderstood.

Developers

Software development is performed by software developers, usually working on a team. Efficient communication between team members is essential to success. This is more easily achieved if the team is small, used to working together, and located near each other. Communication also helps identify problems at an earlier state of development and avoid duplicated effort. Many development projects avoid the risk of losing essential knowledge held by only one employee by ensuring that multiple workers are familiar with each component. Although workers on proprietary software are paid, most contributors to open-source software are volunteers. Alternately, they may be paid by companies whose business model does not involve selling the software but something else, such as services and modifications to open-source software.

Models and tools

Computer-aided software engineering

Computer-aided software engineering (CASE) is the use of tools for the partial automation of software development. CASE enables designers to sketch out the logic of a program, whether one to be written or an already existing one, to help integrate it with new code or to reverse engineer it (for example, to change the programming language).

Documentation

Documentation comes in two forms that are usually kept separate—that intended for software developers, and that made available to the end user to help them use the software. Most developer documentation is in the form of code comments for each file, class, and method that cover the application programming interface (API)—how the piece of software can be accessed by another—and often implementation details. This documentation is helpful for new developers to understand the project when they begin working on it. In agile development, the documentation is often written at the same time as the code. User documentation is more frequently written by technical writers.
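
Returning to developer documentation: in Python, for instance, it is conventionally written as docstrings attached to each module, class, and method (illustrative example):

def moving_average(values, window):
    """Return the moving averages of values over a sliding window.

    Args:
        values: A sequence of numbers.
        window: A positive window size, no larger than len(values).

    Returns:
        A list of len(values) - window + 1 averages.
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]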

Effort estimation

Accurate estimation is crucial at the feasibility stage and in delivering the product on time and within budget. The process of generating estimations is often delegated by the project manager. Because the effort estimation is directly related to the size of the complete application, it is strongly influenced by addition of features in the requirements—the more requirements, the higher the development cost. Aspects not related to functionality, such as the experience of the software developers and code reusability, are also essential to consider in estimation. As of 2019, most of the tools for estimating the amount of time and resources for software development were designed for conventional applications and are not applicable to web applications or mobile applications.

Integrated development environment

Anjuta, a C and C++ IDE for the GNOME environment

An integrated development environment (IDE) supports software development with enhanced features compared to a simple text editor. IDEs often include automated compiling, syntax highlighting of errors, debugging assistance, integration with version control, and semi-automation of tests.

Version control

Version control is a popular way of managing changes made to the software. Whenever a new version is checked in, the software saves a backup of all modified files. If multiple programmers are working on the software simultaneously, it manages the merging of their code changes. The software highlights cases where there is a conflict between two sets of changes and allows programmers to fix the conflict.

View model

The TEAF Matrix of Views and Perspectives

A view model is a framework that provides the viewpoints on the system and its environment, to be used in the software development process. It is a graphical representation of the underlying semantics of a view.

The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems and to organize the elements of the problem around domains of expertise. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization.

Fitness functions

Fitness functions are automated and objective tests that ensure that new developments do not deviate from the established constraints, checks and compliance controls.
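
A fitness function can be as simple as an automated check that fails the build when an architectural constraint is violated. The Python sketch below (the layering rule and file path are hypothetical) asserts that a UI module never imports the database layer directly.

import ast

def imports_of(source):
    # Collect the module names imported by a piece of Python source.
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

def test_ui_does_not_import_database():
    # Fitness function: runs automatically and objectively on every build,
    # failing whenever a change deviates from the layering constraint.
    with open("ui/screen.py") as f:  # hypothetical project layout
        assert "database" not in imports_of(f.read())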

Intellectual property

Intellectual property can be an issue when developers integrate open-source code or libraries into a proprietary product, because most open-source licenses used for software require that modifications be released under the same license. As an alternative, developers may choose a proprietary alternative or write their own software module.

Application portfolio management

From Wikipedia, the free encyclopedia

IT Application Portfolio Management (APM) is a practice that has emerged in mid to large-size information technology (IT) organizations since the mid-1990s. Application Portfolio Management attempts to use the lessons of financial portfolio management to justify and measure the financial benefits of each application in comparison to the costs of the application's maintenance and operations.

Evolution of the practice

Likely the earliest mention of the Applications Portfolio was in Cyrus Gibson and Richard Nolan's HBR article "Managing the Four Stages of EDP Growth" in 1974.

Gibson and Nolan posited that businesses' understanding and successful use of IT "grows" in predictable stages and a given business' progress through the stages can be measured by observing the Applications Portfolio, User Awareness, IT Management Practices, and IT Resources within the context of an analysis of overall IT spending.

Nolan, Norton & Co. pioneered the use of these concepts in practice with studies at DuPont, Deere, Union Carbide, IBM and Merrill Lynch among others. In these "Stage Assessments" they measured the degree to which each application supported or "covered" each business function or process, spending on the application, functional qualities, and technical qualities. These measures provided a comprehensive view of the application of IT to the business, the strengths and weaknesses, and a road map to improvement.

APM was widely adopted in the late 1980s and through the 1990s as organizations began to address the threat of application failure when the date changed to the year 2000 (a threat that became known as Year 2000 or Y2K). During this time, tens of thousands of IT organizations around the world developed a comprehensive list of their applications, with information about each application.

In many organizations, the value of developing this list was challenged by business leaders concerned about the cost of addressing the Y2K risk. In some organizations, the notion of managing the portfolio was presented to the business people in charge of the Information Technology budget as a benefit of performing the work, above and beyond managing the risk of application failure.

There are two main categories of application portfolio management solutions, generally referred to as 'Top Down' and 'Bottom Up' approaches. The first need in any organization is to understand what applications exist and their main characteristics (such as flexibility, maintainability, owner, etc.), typically referred to as the 'Inventory'; 'Top Down' solutions build this inventory by collecting that information from the organization. The other approach is to gain a detailed understanding of the applications in the portfolio by parsing the application source code and its related components into a repository database (i.e. 'Bottom Up'). Application mining tools, now marketed as APM tools, support this approach.

Hundreds of tools are available to support the 'Top Down' approach. This is not surprising, because the majority of the task is to collect the right information; the actual maintenance and storage of the information can be implemented relatively easily. For that reason, many organizations bypass using commercial tools and use Microsoft Excel to store inventory data. However, if the inventory becomes complex, Excel can become cumbersome to maintain. Automatically updating the data is not well supported by an Excel-based solution. Finally, such an inventory solution does nothing to address the 'Bottom Up' understanding needs.

Business case for APM

According to Forrester Research, "For IT operating budgets, enterprises spend two-thirds or more on ongoing operations and maintenance."

It is common to find organizations that have multiple systems that perform the same function. Many reasons may exist for this duplication, including the former prominence of departmental computing, the application silos of the 1970s and 1980s, the proliferation of corporate mergers and acquisitions, and abortive attempts to adopt new tools. Regardless of the duplication, each application is separately maintained and periodically upgraded, and the redundancy increases complexity and cost.

With a large majority of expenses going to manage the existing IT applications, the transparency of the current inventory of applications and resource consumption is a primary goal of Application Portfolio Management. This enables firms to:

  • identify and eliminate partially and wholly redundant applications;
  • quantify the condition of applications in terms of stability, quality, and maintainability;
  • quantify the business value/impact of applications and the relative importance of each application to the business;
  • allocate resources according to the applications' condition and importance in the context of business priorities.

Transparency also aids strategic planning efforts and defuses conflict between business and IT: when business leaders understand how applications support their key business functions, and what outages and poor quality cost, conversations turn away from blaming IT for excessive costs and toward how best to spend precious resources in support of corporate priorities.

Portfolio

Taking ideas from investment portfolio management, APM practitioners gather information about each application in use in a business or organization, including the cost to build and maintain the application, the business value produced, the quality of the application, and the expected lifespan. Using this information, the portfolio manager is able to provide detailed reports on the performance of the IT infrastructure in relation to the cost to own and the business value delivered.
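
For illustration only, such a report might rank applications by the business value they deliver per unit of cost; the scores and figures in the Python sketch below are invented.

    # A minimal sketch of a portfolio report: rank applications by the
    # ratio of business value to annual cost (all figures invented).
    portfolio = [
        # (name, annual cost in dollars, business value score 1-10)
        ("CRM", 250_000, 9),
        ("LegacyReporting", 180_000, 3),
        ("HRPortal", 90_000, 6),
    ]

    for name, cost, value in sorted(portfolio, key=lambda a: a[2] / a[1], reverse=True):
        print(f"{name}: value per $100k of cost = {value / cost * 100_000:.1f}")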

Definition of an application

In application portfolio management, the definition of an application is a critical component. Because applying any definition tends to produce contentious results, many service providers help organizations create a definition of their own.

  • Application software — An executable software component, or a tightly coupled set of executable software components, deployed together, that delivers some or all of a series of steps needed to create, update, manage, calculate or display information for a specific business purpose. To be counted, each component must not be a member of another application.
  • Software component — An executable set of computer instructions contained in a single deployment container in such a way that it cannot be broken apart further. Examples include a Dynamic Link Library, an ASP web page, and a command line "EXE" application. A ZIP file is not a single software component, because it can easily be broken apart (by unpacking the archive) and may therefore contain more than one.

Software application and software component are technical terms used to describe a specific instance of the class of application software for the purposes of IT portfolio management. See application software for a definition for non-practitioners of IT Management or Enterprise Architecture.

Software application portfolio management requires a fairly detailed and specific definition of an application in order to create a catalog of applications installed in an organization.

The requirements of a definition for an application

The definition of an application has the following needs in the context of application portfolio management:

  • It must be simple for business team members to explain, understand, and apply.
  • It must make sense to development, operations, and project management in the IT groups.
  • It must be useful as an input to a complex function whose output is the overall cost of the portfolio. Many factors contribute to the overall cost of an IT portfolio, and the sheer number of applications is one of them, so the definition of an application must be usable in that calculation (a toy cost model illustrating this appears after this list).
  • It must be useful for the members of the Enterprise Architecture team who are attempting to judge a project with respect to their objectives for portfolio optimization and simplification.
  • It must clearly define the boundaries of an application so that a person working on a measurable 'portfolio simplification' activity cannot simply redefine the boundaries of two existing applications in such a way as to call them a single application.
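
The following toy cost model, with invented weights and terms, illustrates why the application count must be well defined: it is a direct input to the portfolio's overall cost.

    def portfolio_cost(num_apps, avg_maintenance_cost, integration_factor=0.05):
        """Toy model: direct maintenance grows linearly with the number of
        applications, while integration overhead grows with the number of
        potential connections between them."""
        direct = num_apps * avg_maintenance_cost
        integration = (integration_factor * avg_maintenance_cost
                       * num_apps * (num_apps - 1) / 2)
        return direct + integration

    # Halving the application count cuts cost more than linearly, which is
    # one reason redefining application boundaries (the last requirement
    # above) must not be allowed to fake a simplification.
    print(portfolio_cost(100, 50_000))  # 17375000.0
    print(portfolio_cost(50, 50_000))   # 5562500.0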

Many organizations will readdress the definition of an application within the context of their IT portfolio management and governance practices. For that reason, this definition should be considered as a working start.

Examples

The definition of an application can be difficult to convey clearly. In an IT organization, there might be subtle differences in the definition among teams and even within one IT team. It helps to illustrate the definition by providing examples. The section below offers some examples of things that are applications, things that are not applications, and things that comprise two or more applications.

Inclusions

By this definition, the following are applications:

  • A web service endpoint that presents three web services: InvoiceCreate, InvoiceSearch, and InvoiceDetailGet
  • A service-oriented business application (SOBA) that presents a user interface for creating invoices and that, in turn, calls the InvoiceCreate service (the service endpoint itself is a separate application).
  • A mobile application that is published to an enterprise application store and thus deployed to employee-owned or employee-operated portable devices, enabling authenticated access to data and services.
  • A legacy system composed of a rich client, a server-based middle tier, and a database, all of which are tightly coupled (e.g. changes in one are very likely to trigger changes in another).
  • A website publishing system that pulls data from a database and publishes it to an HTML format as a sub-site on a public URL.
  • A database that presents data to a Microsoft Excel workbook that queries the information for layout and calculations. This case is interesting in that the database itself is an application, unless the database is already included in another application (such as a legacy system).
  • An Excel spreadsheet that contains a coherent set of reusable macros that deliver business value. The spreadsheet itself constitutes a deployment container for the application (like a TAR or CAB file).
  • A set of ASP or PHP web pages that work in conjunction with one another to deliver the experience and logic of a web application. It is entirely possible that a sub-site would qualify as a separate application under this definition if the coupling is loose.
  • A web service endpoint established for machine-to-machine communication (not for human interaction), but which can be rationally understood to represent one or more useful steps in a business process.

Exclusions

The following are not applications:

  • An HTML website.
  • A database that contains data but is not part of any series of steps to deliver business value using that data.
  • A web service that is structurally incapable of being part of a set of steps that provides value; for example, a web service that requires incoming data that breaks the shared schema.
  • A standalone batch script that compares the contents of two databases by making calls to each and then sends e-mail to a monitoring alias if data anomalies are noticed. In this case, the batch script is very likely to be tightly coupled with at least one of the two databases, and should therefore be included in the application boundary of the database with which it is most tightly coupled.

Composites

The following each comprise two or more applications:

  • A composite SOA application composed of a set of reusable services and a user interface that leverages those services. There are at least two applications here: the user interface and one or more service components. The individual services within a component are not counted as separate applications.
  • A legacy client-server application that writes to a database to store data, together with an Excel spreadsheet that uses macros to read data from that database and present a report. There are two applications in this example: the database clearly belongs to the legacy application, because it was developed with it, delivered with it, and is tightly coupled to it. This holds even if the legacy system uses the same stored procedures as the Excel spreadsheet.

Methods and measures for evaluating applications

There are many popular financial measures, and even more metrics of other (non-financial or complex) types, that are used for evaluating applications or information systems.

Return on investment (ROI)

Return on investment is one of the most popular performance measurement and evaluation metrics used in business analysis. ROI analysis, when applied correctly, is a powerful tool for evaluating existing information systems and for making informed decisions on software acquisitions and other projects. However, ROI is a metric designed for a specific purpose: to evaluate profitability or financial efficiency. It cannot reliably substitute for other financial metrics in providing an overall economic picture of an information solution, and attempts to use ROI as the sole or principal metric for decision making about information systems cannot be productive except in a very limited number of cases or projects. As a purely financial measure, ROI says nothing about the efficiency or effectiveness of the information systems themselves.
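
As a simple worked example (with invented figures), ROI is the net gain from an investment divided by its cost:

    def roi(gain_from_investment, cost_of_investment):
        """Return on investment as a fraction: (gain - cost) / cost."""
        return (gain_from_investment - cost_of_investment) / cost_of_investment

    # An application that cost $400,000 and produced $500,000 in benefits:
    print(f"ROI = {roi(500_000, 400_000):.0%}")  # ROI = 25%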

Economic value added (EVA)

Economic value added is a measure of a company's financial performance based on the residual wealth calculated by deducting the cost of capital from its operating profit (adjusted for taxes on a cash basis). It is also referred to as "economic profit".

EVA = Net Operating Profit After Taxes (NOPAT) - (Capital × Cost of Capital)
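
A worked instance of the formula, with invented figures:

    # EVA = NOPAT - (Capital * Cost of Capital); all figures invented.
    nopat = 2_000_000        # net operating profit after taxes
    capital = 15_000_000     # capital invested in the application
    cost_of_capital = 0.10   # 10% weighted average cost of capital

    eva = nopat - capital * cost_of_capital
    print(eva)  # 500000.0 -> a positive EVA means value was created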

Total cost of ownership (TCO)

Total cost of ownership is a way to calculate what an application will cost over a defined period of time. In a TCO model, costs for hardware, software, and labor are captured and organized into the various application life-cycle stages. An in-depth TCO model helps management understand the true cost of an application, as it attempts to measure build, run/support, and indirect costs. Many large consulting firms have defined strategies for building a complete TCO model.
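
A minimal sketch of such a roll-up, assuming hypothetical life-cycle stages and invented figures:

    # Costs organized by life-cycle stage, as in the TCO model described
    # above; stage names and amounts are illustrative assumptions.
    tco_model = {
        "build":   {"hardware": 50_000, "software": 120_000, "labor": 300_000},
        "run":     {"hardware": 20_000, "software": 40_000,  "labor": 150_000},
        "support": {"hardware": 0,      "software": 10_000,  "labor": 90_000},
    }

    total = sum(cost for stage in tco_model.values() for cost in stage.values())
    print(f"TCO over the defined period: ${total:,}")  # $780,000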

Total economic impact (TEI)

TEI was developed by Forrester Research Inc. Forrester claims that TEI systematically looks at the potential effects of technology investments across four dimensions: cost (impact on IT), benefits (impact on the business), flexibility (future options created by the investment), and risk (uncertainty).

Business value of IT (ITBV)

ITBV program was developed by Intel Corporation in 2002. The program uses a set of financial measurements of business value that are called Business Value Dials (Indicators). It is a multidimensional program, including a business component, and is relatively easy to implement.

Applied information economics (AIE)

AIE is a decision analysis method developed by Hubbard Decision Research. AIE claims to be "the first truly scientific and theoretically sound method"; it builds on several methods from decision theory and risk analysis, including the use of Monte Carlo methods. AIE is not often used because of its complexity.
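
AIE itself is a proprietary method, but the Monte Carlo element it draws on can be illustrated generically: simulate an uncertain benefit and cost many times and inspect the resulting distribution. The ranges below are invented for illustration and are not part of AIE.

    import random

    # Generic Monte Carlo sketch (not AIE itself): treat annual benefit and
    # cost as uncertain quantities and estimate the chance of a net loss.
    def probability_of_loss(trials=100_000):
        losses = 0
        for _ in range(trials):
            benefit = random.uniform(400_000, 900_000)  # invented range
            cost = random.gauss(600_000, 50_000)        # invented estimate
            if benefit - cost < 0:
                losses += 1
        return losses / trials

    print(f"Probability the investment loses money: {probability_of_loss():.1%}")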

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...