
Tuesday, April 26, 2022

Marketing strategy

From Wikipedia, the free encyclopedia

Marketing strategy is a process that can allow an organization to concentrate its limited resources on the greatest opportunities to increase sales and achieve a sustainable competitive advantage.

Strategic planning involves an analysis of the company's initial strategic situation prior to the formulation, evaluation and selection of a market-oriented competitive position that contributes to the company's goals and marketing objectives.

Strategic marketing emerged as a distinct field of study in the 1970s and 1980s, building on the strategic management that preceded it. Marketing strategy highlights the role of marketing as a link between the organization and its customers.

Marketing strategy leverages the combination of resources and capabilities within an organization to achieve a competitive advantage and thus enhances firm performance (Cacciolatti & Lee, 2016).

Definitions of Marketing Strategy

"The marketing strategy lays out target markets and the value proposition that will be offered based on an analysis of the best market opportunities." (Philip Kotler & Kevin Keller, Marketing Management, Pearson, 14th Edition)
“An over-riding directional concept that sets out the planned path.” (David Aaker and Michael K. Mills, Strategic Market Management, 2001, p. 11)
"Essentially a formula for how a business is going to compete, what its goals should be and what policies will be needed to carry out these goals." (Michael Porter, Competitive Strategy: Techniques for Analyzing Industries and Competitors , NY, Free Press, 1980)
"The pattern of major objectives, purposes and goals and essential policies and plans for achieving those goals, stated in such a way as to define what business the company is in or is to be in. (S. Jain, Marketing Planning and Strategy, 1993)
"An explicit guide to future Behaviour.” (Henry Mintzberg, “ Crafting Strategy,” Harvard Business Review, July–August, 1987 pp. 66–74)
Strategy is "reserved for actions aimed directly at altering the strengths of the enterprise relative to that of its competitors... Perfect strategies are not called for. What counts is... performance relative to competitors.” (Kenichi Ohmae, The Mind of the Strategist, 1982, p. 37)
Strategy formulation is built on "the match between organizational resources and skills and environmental opportunities and risks it faces and the purposes it wishes to accomplish." (Dan Schendel and Charles W. Hofer, Strategy Formulation: Analytical Concepts, South-Western, 1978, p. 11)

Marketing management versus marketing strategy

The distinction between “strategic” and “managerial” marketing is used to distinguish "two phases having different goals and based on different conceptual tools. Strategic marketing concerns the choice of policies aiming at improving the competitive position of the firm, taking account of challenges and opportunities proposed by the competitive environment. On the other hand, managerial marketing is focused on the implementation of specific targets." Marketing strategy is about "lofty visions translated into less lofty and practical goals [while marketing management] is where we start to get our hands dirty and make plans for things to happen." Marketing strategy is sometimes called higher-order planning because it sets out the broad direction and provides guidance and structure for the marketing program.

Brief history of strategic marketing

Marketing scholars have suggested that strategic marketing arose in the late 1970s and its origins can be understood in terms of a distinct evolutionary path.

Budgeting Control (also known as scientific management)
Date: From late 19th century
Key Thinkers: Frederick Winslow Taylor, Frank and Lillian Gilbreth, Henry L. Gantt, Harrington Emerson
Key Ideas: Emphasis on quantification and scientific modelling; reduce work to the smallest possible units and assign work to specialists; exercise control through rigid managerial hierarchies; standardize inputs to reduce variation and defects and to control costs; use quantitative forecasting methods to predict any changes.
Long Range Planning
Date: From 1950s
Key Thinkers: Herbert A. Simon
Key Ideas: Managerial focus was to anticipate growth and manage operations in an increasingly complex business world.
Strategic Planning (also known as corporate planning)
Date: From the 1960s
Key Thinkers: Michael Porter
Key Ideas: Organizations must find the right fit within an industry structure; advantage derives from industry concentration and market power; firms should strive to achieve a monopoly or quasi-monopoly; successful firms should be able to erect barriers to entry.
Strategic Marketing Management (a business's overall game plan for reaching prospective consumers and turning them into customers of the products or services it provides)
Date: From late 1970s
Key Thinkers: R. Buzzell and B. Gale
Key Ideas: Each business is unique and there can be no formula for achieving competitive advantage; firms should adopt a flexible planning and review process that aims to cope with strategic surprises and rapidly developing threats; management's focus is on how to deliver superior customer value; highlights the key role of marketing as the link between customers and the organization.
Resource Based View (RBV) (also known as resource-advantage theory)
Date: From mid 1990s
Key Thinkers: Jay B. Barney, George S. Day, Gary Hamel, Shelby D. Hunt, G. Hooley and C.K. Prahalad
Key Ideas: The firm's resources are financial, legal, human, organizational, informational and relational; resources are heterogeneous and imperfectly mobile, management's key task is to understand and organize resources for sustainable competitive advantage.

Strategic marketing planning: An overview

The strategic gap

Marketing strategy involves mapping out the company's direction for the forthcoming planning period, whether that be three, five or ten years. It involves undertaking a 360° review of the firm and its operating environment with a view to identifying new business opportunities that the firm could potentially leverage for competitive advantage. Strategic planning may also reveal market threats that the firm may need to consider for long-term sustainability. Strategic planning makes no assumptions about the firm continuing to offer the same products to the same customers into the future. Instead, it is concerned with identifying the business opportunities that are likely to be successful and evaluates the firm's capacity to leverage such opportunities. It seeks to identify the strategic gap: the difference between where a firm is currently situated (the strategic reality or inadvertent strategy) and where it should be situated for sustainable, long-term growth (the strategic intent or deliberate strategy).

Strategic planning seeks to address three deceptively simple questions, specifically:

* Where are we now? (Situation analysis)
* What business should we be in? (Vision and mission)
* How should we get there? (Strategies, plans, goals, and objectives)

A fourth question may be added to the list, namely 'How will we know when we get there?' Due to the increasing need for accountability, many marketing organizations use a variety of marketing metrics to track strategic performance, allowing corrective action to be taken as required. On the surface, strategic planning seeks to address three simple questions; however, the research and analysis involved in strategic planning is very sophisticated and requires a great deal of skill and judgement.

Strategic analysis: tools and techniques

Strategic analysis is designed to address the first strategic question, "Where are we now?" Traditional market research is less useful for strategic marketing because the analyst is not seeking insights about customer attitudes and preferences. Instead, strategic analysts are seeking insights about the firm's operating environment with a view to identifying possible future scenarios, opportunities, and threats.

Strategic planning focuses on the 3Cs, namely Customer, Corporation, and Competitors. A detailed analysis of each factor is key to the success of strategy formulation. The 'competitors' element refers to an analysis of the strengths of the business relative to close rivals, and consideration of competitive threats that might impinge on the business's ability to move in certain directions. The 'customer' element refers to an analysis of any possible changes in customer preferences that potentially give rise to new business opportunities. The 'corporation' element refers to a detailed analysis of the company's internal capabilities and its readiness to leverage market-based opportunities or its vulnerability to external threats.

 

Commonly used analytical frameworks include:

  • The BCG Matrix – just one of many analytical techniques used by strategic analysts to evaluate the performance of the firm's current stable of brands
  • Perceptual mapping – assists analysts to evaluate the competitive performance of brands
  • The product evolutionary cycle – helps to envision future directions for product development
  • Porter's five forces – analyzes industry structure and competitive intensity

Mintzberg suggests that top planners spend most of their time engaged in analysis and are concerned with industry or competitive analyses as well as internal studies, including the use of computer models to analyze trends in the organization. Strategic planners use a variety of research tools and analytical techniques, depending on the complexity of the environment and the firm's goals. Fleisher and Bensoussan, for instance, have identified some 200 qualitative and quantitative analytical techniques regularly used by strategic analysts, while a recent publication suggests that 72 techniques are essential. No optimal technique can be identified as useful across all situations or problems. Determining which technique to use in any given situation rests with the skill of the analyst. The choice of tool depends on a variety of factors including data availability; the nature of the marketing problem; the objective or purpose; the analyst's skill level; and other constraints such as time or motivation.

The most commonly used tools and techniques include:

Research methods

Analytical techniques

Brief description of gap analysis

Gap analysis is a type of higher-order analysis that seeks to identify the difference between the organization's current strategy and its desired strategy. This difference is sometimes known as the strategic gap. Mintzberg identifies two types of strategy, namely deliberate strategy and inadvertent strategy. The deliberate strategy represents the firm's strategic intent or its desired path, while the inadvertent strategy represents the path that the firm may have followed as it adjusted to environmental, competitive and market changes. Other scholars use the terms realized strategy versus intended strategy to refer to the same concepts. This type of analysis indicates whether an organization has strayed from its desired path during the planning period. The presence of a large gap may indicate the organization has become "stuck in the middle"; a recipe for strategic mediocrity and potential failure.

Brief description of Category/Brand Development Index

The category/brand development index is a method used to assess the sales potential for a region or market and identify market segments that can be developed (i.e. high CDI and high BDI). In addition, it may be used to identify markets where the category or brand is underperforming and may signal underlying marketing problems such as poor distribution (i.e. high CDI and low BDI).

BDI and CDI are calculated as follows:

BDI = (Brand sales (%) in Market A / Population (%) in Market A) × 100
CDI = (Category sales (%) in Market A / Population (%) in Market A) × 100
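As an illustrative sketch with hypothetical figures, both indices follow the same sales-share-over-population-share formula above:

```python
def development_index(sales_pct: float, population_pct: float) -> float:
    """Development index: a market's share of sales relative to its share of
    population, scaled by 100. Feeding in brand sales gives the BDI; feeding
    in category sales gives the CDI."""
    return sales_pct * 100 / population_pct

# Hypothetical example: Market A holds 5% of the population,
# 8% of the brand's national sales, and 10% of the category's sales.
bdi = development_index(8.0, 5.0)   # 160.0 -> the brand over-performs here
cdi = development_index(10.0, 5.0)  # 200.0 -> strong category potential
```

A high CDI paired with a low BDI would then flag a market where the category thrives but the brand underperforms, as described above.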

Brief description of PEST analysis

PEST analysis: variables that may be considered in the environmental scan

Strategic planning typically begins with a scan of the business environment, both internal and external; this includes understanding strategic constraints. An understanding of the external operating environment, including political, economic, social and technological factors (the latter including demographic and cultural aspects), is necessary for the identification of business opportunities and threats. This analysis is called PEST, an acronym for Political, Economic, Social and Technological. A number of variants of the PEST analysis can be identified in the literature, including PESTLE analysis (Political, Economic, Social, Technological, Legal and Environmental); STEEPLE (adds ethics); STEEPLED (adds demographics); and STEER (adds regulatory).

The aim of the PEST analysis is to identify opportunities and threats in the wider operating environment. Firms try to leverage opportunities while buffering themselves against potential threats. In short, the PEST analysis guides strategic decision-making. The main elements of the PEST analysis are:

  • Political: political interventions with the potential to disrupt or enhance trading conditions, e.g. government statutes, policies, funding or subsidies, support for specific industries, trade agreements, tax rates, and fiscal policy.
  • Economic: economic factors with the potential to affect profitability and the prices that can be charged, such as economic trends, inflation, exchange rates, seasonality and economic cycles, consumer confidence, consumer purchasing power, and discretionary incomes.
  • Social: social factors that affect demand for products and services, consumer attitudes, tastes and preferences, such as demographics, social influencers, role models, and shopping habits.
  • Technological: innovation, technological developments or breakthroughs that create opportunities for new products, improved production processes or new ways of transacting business, e.g. new materials, new ingredients, new machinery, new packaging solutions, new software, and new intermediaries.

When carrying out a PEST analysis, planners and analysts may consider the operating environment at three levels, namely the supranational, the national, and the subnational or local level. As businesses become more globalized, they may need to pay greater attention to the supranational level.

Brief description of SWOT analysis

A SWOT analysis, with its four elements in a 2×2 matrix

In addition to the PEST analysis, firms carry out a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis. A SWOT analysis identifies:

  • Strengths: distinctive capabilities, competencies, skills or assets that provide a business or project with an advantage over potential rivals; internal factors that are favourable to achieving company objectives
  • Weaknesses: internal deficiencies that place the business or project at a disadvantage relative to rivals, or deficiencies that prevent an entity from moving in a new direction or acting on opportunities; internal factors that are unfavourable to achieving company objectives
  • Opportunities: elements in the environment that the business or project could exploit to its advantage; external factors of the organization including: new products, new markets, new demand, foreign market barriers, competitors' mistakes, etc.
  • Threats: elements in the environment that could erode the firm's market position; external factors that prevent or hinder an entity from moving in a desired direction or achieving its goals.

Typically the firm will attempt to leverage those opportunities that can be matched with internal strengths; that is to say the firm has a capability in any area where strengths are matched with external opportunities. It may need to build capability if it wishes to leverage opportunities in areas of weakness. An area of weakness that is matched with an external threat represents a vulnerability, and the firm may need to develop contingency plans.
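The matching logic described above can be sketched in code; the firm data here is entirely hypothetical:

```python
def match_swot(swot: dict) -> dict:
    """Pair internal with external factors, per the matching logic above:
    strength + opportunity = opportunity to leverage;
    weakness + opportunity = capability to build;
    weakness + threat = vulnerability needing a contingency plan."""
    return {
        "leverage": [(s, o) for s in swot["strengths"] for o in swot["opportunities"]],
        "build": [(w, o) for w in swot["weaknesses"] for o in swot["opportunities"]],
        "vulnerabilities": [(w, t) for w in swot["weaknesses"] for t in swot["threats"]],
    }

# Hypothetical firm:
swot = {
    "strengths": ["strong brand", "efficient logistics"],
    "weaknesses": ["limited online presence"],
    "opportunities": ["growing e-commerce demand", "new export market"],
    "threats": ["aggressive discount rival"],
}
result = match_swot(swot)
```

In practice an analyst would judge which pairings are meaningful rather than enumerate them all, but the cross-pairing shows where strengths, weaknesses, opportunities and threats intersect.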

Developing the vision and mission

The vision and mission address the second central question, 'Where are we going?' At the conclusion of the research and analysis stage, the firm will typically review its vision statement, mission statement and, if necessary, devise a new vision and mission for the outlook period. At this stage, the firm will also devise a generic competitive strategy as the basis for maintaining a sustainable competitive advantage for the forthcoming planning period.

A vision statement is a realistic, long-term future scenario for the organization. (Vision statements should not be confused with slogans or mottos.) It is a "clearly articulated statement of the business scope." A strong vision statement typically includes the following:

  • Competitive scope
  • Market scope
  • Geographic scope
  • Vertical scope

Some scholars point out that market visioning is a skill or competency that encapsulates the planners' capacity "to link advanced technologies to market opportunities of the future, and to do so through a shared understanding of a given product market."

A mission statement is a clear and concise statement of the organization's reason for being and its scope of operations, while the generic strategy outlines how the company intends to achieve both its vision and mission.

Mission statements should include detailed information and must be more than a simple motherhood statement. A mission statement typically includes the following:

  • Specification of target customers
  • Identification of principal products or services offered
  • Specification of the geographic scope of operations
  • Identification of core technologies and/or core capabilities
  • An outline of the firm's commitment to long-term survival, growth and profitability
  • An outline of the key elements in the company's philosophy and core values
  • Identification of the company's desired public image

Developing the generic competitive strategy

The generic competitive strategy outlines the fundamental basis for obtaining a sustainable competitive advantage within a category. Firms can normally trace their competitive position to one of three factors:

  • Superior skills (e.g. coordination of individual specialists, created through the interplay of investment in training and professional development, work and learning)
  • Superior resources (e.g. patents, trademark protection, specialized physical assets and relationships with suppliers and distribution infrastructure)
  • Superior position (the products or services offered, the market segments served, and the extent to which the product-market can be isolated from direct competition.)

It is essential that the internal analysis provide a frank and open evaluation of the firm's superiority in terms of skills, resources or market position since this will provide the basis for competing over the forthcoming planning period. For this reason, some companies engage external consultants, often advertising or marketing agencies, to provide an independent assessment of the firm's capabilities and resources.

Porter and the positioning school: approach to strategy formulation

Porter's Three Generic Strategies

In 1980, Michael Porter developed an approach to strategy formulation that proved to be extremely popular with both scholars and practitioners. The approach became known as the positioning school because of its emphasis on locating a defensible competitive position within an industry or sector. In this approach, strategy formulation consists of three key strands of thinking: analysis of the five forces to determine the sources of competitive advantage; the selection of one of three possible positions that leverage the advantage; and the value chain to implement the strategy. The strategic choices involve decisions about whether to compete for a share of the total market or for a specific target group (competitive scope) and whether to compete on costs or product differences (competitive advantage). This type of thinking leads to three generic strategies:

  • Cost leadership – the firm targets the mass market and attempts to be the lowest-cost producer in the market
  • Differentiation – the firm targets the mass market and tries to maintain unique points of product difference perceived as desirable by customers and for which they are prepared to pay premium prices
  • Focus – the firm does not compete head-to-head, but instead selects a narrow target market and focuses its efforts on satisfying the needs of that segment

According to Porter, these strategies are mutually exclusive and the firm must select one approach to the exclusion of all others. Firms that try to be all things to all people can present a confusing market position which ultimately leads to below-average returns. Any ambiguity about the firm's approach is a recipe for "strategic mediocrity" and any firm that tries to pursue two approaches simultaneously is said to be "stuck in the middle" and destined for failure.
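The two choices (scope and advantage) and the three resulting strategies can be sketched as a simple decision rule; the function name and string labels are my own illustration, not Porter's notation:

```python
def generic_strategy(scope: str, advantage: str) -> str:
    """Map Porter's two strategic choices to one of the three generic strategies.
    scope: 'broad' (total market) or 'narrow' (specific target group);
    advantage: 'cost' or 'differentiation'."""
    if scope == "narrow":
        return "focus"            # narrow target, either source of advantage
    if advantage == "cost":
        return "cost leadership"  # broad target, lowest-cost producer
    return "differentiation"      # broad target, unique product differences
```

Porter's "stuck in the middle" warning is precisely about firms that refuse to commit to a single cell of this mapping.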

Porter's approach was the dominant paradigm throughout the 1980s. However, the approach has attracted considerable criticism. One important criticism is that it is possible to identify successful companies that pursue a hybrid strategy – such as low-cost positions and differentiated positions simultaneously. Toyota is a classic example of this hybrid approach. Other scholars point to the simplistic nature of the analysis and the overly prescriptive nature of the strategic choices which limits strategies to just three options. Yet others point to research showing that many practitioners find the approach to be overly theoretical and not applicable to their business.

Resource-based view (RBV)

During the 1990s, the resource-based view (also known as the resource-advantage theory) of the firm became the dominant paradigm. It is an inter-disciplinary approach that represents a substantial shift in thinking. It focuses attention on an organization's internal resources as a means of organizing processes and obtaining a competitive advantage. The resource-based view suggests that organizations must develop unique, firm-specific core competencies that will allow them to outperform competitors by doing things differently and in a superior manner.

Barney stated that for resources to hold potential as sources of sustainable competitive advantage, they should be valuable, rare, and imperfectly imitable. A key insight arising from the resource-based view is that not all resources are of equal importance nor possess the potential to become a source of sustainable competitive advantage. The sustainability of any competitive advantage depends on the extent to which resources can be imitated or substituted. Barney and others point out that understanding the causal relationship between the sources of advantage and successful strategies can be very difficult in practice. Barney uses the term "causally ambiguous" which he describes as a situation when "the link between the resources controlled by the firm and the firm's sustained competitive advantage is not understood or understood only very imperfectly." Thus, a great deal of managerial effort must be invested in identifying, understanding, and classifying core competencies. In addition, management must invest in organizational learning to develop and maintain key resources and competencies.
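Barney's three screening criteria can be illustrated as a simple filter; the resource names and attribute values below are hypothetical examples, not a formal assessment method:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    valuable: bool
    rare: bool
    imperfectly_imitable: bool

def advantage_candidates(resources: list) -> list:
    """Keep only resources meeting all three of Barney's criteria for
    potential sources of sustainable competitive advantage."""
    return [r.name for r in resources
            if r.valuable and r.rare and r.imperfectly_imitable]

# Hypothetical resource portfolio:
portfolio = [
    Resource("brand reputation", valuable=True, rare=True, imperfectly_imitable=True),
    Resource("off-the-shelf ERP system", valuable=True, rare=False, imperfectly_imitable=False),
]
```

The point the filter makes concrete is that a valuable but common and easily copied resource (like packaged software) drops out, while a causally ambiguous resource such as brand reputation survives the screen.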

Market Based Resources include:

  • Organizational culture e.g. market orientation, research orientation, culture of innovation, etc.
  • Assets e.g. brands, marketing information systems, databases, etc.
  • Capabilities (or competencies) e.g. market sensing, marketing research, relationships, know-how, tacit knowledge, etc.


After more than two decades of advancements in marketing strategy and in the resource-based view paradigm, Cacciolatti & Lee (2016) proposed a novel resource-advantage theory based framework that builds on those organizational capabilities that are relevant to marketing strategy and shows how they affect firm performance. The capabilities-performance model proposed by Cacciolatti & Lee (2016) illustrates the mechanism whereby market orientation, strategic orientation, and organizational power moderate the capabilities-performance relationship. Such a logic of analysis was implicit in the original formulation of resource-advantage theory and, although it was taken into consideration by several scholars, it had never been articulated explicitly and tested empirically.

In the resource-based view, strategists select the strategy or competitive position that best exploits the internal resources and capabilities relative to external opportunities. Given that strategic resources represent a complex network of inter-related assets and capabilities, organizations can adopt many possible competitive positions. Although scholars debate the precise categories of competitive positions that are used, there is general agreement, within the literature, that the resource-based view is much more flexible than Porter's prescriptive approach to strategy formulation.

Hooley et al., suggest the following classification of competitive positions:

  • Price positioning
  • Quality positioning
  • Innovation positioning
  • Service positioning
  • Benefit positioning
  • Tailored positioning (one-to-one marketing)

Other approaches

The choice of competitive strategy often depends on a variety of factors, including the firm's market position relative to rival firms and the stage of the product life cycle. A well-established firm in a mature market will likely have a different strategy than a start-up.

Growth strategies

Growth of a business is critical for business success. A firm may grow by developing the market or by developing new products. The Ansoff product and market growth matrix illustrates the two broad dimensions for achieving growth. The Ansoff matrix identifies four specific growth strategies: market penetration, product development, market development and diversification.

The Ansoff Product/market Growth Matrix
Market penetration involves selling existing products to existing consumers. This is a conservative, low-risk approach since the product is already established in the market.
Product development is the introduction of a new product to existing customers. This can include modifications to an existing product that give it more appeal.
Market development involves selling existing products to new customers in order to identify and build a new clientele base. This can include new geographical markets, new distribution channels, and different pricing policies that bring the product price within the reach of new market segments.
Diversification is the riskiest area for a business. This is where a new product is sold to a new market. There are two types of diversification: horizontal and vertical. Horizontal diversification focuses more on products where the business is knowledgeable, whereas vertical diversification focuses more on introducing new products into new markets, where the business may have less knowledge of the new market.
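The four quadrants of the Ansoff matrix reduce to two yes/no dimensions (product newness and market newness), which can be sketched as:

```python
def ansoff_strategy(new_product: bool, new_market: bool) -> str:
    """Classify a growth move by the two Ansoff dimensions."""
    if not new_product and not new_market:
        return "market penetration"   # existing products, existing customers
    if new_product and not new_market:
        return "product development"  # new products, existing customers
    if not new_product and new_market:
        return "market development"   # existing products, new customers
    return "diversification"          # new products, new markets: highest risk
```

Reading the conditions top to bottom mirrors the matrix's usual ordering from the conservative quadrant to the riskiest one.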
Horizontal integration

A horizontal integration strategy may be indicated in fast-changing work environments as well as providing a broad knowledge base for the business and employees. A benefit of horizontal diversification is that it is an open platform for a business to expand and build away from the already existing market.

High levels of horizontal integration lead to high levels of communication within the business. Another benefit of using this strategy is that it leads to a larger market for merged businesses, and it is easier to build a good reputation for a business when using this strategy. A disadvantage of using a diversification strategy is that the benefits can take a while to appear, which could lead the business to believe that the strategy is ineffective. Another risk is that horizontal diversification has been shown to be harmful for stock value in some cases, whereas vertical diversification has had better effects.

A disadvantage of using the horizontal integration strategy is that it limits and restricts the business's field of interest. Horizontal integration can affect a business's reputation, especially after a merger has happened between two or more businesses. There are three main benefits to a business's reputation after a merger. A larger business enhances the reputation and increases the severity of the punishment for deviation. The pooling of information after a merger increases the knowledge of the business and the marketing area it is focused on. The last benefit is more opportunities for deviation in merged businesses than in independent businesses.

Vertical integration

Vertical integration is when a business is expanded through the vertical production line within one business. An example of a vertically integrated business is Apple, which owns its own software, hardware, designs and operating systems instead of relying on other businesses to supply these. A highly vertically integrated business creates different economies, therefore creating positive performance for the business. Vertical integration is seen as a business controlling the inputs of supplies and the outputs of products, as well as the distribution of the final product. Some benefits of using a vertical integration strategy are that costs may be reduced because of falling transaction costs, which include finding, selling, monitoring, contracting and negotiating with other firms. Also, by decreasing outside businesses' input, the firm increases the efficient use of inputs into the business. Another benefit of vertical integration is that it improves the exchange of information through the different stages of the production line. Some competitive advantages could include avoiding foreclosure, improving the business's marketing intelligence, and opening up opportunities to create different products for the market. Some disadvantages of using a vertical integration strategy include the internal costs for the business and the need for overhead costs. Also, if the business is not well organized and fully equipped and prepared, it will struggle using this strategy. There are competitive disadvantages as well, which include creating barriers for the business and losing access to information from suppliers and distributors.

Market position and strategy

In terms of market position, firms may be classified as market leaders, market challengers, market followers or market nichers.

Market position and strategic implications
Market leader: The market leader dominates the market by objective measure of market share. Their overall posture is defensive because they have more to lose. Their objectives are to reinforce their prominent position through the use of PR to develop corporate image and to block competitors brand for brand, matching distribution through tactics such as the use of “fighting” brands, pre-emptive strikes, use of regulation to block competitors and even to spread rumours about competitors. Market leaders may adopt unconventional or unexpected approaches to building growth and their tactical responses are likely to include: product proliferation; diversification; multi-branding; erecting barriers to entry; vertical and horizontal integration and corporate acquisitions.
Market challenger: The market challenger holds the second highest market share in the category, following closely behind the dominant player. Their market posture is generally offensive because they have less to lose and more to gain by taking risks. They will compete head to head with the market leader in an effort to grow market share. Their overall strategy is to gain market share through product, packaging and service innovations; new market development and redefinition of the product to broaden its scope and their position within it.
Market follower: Followers are generally content to play second fiddle. They rarely invest in R & D and tend to wait for market leaders to develop innovative products and subsequently adopt a “me-too” approach. Their market posture is typically neutral. Their strategy is to maintain their market position by maintaining existing customers and capturing a fair share of any new segments. They tend to maintain profits by controlling costs.
Market nicher: The market nicher occupies a small niche in the market in order to avoid head-to-head competition. Their objective is to build strong ties with the customer base and develop strong loyalty with existing customers. Their market posture is generally neutral. Their strategy is to develop and build the segment and protect it from erosion. Tactically, nichers are likely to improve the product or service offering, leverage cross-selling opportunities, offer value for money and build relationships through superior after-sales service, service quality and other related value-adding activities.

As the speed of change in the marketing environment quickens, time horizons are becoming shorter. Nevertheless, most firms carry out strategic planning every 3–5 years and treat the process as a means of checking whether the company is on track to achieve its vision and mission.[44] Ideally, strategies are both dynamic and interactive, partially planned and partially unplanned. Strategies are broad in their scope in order to enable a firm to react to unforeseen developments while trying to keep focused on a specific pathway. A key aspect of marketing strategy is to keep marketing consistent with a company's overarching mission statement.

Strategies often specify how to adjust the marketing mix; firms can use tools such as Marketing Mix Modeling to help them decide how to allocate scarce resources, as well as how to allocate funds across a portfolio of brands. In addition, firms can conduct analyses of performance, customer analysis, competitor analysis, and target market analysis.

Entry strategies

Marketing strategies may differ depending on the unique situation of the individual business. According to Lieberman and Montgomery, every entrant into a market – whether it is new or not – is classified as a Market Pioneer, Close Follower or Late Follower.

Pioneers

Market pioneers are known to often open a new market to consumers based on a major innovation. They emphasize these product developments, and in a significant number of cases, studies have shown that early entrants – or pioneers – into a market have serious market-share advantages over those who enter later. Pioneers have the first-mover advantage; to obtain it, a business must have at least one of three primary sources of advantage: Technological Leadership, Preemption of Assets or Buyer Switching Costs. Technological Leadership means gaining an advantage through either research and development or the "learning curve", letting a business use the research and development stage as a key selling point based on primary research into a new or developed product. Preemption of Assets means acquiring scarce assets within a certain market, allowing the first-mover to control existing assets rather than those created through new technology; this allows pre-existing information to be used, lowering the risk of first entering a new market. Buyer Switching Costs favour the first entrant: those who enter later must invest additional expenditure in order to encourage customers away from the early entrants. However, while Market Pioneers may have the "highest probability of engaging in product development" and benefit from switching costs, the first-mover advantage can be expensive to secure, since product innovation is more costly than product imitation. It has been found that while Pioneers in both consumer goods and industrial markets have gained "significant sales advantages", they incur larger disadvantages cost-wise.

Close followers

Being a Market Pioneer can, more often than not, attract entrepreneurs and/or investors depending on the benefits of the market. If there is upside potential and the ability to secure a stable market share, many businesses will start to follow in the footsteps of these pioneers; they are commonly known as Close Followers. These entrants can also be seen as challengers to the Market Pioneers and the Late Followers, because early followers are likely to invest significantly more in product research and development than later entrants. This allows them to find weaknesses in the products produced before, leading to improvements and expansion of the aforementioned product, which can in turn lead to customer preference, essential for market success. Because early followers enter and research the market later than Market Pioneers, they use different development strategies from those who entered at the beginning, and the same applies to Late Followers. A different strategy allows the followers to create their own unique selling point and perhaps target a different audience from that of the Market Pioneers. Early following into a market can often be encouraged by an established business whose product is "threatened or has industry-specific supporting assets".

Late Entrants

Those who follow after the Close Followers are known as the Late Entrants. While being a Late Entrant can seem daunting, there are some perks to being a latecomer. For example, Late Entrants are able to learn from those who are already in the market or have previously entered. Late Followers have the advantage of learning from their early competitors and improving on the benefits or reducing the total costs, which allows them to create a strategy that can mean gaining market share and, most importantly, staying in the market. In addition, markets evolve, leading to consumers wanting improvements and advancements on products, and Late Followers are well placed to catch the shifts in customer needs and wants. When bearing in mind customer preference, customer value has a significant influence; customer value means taking into account the investment of customers as well as the brand or product, and it is created through the "perceptions of benefits" and the "total cost of ownership". On the other hand, if the needs and wants of consumers have only slightly altered, Late Followers can have a cost advantage over early entrants through product imitation; however, if a business switches markets, the expense of changing markets can take this cost advantage away. Late entry into a market does not necessarily mean a disadvantage in market share; it depends on how the marketing mix is adopted and on the performance of the business. If the marketing mix is not used correctly – regardless of entry time – the business will gain little to no advantage, potentially missing out on a significant opportunity.

The differentiated strategy

When market segmentation reveals several potential targets, specific marketing mixes can be developed to appeal to most of the segments.

The customized target strategy

The requirements of individual customer markets are unique, and their purchases sufficient to make viable the design of a new marketing mix for each customer. If a company adopts this type of market strategy, a separate marketing mix is designed for each customer.

Developing marketing goals and objectives

Whereas the vision and mission provide the framework, the "goals define targets within the mission, which, when achieved, should move the organization toward the performance of that mission." Goals are broad primary outcomes, whereas objectives are measurable steps taken to achieve a goal or strategy. In strategic planning, it is important for managers to translate the overall strategy into goals and objectives. Goals are designed to inspire action and focus attention on specific desired outcomes. Objectives, on the other hand, are used to measure an organization's performance on specific dimensions, thereby providing the organization with feedback on how well it is achieving its goals and strategies.

Managers typically establish objectives using the balanced scorecard approach. This means that objectives do not include desired financial outcomes exclusively, but also specify measures of performance for customers (e.g. satisfaction, loyalty, repeat patronage), internal processes (e.g., employee satisfaction, productivity) and innovation and improvement activities.

After setting the goals, a marketing strategy or marketing plan should be developed. The marketing strategy plan provides an outline of the specific actions to be taken over time to achieve the objectives. Plans can be extended to cover many years, with sub-plans for each year. Plans usually involve monitoring to assess progress, and preparing for contingencies if problems arise. Simulations such as customer lifetime value models can be used to help marketers conduct "what-if" analyses, forecasting potential scenarios arising from possible actions and gauging how specific actions might affect variables such as revenue-per-customer and the churn rate.
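As an illustrative sketch only (not any specific vendor's model), a simple customer lifetime value calculation can be reduced to a discounted sum over retention periods; the function name and all parameter values below are invented for the example:

```python
def clv(revenue_per_period, margin, churn_rate, discount_rate=0.10, periods=100):
    """Expected discounted contribution margin over a customer's lifetime,
    assuming constant per-period revenue, margin and churn (a toy model)."""
    retention = 1.0 - churn_rate
    value = 0.0
    for t in range(periods):
        # Probability the customer is still active at period t is retention**t;
        # discount each period's margin back to present value.
        value += revenue_per_period * margin * retention ** t / (1 + discount_rate) ** t
    return value

# "What-if" analysis: how much is a five-point reduction in churn worth?
base = clv(100, 0.30, churn_rate=0.25)
improved = clv(100, 0.30, churn_rate=0.20)
```

Running the two scenarios side by side is exactly the kind of "what-if" comparison described above: the improved-retention scenario yields a higher lifetime value, quantifying the payoff of reducing churn.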

Strategy typologies

Developing competitive strategy requires significant judgement and is based on a deep understanding of the firm's current situation, its past history and its operating environment. No heuristics have yet been developed to assist strategists in choosing the optimal strategic direction. Nevertheless, some researchers and scholars have sought to classify broad groups of strategy approaches that might serve as frameworks for thinking about suitable choices.

Strategy types

In 2003, Raymond E. Miles and Charles C. Snow, based on an in-depth cross-industry study of a sample of large corporations, proposed a detailed scheme using four categories:

  • Prospectors: proactively seek to locate and exploit new market opportunities
  • Analyzers: are very innovative in their product-market choices; tend to follow prospectors into new markets; often introduce new or improved product designs. This type of organisation works in two types of market, one generally stable, one subject to more change
  • Defenders: are relatively cautious in their initiatives; seek to seal off a portion of the market which they can defend against competitive incursions; often market highest quality offerings and position as a quality leader
  • Reactors: tend to vacillate in their responses to environmental changes and are generally the least profitable organizations

Marketing warfare strategies

Marketing warfare strategies are competitor-centered strategies drawn from analogies with the field of military science. Warfare strategies were popular in the 1980s, but interest in this approach has waned in the new era of relationship marketing. An increased awareness of the distinctions between business and military cultures also raises questions about the extent to which this type of analogy is useful. In spite of its limitations, the typology of marketing warfare strategies is useful for predicting and understanding competitor responses.

In the 1980s, Kotler and Singh developed a typology of marketing warfare strategies:

  • Frontal attack: where an aggressor goes head to head for the same market segments on an offer by offer, price by price basis; normally used by a market challenger against a more dominant player
  • Flanking attack: attacking an organization on its weakest front; used by market challengers
  • Bypass attack: bypassing the market leader by attacking smaller, more vulnerable target organizations in order to broaden the aggressor's resource base
  • Encirclement attack: attacking a dominant player on all fronts
  • Guerilla warfare: sporadic, unexpected attacks using both conventional and unconventional means to attack a rival; normally practiced by smaller players against the market leader

Relationship between the marketing strategy and the marketing mix

Marketing strategy and marketing mix are related elements of a comprehensive marketing plan. While marketing strategy is concerned with setting the direction of a company or product/service line, the marketing mix is largely tactical in nature and is employed to carry out the overall marketing strategy. The 4Ps of the marketing mix (Price, Product, Place and Promotion) represent the tools that marketers can leverage while defining their marketing strategy to create a marketing plan.

Remote sensing

From Wikipedia, the free encyclopedia

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object, in contrast to in situ or on-site observation. The term is applied especially to acquiring information about the Earth and other planets. Remote sensing is used in numerous fields, including geography, land surveying and most Earth science disciplines (for example, hydrology, ecology, meteorology, oceanography, glaciology, geology); it also has military, intelligence, commercial, economic, planning, and humanitarian applications, among others.

In current usage, the term "remote sensing" generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth, including on the surface and in the atmosphere and oceans, based on propagated signals (e.g. electromagnetic radiation). It may be split into "active" remote sensing (when a signal is emitted by a satellite or aircraft to the object and its reflection is detected by the sensor) and "passive" remote sensing (when the reflection of sunlight is detected by the sensor).

Overview


Remote sensing can be divided into two types of methods: Passive remote sensing and Active remote sensing. Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object.
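The time-delay principle behind active sensing can be sketched in a few lines: since the emitted pulse travels to the target and back at the speed of light, range follows directly from the measured echo delay (the delay value below is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s):
    """One-way distance to a target from a round-trip echo delay,
    as measured by radar or lidar: divide by two because the pulse
    travels out and back."""
    return C * delay_s / 2.0

# An echo arriving about 6.67 microseconds after emission implies
# a target roughly one kilometre away.
distance_m = range_from_delay(6.67e-6)
```

Measuring how this delay (and the return's Doppler shift) changes over successive pulses is what lets active systems establish speed and direction as well as location.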

Illustration of remote sensing

Remote sensing makes it possible to collect data of dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial features in Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed.

Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which in conjunction with larger scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation, greenhouse gas monitoring, oil spill detection and monitoring, and national security and overhead, ground-based and stand-off collection on border areas.

Types of data acquisition techniques

The basis of multispectral collection and analysis is that examined areas or objects reflect or emit radiation that stands out from surrounding areas. For a summary of major remote sensing satellite systems see the overview table.

Applications of remote sensing

  • Conventional radar is mostly associated with aerial traffic control, early warning, and certain large-scale meteorological data. Doppler radar is used by local law enforcement to monitor speed limits and in enhanced meteorological collection such as wind speed and direction within weather systems, in addition to precipitation location and intensity. Other types of active collection include measurement of plasmas in the ionosphere. Interferometric synthetic aperture radar is used to produce precise digital elevation models of large-scale terrain (see RADARSAT, TerraSAR-X, Magellan).
  • Laser and radar altimeters on satellites have provided a wide range of data. By measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so. By measuring the height and wavelength of ocean waves, the altimeters measure wind speeds and direction, and surface ocean currents and directions.
  • Ultrasound (acoustic) and radar tide gauges measure sea level, tides and wave direction at coastal and offshore locations.
  • Light detection and ranging (LIDAR) is well known from examples such as weapon ranging and laser-illuminated homing of projectiles. LIDAR is used to detect and measure the concentration of various chemicals in the atmosphere, while airborne LIDAR can be used to measure the heights of objects and features on the ground more accurately than radar technology. Vegetation remote sensing is a principal application of LIDAR.
  • Radiometers and photometers are the most common instrument in use, collecting reflected and emitted radiation in a wide range of frequencies. The most common are visible and infrared sensors, followed by microwave, gamma-ray, and rarely, ultraviolet. They may also be used to detect the emission spectra of various chemicals, providing data on chemical concentrations in the atmosphere.
  • Radiometers are also used at night, because artificial light emissions are a key signature of human activity. Applications include remote sensing of population, GDP, and damage to infrastructure from war or disasters.
  • Radiometers and radar onboard satellites can be used to monitor volcanic eruptions.
  • Spectropolarimetric Imaging has been reported to be useful for target tracking purposes by researchers at the U.S. Army Research Laboratory. They determined that manmade items possess polarimetric signatures that are not found in natural objects. These conclusions were drawn from the imaging of military trucks, like the Humvee, and trailers with their acousto-optic tunable filter dual hyperspectral and spectropolarimetric VNIR Spectropolarimetric Imager.
  • Stereographic pairs of aerial photographs have often been used to make topographic maps by imagery and terrain analysts in trafficability and highway departments for potential routes, in addition to modelling terrestrial habitat features.
  • Simultaneous multi-spectral platforms such as Landsat have been in use since the 1970s. These thematic mappers take images in multiple wavelengths of electromagnetic radiation (multi-spectral) and are usually found on Earth observation satellites, including (for example) the Landsat program or the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, detect or monitor land usage, detect invasive vegetation, deforestation, and examine the health of indigenous plants and crops (satellite crop monitoring), including entire farming regions or forests. Prominent scientists using remote sensing for this purpose include Janet Franklin and Ruth DeFries. Landsat images are used by regulatory agencies such as KYDOW to indicate water quality parameters including Secchi depth, chlorophyll density, and total phosphorus content. Weather satellites are used in meteorology and climatology.
  • Hyperspectral imaging produces an image where each pixel has full spectral information with imaging narrow spectral bands over a contiguous spectral range. Hyperspectral imagers are used in various applications including mineralogy, biology, defence, and environmental measurements.
  • Within the scope of the combat against desertification, remote sensing allows researchers to follow up and monitor risk areas in the long term, to determine desertification factors, to support decision-makers in defining relevant measures of environmental management, and to assess their impacts.
  • Remote sensing has been used to detect rare plants to aid in conservation efforts. Prediction, detection, and the ability to record biophysical conditions were possible from medium to very high resolutions.

Geodetic

  • Geodetic remote sensing can be gravimetric or geometric. Overhead gravity data collection was first used in aerial submarine detection. This data revealed minute perturbations in the Earth's gravitational field that may be used to determine changes in the mass distribution of the Earth, which in turn may be used for geophysical studies, as in GRACE. Geometric remote sensing includes position and deformation imaging using InSAR, LIDAR, etc.

Acoustic and near-acoustic

  • Sonar: passive sonar, listening for the sound made by another object (a vessel, a whale etc.); active sonar, emitting pulses of sounds and listening for echoes, used for detecting, ranging and measurements of underwater objects and terrain.
  • Seismograms taken at different locations can locate and measure earthquakes (after they occur) by comparing the relative intensity and precise timings.
  • Ultrasound: ultrasound sensors emit high-frequency pulses and listen for echoes; used for detecting water waves and water level, as in tide gauges or in towing tanks.

To coordinate a series of large-scale observations, most sensing systems depend on the following: platform location and the orientation of the sensor. High-end instruments now often use positional information from satellite navigation systems. The rotation and orientation are often provided within a degree or two with electronic compasses. Compasses can measure not just azimuth (i. e. degrees to magnetic north), but also altitude (degrees above the horizon), since the magnetic field curves into the Earth at different angles at different latitudes. More exact orientations require gyroscopic-aided orientation, periodically realigned by different methods including navigation from stars or known benchmarks.

Data characteristics

The quality of remote sensing data consists of its spatial, spectral, radiometric and temporal resolutions.

Spatial resolution
The size of a pixel that is recorded in a raster image – typically pixels may correspond to square areas ranging in side length from 1 to 1,000 metres (3.3 to 3,280.8 ft).
Spectral resolution
The wavelength of the different frequency bands recorded – usually, this is related to the number of frequency bands recorded by the platform. Current Landsat collection is that of seven bands, including several in the infrared spectrum, ranging from a spectral resolution of 0.7 to 2.1 μm. The Hyperion sensor on Earth Observing-1 resolves 220 bands from 0.4 to 2.5 μm, with a spectral resolution of 0.10 to 0.11 μm per band.
Radiometric resolution
The number of different intensities of radiation the sensor is able to distinguish. Typically, this ranges from 8 to 14 bits, corresponding to 256 levels of the gray scale and up to 16,384 intensities or "shades" of colour, in each band. It also depends on the instrument noise.
Temporal resolution
The frequency of flyovers by the satellite or plane; relevant only in time-series studies or those requiring an averaged or mosaic image, as in deforestation monitoring. Temporal resolution was first exploited by the intelligence community, where repeated coverage revealed changes in infrastructure, the deployment of units or the modification/introduction of equipment. Cloud cover over a given area or object makes it necessary to repeat the collection of said location.
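The bits-to-levels relationship mentioned under radiometric resolution is simply a power of two; a one-line helper makes the arithmetic explicit:

```python
def quantization_levels(bits):
    """Number of distinct intensity values an n-bit sensor can record."""
    return 2 ** bits

# 8-bit data gives 256 grey levels; 14-bit data gives 16,384.
levels_8bit = quantization_levels(8)
levels_14bit = quantization_levels(14)
```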

Data processing

In order to create sensor-based maps, most remote sensing systems need to extrapolate sensor data in relation to a reference point, including distances between known points on the ground. This depends on the type of sensor used. For example, in conventional photographs, distances are accurate in the center of the image, with the distortion of measurements increasing the farther you get from the center. Another factor is that the platen against which the film is pressed can cause severe errors when photographs are used to measure ground distances. The step in which this problem is resolved is called georeferencing and involves computer-aided matching of points in the image (typically 30 or more points per image) which is extrapolated with the use of an established benchmark, "warping" the image to produce accurate spatial data. As of the early 1990s, most satellite images are sold fully georeferenced.
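The "warping" step can be sketched in its simplest form: a six-parameter affine transform solved exactly from three control points. Production georeferencing fits many more points (30 or more, as noted above) by least squares; the control-point coordinates here are invented for the example:

```python
def affine_from_control_points(img_pts, map_pts):
    """Solve X = a*x + b*y + c, Y = d*x + e*y + f so that three image
    (pixel) control points land on their known map coordinates.
    Uses Cramer's rule on the exact three-point system."""
    (x1, y1), (x2, y2), (x3, y3) = img_pts
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    def solve(v1, v2, v3):
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c
    a, b, c = solve(*[mx for mx, _ in map_pts])
    d, e, f = solve(*[my for _, my in map_pts])
    return a, b, c, d, e, f

def warp(params, x, y):
    """Map an image (pixel) coordinate into georeferenced map coordinates."""
    a, b, c, d, e, f = params
    return a * x + b * y + c, d * x + e * y + f

# Three matched points: pixel (0,0)->(10,20), (1,0)->(12,20), (0,1)->(10,22)
params = affine_from_control_points([(0, 0), (1, 0), (0, 1)],
                                    [(10, 20), (12, 20), (10, 22)])
```

With the transform solved, every pixel can be mapped to ground coordinates; real systems add higher-order polynomial or rubber-sheet terms to handle the lens and platen distortions described above.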

In addition, images may need to be radiometrically and atmospherically corrected.

Radiometric correction
Allows avoidance of radiometric errors and distortions. The illumination of objects on the Earth's surface is uneven because of different properties of the relief. This factor is taken into account in the method of radiometric distortion correction. Radiometric correction gives a scale to the pixel values, e. g. the monochromatic scale of 0 to 255 will be converted to actual radiance values.
Topographic correction (also called terrain correction)
In rugged mountains, the effective illumination of pixels varies considerably as a result of terrain. In a remote sensing image, a pixel on a shady slope receives weak illumination and has a low radiance value, while a pixel on a sunny slope receives strong illumination and has a high radiance value; for the same object, the pixel radiance value on the shady slope will differ from that on the sunny slope. Additionally, different objects may have similar radiance values. These ambiguities seriously affect the accuracy of information extraction from remote sensing images in mountainous areas and have become a main obstacle to their further application. The purpose of topographic correction is to eliminate this effect, recovering the true reflectivity or radiance of objects in horizontal conditions. It is the premise of quantitative remote sensing application.
Atmospheric correction
Elimination of atmospheric haze by rescaling each frequency band so that its minimum value (usually realised in water bodies) corresponds to a pixel value of 0. The digitizing of data also makes it possible to manipulate the data by changing gray-scale values.

Interpretation is the critical process of making sense of the data. The first application was aerial photographic collection, which used the following process: spatial measurement through the use of a light table in both conventional single and stereographic coverage; added skills such as the use of photogrammetry; the use of photomosaics and repeat coverage; and making use of objects' known dimensions in order to detect modifications. Image analysis is the more recently developed, automated, computer-aided application that is in increasing use.

Object-Based Image Analysis (OBIA) is a sub-discipline of GIScience devoted to partitioning remote sensing (RS) imagery into meaningful image-objects, and assessing their characteristics through spatial, spectral and temporal scale.

Old data from remote sensing is often valuable because it may provide the only long-term data for a large extent of geography. At the same time, the data is often complex to interpret, and bulky to store. Modern systems tend to store the data digitally, often with lossless compression. The difficulty with this approach is that the data is fragile, the format may be archaic, and the data may be easy to falsify. One of the best systems for archiving data series is as computer-generated machine-readable ultrafiche, usually in typefonts such as OCR-B, or as digitized half-tone images. Ultrafiches survive well in standard libraries, with lifetimes of several centuries. They can be created, copied, filed and retrieved by automated systems. They are about as compact as archival magnetic media, and yet can be read by human beings with minimal, standardized equipment.

Generally speaking, remote sensing works on the principle of the inverse problem: while the object or phenomenon of interest (the state) may not be directly measured, there exists some other variable that can be detected and measured (the observation) which may be related to the object of interest through a calculation. The common analogy given to describe this is trying to determine the type of animal from its footprints. For example, while it is impossible to directly measure temperatures in the upper atmosphere, it is possible to measure the spectral emissions from a known chemical species (such as carbon dioxide) in that region. The frequency of the emissions may then be related via thermodynamics to the temperature in that region.
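The footprint analogy can be made concrete: given a monotonic forward model from state to observation, the state is recovered by inverting the model numerically. The forward model below (total blackbody emission via the Stefan-Boltzmann law) is a deliberately simplified stand-in for real radiative-transfer physics:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def forward_model(temperature_k):
    """Toy forward model: blackbody emitted flux as a function of
    temperature -- the 'observation' produced by a given 'state'."""
    return SIGMA * temperature_k ** 4

def invert(observed_flux, lo=1.0, hi=1000.0, tol=1e-6):
    """Solve the inverse problem by bisection, relying on the forward
    model being monotonic in the state variable."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if forward_model(mid) < observed_flux:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Recover a temperature of ~288 K from its simulated observation.
recovered = invert(forward_model(288.0))
```

Operational retrievals (e.g. atmospheric temperature from CO2 emission spectra) follow the same pattern with far more elaborate forward models, and typically regularize the inversion because it is ill-posed.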

Data processing levels

To facilitate the discussion of data processing in practice, several processing "levels" were first defined in 1986 by NASA as part of its Earth Observing System and steadily adopted since then, both internally at NASA and elsewhere; these definitions are:

  • Level 0: Reconstructed, unprocessed instrument and payload data at full resolution, with any and all communications artifacts (e. g., synchronization frames, communications headers, duplicate data) removed.
  • Level 1a: Reconstructed, unprocessed instrument data at full resolution, time-referenced, and annotated with ancillary information, including radiometric and geometric calibration coefficients and georeferencing parameters (e. g., platform ephemeris) computed and appended but not applied to the Level 0 data (or, if applied, in a manner such that Level 0 is fully recoverable from Level 1a data).
  • Level 1b: Level 1a data that have been processed to sensor units (e. g., radar backscatter cross section, brightness temperature, etc.); not all instruments have Level 1b data; Level 0 data is not recoverable from Level 1b data.
  • Level 2: Derived geophysical variables (e. g., ocean wave height, soil moisture, ice concentration) at the same resolution and location as the Level 1 source data.
  • Level 3: Variables mapped on uniform space-time grid scales, usually with some completeness and consistency (e. g., missing points interpolated, complete regions mosaicked together from multiple orbits, etc.).
  • Level 4: Model output or results from analyses of lower-level data (i. e., variables that were not measured by the instruments but are instead derived from those measurements).

A Level 1 data record is the most fundamental (i. e., highest reversible level) data record that has significant scientific utility, and is the foundation upon which all subsequent data sets are produced. Level 2 is the first level that is directly usable for most scientific applications; its value is much greater than the lower levels. Level 2 data sets tend to be less voluminous than Level 1 data because they have been reduced temporally, spatially, or spectrally. Level 3 data sets are generally smaller than lower level data sets and thus can be dealt with without incurring a great deal of data handling overhead. These data tend to be generally more useful for many applications. The regular spatial and temporal organization of Level 3 datasets makes it feasible to readily combine data from different sources.

While these processing levels are particularly suitable for typical satellite data processing pipelines, other data level vocabularies have been defined and may be appropriate for more heterogeneous workflows.
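The level hierarchy described above can be summarized as a simple lookup table. The sketch below is purely illustrative (the names `PROCESSING_LEVELS` and `is_reversible` are not from any standard library); it encodes each level's description along with the key property of whether Level 0 data remain recoverable:

```python
# Illustrative encoding of the satellite data processing levels.
# "level0_recoverable" marks whether the original Level 0 record can
# still be reconstructed from data at that level.
PROCESSING_LEVELS = {
    "0":  {"description": "Reconstructed, unprocessed instrument data at full resolution",
           "level0_recoverable": True},
    "1a": {"description": "Level 0 data with calibration/georeferencing appended but not applied",
           "level0_recoverable": True},
    "1b": {"description": "Data processed to sensor units (e.g., radar backscatter cross section)",
           "level0_recoverable": False},
    "2":  {"description": "Derived geophysical variables at Level 1 resolution and location",
           "level0_recoverable": False},
    "3":  {"description": "Variables mapped onto uniform space-time grids",
           "level0_recoverable": False},
    "4":  {"description": "Model output or variables derived from lower-level data",
           "level0_recoverable": False},
}

def is_reversible(level: str) -> bool:
    """Return True if Level 0 data can still be reconstructed from this level."""
    return PROCESSING_LEVELS[level]["level0_recoverable"]
```

For example, `is_reversible("1a")` is true while `is_reversible("1b")` is false, reflecting the boundary at which raw instrument data cease to be recoverable.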

History

The TR-1 reconnaissance/surveillance aircraft
 
The 2001 Mars Odyssey used spectrometers and imagers to hunt for evidence of past or present water and volcanic activity on Mars.

The modern discipline of remote sensing arose with the development of flight. The balloonist G. Tournachon (alias Nadar) made photographs of Paris from his balloon in 1858. Messenger pigeons, kites, rockets and unmanned balloons were also used for early images. With the exception of balloons, these first, individual images were not particularly useful for map making or for scientific purposes.

Systematic aerial photography was developed for military surveillance and reconnaissance purposes beginning in World War I and reached a climax during the Cold War with the use of modified combat aircraft such as the P-51, P-38, RB-66 and the F-4C, or specifically designed collection platforms such as the U2/TR-1, SR-71, A-5 and the OV-1 series, in both overhead and stand-off collection. A more recent development is that of increasingly smaller sensor pods, such as those used by law enforcement and the military, on both manned and unmanned platforms; the advantage of this approach is that it requires minimal modification to a given airframe. Later imaging technologies would include infrared, conventional, Doppler and synthetic aperture radar.

The development of artificial satellites in the latter half of the 20th century allowed remote sensing to progress to a global scale by the end of the Cold War. Instrumentation aboard various Earth-observing and weather satellites such as Landsat and Nimbus, and more recent missions such as RADARSAT and UARS, provided global measurements of various data for civil, research, and military purposes. Space probes to other planets have also provided the opportunity to conduct remote sensing studies in extraterrestrial environments: synthetic aperture radar aboard the Magellan spacecraft provided detailed topographic maps of Venus, while instruments aboard SOHO allowed studies to be performed on the Sun and the solar wind, to name a few examples.

More recent developments include the image processing of satellite imagery, beginning in the 1960s and 1970s. Several research groups in Silicon Valley, including NASA Ames Research Center, GTE, and ESL Inc., developed Fourier transform techniques leading to the first notable enhancement of imagery data. In 1999 the first commercial satellite collecting very high resolution imagery, IKONOS, was launched.

Training and education

Remote sensing has a growing relevance in the modern information society. It represents a key technology within the aerospace industry and bears increasing economic relevance: new sensors such as TerraSAR-X and RapidEye are constantly being developed, and the demand for skilled labour is increasing steadily. Furthermore, remote sensing increasingly influences everyday life, ranging from weather forecasts to reports on climate change or natural disasters. As an example, 80% of German students use the services of Google Earth; in 2006 alone the software was downloaded 100 million times. Yet studies have shown that only a fraction of these users understand the data they are working with; a large knowledge gap exists between the application and the understanding of satellite images. Remote sensing plays only a tangential role in schools, regardless of political claims to strengthen support for teaching on the subject. Much of the computer software explicitly developed for school lessons has not yet been adopted, due to its complexity. As a result, the subject is either not integrated into the curriculum at all or does not progress beyond the interpretation of analogue images. In fact, the subject of remote sensing requires a grounding in physics and mathematics, as well as competences in media and methods, beyond the mere visual interpretation of satellite images.

Many teachers have great interest in the subject of remote sensing and are motivated to integrate it into their teaching, provided the curriculum accommodates it. In many cases, this encouragement fails because of confusing information. In order to integrate remote sensing in a sustainable manner, organizations such as the EGU and Digital Earth encourage the development of learning modules and learning portals. Examples include FIS – Remote Sensing in School Lessons, Geospektiv, Ychange, and Spatial Discovery, which promote media and method qualifications as well as independent learning.

Software

Remote sensing data are processed and analyzed with computer software, known as a remote sensing application. A large number of proprietary and open source applications exist to process remote sensing data. Remote sensing software packages include:

Open source remote sensing software includes:

According to NOAA-sponsored research by Global Marketing Insights, Inc., the most used applications among Asian academic groups involved in remote sensing are as follows: ERDAS 36% (ERDAS IMAGINE 25% and ERMapper 11%); ESRI 30%; ITT Visual Information Solutions ENVI 17%; MapInfo 17%.

Among Western academic respondents, the results were as follows: ESRI 39%, ERDAS IMAGINE 27%, MapInfo 9%, and AutoDesk 7%.

In education, those who want to go beyond simply looking at satellite image print-outs either use general remote sensing software (e.g. QGIS), Google Earth, StoryMaps, or software/web apps developed specifically for education (e.g. desktop: LeoWorks; online: BLIF).
