Friday, April 24, 2026

Green computing

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Green_computing

Green computing, green IT (information technology), or Information and Communication Technology Sustainability, is the study and practice of environmentally sustainable computing or IT.

The goals of green computing include optimising energy efficiency during the product's lifecycle; leveraging greener energy sources to power the product and its network; improving the reusability, maintainability, and repairability of the product to extend its lifecycle; improving the recyclability or biodegradability of e-waste to support circular economy ambitions; and aligning the manufacture and use of IT systems with environmental and social goals. Green computing is important for all classes of systems, ranging from handheld systems to large-scale data centers.

According to the International Energy Agency, data centres accounted for about 1.5% of global electricity consumption in 2024 (~415 TWh), and under its central scenario, demand could roughly double to ~945 TWh by 2030, with AI workloads a major driver of growth.

Sustainable development is a concept that redefines the notion of the general interest by integrating environmental, social, and economic considerations. Many corporate IT departments have green computing initiatives to reduce the environmental effect of their IT operations. Yet it is also clear that the environmental footprint of the sector is significant, estimated at 5–9% of the world's total electricity use and more than 2% of all emissions. To stay competitive, data centers and telecommunications networks will need to become more energy efficient, reuse waste energy, use more renewable energy sources, and use less water for cooling. In the European Union, policy efforts and industry initiatives aim for climate-neutral data centers by 2030.

The carbon emissions associated with manufacturing devices and network infrastructures are also a key factor.

Green computing can involve complex trade-offs. It can be useful to distinguish between IT for environmental sustainability and the environmental sustainability of IT. Although green IT focuses on the environmental sustainability of IT, in practice these two aspects are often interconnected. For example, launching an online shopping platform may increase the carbon footprint of a company's own IT operations, while at the same time helping customers to purchase products remotely, without requiring them to drive, in turn reducing greenhouse gas emissions related to travel. The company might be able to take credit for these decarbonisation benefits under its Scope 3 emissions reporting, which includes emissions from across the entire value chain.

Origins

Energy Star logo

In 1992, the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program designed to promote and recognize energy efficiency in monitors, climate control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics. Concurrently, the Swedish organization TCO Development launched the TCO Certified program to promote low magnetic and electrical emissions from CRT-based computer displays; this program was later expanded to include criteria on energy consumption, ergonomics, and the use of hazardous materials in construction.

Regulations and industry initiatives

In 2009 the Organisation for Economic Co-operation and Development (OECD) published a survey of over 90 government and industry initiatives on "Green ICTs" (Information and Communication Technologies), the environment, and climate change. The report concluded that initiatives tended to concentrate on greening ICTs themselves rather than on deploying ICTs to reduce global warming and environmental degradation. In general, only 20% of initiatives had measurable targets, with government programs tending to include targets more frequently than business associations.

Government

Many governmental agencies have continued to implement standards and regulations that encourage green computing. The Energy Star program was revised in October 2006 to include stricter efficiency requirements for computer equipment, along with a tiered ranking system for approved products.

By 2008, 26 US states had established statewide recycling programs for obsolete computers and consumer electronics equipment. The statutes either impose an "advance recovery fee" for each unit sold at retail or require the manufacturers to reclaim the equipment at disposal.

In 2009, the American Recovery and Reinvestment Act (ARRA) was signed into law by President Obama. The bill allocated over $90 billion to be invested in green initiatives (renewable energy, smart grids, energy efficiency, etc.). In January 2010, the U.S. Energy Department granted $47 million of the ARRA money to projects to improve the energy efficiency of data centers. The projects funded research to optimize data center hardware and software, improve power supply chains, and advance data center cooling technologies.

Green digital governance

Green digital governance refers to the use of information and communication technology (ICT) to support environmentally sustainable policies and practices. It describes a strategy with which an organisation strives to align its information and communications technology with sustainability goals. This can include using digital tools and platforms to monitor and regulate environmental impact, as well as promoting the development and use of clean and renewable energy sources in the technology sector. The goal of green digital governance is to reduce the carbon footprint of the digital economy and to support the transition to a more sustainable and resilient society.

Both the green and the digital transitions are on the agenda for most European countries, as well as the EU as a whole. Documents and goals such as the European Green Deal, the Sustainable Development Goals, the Fit for 55 package, Digital Europe, and others have set these transitions in motion. The two transitions often contradict each other, as digital technologies have substantial environmental footprints that run counter to the targets of the green transition.

The European Union sees digitalisation and the adoption of ICT (Information and Communications Technology) solutions as an important tool for creating greener solutions, while also acknowledging that, to achieve the desired positive environmental impact, the tools themselves must be environmentally sustainable. The green transition may accelerate innovation and adoption of digital solutions, offering the ICT sector new opportunities to become more competitive. The synergy between the green transition and digitalisation brings social, economic, and environmental benefits, which is a goal of environmentally friendly digital government and of green ICT solutions in general.

The digital component is expected to also be used to reach the ambitions of the European Green Deal and Sustainable Development Goals. As powerful enablers for the sustainability transition, digital solutions can advance the circular economy, support the decarbonisation of all sectors and reduce the environmental and social footprint of products placed on the EU market. For example, key sectors such as precision agriculture, transport and energy can benefit from digital solutions in pursuing the sustainability objectives of the European Green Deal.

E-government services can provide solutions to the environmental problem. Enabling a citizen to fully request and receive a service online would yield reductions in carbon emissions and paper consumption, in addition to cost savings for public authorities and increased citizen satisfaction.

Industry

  • iMasons Climate Accord (ICA) – Founded in 2022, the ICA is a cooperative of companies committed to reducing carbon in digital infrastructure materials, products, and power.
  • Climate Savers Computing Initiative (CSCI) is an effort to reduce the electric power consumption of PCs in active and inactive states. The CSCI provides a catalog of green products from its member organizations, and information for reducing PC power consumption. It was started on June 12, 2007. The name stems from the World Wildlife Fund's Climate Savers program, which began in 1999; the WWF is a member of the initiative.
  • The Green Electronics Council offers the Electronic Product Environmental Assessment Tool (EPEAT) to assist in the purchase of "greener" computing systems. The Council evaluates computing equipment on 51 criteria (23 required and 28 optional) that measure a product's efficiency and sustainability attributes. Products are rated Gold, Silver, or Bronze, depending on how many optional criteria they meet. On January 24, 2007, President George W. Bush issued Executive Order 13423, which requires all United States Federal agencies to use EPEAT when purchasing computer systems.
  • The Green Grid is a global consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems. It was founded in February 2007 by several key companies in the industry – AMD, APC, Dell, HP, IBM, Intel, Microsoft, Rackable Systems, SprayCool (purchased in 2010 by Parker), Sun Microsystems and VMware. The Green Grid has since grown to hundreds of members, including end-users and government organizations focused on improving data center infrastructure efficiency (DCIE).
  • The Green500 list rates supercomputers by energy efficiency (megaflops/watt), encouraging a focus on efficiency rather than absolute performance.
  • Green Comm Challenge is an organization that promotes the development of energy conservation technology and practices in the field of ICT.
  • The Transaction Processing Performance Council (TPC) Energy specification augments existing TPC benchmarks by allowing optional publications of energy metrics alongside performance results.
  • SPECpower is the first industry standard benchmark that measures power consumption in relation to performance for server-class computers. Other benchmarks which measure energy efficiency include SPECweb, SPECvirt, and VMmark.

Approaches

Modern IT systems rely on a complicated mix of people, networks, and hardware; as such, a green computing initiative ideally covers these areas. A solution may also need to address end user satisfaction, management restructuring, regulatory compliance, and return on investment (ROI). There are also fiscal motivations for companies to take control of their own power consumption; "of the power management tools available, one of the most powerful may still be simple, plain, common sense."

Product longevity

Gartner maintains that the PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. In 2011, Fujitsu released a life-cycle assessment (LCA) of a desktop showing that manufacturing and end of life account for the majority of the desktop's ecological footprint. Therefore, the biggest contribution to green computing usually is to prolong the equipment's lifetime. A recent life-cycle assessment comparing a desktop and a laptop of similar performance over a four-year use case found total carbon footprints of 679.1 kg CO2e for the desktop versus 286.1 kg CO2e for the laptop; for both systems, manufacturing was the largest contributor, followed by the use phase.

Another report from Gartner recommends looking for "product longevity, including upgradability and modularity." For instance, manufacturing a new PC makes a far bigger ecological footprint than manufacturing a new RAM module to upgrade an existing one.

Data center design

Data center facilities are heavy consumers of energy, accounting for between 1.1% and 1.5% of the world's total energy use in 2010. The U.S. Department of Energy estimates that data center facilities consume 100 to 200 times as much energy as standard office buildings.

Energy efficient data center design should address all of the energy use aspects included in a data center: from the IT equipment to the HVAC (Heating, ventilation and air conditioning) equipment to the actual location, configuration and construction of the building.

The U.S. Department of Energy specifies five primary areas on which to focus energy efficient data center design best practices:

  • Information technology (IT) systems
  • Environmental conditions
  • Air management
  • Cooling systems
  • Electrical systems

Additional energy efficient design opportunities specified by the U.S. Department of Energy include on-site electrical generation and recycling of waste heat.

Energy efficient data center design should help to better use a data center's space, and increase performance and efficiency.
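
Two standard metrics quantify how well a design meets these goals: power usage effectiveness (PUE), defined by The Green Grid as total facility power divided by IT equipment power, and its reciprocal, data center infrastructure efficiency (DCiE). A minimal sketch in Python, using assumed example wattages rather than measurements from any real facility:

    # PUE and DCiE for an assumed example facility (illustrative numbers).
    total_facility_kw = 1500   # IT load plus cooling, power delivery, lighting
    it_equipment_kw = 1000     # servers, storage, and network gear only

    pue = total_facility_kw / it_equipment_kw    # 1.0 would be the ideal
    dcie = it_equipment_kw / total_facility_kw   # fraction of power reaching IT

    print(f"PUE:  {pue:.2f}")    # 1.50
    print(f"DCiE: {dcie:.0%}")   # 67%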

Software and deployment optimization

Algorithmic efficiency

The efficiency of algorithms affects the amount of computer resources required for any given computing function, and there are many efficiency trade-offs in writing programs. Algorithm changes, such as switching from a slow (e.g. linear) search algorithm to a fast (e.g. hashed or indexed) search algorithm, can reduce resource usage for a given task from substantial to close to zero. In 2009, a study by a physicist at Harvard estimated that the average Google search released 7 grams of carbon dioxide (CO2). However, Google disputed this figure, arguing that a typical search produced only 0.2 grams of CO2.
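
To make the search-algorithm example concrete, the sketch below (a Python illustration with arbitrary data sizes, not a benchmark of any real workload) contrasts a linear scan of a list with a hash-based set lookup; the work done per query, and with it the energy spent, drops by orders of magnitude.

    import time

    records = list(range(2_000_000))
    queries = [1_999_999] * 100          # worst case for the linear scan

    # Linear search: each query scans the whole list, O(n) comparisons.
    start = time.perf_counter()
    for q in queries:
        _ = q in records
    linear_s = time.perf_counter() - start

    # Hashed search: each query is an O(1) expected hash-table probe.
    indexed = set(records)
    start = time.perf_counter()
    for q in queries:
        _ = q in indexed
    hashed_s = time.perf_counter() - start

    print(f"linear: {linear_s:.3f} s, hashed: {hashed_s:.6f} s")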

Similarly, the environmental footprint of distributed computing is heavily dependent on the algorithmic efficiency of its underlying consensus mechanisms. Mathematical consumption models evaluating Sybil attack resistance schemes indicate that ledger architectures utilizing directed acyclic graphs (DAG) to achieve consensus via virtual voting present lower energy consumption per transaction when compared to traditional proof-of-work systems and standard proof-of-stake blockchains.

Resource allocation

Algorithms can also be used to route data to data centers where electricity is less expensive. Researchers from MIT, Carnegie Mellon University, and Akamai have tested an energy allocation algorithm that routes traffic to the location with the lowest energy costs. The researchers project up to 40 percent savings on energy costs if their proposed algorithm were to be deployed. However, this approach does not actually reduce the amount of energy being used; it reduces only the cost to the company using it. Nonetheless, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centers experiencing warm weather; this allows computers to be shut down to avoid using air conditioning.
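
The mechanism can be sketched in a few lines. The site names, electricity prices, and carbon intensities below are hypothetical placeholders (none of them come from the study); routing by price captures the cost-saving variant, and swapping the sort key for grid carbon intensity gives the greener variant described above.

    # Hypothetical sites with assumed prices and grid carbon intensities.
    sites = {
        "site-a": {"usd_per_kwh": 0.11, "gco2_per_kwh": 450},
        "site-b": {"usd_per_kwh": 0.07, "gco2_per_kwh": 30},
        "site-c": {"usd_per_kwh": 0.13, "gco2_per_kwh": 250},
    }

    def cheapest_site(sites):
        """Cost-only routing, as in the study described above."""
        return min(sites, key=lambda s: sites[s]["usd_per_kwh"])

    def greenest_site(sites):
        """Same mechanism, but minimizing grid carbon intensity instead."""
        return min(sites, key=lambda s: sites[s]["gco2_per_kwh"])

    print(cheapest_site(sites))   # site-b
    print(greenest_site(sites))   # site-b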

Larger server centers are sometimes located where energy and land are inexpensive and readily available. Local availability of renewable energy, climate that allows outside air to be used for cooling, or locating them where the heat they produce may be used for other purposes could be factors in green siting decisions.

Approaches to reducing the energy consumption of network devices through proper network and device management techniques have been surveyed by Bianzino et al. The authors grouped the approaches into four main strategies: (i) Adaptive Link Rate (ALR), (ii) Interface Proxying, (iii) Energy Aware Infrastructure, and (iv) Energy Aware Applications.

Virtualizing

Computer virtualization refers to the abstraction of computer resources, such as the process of running two or more logical computer systems on one set of physical hardware. The concept originated with the IBM mainframe operating systems of the 1960s, and was commercialized for x86-compatible computers, and other computer systems, in the 1990s. With virtualization, a system administrator can combine several formerly physical systems as virtual machines on one powerful system, thereby conserving resources by removing need for some of the original hardware and reducing power and cooling consumption. Virtualization can assist in distributing work so that servers are either busy or put in a low-power sleep state. Several commercial companies and open-source projects now offer software packages to enable a transition to virtual computing. Intel Corporation and AMD have also built proprietary virtualization enhancements to the x86 instruction set into each of their CPU product lines, in order to facilitate virtual computing.

New virtualization technologies, such as operating-system-level virtualization, can also be used to reduce energy consumption. These technologies make more efficient use of resources, reducing energy consumption by design. Consolidation with operating-system-level virtualization is also more efficient than consolidation with full virtual machines, so more services can be deployed on the same physical machine, reducing the amount of hardware needed.

Terminal servers

Terminal servers have also been used in green computing. When using the system, users at a terminal connect to a central server; all of the actual computing is done on the server, but the end user experiences the system operating as if it were on the terminal. These can be combined with thin clients, which use up to 1/8 the amount of energy of a normal workstation, resulting in a decrease of energy costs and consumption. There has been an increase in using terminal services with thin clients to create virtual labs. Examples of terminal server software include Terminal Services for Windows and the Linux Terminal Server Project (LTSP) for the Linux operating system. Software-based remote desktop clients such as Windows Remote Desktop and RealVNC can provide similar thin-client functions when run on low power hardware that connects to a server.

Data compression

Data compression, which involves using fewer bits to encode information, may also be used in green computing, depending on the structure of the data. Because its effectiveness is highly data-specific, compression may in some cases use more energy or resources than it saves. However, choosing a well-suited compression algorithm for the dataset can yield greater power efficiency and reduce network and storage requirements. There is a tradeoff between compression ratio and energy consumption; whether compression is worthwhile depends on the dataset's compressibility. Compression improves energy efficiency for data with a compression ratio (compressed size divided by original size) well below roughly 0.3, and hurts for data with higher ratios.
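
One hedged way to act on that threshold is to probe a sample of the data before committing to compress all of it. The sketch below uses zlib and the roughly-0.3 ratio quoted above; the sample size and compression level are arbitrary illustrative choices, not tuned recommendations.

    import os
    import zlib

    def worth_compressing(data: bytes, threshold: float = 0.3,
                          sample_size: int = 64 * 1024) -> bool:
        """Compress a sample; recommend compression only if it shrinks enough."""
        sample = data[:sample_size]
        ratio = len(zlib.compress(sample, level=6)) / len(sample)
        return ratio < threshold

    text = b"green computing " * 50_000   # repetitive, ratio far below 0.3
    noise = os.urandom(800_000)           # random bytes, ratio near 1.0
    print(worth_compressing(text))        # True
    print(worth_compressing(noise))       # False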

Power management

The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, in which case most components (including the CPU and the system RAM) are turned off. ACPI is a successor to an earlier Intel-Microsoft standard called Advanced Power Management, which allowed a computer's BIOS to control power management functions.

Some programs allow the user to manually adjust the voltages supplied to the CPU, which reduces both the amount of heat produced and the electricity consumed. This process is called undervolting. Some CPUs can automatically undervolt the processor depending on the workload; this technology is called "SpeedStep" on Intel processors, "PowerNow!"/"Cool'n'Quiet" on AMD chips, "LongHaul" on VIA CPUs, and "LongRun" on Transmeta processors.
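
Undervolting itself is done with vendor-specific tools, but the closely related dynamic frequency scaling performed by SpeedStep-style technologies is exposed on Linux through the cpufreq sysfs interface. A read-only sketch, assuming a Linux system with the cpufreq driver present:

    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    def read(name: str) -> str:
        return (CPUFREQ / name).read_text().strip()

    if CPUFREQ.exists():
        print("governor:   ", read("scaling_governor"))   # e.g. powersave
        print("current kHz:", read("scaling_cur_freq"))   # current frequency
        print("max kHz:    ", read("scaling_max_freq"))   # scaling ceiling
    else:
        print("cpufreq interface not available on this system")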

Data center power

Data centers, which have been criticized for their extraordinarily high energy demand, are a primary focus for proponents of green computing. According to a Greenpeace study, data centers represent 21% of the electricity consumed by the IT sector, which is about 382 billion kWh a year.

Data centers can potentially improve their energy and space efficiency through techniques such as storage consolidation and virtualization. Many organizations are aiming to eliminate underused servers, resulting in lower energy usage. The U.S. federal government set a minimum 10% reduction target for data center energy usage by 2011. With the aid of a self-styled ultra-efficient evaporative cooling technology, Google Inc. claims to have reduced its energy consumption to 50% of the industry average.

Cryptocurrency mining, particularly for proof-of-work currencies like Bitcoin, also uses significant amounts of energy globally. Advocates have argued that cryptocurrency can help to drive investment in green energy.

Operating system support

Microsoft Windows has included limited PC power management features since Windows 95. These initially provided for stand-by (suspend-to-RAM) and a monitor low power state. Further iterations of Windows added hibernate (suspend-to-disk) and support for the ACPI standard. Windows 2000 was the first NT-based operating system to include power management. This required major changes to the underlying operating system architecture and a new hardware driver model. Windows 2000 also introduced Group Policy, a technology that allowed administrators to centrally configure most Windows features. However, power management was not one of those features. This is probably because the power management settings design relied upon a connected set of per-user and per-machine binary registry values, effectively leaving it up to each user to configure their own power management settings.

This approach, which is not compatible with Windows Group Policy, was repeated in Windows XP. The reasons for this design decision by Microsoft are not known, and it has resulted in heavy criticism. Microsoft significantly improved this in Windows Vista by redesigning the power management system to allow basic configuration by Group Policy. The support offered is limited to a single per-computer policy. Windows 7 retains these limitations but includes refinements for timer coalescing, processor power management, and display panel brightness. The most significant change in Windows 7 is in the user experience. The prominence of the default High Performance power plan has been reduced with the aim of encouraging users to save power.

Third-party PC power management software adds features beyond those built into the Windows operating system. Most products offer Active Directory integration and per-user/per-machine settings, with the more advanced offering multiple power plans, scheduled power plans, anti-insomnia features, and enterprise power usage reporting.

Linux systems started to provide laptop-optimized power-management in 2005, with power-management options being mainstream since 2009.

Power supply

Desktop computer power supplies are in general 70–75% efficient, dissipating the remaining energy as heat. A certification program called 80 Plus certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
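
Those efficiency percentages translate directly into watts drawn at the wall. The short calculation below assumes an illustrative 300 W DC load and compares a 75% efficient unit with an 80 Plus one.

    dc_load_w = 300                       # assumed power the components draw

    for efficiency in (0.75, 0.80):
        wall_w = dc_load_w / efficiency   # power drawn from the outlet
        waste_w = wall_w - dc_load_w      # dissipated as heat in the PSU
        print(f"{efficiency:.0%} efficient: {wall_w:.0f} W at the wall, "
              f"{waste_w:.0f} W wasted as heat")
    # 75% efficient: 400 W at the wall, 100 W wasted as heat
    # 80% efficient: 375 W at the wall, 75 W wasted as heat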

Storage

Smaller form factor (e.g., 2.5 inch) hard disk drives often consume less power per gigabyte than physically larger drives. Unlike hard disk drives, solid-state drives store data in flash memory or DRAM. With no moving parts, power consumption may be reduced somewhat for low-capacity flash-based devices.

As hard drive prices have fallen, storage farms have tended to increase in capacity to make more data available online. This includes archival and backup data that would formerly have been saved on tape or other offline storage. The increase in online storage has increased power consumption. Reducing the power consumed by large storage arrays, while still providing the benefits of online storage, is a subject of ongoing research.

Video card

A fast GPU may be the largest power consumer in a computer.

Energy-efficient display options include:

  • No video card – use a shared terminal, shared thin client, or desktop sharing software if display is required.
  • Use motherboard video output – typically low 3D performance and low power.
  • Select a GPU based on low idle power, average wattage, or performance per watt.

Display

Unlike other display technologies, electronic paper does not use any power while displaying an image. CRT monitors typically use more power than LCD monitors. They also contain significant amounts of lead. LCD monitors typically use a cold-cathode fluorescent bulb to provide light for the display. Most newer displays use an array of light-emitting diodes (LEDs) in place of the fluorescent bulb, which further reduces the amount of electricity used by the display. Fluorescent back-lights also contain mercury, whereas LED back-lights do not.

A light-on-dark color scheme, also called dark mode, is a color scheme that requires less energy to display on newer display technologies such as OLED. This positively impacts battery life and energy consumption. While an OLED will consume around 40% of the power of an LCD when displaying an image that is primarily black, it can use more than three times as much power to display an image with a white background, such as a document or web site. This can lead to reduced battery life and increased energy use unless a light-on-dark color scheme is used. A 2018 article in Popular Science suggests that "Dark mode is easier on the eyes and battery" and that displaying white at full brightness uses roughly six times as much power as pure black on a Google Pixel, which has an OLED display. Apple's iOS 13 and iPadOS 13 both feature a light-on-dark mode, which allows third-party developers to implement their own dark themes. Google's Android 10 features a system-level dark mode.
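
A toy model makes the OLED figures concrete: if panel power is assumed to scale linearly with the fraction of the screen that is lit white, the roughly six-fold white-versus-black difference quoted above implies large savings for mostly-dark pages. The absolute wattages here are invented placeholders, not measurements of any device.

    P_BLACK_W = 0.3               # assumed panel power on an all-black screen
    P_WHITE_W = 6 * P_BLACK_W     # ~6x, per the figure quoted above

    def oled_panel_power(white_fraction: float) -> float:
        """Linear interpolation between all-black and all-white power."""
        return P_BLACK_W + white_fraction * (P_WHITE_W - P_BLACK_W)

    print(f"dark-mode page (10% white):  {oled_panel_power(0.10):.2f} W")
    print(f"light-mode page (90% white): {oled_panel_power(0.90):.2f} W")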

Materials recycling

Recycling computing equipment can keep harmful materials such as lead, mercury, and hexavalent chromium out of landfills, and can replace equipment that otherwise would need to be manufactured, saving further energy and emissions. Computer systems that have outlived their original function can be re-purposed, or donated to various charities and non-profit organizations. However, many charities have recently imposed minimum system requirements for donated equipment. Additionally, parts from outdated systems may be salvaged and recycled through certain retail outlets and municipal or private recycling centers. Computing supplies, such as printer cartridges, paper, and batteries may be recycled as well.

A drawback to many of these schemes is that computers gathered through recycling drives are often shipped to developing countries where environmental standards are less strict than in North America and Europe. The Silicon Valley Toxics Coalition has estimated that 80% of the post-consumer e-waste collected for recycling is shipped abroad to countries such as China and India.

In 2011, the collection rate of e-waste remained low, even in the most ecology-conscious countries such as France. In the U.S., the amount of e-waste collected annually between 2006 and 2009 was about 14% of the electronic equipment sold.

The recycling of old computers raises a privacy issue. The old storage devices still hold private information, such as emails, passwords, and credit card numbers, which can be recovered simply by using software available freely on the Internet. Deletion of a file does not actually remove the file from the hard drive. Before recycling a computer, users should remove the hard drive, or hard drives if there is more than one, and physically destroy it or store it somewhere safe. There are some authorized hardware recycling companies to whom the computer may be given for recycling, and they typically sign a non-disclosure agreement.

Cloud computing

Cloud computing may help to address two major ICT challenges related to green computing: energy usage and embodied carbon. Hyperscale data centers, such as those operated by AWS, Azure, and GCP, benefit from economies of scale, while virtualization, dynamic provisioning, multi-tenancy, and green data center approaches can enable more efficient resource allocation. Organizations may be able to reduce their direct energy consumption and carbon emissions by up to 30% and 90% respectively by moving certain on-premises applications into the public cloud.

However, critics point to shortcomings in the carbon tracking and management tools provided by major cloud providers. GreenOps, also known as DevGreenOps, DevSusOps or DevSustainableOps, is emerging as a framework to include sustainability into cloud management. Carbon-aware computing and grid-aware computing can form part of a GreenOps approach. This includes techniques like demand shifting, which means moving computational workloads to locations or times of day with cleaner energy in the grid. Demand shaping is a similar technique, which focuses on adjusting workloads according to the amount of clean energy currently available.
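
Demand shifting can be sketched in a few lines: given an hourly forecast of grid carbon intensity, a deferrable batch job is scheduled into the cleanest hour. The forecast values here are hypothetical; a real GreenOps pipeline would pull them from a grid-data service.

    # Hypothetical forecast: hour of day -> grid carbon intensity (gCO2/kWh).
    forecast = {0: 320, 3: 280, 6: 210, 9: 140, 12: 90, 15: 110, 18: 260, 21: 340}

    def cleanest_hour(forecast: dict) -> int:
        """Pick the start hour with the lowest forecast carbon intensity."""
        return min(forecast, key=forecast.get)

    hour = cleanest_hour(forecast)
    print(f"run deferrable job at hour {hour} ({forecast[hour]} gCO2/kWh)")
    # run deferrable job at hour 12 (90 gCO2/kWh)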

Edge computing

New technologies such as edge and fog computing offer a way to reduce energy consumption. These technologies allow computation to be redistributed close to where it is used, reducing energy costs in the network. Furthermore, with smaller data centers, the energy used in operations such as cooling and maintenance is reduced.

Remote work

Remote work using teleconference and telepresence technologies is often implemented in green computing initiatives. The advantages include increased worker satisfaction, reduction of greenhouse gas emissions related to travel, and increased profit margins as a result of lower overhead costs for office space, heat, lighting, etc. The average annual energy consumption for U.S. office buildings is over 23 kilowatt hours per square foot, with heat, air conditioning, and lighting accounting for 70% of all energy consumed. Other related initiatives, such as hoteling, reduce the square footage per employee as workers reserve space only when needed. Many types of jobs, such as sales, consulting, and field service, integrate well with this technique.

Voice over IP (VoIP) reduces the telephony wiring infrastructure by sharing the existing Ethernet copper. VoIP and phone extension mobility have also made hot desking more practical. Wi-Fi consumes roughly a quarter to a tenth of the energy of 4G.

Telecommunication network devices energy indices

In 2013, ICT energy consumption in the US and worldwide was estimated at 9.4% and 5.3% of total electricity produced, respectively. The energy consumption of ICT is today significant even compared with other industries. Some studies have tried to identify the key energy indices that allow a relevant comparison between different devices (network elements). This analysis focused on how to optimise device and network energy consumption for carrier telecommunications in its own right. The target was to allow an immediate perception of the relationship between network technology and environmental effect. These studies are at an early stage, and further research will be necessary.

Supercomputers

The Green500 list was first announced on November 15, 2007, at SC|07. As a complement to the TOP500, the listing of the Green500 began a new era where supercomputers can be compared by performance-per-watt. As of 2019, two Japanese supercomputers topped the Green500 energy efficiency ranking with performance exceeding 16 GFLOPS/watt, and two IBM AC922 systems followed with performance exceeding 15 GFLOPS/watt.

Education and certification

Green computing programs

Degree and postgraduate programs provide training in a range of information technology concentrations, along with sustainable strategies, to educate students on how to build and maintain systems while reducing their harm to the environment. The Australian National University (ANU) offers "ICT Sustainability" as part of its information technology and engineering masters programs. Athabasca University offers a similar course, "Green ICT Strategies", adapted from the ANU course notes by Tom Worthington. In the UK, Leeds Beckett University offers an MSc Sustainable Computing program in both full- and part-time access modes.

Green computing certifications

Some certifications demonstrate that an individual has specific green computing knowledge, including:

  • Green Computing Initiative – GCI offers the Certified Green Computing User Specialist (CGCUS), Certified Green Computing Architect (CGCA) and Certified Green Computing Professional (CGCP) certifications.
  • Information Systems Examination Board (ISEB) Foundation Certificate in Green IT is appropriate for showing an overall understanding and awareness of green computing and where its implementation can be beneficial.
  • Singapore Infocomm Technology Federation (SiTF) Singapore Certified Green IT Professional is an industry endorsed professional level certification offered with SiTF authorized training partners. Certification requires completion of a four-day instructor-led core course, plus a one-day elective from an authorized vendor.
  • Australian Computer Society (ACS) The ACS offers a certificate for "Green Technology Strategies" as part of the Computer Professional Education Program (CPEP). Award of a certificate requires completion of a 12-week e-learning course designed by Tom Worthington, with written assignments.

Ratings

Since 2010, Greenpeace has maintained a list of ratings of prominent technology companies in several countries based on how clean the energy used by that company is, ranging from A (the best) to F (the worst).

ICT and energy demand

Digitalization has brought additional energy consumption; energy-increasing effects have been greater than energy-reducing effects. Four effects that increase energy consumption are:

  1. Direct effect – Strong increases of (technical) energy efficiency in ICT are countered by the growth of the sector.
  2. Efficiency and rebound effects – Rebound effects are high for ICT and increased productivity often leads to new behaviors that are more energy intensive.
  3. Economic growth – Positive effect of digitalization on economic growth.
  4. Sectoral change – Growth of ICT services tends not to replace, but come on top of existing services.

Web browser

From Wikipedia, the free encyclopedia
A web browser (Safari) displaying the Wikipedia web page

A web browser, often abbreviated as browser, is an application for accessing websites. When a user requests a web page from a particular website, the browser retrieves its files from a web server and then displays the page on the user's screen. Browsers can also display content stored locally on the user's device.

Browsers are used on a range of devices, including desktops, laptops, tablets, smartphones, smartwatches, smart televisions and consoles. As of 2026, the most used browsers worldwide are Google Chrome (~69% market share), Safari (~16%), Edge (~5%), Firefox (~2%), Samsung Internet (~2%), and Opera (~2%). As of 2023, an estimated 5.4 billion people had used a browser.

Function

The purpose of a web browser is to fetch content and display it on the user's device. This process begins when the user inputs a Uniform Resource Locator (URL), such as https://en.wikipedia.org/, into the browser's address bar. Virtually all URLs on the Web start with either http: or https: which means they are retrieved with the Hypertext Transfer Protocol (HTTP). For secure mode (HTTPS), the connection between the browser and web server is encrypted, providing a secure and private data transfer. For this reason, a web browser is often referred to as an HTTP client or a user agent. Requisite materials, including text, style sheets, images, and other types of multimedia, are downloaded from the server. Once the materials have been downloaded, the web browser's engine (also known as a layout engine or rendering engine) is responsible for converting those resources into an interactive visual representation of the page on the user's device. Modern web browsers also contain separate JavaScript engines which enable more complex interactive applications inside the browser. A web browser that does not render a graphical user interface is known as a headless browser.
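
The fetch step can be shown in miniature with Python's standard library: an HTTPS request that identifies the client through its User-Agent header and retrieves the kind of resource a rendering engine would then parse. The User-Agent string is an invented example value.

    import urllib.request

    req = urllib.request.Request(
        "https://en.wikipedia.org/",
        headers={"User-Agent": "example-browser/0.1"},  # the "user agent"
    )
    with urllib.request.urlopen(req) as resp:
        html = resp.read()   # bytes a rendering engine would turn into a page
        print(resp.status, resp.headers.get("Content-Type"))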

Web pages usually contain hyperlinks to other pages and resources. Each link contains a URL, and when it is clicked or tapped, the browser navigates to the new resource. Most browsers use an internal cache of web page resources to improve loading times for subsequent visits to the same page. The cache can store many items, such as large images, so they do not need to be downloaded from the server again. Cached items are usually only stored for as long as the web server stipulates in its HTTP response messages.
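
That freshness rule can be sketched directly: keep a cached response only as long as the server's Cache-Control max-age directive allows. Real browser caches also honor headers such as ETag, Expires, and Vary; this illustrative sketch handles max-age alone.

    import re
    import time

    cache = {}   # url -> (expiry timestamp, body)

    def store(url, body, cache_control):
        """Cache the body for max-age seconds, if the server allows it."""
        m = re.search(r"max-age=(\d+)", cache_control)
        if m:
            cache[url] = (time.time() + int(m.group(1)), body)

    def lookup(url):
        """Return the cached body if still fresh, else evict and miss."""
        entry = cache.get(url)
        if entry and time.time() < entry[0]:
            return entry[1]
        cache.pop(url, None)
        return None

    store("https://example.org/logo.png", b"...", "public, max-age=3600")
    print(lookup("https://example.org/logo.png") is not None)   # True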

A web browser is not the same thing as a search engine, though the two are often confused. A search engine is a website that provides links to other websites and allows users to search for specific resources using a textual query. However, web browsers are often used to access search engines, and most modern browsers allow users to access a default search engine directly by typing a query into the address bar.

History

The first web browser, called WorldWideWeb, was created in 1990 by Sir Tim Berners-Lee. He then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. The Mosaic web browser was released in April 1993, and was later credited as the first web browser to find mainstream popularity. Its innovative graphical user interface made the World Wide Web easy to navigate and thus more accessible to the average person. This, in turn, sparked the Internet boom of the 1990s, when the Web grew at a very rapid rate. The lead developers of Mosaic then founded the Netscape corporation, which released the Mosaic-influenced Netscape Navigator in 1994. Navigator quickly became the most popular browser.

Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. Within a few years, Microsoft gained a dominant position in the browser market for two reasons: it bundled Internet Explorer with its popular Windows operating system and did so as freeware with no restrictions on usage. The market share of Internet Explorer peaked at over 95% in the early 2000s. In 1998, Netscape launched what would become the Mozilla Foundation to create a new browser using the open-source software model. This work evolved into the Firefox browser, first released by Mozilla in 2004. Firefox's market share peaked at 32% in 2010. Apple released its Safari browser in 2003; it remains the dominant browser on Apple devices, though it did not become popular elsewhere.

Google debuted its Chrome browser in 2008, which steadily took market share from Internet Explorer and became the most popular browser in 2012. Chrome has remained dominant ever since. In 2015, Microsoft replaced Internet Explorer with Edge (Legacy) for the Windows 10 release. In 2020, this legacy version was replaced by a new Chromium-based version of Edge.

Since the early 2000s, browsers have greatly expanded their HTML, CSS, JavaScript, and multimedia capabilities. One reason has been to enable more sophisticated websites, such as web apps. Another factor is the significant increase of broadband connectivity in many parts of the world, enabling people to access data-intensive content, such as streaming HD video on YouTube, that was not possible during the era of dial-up modems.

Starting in the mid-2020s, browsers with integrated artificial intelligence (AI) capabilities, known as AI browsers, have become increasingly common. This includes both new entrants to the browser market, such as Perplexity Comet and ChatGPT Atlas, and established browsers that added AI features, such as Chrome with the Gemini chatbot and Edge with the Copilot chatbot.

Features

The most popular browsers share many features. They automatically log users' browsing history, unless the users turn off their browsing history or use the non-logging private mode. They also allow users to set bookmarks, customize the browser with extensions, and manage their downloads and passwords. Some provide a sync service and web accessibility features.

Traditional browser arrangement has user interface features above page content.

Common user interface (UI) features:

  • Allowing the user to have multiple pages open at the same time, either in different windows or in different tabs of the same window.
  • Back and forward buttons to go back to the previous page visited or forward to the next one.
  • A refresh or reload and a stop button to reload and cancel loading the current page. (In most browsers, the stop button is merged with the reload button.)
  • A home button to return to the start page.
  • An address bar to input the URL of a page and display it, and a search bar to input queries into a search engine. (In most browsers, the search bar is merged with the address bar.)

While mobile browsers have similar UI features as desktop versions, the limitations of the often-smaller touch screens require mobile UIs to be simpler. The difference is significant for users accustomed to keyboard shortcuts. Responsive web design is used to create websites that offer a consistent experience across the desktop and mobile versions of the website and across varying screen sizes. The most popular desktop browsers also have sophisticated web development tools.

Access to some web content — particularly streaming services like Netflix, Disney+, and Spotify — is restricted by Digital Rights Management (DRM) software. A web browser is able to access DRM-restricted content through the use of a Content Decryption Module (CDM) such as Widevine. As of 2020, the CDMs used by dominant web browsers require browser providers to pay costly license fees, making it unfeasible for most independent open-source browsers to offer access to DRM-restricted content.

Browser market

Google Chrome has been the dominant browser since the mid-2010s and currently has a 69% global market share on all devices. The vast majority of its source code comes from Google's open-source Chromium project; this code is also the basis for many other browsers, including Microsoft Edge, currently in third place with about a 5% share, as well as Samsung Internet and Opera in fifth and sixth places respectively with approximately 2% market share each.

The other two browsers in the top four are made from different codebases. Safari, based on Apple's WebKit code, is the second most popular web browser and is dominant on Apple devices, resulting in a 16% global share. Firefox, in fourth place with about 2% market share, is based on Mozilla's code. Both of these codebases are open-source, so a number of small niche browsers are also made from them.


Market share by type of device

Prior to late 2016, the majority of web traffic came from desktop computers. However, since then, mobile devices (smartphones) have represented the majority of web traffic.[40] As of February 2025, mobile devices represent a 62% share of Internet traffic, followed by desktop at 36% and tablet at 2%.[41]

Security

Web browsers are popular targets for hackers, who exploit security holes to steal information, destroy files, and partake in other malicious activities. Browser vendors regularly patch these security holes, so users are strongly encouraged to keep their browser software updated. Other protection measures are antivirus software and being aware of scams.

Privacy

During the course of browsing, cookies received from various websites are stored by the browser. Some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods of time, so browsers typically provide a section in the menu for deleting cookies.

Some browsers have more proactive protection against cookies and trackers, limiting their functionality and ability to track user behaviour. Finer-grained management of cookies usually requires a browser extension. Telemetry data is collected by most popular web browsers, although users can usually opt out.

A 2020 study identified two tiers of browsers in terms of privacy: the privacy-focused ones (Brave, DuckDuckGo, and Firefox Focus) perform better than the popular ones (Chrome, Firefox, and Safari), and the study recommends the former. Blocking of fingerprinting, cookies, tracking scripts, ads, and the like seems to explain the difference.

Cloud computing

From Wikipedia, the free encyclopedia
Cloud computing metaphor: the group of networked elements providing services does not need to be addressed or managed individually by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is defined by the International Organization for Standardization (ISO) as "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on demand". It is commonly referred to as "the cloud".

Characteristics

In 2011, the National Institute of Standards and Technology (NIST) identified five "essential characteristics" for cloud systems. Below are the exact definitions according to NIST:

  • On-demand self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
  • Broad network access: "Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations)."
  • Resource pooling: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."
  • Rapid elasticity: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time."
  • Measured service: "Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service."

By 2023, the International Organization for Standardization (ISO) had expanded and refined the list.

History

The history of cloud computing extends to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This period saw broad experimentation with making large-scale computing power more accessible through time-sharing, while optimizing infrastructure, platforms, and applications to improve efficiency for end users.

The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom. The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.

In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (EC2) were released. In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.

The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.

Value proposition

Cloud computing can shorten time to market by offering pre-configured tools, scalable resources, and managed services, allowing users to focus on core business value rather than maintaining infrastructure. Cloud platforms can enable organizations and individuals to reduce upfront capital expenditures on physical infrastructure by shifting to an operational expenditure model, where costs scale with usage. Cloud platforms also offer managed services and tools, such as artificial intelligence, data analytics, and machine learning, which might otherwise require significant in-house expertise and infrastructure investment.

While cloud computing can offer cost advantages through effective resource optimization, organizations often face challenges such as unused resources, inefficient configurations, and hidden costs without proper oversight and governance. Many cloud platforms provide cost management tools, such as AWS Cost Explorer and Azure Cost Management, and frameworks like FinOps have emerged to standardize financial operations in the cloud. Cloud computing also facilitates collaboration, remote work, and global service delivery by enabling secure access to data and applications from any location with an internet connection.

Cloud providers offer various redundancy options for core services, such as managed storage and managed databases, though redundancy configurations often vary by service tier. Advanced redundancy strategies, such as cross-region replication or failover systems, typically require explicit configuration and may incur additional costs or licensing fees.

Cloud environments operate under a shared responsibility model, where providers are typically responsible for infrastructure security, physical hardware, and software updates, while customers are accountable for data encryption, identity and access management (IAM), and application-level security. These responsibilities vary depending on the cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)—with customers typically having more control and responsibility in IaaS environments and progressively less in PaaS and SaaS models, often trading control for convenience and managed services.

Adoption and suitability

The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization.

Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups.

On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs.

In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations.

Challenges and limitations

One of the primary challenges of cloud computing, compared with traditional on-premises systems, is maintaining data security and privacy. Cloud users entrust their sensitive data to third-party providers, who may not have adequate measures to protect it from unauthorized access, breaches, or leaks. Cloud users also face compliance risks if they have to adhere to certain regulations or standards regarding data protection, such as GDPR or HIPAA.

Another challenge of cloud computing is reduced visibility and control. Cloud users may not have full insight into how their cloud resources are managed, configured, or optimized by their providers. They may also have limited ability to customize or modify their cloud services according to their specific needs or preferences. Complete understanding of all technology may be impossible, especially given the scale, complexity, and deliberate opacity of contemporary systems; however, there is a need for understanding complex technologies and their interconnections to have power and agency within them. The metaphor of the cloud can be seen as problematic as cloud computing retains the aura of something noumenal and numinous; it is something experienced without precisely understanding what it is or how it works.

Additionally, cloud migration is a significant challenge. This process involves transferring data, applications, or workloads from one cloud environment to another, or from on-premises infrastructure to the cloud. Cloud migration can be complicated, time-consuming, and expensive, particularly when there are compatibility issues between different cloud platforms or architectures. If not carefully planned and executed, cloud migration can lead to downtime, reduced performance, or even data loss.

Cloud migration challenges

According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:

  1. "Understanding application dependencies"
  2. "Comparing on-premise and cloud costs"
  3. "Assessing technical feasibility."

Implementation challenges

Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.

Cloud cost overruns

In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization.

The 2024 Flexera State of the Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns.

Service Level Agreements

Typically, cloud providers' Service Level Agreements (SLAs) do not encompass all forms of service interruptions. Common exclusions include planned maintenance, downtime caused by external factors such as network issues, human error such as misconfigurations, natural disasters, force majeure events, and security breaches. Customers usually bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should also be aware of how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across services within the same provider, and some services lack any SLA altogether. When service interruptions result from hardware failures at the cloud provider, the company typically does not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.
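
As a rough illustration of how such calculations work, the following Python sketch computes monthly availability from recorded downtime and maps it to a service credit. The credit tiers are hypothetical; each provider defines its own thresholds, exclusions, and claim procedures.

    # Sketch: monthly availability and service credit under a hypothetical SLA.
    def monthly_availability(downtime_minutes: float, days_in_month: int = 30) -> float:
        total_minutes = days_in_month * 24 * 60
        return 100.0 * (total_minutes - downtime_minutes) / total_minutes

    def service_credit_percent(availability: float) -> float:
        """Map availability to a credit percentage (illustrative tiers only)."""
        if availability >= 99.9:
            return 0.0   # SLA met, no credit
        if availability >= 99.0:
            return 10.0
        return 25.0

    uptime = monthly_availability(downtime_minutes=90)  # e.g., a 90-minute outage
    print(f"{uptime:.3f}% available, credit: {service_credit_percent(uptime)}%")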

Leaky abstractions

Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. How much an abstraction leaks varies by cloud vendor, service, and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they use.

Service lock-in within the same vendor

Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor's portfolio, making it challenging to switch to an alternative service from the same vendor when needs change.

Security and privacy

Cloud suppliers' security and privacy agreements must be aligned with the customer's requirements and with applicable regulations.

Cloud computing poses privacy concerns because the service provider can access the data in the cloud at any time, and could accidentally or deliberately alter or delete it. Many cloud providers can share information with third parties, without a warrant, if necessary for purposes of law and order; this is permitted under their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end users' choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity. They work by creating and describing identities, recording activities, and removing unused identities.
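
As one illustration of the encryption option, the sketch below encrypts data on the client side before it ever reaches the provider, so the provider stores only ciphertext. It uses the third-party Python cryptography package; key management and the upload step are out of scope here.

    # Sketch: client-side encryption before upload, so the cloud provider
    # never sees plaintext. The key must be kept secret, outside the cloud.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, managed in a secure key store
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"sensitive customer record")
    # ...only the ciphertext would be stored with the provider...
    plaintext = cipher.decrypt(ciphertext)
    assert plaintext == b"sensitive customer record"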

According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25%, and 10% of all cloud security outages respectively; together, these form shared technology vulnerabilities. Because a cloud provider's platform is shared by different users, information belonging to different customers may reside on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud: "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into". Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack, a process he called "hyperjacking". Examples include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, when passwords of over seven million of its users were stolen by hackers seeking to monetize them for Bitcoin (BTC). With these passwords, attackers can read private data and have that data indexed by search engines, making the information public.

There is the problem of legal ownership of the data (if a user stores data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership. Physical control of the computer equipment (private cloud) is more secure than having the equipment off-site and under someone else's control (public cloud). This gives public cloud service providers a strong incentive to prioritize building and maintaining robust, secure services. Some small businesses that lack in-house IT security expertise may find that using a public cloud is actually more secure for them. There is also the risk that end users do not understand the issues involved when signing up for a cloud service; people often do not read the many pages of the terms of service agreement and simply click "Accept" without reading. This matters now that cloud computing is common and is required for some services to work, for example for intelligent personal assistants such as Apple's Siri or Google Assistant. Fundamentally, private cloud is seen as more secure, with higher levels of control for the owner; public cloud, however, is seen as more flexible and as requiring less investment of time and money from the user.

The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses. Research conducted in 2022 found that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems.

Extraterritorial data access

The CLOUD Act allows United States authorities to request data from cloud providers, and courts can impose nondisclosure requirements preventing providers from notifying affected users. This framework is in legal tension with Article 48 of the European General Data Protection Regulation (GDPR), which restricts the transfer of personal data in response to foreign court or administrative orders unless based on an international agreement. As a result, cloud service providers operating in both Europe and the U.S. may face competing legal obligations.

According to Laura K. Donohue writing for the Harvard Journal of Law and Public Policy, cloud service providers also fall within the broader category of service providers subject to Section 702 of the Foreign Intelligence Surveillance Act (FISA), which has had documented effects on cloud providers and their customers.

Service models

Comparison of on-premise, IaaS, PaaS, and SaaS
Cloud computing service models arranged as layers in a stack

The National Institute of Standards and Technology recognized three cloud service models in 2011: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The International Organization for Standardization (ISO) later identified additional models in 2023, including "Network as a Service", "Communications as a Service", "Compute as a Service", and "Data Storage as a Service".

Infrastructure as a service (IaaS)

Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. The use of containers offers higher performance than virtualization because there is no hypervisor overhead. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.

The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."

IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the number of resources allocated and consumed.
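
As an illustration of on-demand provisioning, the following Python sketch launches a virtual machine through AWS's boto3 SDK, one concrete IaaS API among several. The image ID is a placeholder, and configured credentials and permissions are assumed; other providers expose analogous APIs.

    # Sketch: provisioning a VM on demand from an IaaS provider (AWS here).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000000000000",  # placeholder operating-system image
        InstanceType="t3.micro",          # instance size determines the hourly charge
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)  # billed per unit of time while running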

Platform as a service (PaaS)

The NIST's definition of cloud computing defines Platform as a Service as:

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
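
The artifact a PaaS consumer supplies is essentially just application code targeting a supported runtime. Below is a minimal Python example using the Flask framework, assuming a provider that supports Python web applications; the operating system, web server, and scaling remain the provider's concern.

    # Sketch: the kind of artifact a PaaS consumer supplies; everything
    # beneath the application code is managed by the platform.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from a platform-managed runtime"

    if __name__ == "__main__":
        app.run()  # locally starts a development server; on a PaaS the
                   # provider's infrastructure runs the app instead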

Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute, and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration and data-management products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.

Software as a service (SaaS)

The NIST's definition of cloud computing defines Software as a Service as:

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
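
A common implementation of multitenancy is row-level isolation: tenants share one database schema, and every query is scoped by a tenant identifier. The following Python sketch, using an in-memory SQLite database with illustrative data, shows the pattern.

    # Sketch: row-level tenant isolation in a shared (multitenant) table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
    conn.executemany(
        "INSERT INTO documents VALUES (?, ?)",
        [("acme", "Q1 report"), ("acme", "Roadmap"), ("globex", "Budget")],
    )

    def documents_for(tenant_id: str) -> list:
        # Parameterized and always filtered by tenant: one organization must
        # never see another's rows even though they share the same table.
        return conn.execute(
            "SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,)
        ).fetchall()

    print(documents_for("acme"))  # [('Q1 report',), ('Roadmap',)]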

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices scale and adjust as users are added or removed; SaaS may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider, enabling the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without requiring users to install new software. One drawback of SaaS is that users' data is stored on the cloud provider's server, creating the potential for unauthorized access. Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or file hosting services, as with Google Docs being integrated with Google Drive and Office Online being integrated with OneDrive.

Serverless computing

Serverless computing allows customers to use various cloud capabilities without the need to provision, deploy, or manage hardware or software resources, apart from providing their application code or data. ISO/IEC 22123-2:2023 classifies serverless alongside Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) under the broader category of cloud service categories. Notably, while ISO refers to these classifications as cloud service categories, the National Institute of Standards and Technology (NIST) refers to them as service models.
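
In practice, the unit of deployment in serverless computing is a single function. The following Python sketch follows the AWS Lambda handler convention as one concrete example; the platform, not the customer, decides when and where the function runs.

    # Sketch: a serverless function. The customer supplies only this code;
    # provisioning, patching, and scaling are handled by the provider.
    import json

    def handler(event, context):
        """Invoked by the platform per request/event (AWS Lambda convention)."""
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }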

Deployment models

Cloud computing types

"A cloud deployment model represents the way in which cloud computing can be organized based on the control and sharing of physical or virtual resources." Cloud deployment models define the fundamental patterns of interaction between cloud customers and cloud providers. They do not detail implementation specifics or the configuration of resources.

Private

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Public

Cloud services are considered "public" when they are delivered over the public Internet, and they may be offered as a paid subscription, or free of charge. Architecturally, there are few differences between public- and private-cloud services, but security concerns increase substantially when services (applications, storage, and other resources) are shared by multiple customers. Most public-cloud providers offer direct-connection services that allow customers to securely link their legacy data centers to their cloud-resident applications.

Several factors influence the decision of enterprises and organizations between a public cloud and an on-premises solution, including the functionality of the solutions, cost, integration and organizational aspects, and safety and security.

Hybrid

Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed or dedicated services with cloud resources. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.

Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that cannot be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demand.
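
The core of cloud bursting is a routing decision based on current load. The Python sketch below illustrates the idea; the capacity figure, threshold, and dispatch targets are all hypothetical.

    # Sketch: the routing decision at the heart of cloud bursting.
    PRIVATE_CAPACITY = 100.0  # requests/sec the in-house infrastructure absorbs

    def dispatch_private(request):
        return f"private: {request}"

    def dispatch_public(request):
        # rented public-cloud capacity, paid for only while the spike lasts
        return f"public: {request}"

    def route(request, current_load: float):
        """Keep traffic in-house until load nears capacity, then burst."""
        if current_load < 0.8 * PRIVATE_CAPACITY:  # hypothetical 80% threshold
            return dispatch_private(request)
        return dispatch_public(request)

    print(route("GET /report", current_load=55.0))  # private: GET /report
    print(route("GET /report", current_load=95.0))  # public: GET /report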

Community

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and hosted internally or externally. Because the costs are distributed among fewer users than in a public cloud (but more than in a private cloud), only a portion of the potential cost savings of cloud computing is achieved.

Multi cloud

According to ISO/IEC 22123-1: "multi-cloud is a cloud deployment model in which a customer uses public cloud services provided by two or more cloud service providers". Poly cloud refers to the use of multiple public clouds for the purpose of leveraging specific services that each provider offers. It differs from multi-cloud in that it is not designed to increase flexibility or to mitigate failures, but rather to allow an organization to achieve more than could be done with a single provider.

Market

According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025. Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023. According to a McKinsey & Company report, cloud cost-optimization levers and value-oriented business use cases could unlock more than $1 trillion in run-rate EBITDA across Fortune 500 companies in 2030. In 2022, more than $1.3 trillion in enterprise IT spending was at stake from the shift to the cloud, growing to almost $1.8 trillion in 2025, according to Gartner.

The European Commission's 2012 Communication identified several issues impeding the development of the cloud computing market.

The Communication set out a series of "digital agenda actions" which the Commission proposed to undertake in order to support the development of a fair and effective market for cloud computing services.

Cloud computing vendors

As of 2025, the three largest cloud computing providers by market share, commonly referred to as hyperscalers, are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These companies dominate the global cloud market due to their extensive infrastructure, broad service offerings, and scalability.

In recent years, organizations have increasingly adopted alternative cloud providers, which offer specialized services that distinguish them from hyperscalers. These providers may offer advantages such as lower costs, improved cost transparency and predictability, enhanced data sovereignty (particularly within regions such as the European Union to comply with regulations like the General Data Protection Regulation (GDPR)), stronger alignment with local regulatory requirements, or industry-specific services.

Alternative cloud providers are often part of multi-cloud strategies, where organizations use multiple cloud services—both from hyperscalers and specialized providers—to optimize performance, compliance, and cost efficiency. However, they do not necessarily serve as direct replacements for hyperscalers, as their offerings are typically more specialized.

Similar concepts

The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge of or expertise with each of them. The cloud aims to cut costs and helps users focus on their core business instead of being impeded by IT obstacles. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating-system-level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs, and reduces the possibility of human error.

Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.

Cloud computing shares characteristics with:

  • Client–server model – Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
  • Computer bureau – A service bureau providing computer services, particularly from the 1960s to 1980s.
  • Grid computing – A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
  • Fog computing – Distributed computing paradigm that provides data, compute, storage and application services closer to the client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client-side (e.g. mobile devices), instead of sending data to a remote location for processing.
  • Utility computing – The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
  • Peer-to-peer – A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
  • Cloud sandbox – A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.
