A web browser, often abbreviated as browser, is an application for accessing websites. When a user requests a web page from a particular website, the browser retrieves its files from a web server and then displays the page on the user's screen. Browsers can also display content stored locally on the user's device.
The purpose of a web browser is to fetch content and display it on the user's device. This process begins when the user inputs a Uniform Resource Locator (URL), such as https://en.wikipedia.org/, into the browser's address bar. Virtually all URLs on the Web start with either http: or https:, which means they are retrieved with the Hypertext Transfer Protocol (HTTP). In secure mode (HTTPS), the connection between the browser and web server is encrypted, providing secure and private data transfer. For this reason, a web browser is often referred to as an HTTP client or a user agent. Requisite materials, including text, style sheets, images, and other types of multimedia, are downloaded from the server. Once the materials have been downloaded, the web browser's engine
(also known as a layout engine or rendering engine) is responsible for
converting those resources into an interactive visual representation of
the page on the user's device. Modern web browsers also contain separate JavaScript engines which enable more complex interactive applications inside the browser. A web browser that does not render a graphical user interface is known as a headless browser.
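The first step of this process can be sketched in a few lines. The following illustrative Python snippet (the browser name and URL are placeholders, not any real browser's identifier) builds the raw HTTP/1.1 GET request a browser would send after parsing a URL, including the User-Agent header that gives user agents their name:

```python
from urllib.parse import urlsplit

def build_get_request(url: str) -> str:
    """Build the plain-text HTTP/1.1 GET request a browser
    would send after parsing the URL from its address bar."""
    parts = urlsplit(url)
    path = parts.path or "/"          # an empty path means the site root
    if parts.query:
        path += "?" + parts.query
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {parts.hostname}\r\n"
        "User-Agent: ExampleBrowser/1.0\r\n"  # the "user agent" identifying itself
        "Connection: close\r\n"
        "\r\n"
    )

request = build_get_request("https://en.wikipedia.org/wiki/Web_browser")
print(request.splitlines()[0])  # GET /wiki/Web_browser HTTP/1.1
```

For an HTTPS URL, this request text would be sent only after the encrypted connection to the server has been established.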
Web pages usually contain hyperlinks to other pages and resources. Each link contains a URL, and when it is clicked or tapped, the browser navigates to the new resource. Most browsers use an internal cache
of web page resources to improve loading times for subsequent visits to
the same page. The cache can store many items, such as large images, so
they do not need to be downloaded from the server again. Cached items
are usually only stored for as long as the web server stipulates in its
HTTP response messages.
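The server's stipulation typically arrives in the Cache-Control response header. As a rough sketch (the header values are hypothetical), a cache can decide whether a stored copy is still fresh like this:

```python
import re
import time
from typing import Optional

def max_age_seconds(cache_control: str) -> Optional[int]:
    """Extract the max-age lifetime from a Cache-Control header, if present."""
    match = re.search(r"max-age=(\d+)", cache_control)
    return int(match.group(1)) if match else None

def is_fresh(stored_at: float, cache_control: str, now: float) -> bool:
    """A cached resource is reusable while its age is below max-age."""
    age_limit = max_age_seconds(cache_control)
    if age_limit is None:
        return False  # no explicit lifetime: revalidate with the server
    return (now - stored_at) < age_limit

stored = time.time()
print(is_fresh(stored, "public, max-age=3600", stored + 100))   # True
print(is_fresh(stored, "public, max-age=3600", stored + 4000))  # False
```

Real browser caches also honor directives such as no-store and revalidation headers; this sketch covers only the basic max-age case.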
A web browser is not the same thing as a search engine, though the two are often confused. A search engine is a website that provides links to other websites and allows users to search for specific resources using a textual query.
However, web browsers are often used to access search engines, and most
modern browsers allow users to access a default search engine directly
by typing a query into the address bar.
The first web browser, called WorldWideWeb, was created in 1990 by Sir Tim Berners-Lee. He then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. The Mosaic web browser was released in April 1993, and was later credited as the first web browser to find mainstream popularity. Its innovative graphical user interface made the World Wide Web
easy to navigate and thus more accessible to the average person. This,
in turn, sparked the Internet boom of the 1990s, when the Web grew at a
very rapid rate. The lead developers of Mosaic then founded the Netscape corporation, which released the Mosaic-influenced Netscape Navigator in 1994. Navigator quickly became the most popular browser.
Microsoft debuted Internet Explorer in 1995, leading to a browser war
with Netscape. Within a few years, Microsoft gained a dominant position
in the browser market for two reasons: it bundled Internet Explorer
with its popular Windows operating system and did so as freeware with no restrictions on usage. The market share of Internet Explorer peaked at over 95% in the early 2000s. In 1998, Netscape launched what would become the Mozilla Foundation to create a new browser using the open-source software model. This work evolved into the Firefox browser, first released by Mozilla in 2004. Firefox's market share peaked at 32% in 2010. Apple released its Safari browser in 2003; it remains the dominant browser on Apple devices, though it did not become popular elsewhere.
Google debuted its Chrome browser in 2008, which steadily took market share from Internet Explorer and became the most popular browser in 2012. Chrome has remained dominant ever since. In 2015, Microsoft replaced Internet Explorer with Edge (later called Edge Legacy) for the Windows 10 release. In 2020, this legacy version was replaced by a new Chromium-based version of Edge.
Since the early 2000s, browsers have greatly expanded their HTML, CSS, JavaScript, and multimedia capabilities. One reason has been to enable more sophisticated websites, such as web apps. Another factor is the significant increase of broadband connectivity in many parts of the world, enabling people to access data-intensive content, such as streaming HD video on YouTube, that was not possible during the era of dial-up modems.
Starting in the mid-2020s, browsers with integrated artificial intelligence (AI) capabilities, known as AI browsers, have become increasingly common. This includes both new entrants to the browser market, such as Perplexity Comet and ChatGPT Atlas, and established browsers that added AI features, such as Chrome with the Gemini chatbot and Edge with the Copilot chatbot.
Most browsers provide a common set of user interface features, including the following.
Allowing the user to have multiple pages open at the same time, either in different windows or in different tabs of the same window.
Back and forward buttons to go back to the previous page visited or forward to the next one.
A refresh or reload and a stop button to reload and cancel loading the current page. (In most browsers, the stop button is merged with the reload button.)
An address bar to input the URL of a page and display it, and a search bar to input queries into a search engine. (In most browsers, the search bar is merged with the address bar.)
While mobile browsers have UI features similar to those of desktop versions, the limitations of their typically smaller touch screens require mobile UIs to be simpler. The difference is most noticeable for users accustomed to keyboard shortcuts. Responsive web design
is used to create websites that offer a consistent experience across
the desktop and mobile versions of the website and across varying screen
sizes. The most popular desktop browsers also have sophisticated web development tools.
Access to some web content — particularly streaming services like Netflix, Disney+, and Spotify — is restricted by Digital Rights Management (DRM) software. A web browser is able to access DRM-restricted content through the use of a Content Decryption Module (CDM) such as Widevine.
As of 2020, the CDMs used by dominant web browsers require browser
providers to pay costly license fees, making it unfeasible for most
independent open-source browsers to offer access to DRM-restricted
content.
Google Chrome has been the dominant browser since the mid-2010s and currently has a 69% global market share on all devices. The vast majority of its source code comes from Google's open-source Chromium project; this code is also the basis for many other browsers, including Microsoft Edge, currently in third place with about a 5% share, as well as Samsung Internet and Opera in fifth and sixth places respectively with approximately 2% market share each.
The other two browsers in the top four are made from different codebases. Safari, based on Apple's WebKit code, is the second most popular web browser and is dominant on Apple devices, giving it a 16% global share. Firefox, in fourth place with about 2% market share, is based on Mozilla's code. Both of these codebases are open-source, so a number of small niche browsers are also made from them.
The following table details the top web browsers by market share as of February 2025 (columns: web browser, market share, reference).
Market share by type of device
Prior to late 2016, the majority of web traffic came from desktop computers. However, since then, mobile devices (smartphones) have represented the majority of web traffic.[40] As of February 2025, mobile devices represent a 62% share of Internet traffic, followed by desktop at 36% and tablet at 2%.[41]
Web browsers are popular targets for hackers, who exploit security holes to steal information, destroy files, and engage in other malicious
activities. Browser vendors regularly patch these security holes, so
users are strongly encouraged to keep their browser software updated.
Other protective measures include running antivirus software and staying alert to scams.
During the course of browsing, cookies received from various websites are stored by the browser. Some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods
of time, so browsers typically provide a section in the menu for
deleting cookies.
Some browsers offer more proactive protection against cookies and
trackers, limiting their functionality and ability to track user
behavior. Finer-grained management of cookies usually requires a browser extension. Most popular web browsers collect telemetry data, though users can usually opt out.
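To illustrate how a browser stores a cookie, the following sketch parses a hypothetical Set-Cookie response header with Python's standard http.cookies module, recovering the value and attributes (such as the Max-Age lifetime that governs when the cookie expires):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header as a browser would on receiving a response.
# The cookie name and value here are invented for the example.
cookie = SimpleCookie()
cookie.load("session_id=abc123; Max-Age=3600; Path=/; HttpOnly; Secure")

morsel = cookie["session_id"]
print(morsel.value)        # abc123
print(morsel["max-age"])   # 3600
print(morsel["path"])      # /
```

The HttpOnly and Secure flags shown in the header restrict the cookie to HTTP requests (hiding it from page scripts) and to encrypted connections, respectively.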
A 2020 study identified two tiers of browsers in terms of privacy: privacy-focused browsers (Brave, DuckDuckGo, and Firefox Focus) performed better than popular ones (Chrome, Firefox, and Safari), and the study recommended the former. Blocking fingerprinting, cookies, tracking scripts, ads, and similar mechanisms appears to explain the difference.
Cloud
computing metaphor: the group of networked elements providing services
does not need to be addressed or managed individually by users; instead,
the entire provider-managed suite of hardware and software can be
thought of as an amorphous cloud.
Cloud computing is defined by the International Organization for Standardization
(ISO) as "a paradigm for enabling network access to a scalable and
elastic pool of shareable physical or virtual resources with
self-service provisioning and administration on demand". It is commonly referred to as "the cloud".
On-demand self-service: "A consumer can unilaterally provision
computing capabilities, such as server time and network storage, as
needed automatically without requiring human interaction with each
service provider."
Broad network access: "Capabilities are available over the network
and accessed through standard mechanisms that promote use by
heterogeneous thin or thick client platforms (e.g., mobile phones,
tablets, laptops, and workstations)."
Resource pooling:
"The provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and
virtual resources dynamically assigned and reassigned according to
consumer demand."
Rapid elasticity: "Capabilities can be elastically provisioned and
released, in some cases automatically, to scale rapidly outward and
inward commensurate with demand. To the consumer, the capabilities
available for provisioning often appear unlimited and can be
appropriated in any quantity at any time."
Measured service: "Cloud systems automatically control and optimize
resource use by leveraging a metering capability at some level of
abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the
provider and consumer of the utilized service."
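The "measured service" characteristic can be illustrated with a toy metering sketch; the resource names and per-unit rates below are invented for the example, not any provider's actual pricing:

```python
from collections import defaultdict

class UsageMeter:
    """Toy sketch of 'measured service': usage is metered per
    resource type and billed at a rate appropriate to each."""
    def __init__(self, rates):
        self.rates = rates             # hypothetical price per unit of each resource
        self.usage = defaultdict(float)

    def record(self, resource, amount):
        """Meter a unit of consumption (the 'metering capability')."""
        self.usage[resource] += amount

    def bill(self):
        """Report usage priced at each resource's rate."""
        return sum(self.usage[r] * self.rates[r] for r in self.usage)

meter = UsageMeter({"storage_gb_hours": 0.0001, "requests": 0.000001})
meter.record("storage_gb_hours", 50_000)   # 50,000 GB-hours of storage
meter.record("requests", 2_000_000)        # 2 million API requests
print(round(meter.bill(), 2))  # 7.0
```

The same usage record provides the transparency the definition mentions: both provider and consumer can inspect what was consumed and at what rate.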
The history of cloud computing extends to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry
(RJE). The "data center" model, where users submitted jobs to operators
to run on mainframes, was predominantly used during this era. This
period saw broad experimentation with making large-scale computing power
more accessible through time-sharing, while optimizing infrastructure, platforms, and applications to improve efficiency for end users.
The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript
environment could "go". The metaphor is credited to David Hoffman, a
General Magic communications specialist, based on its long-standing use
in networking and telecom. The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales
with "cloud computing-enabled applications". The business plan foresaw
that online consumer file storage would likely be commercially
successful. As a result, Compaq decided to sell server hardware to internet service providers.
In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon released the Amazon Simple Storage Service, known as Amazon S3, and the Amazon Elastic Compute Cloud (EC2). In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.
The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.
Value proposition
Cloud computing can shorten time to market by offering pre-configured
tools, scalable resources, and managed services, allowing users to
focus on core business value rather than maintaining infrastructure.
Cloud platforms can enable organizations and individuals to reduce
upfront capital expenditures on physical infrastructure by shifting to
an operational expenditure model, where costs scale with usage. Cloud
platforms also offer managed services and tools, such as artificial
intelligence, data analytics, and machine learning, which might
otherwise require significant in-house expertise and infrastructure
investment.
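The shift from capital to operational expenditure can be made concrete with a back-of-the-envelope comparison; all of the figures below are hypothetical:

```python
def capex_total(upfront: float, annual_maintenance: float, years: int) -> float:
    """On-premises: a large upfront purchase plus fixed yearly upkeep."""
    return upfront + annual_maintenance * years

def opex_total(hourly_rate: float, hours_per_year: float, years: int) -> float:
    """Cloud pay-as-you-go: cost scales with actual usage."""
    return hourly_rate * hours_per_year * years

# Hypothetical figures for a small workload running 8 hours a day.
print(capex_total(upfront=50_000, annual_maintenance=5_000, years=3))  # 65000
print(opex_total(hourly_rate=1.50, hours_per_year=8 * 365, years=3))   # 13140.0
```

The comparison flips if the workload runs continuously at full utilization, which is one reason highly predictable workloads (discussed below) often remain on-premises.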
While cloud computing can offer cost advantages through effective
resource optimization, organizations often face challenges such as
unused resources, inefficient configurations, and hidden costs without
proper oversight and governance. Many cloud platforms provide cost
management tools, such as AWS Cost Explorer and Azure Cost Management,
and frameworks like FinOps have emerged to standardize financial
operations in the cloud. Cloud computing also facilitates collaboration,
remote work, and global service delivery by enabling secure access to
data and applications from any location with an internet connection.
Cloud providers offer various redundancy options for core
services, such as managed storage and managed databases, though
redundancy configurations often vary by service tier. Advanced
redundancy strategies, such as cross-region replication or failover
systems, typically require explicit configuration and may incur
additional costs or licensing fees.
Cloud environments operate under a shared responsibility model,
where providers are typically responsible for infrastructure security,
physical hardware, and software updates, while customers are accountable
for data encryption, identity and access management (IAM), and
application-level security. These responsibilities vary depending on the
cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service
(SaaS)—with customers typically having more control and responsibility
in IaaS environments and progressively less in PaaS and SaaS models,
often trading control for convenience and managed services.
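A simplified sketch of this division of responsibility follows; the layer names are an illustrative approximation, not any provider's official responsibility matrix:

```python
# Hypothetical sketch of the shared responsibility model: which party
# (provider or customer) owns each layer under IaaS, PaaS, and SaaS.
RESPONSIBILITY = {
    "physical hardware": {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "operating system":  {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "application":       {"IaaS": "customer", "PaaS": "customer", "SaaS": "provider"},
    "data and IAM":      {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
}

def customer_duties(model: str) -> list:
    """Layers the customer must secure under a given service model."""
    return [layer for layer, owner in RESPONSIBILITY.items()
            if owner[model] == "customer"]

print(customer_duties("IaaS"))  # ['operating system', 'application', 'data and IAM']
print(customer_duties("SaaS"))  # ['data and IAM']
```

Note that data protection and identity management remain the customer's duty in every model, which matches the pattern of progressively trading control for convenience.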
Adoption and suitability
The decision to adopt cloud computing or maintain on-premises
infrastructure depends on factors such as scalability, cost structure,
latency requirements, regulatory constraints, and infrastructure
customization.
Organizations with variable or unpredictable workloads, limited
capital for upfront investments, or a focus on rapid scalability benefit
from cloud adoption. Startups, SaaS companies, and e-commerce platforms
often prefer the pay-as-you-go operational expenditure (OpEx) model of
cloud infrastructure. Additionally, companies prioritizing global
accessibility, remote workforce enablement, disaster recovery, and
leveraging advanced services such as AI/ML and analytics are well-suited
for the cloud. In recent years, some cloud providers have started
offering specialized services for high-performance computing and
low-latency applications, addressing some use cases previously exclusive
to on-premises setups.
On the other hand, organizations with strict regulatory
requirements, highly predictable workloads, or reliance on deeply
integrated legacy systems may find cloud infrastructure less suitable.
Businesses in industries like defense, government, or those handling
highly sensitive data often favor on-premises setups for greater control
and data sovereignty. Additionally, companies with ultra-low latency
requirements, such as high-frequency trading (HFT) firms, rely on custom
hardware (e.g., FPGAs) and physical proximity to exchanges, which most
cloud providers cannot fully replicate despite recent advancements.
Similarly, tech giants like Google, Meta, and Amazon build their own
data centers due to economies of scale, predictable workloads, and the
ability to customize hardware and network infrastructure for optimal
efficiency. However, these companies also use cloud services selectively
for certain workloads and applications where it aligns with their
operational needs.
In practice, many organizations are increasingly adopting hybrid
cloud architectures, combining on-premises infrastructure with cloud
services. This approach allows businesses to balance scalability,
cost-effectiveness, and control, offering the benefits of both
deployment models while mitigating their respective limitations.
One of the primary challenges of cloud computing, compared with
traditional on-premises systems, is maintaining data security and
privacy. Cloud users entrust their sensitive data to third-party
providers, who may not have adequate measures to protect it from
unauthorized access, breaches, or leaks. Cloud users also face
compliance risks if they have to adhere to certain regulations or
standards regarding data protection, such as GDPR or HIPAA.
Another challenge of cloud computing is reduced visibility and
control. Cloud users may not have full insight into how their cloud
resources are managed, configured, or optimized by their providers. They
may also have limited ability to customize or modify their cloud
services according to their specific needs or preferences. Complete understanding of all technology may be impossible, especially
given the scale, complexity, and deliberate opacity of contemporary
systems; however, there is a need for understanding complex technologies
and their interconnections to have power and agency within them. The metaphor of the cloud can be seen as problematic as cloud computing retains the aura of something noumenal and numinous; it is something experienced without precisely understanding what it is or how it works.
Additionally, cloud migration is a significant challenge. This
process involves transferring data, applications, or workloads from one
cloud environment to another, or from on-premises infrastructure to the
cloud. Cloud migration can be complicated, time-consuming, and
expensive, particularly when there are compatibility issues between
different cloud platforms or architectures. If not carefully planned and
executed, cloud migration can lead to downtime, reduced performance, or
even data loss.
Cloud migration challenges
According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:
"Understanding application dependencies"
"Comparing on-premise and cloud costs"
"Assessing technical feasibility."
Implementation challenges
Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.
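The first of these fallacies, "the network is reliable", suggests a common mitigation: retrying remote calls with a bounded attempt budget rather than assuming they succeed. A minimal illustrative sketch (the flaky service below is simulated):

```python
def call_with_retries(operation, attempts=3):
    """Assume remote calls can fail transiently; retry up to a budget."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as error:
            last_error = error  # transient failure: try again
    raise last_error            # budget exhausted: surface the failure

# Simulated flaky remote call that fails twice, then succeeds.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("timed out")
    return "ok"

print(call_with_retries(flaky_service))  # ok
```

Production systems typically add exponential backoff and jitter between attempts so that mass retries do not themselves overload the network.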
Cloud cost overruns
In a report by Gartner,
a survey of 200 IT leaders revealed that 69% experienced budget
overruns in their organizations' cloud expenditures during 2023.
Conversely, 31% of IT leaders whose organizations stayed within budget
attributed their success to accurate forecasting and budgeting,
proactive monitoring of spending, and effective optimization.
The 2024 Flexera State of Cloud Report identifies the top cloud
challenges as managing cloud spend, followed by security concerns and
lack of expertise. Public cloud expenditures exceeded budgeted amounts
by an average of 15%. The report also reveals that cost savings is the
top cloud initiative for 60% of respondents. Furthermore, 65% measure
cloud progress through cost savings, while 42% prioritize shorter
time-to-market, indicating that cloud's promise of accelerated
deployment is often overshadowed by cost concerns.
Service Level Agreements
Typically, cloud providers' Service Level Agreements
(SLAs) do not encompass all forms of service interruptions. Exclusions
typically include planned maintenance, downtime resulting from external
factors such as network issues, human error (for example, misconfigurations), natural disasters, force majeure events, and security breaches.
Typically, customers bear the responsibility of monitoring SLA
compliance and must file claims for any unmet SLAs within a designated
timeframe. Customers should be aware of how deviations from SLAs are
calculated, as these parameters may vary by service. These requirements
can place a considerable burden on customers. Additionally, SLA
percentages and conditions can differ across various services within the
same provider, with some services lacking any SLA altogether. In cases
of service interruptions due to hardware failures in the cloud provider,
the company typically does not offer monetary compensation. Instead,
eligible users may receive credits as outlined in the corresponding SLA.
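An SLA percentage translates directly into an allowed-downtime budget, which customers monitoring compliance may find useful to compute. A small illustrative calculation, assuming a 30-day billing period:

```python
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Maximum downtime per billing period permitted by an SLA percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% SLA -> {allowed_downtime_minutes(sla):.1f} minutes/month")
# 99.0% SLA -> 432.0 minutes/month
# 99.9% SLA -> 43.2 minutes/month
# 99.99% SLA -> 4.3 minutes/month
```

Each additional "nine" shrinks the permitted outage window by a factor of ten, which is why SLA tiers differ so sharply in price and in credit terms.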
Leaky abstractions
Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. Abstraction quality varies with the cloud vendor, service, and architecture.
Mitigating leaky abstractions requires users to understand the
implementation details and limitations of the cloud services they
utilize.
Service lock-in within the same vendor
Service lock-in within the same vendor occurs when a customer becomes
dependent on specific services within a cloud vendor, making it
challenging to switch to alternative services within the same vendor
when their needs change.
Security and privacy
Cloud suppliers' security and privacy agreements must be aligned with the customer's requirements and applicable regulations.
Cloud computing poses privacy concerns because the service provider
can access the data that is in the cloud at any time. It could
accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if
necessary for purposes of law and order without a warrant. That is
permitted in their privacy policies, which users must agree to before
they start using cloud services. Solutions to privacy include policy and
legislation as well as end-users' choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems
can also provide practical solutions to privacy concerns in cloud
computing. These systems distinguish between authorized and unauthorized
users and determine the amount of data that is accessible to each
entity. The systems work by creating and describing identities, recording activities, and removing unused identities.
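These three functions of an identity management system can be sketched in miniature; the class and names below are illustrative, not any vendor's API:

```python
class IdentityManager:
    """Toy identity management sketch: create identities, record
    activities, limit data access, and remove unused identities."""
    def __init__(self):
        self.identities = {}  # name -> set of resources the identity may access
        self.audit_log = []   # recorded activities

    def create_identity(self, name, allowed_resources):
        self.identities[name] = set(allowed_resources)

    def access(self, name, resource):
        """Authorize (or deny) access and record the activity."""
        permitted = resource in self.identities.get(name, set())
        self.audit_log.append((name, resource, permitted))
        return permitted

    def remove_unused(self, active_names):
        """Get rid of identities that are no longer in use."""
        self.identities = {n: r for n, r in self.identities.items()
                           if n in active_names}

iam = IdentityManager()
iam.create_identity("alice", ["reports", "billing"])
iam.create_identity("old-service", ["reports"])
print(iam.access("alice", "billing"))   # True
print(iam.access("alice", "secrets"))   # False
iam.remove_unused({"alice"})
print(list(iam.identities))             # ['alice']
```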
According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure—which
accounted for 29%, 25% and 10% of all cloud security outages
respectively. Together, these form shared technology vulnerabilities. In
a cloud provider platform being shared by different users, there may be
a possibility that information belonging to different customers resides
on the same data server. Additionally, Eugene Schultz,
chief technology officer at Emagined Security, said that hackers are
spending substantial time and effort looking for ways to penetrate the
cloud. "There are some real Achilles' heels in the cloud infrastructure
that are making big holes for the bad guys to get into". Because data
from hundreds or thousands of companies can be stored on large cloud
servers, hackers can theoretically gain control of huge stores of
information through a single attack—a process he called "hyperjacking".
Some examples of this include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, with over seven million of its users' passwords stolen by hackers seeking to monetize them in Bitcoin (BTC). With these passwords, attackers can read private data and have that data indexed by search engines, making the information public.
There is the problem of legal ownership of the data (If a user
stores some data in the cloud, can the cloud provider profit from it?).
Many Terms of Service agreements are silent on the question of
ownership. Physical control of the computer equipment (private cloud) is more
secure than having the equipment off-site and under someone else's
control (public cloud). This gives public cloud
computing service providers a strong incentive to prioritize building and maintaining
secure service management. Some small businesses that do not have expertise in IT
security could find that it is more secure for them to use a public
cloud. There is the risk that end users do not understand the issues
involved when signing on to a cloud service (persons sometimes do not
read the many pages of the terms of service agreement, and just click
"Accept" without reading). This is important now that cloud computing is
common and required for some services to work, for example for an intelligent personal assistant (Apple's Siri or Google Assistant).
Fundamentally, private cloud is seen as more secure, with higher levels
of control for the owner; however, public cloud is seen as more
flexible and as requiring less investment of time and money from the user.
The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses.
Recent research conducted in 2022 has revealed that the Trojan horse
injection method is a serious problem with harmful impacts on cloud
computing systems.
The CLOUD Act
allows United States authorities to request data from cloud providers,
and courts can impose nondisclosure requirements preventing providers
from notifying affected users. This framework is in legal tension with Article 48 of the European General Data Protection Regulation
(GDPR), which restricts the transfer of personal data in response to
foreign court or administrative orders unless based on an international
agreement. As a result, cloud service providers operating in both Europe
and the U.S. may face competing legal obligations.
Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract
various low-level details of underlying network infrastructure like
physical computing resources, location, data partitioning, scaling,
security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces
are the underlying Linux kernel technologies used to isolate, secure
and manage the containers. The use of containers offers higher
performance than virtualization because there is no hypervisor overhead.
IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.
The NIST's
definition of cloud computing describes IaaS as "where the consumer is
able to deploy and run arbitrary software, which can include operating
systems and applications. The consumer does not manage or control the
underlying cloud infrastructure but has control over operating systems,
storage, and deployed applications; and possibly limited control of
select networking components (e.g., host firewalls)."
IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system
images and their application software on the cloud infrastructure. In
this model, the cloud user patches and maintains the operating systems
and the application software. Cloud providers typically bill IaaS
services on a utility computing basis: cost reflects the number of
resources allocated and consumed.
The NIST's definition of cloud computing defines Platform as a Service as:
The capability provided to the
consumer is to deploy onto the cloud infrastructure consumer-created or
acquired applications created using programming languages, libraries,
services, and tools supported by the provider. The consumer does not
manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but has control over the
deployed applications and possibly configuration settings for the
application-hosting environment.
PaaS vendors offer a development environment to application
developers. The provider typically supplies a toolkit and standards for
development, along with channels for distribution and payment. In the PaaS
model, cloud providers deliver a computing platform,
typically including an operating system, programming-language execution
environment, database, and web server. Application developers
develop and run their software on a cloud platform instead of directly
buying and managing the underlying hardware and software layers. With
some PaaS, the underlying computer and storage resources scale
automatically to match application demand so that the cloud user does
not have to allocate resources manually.
Some integration and data management providers also use
specialized applications of PaaS as delivery models for data. Examples
include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and
deployment of integrations without installing or managing any hardware
or middleware. dPaaS delivers integration and data-management products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the
development and execution of programs by building data applications for
the customer. dPaaS users access data through data-visualization tools.
The NIST's definition of cloud computing defines Software as a Service as:
The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure.
The applications are accessible from various client devices through
either a thin client interface, such as a web browser (e.g., web-based
email), or a program interface. The consumer does not manage or control
the underlying cloud infrastructure including network, servers,
operating systems, storage, or even individual application capabilities,
with the possible exception of limited user-specific application
configuration settings.
In the software as a service (SaaS) model, users gain access to application software and databases.
Cloud providers manage the infrastructure and platforms that run the
applications. SaaS is sometimes referred to as "on-demand software" and
is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application
software in the cloud and cloud users access the software from cloud
clients. Cloud users do not manage the cloud infrastructure and platform
where the application runs. This eliminates the need to install and run
the application on the cloud user's own computers, which simplifies
maintenance and support. Cloud applications differ from other
applications in their scalability, which can be achieved by cloning
tasks onto multiple virtual machines at run-time to meet changing work
demand. Load balancers
distribute the work over the set of virtual machines. This process is
transparent to the cloud user, who sees only a single access-point. To
accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
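The load-balancing behavior described above can be sketched as a toy round-robin dispatcher over cloned VM replicas. The class and replica names are hypothetical; a real balancer would also do health checks and weighting:

```python
from itertools import cycle

class LoadBalancer:
    """Single access point that spreads requests over cloned VM replicas."""
    def __init__(self, replicas):
        self._next = cycle(list(replicas))  # round-robin over the replica set

    def handle(self, request):
        vm = next(self._next)               # pick the next replica in turn
        return f"{vm} served {request}"

lb = LoadBalancer(["vm-1", "vm-2", "vm-3"])
for i in range(4):
    print(lb.handle(f"req-{i}"))
# vm-1 served req-0, vm-2 served req-1, vm-3 served req-2, vm-1 served req-3
```

The caller only ever talks to `lb`, which is the "single access-point" transparency the text describes.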
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. It may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing
hardware and software maintenance and support to the cloud provider.
This enables the business to reallocate IT operations costs away from
hardware/software spending and from personnel expenses, towards meeting
other goals. In addition, with applications hosted centrally, updates
can be released without the need for users to install new software. One
drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result, there could be unauthorized access to the data. Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or file-hosting services, as is the case with Google Docs being integrated with Google Drive and Office Online with OneDrive.
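The per-user flat-fee pricing mentioned above amounts to simple arithmetic: the bill scales directly with the number of seats. A sketch, with an illustrative fee:

```python
def monthly_bill(users: int, fee_per_user: float) -> float:
    """Flat per-user subscription: the bill adjusts as seats are added or removed."""
    return users * fee_per_user

print(monthly_bill(25, 12.0))  # 300.0
print(monthly_bill(30, 12.0))  # 360.0 after adding five seats
```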
Serverless computing allows customers to use various cloud
capabilities without the need to provision, deploy, or manage hardware
or software resources, apart from providing their application code or
data. ISO/IEC 22123-2:2023 classifies serverless alongside
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and
Software as a Service (SaaS) under the broader category of cloud service
categories. Notably, while ISO refers to these classifications as cloud
service categories, the National Institute of Standards and Technology
(NIST) refers to them as service models.
Deployment models
Cloud computing types
"A cloud deployment model represents the way in which cloud computing
can be organized based on the control and sharing of physical or
virtual resources." Cloud deployment models define the fundamental patterns of interaction
between cloud customers and cloud providers. They do not detail
implementation specifics or the configuration of resources.
Private
Private cloud is cloud infrastructure operated solely for a single
organization, whether managed internally or by a third party, and hosted
either internally or externally. Undertaking a private cloud project requires significant engagement to
virtualize the business environment, and requires the organization to
reevaluate decisions about existing resources. It can improve business,
but every step in the project raises security issues that must be
addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical
footprint, requiring allocations of space, hardware, and environmental
controls. These assets have to be refreshed periodically, resulting in
additional capital expenditures. They have attracted criticism because
users "still have to buy, build, and manage them" and thus do not
benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
Public
Cloud services are considered "public" when they are delivered over
the public Internet, and they may be offered as a paid subscription, or
free of charge. Architecturally, there are few differences between public- and
private-cloud services, but security concerns increase substantially
when services (applications, storage, and other resources) are shared by
multiple customers. Most public-cloud providers offer direct-connection
services that allow customers to securely link their legacy data
centers to their cloud-resident applications.
Several factors, such as the functionality of the solutions, cost, integration and organizational aspects, and safety and security, influence the decision of enterprises and organizations to choose a public cloud or an on-premises solution.
Hybrid
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the
benefits of multiple deployment models. Hybrid cloud can also mean the
ability to connect collocation, managed or dedicated services with cloud
resources. Gartner
defines a hybrid cloud service as a cloud computing service that is
composed of some combination of private, public and community cloud
services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so
that it cannot be simply put in one category of private, public, or
community cloud service. It allows one to extend either the capacity or
the capability of a cloud service, by aggregation, integration or
customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example,
an organization may store sensitive client data in house on a private
cloud application, but interconnect that application to a business
intelligence application provided on a public cloud as a software
service. This example of hybrid cloud extends the capabilities of the enterprise
to deliver a specific business service through the addition of
externally available public cloud services. Hybrid cloud adoption
depends on a number of factors such as data security and compliance
requirements, level of control needed over data, and the applications an
organization uses.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that cannot be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting
is an application deployment model in which an application runs in a
private cloud or data center and "bursts" to a public cloud when the
demand for computing capacity increases. A primary advantage of cloud
bursting and a hybrid cloud model is that an organization pays for extra
compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT
infrastructure that supports average workloads, and use cloud resources
from public or private clouds, during spikes in processing demands.
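Cloud bursting reduces to a placement decision: serve demand privately up to capacity and send only the overflow to the public cloud. A toy sketch, assuming a fixed private-cloud capacity (all names and figures are illustrative):

```python
PRIVATE_CAPACITY = 100  # units of compute the private cloud can serve

def place_workload(demand: int) -> dict:
    """Serve demand from the private cloud and 'burst' the excess to a public cloud."""
    private = min(demand, PRIVATE_CAPACITY)
    public = max(demand - PRIVATE_CAPACITY, 0)  # paid for only when used
    return {"private": private, "public": public}

print(place_workload(80))   # {'private': 80, 'public': 0}
print(place_workload(140))  # {'private': 100, 'public': 40}
```

The second call shows the economic argument in the text: the extra 40 units incur public-cloud cost only during the spike.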
Community
Community cloud
shares infrastructure between several organizations from a specific
community with common concerns (security, compliance, jurisdiction,
etc.), whether managed internally or by a third party, and hosted
internally or externally. The costs are distributed among fewer users
than a public cloud (but more than a private cloud), so
only a portion of the potential cost savings of cloud computing is
achieved.
Multi
According to ISO/IEC
22123-1: "multi-cloud is a cloud deployment model in which a customer
uses public cloud services provided by two or more cloud service
providers". Polycloud refers to the use of multiple public clouds for the purpose
of leveraging specific services that each provider offers. It differs
from multi-cloud in that it is not designed to increase flexibility or
mitigate against failures, but rather to allow an organization to
achieve more than it could with a single provider.
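A poly-cloud arrangement can be thought of as a static routing table from workload to provider, each chosen for a specialized service. Everything below is a hypothetical illustration, not a real configuration format:

```python
# Hypothetical poly-cloud routing table: each workload is pinned to the
# provider whose specialized service fits it, not replicated for redundancy.
ROUTES = {
    "ml-training":    "provider-a",  # e.g. chosen for ML accelerators
    "data-warehouse": "provider-b",  # e.g. chosen for its analytics service
    "web-frontend":   "provider-c",  # e.g. chosen for its edge network
}

def provider_for(workload: str) -> str:
    return ROUTES.get(workload, "provider-default")

print(provider_for("ml-training"))   # provider-a
print(provider_for("batch-report"))  # provider-default
```

Contrast with multi-cloud for resilience, where the same workload would run on more than one provider.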
Market
According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025. Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023. According to a McKinsey & Company
report, cloud cost-optimization levers and value-oriented business use
cases foresee more than $1 trillion in run-rate EBITDA across Fortune 500 companies as up for grabs in 2030. In 2022, more than $1.3 trillion in enterprise IT spending was at stake
from the shift to the cloud, growing to almost $1.8 trillion in 2025,
according to Gartner.
The European Commission's 2012 Communication identified several issues impeding the development of the cloud computing market, including variations in the standards applicable to cloud computing.
The Communication set out a series of "digital agenda actions"
which the Commission proposed to undertake in order to support the
development of a fair and effective market for cloud computing services.
Cloud computing vendors
As of 2025, the three largest cloud computing providers by market
share, commonly referred to as hyperscalers, are Amazon Web Services
(AWS), Microsoft Azure, and Google Cloud. These companies dominate the global cloud market due to their extensive
infrastructure, broad service offerings, and scalability.
In recent years, organizations have increasingly adopted
alternative cloud providers, which offer specialized services that
distinguish them from hyperscalers. These providers may offer advantages
such as lower costs, improved cost transparency and predictability,
enhanced data sovereignty (particularly within regions such as the
European Union to comply with regulations like the General Data
Protection Regulation (GDPR)), stronger alignment with local regulatory
requirements, or industry-specific services.
Alternative cloud providers are often part of multi-cloud
strategies, where organizations use multiple cloud services—both from
hyperscalers and specialized providers—to optimize performance,
compliance, and cost efficiency. However, they do not necessarily serve
as direct replacements for hyperscalers, as their offerings are
typically more specialized.
Similar concepts
The goal of cloud computing is to allow users to benefit from
all of these technologies without needing deep knowledge about or
expertise with each one of them. The cloud aims to cut costs and to help
users focus on their core business instead of being impeded by IT
obstacles. The main enabling technology for cloud computing is virtualization.
Virtualization software separates a physical computing device into one
or more "virtual" devices, each of which can be easily used and managed
to perform computing tasks. With operating system-level virtualization
essentially creating a scalable system of multiple independent
computing devices, idle computing resources can be allocated and used
more efficiently. Virtualization provides the agility required to speed
up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand.
By minimizing user involvement, automation speeds up the process,
reduces labor costs and reduces the possibility of human errors.
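How virtualization carves one physical device into several virtual ones can be sketched with a toy allocator. The class and resource figures are illustrative; a real hypervisor also schedules, isolates, and oversubscribes:

```python
class Host:
    """Toy view of virtualization: one physical machine carved into virtual ones."""
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = []

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            return False                  # host has no idle capacity left
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms.append(name)
        return True

host = Host(cpus=16, ram_gb=64)
print(host.create_vm("web", 4, 8))     # True
print(host.create_vm("db", 8, 32))     # True
print(host.create_vm("batch", 8, 32))  # False: only 4 CPUs / 24 GB remain
```

The idle-resource point in the text corresponds to `free_cpus`/`free_ram`: virtualization makes that slack visible and allocatable.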
Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.
Cloud computing shares characteristics with:
Client–server model – Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
Grid computing – A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Fog computing
– Distributed computing paradigm that provides data, compute, storage
and application services closer to the client or near-user edge devices,
such as network routers. Furthermore, fog computing handles data at the
network level, on smart devices and on the end-user client-side (e.g.
mobile devices), instead of sending data to a remote location for
processing.
Utility computing – The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer
– A distributed architecture without the need for central coordination.
Participants are both suppliers and consumers of resources (in contrast
to the traditional client-server model).
Cloud sandbox
– A live, isolated computer environment in which a program, code or
file can run without affecting the application in which it runs.
In computing, virtual memory, or virtual storage, is enabled by a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".
The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit
(MMU), automatically translates virtual addresses to physical
addresses. Software within the operating system may extend these
capabilities, utilizing, e.g., disk storage,
to provide a virtual address space that can exceed the capacity of real
memory and thus reference more memory than is physically present in the
computer.
The primary benefits of virtual memory include freeing
applications from having to manage a shared memory space, ability to
share memory used by libraries
between processes, increased security due to memory isolation, and
being able to conceptually use more memory than might be physically
available, using the technique of paging or segmentation.
Properties
Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization can be considered a generalization of the concept of virtual memory.
Usage
Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations.
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. Some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even later ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Embedded systems
and other special-purpose computer systems that require very fast
and/or very consistent response times may opt not to use virtual memory
due to decreased determinism; virtual memory systems trigger unpredictable traps
that may produce unwanted and unpredictable delays in response to
input, especially if the trap requires that data be read into main
memory from secondary memory. The hardware to translate virtual
addresses to physical addresses typically requires a significant chip
area to implement, and not all chips used in embedded systems include
that hardware, which is another reason some of those systems do not use
virtual memory.
History
During the 1950s, 1960s, and early 1970s, computer memory was very
expensive. Larger programs for which the available memory was not large
enough to hold all the code and data had to contain logic for managing
primary and secondary storage, such as overlaying.
Virtual memory was therefore introduced not only to extend primary
memory, but to make such an extension as easy as possible for
programmers to use.
The University of Manchester Atlas Computer was the first computer to use true virtual memory.
The first true virtual memory system was that implemented at the University of Manchester to create a one-level storage system as part of the Atlas Computer. It used a paging
mechanism to map the virtual addresses available to the programmer onto
the real memory that consisted of 16,384 words of primary core memory with an additional 98,304 words of secondary drum memory. The addition of virtual memory into the Atlas also eliminated a looming
programming problem: planning and scheduling data transfers between
main and secondary memory and recompiling programs for each change of
size of main memory. The first Atlas was commissioned in 1962 but working prototypes of paging had been developed by 1959.
A claim that the concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation, does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 10⁵ words which mapped exactly onto the 10⁵
words of the drums, i.e. the addresses were real addresses and there
was no form of indirect mapping, a key feature of virtual memory. What
Güntsch did invent was a form of cache memory,
since his high-speed memory was intended to contain a copy of some
blocks of code or data taken from the drums. Indeed, he wrote (as quoted
in translation):
"The programmer need not respect the existence of the primary memory
(he need not even know that it exists), for there is only one sort of
addresses [sic]
by which one can program as if there were only one storage." This is
exactly the situation in computers with cache memory, one of the
earliest commercial examples of which was the IBM System/360 Model 85. In the Model 85 all addresses were real addresses referring to the main
core store. A semiconductor cache store, invisible to the user, held
the contents of parts of the main store in use by the currently
executing program. This is exactly analogous to Güntsch's system,
designed as a means to improve performance, rather than to solve the
problems involved in multi-programming.
As early as 1958, Robert S. Barton, working at Shell Research,
suggested that main storage should be allocated automatically rather
than have the programmer being concerned with overlays from secondary
memory, in effect virtual memory. By 1960 Barton was lead architect on the Burroughs B5000 project. From 1959 to 1961, W. R. Lonergan was manager of the Burroughs Product Planning Group which included Barton, Donald Knuth
as consultant, and Paul King. In May 1960, UCLA ran a two-week seminar
"Using and Exploiting Giant Computers" to which Paul King and two others
were sent. Stan Gill gave a presentation on virtual memory in the Atlas
I computer. Paul King took the ideas back to Burroughs and it was
determined that virtual memory should be designed into the core of the
B5000. Burroughs Corporation released the B5000 in 1964 as the first commercial computer with virtual memory.
IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it.
Before virtual memory could be implemented in mainstream
operating systems, many problems had to be addressed. Dynamic address
translation required expensive and difficult-to-build specialized
hardware; initial implementations slowed down access to memory slightly. There were worries that new system-wide algorithms utilizing secondary
storage would be less effective than previously used
application-specific algorithms. By 1969, the debate over virtual memory
for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems.
Several mainframe operating systems of the 1960s added support for virtual memory.
The introduction of virtual memory provided an ability for software
systems with large memory demands to run on computers with less real
memory. The savings from this provided a strong incentive to switch to
virtual memory for all systems. The additional capability of providing
virtual address spaces added another level of security and reliability,
thus making virtual memory even more attractive to the marketplace.
Throughout the 1970s, the IBM System/370
series running their virtual-storage based operating systems provided a
means for business users to migrate multiple older systems into fewer,
more powerful, mainframes that had improved price/performance. The first
minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault.
However, loading segment descriptors was an expensive operation,
causing operating system designers to rely strictly on paging rather
than a combination of paging and segmentation.
Nearly all current implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems are usually at least 4 kilobytes in size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes.
Page tables
Page tables are used to translate the virtual addresses seen by the application into physical addresses used by the hardware to process instructions; such hardware that handles this specific translation is often known as the memory management unit.
Each entry in the page table holds a flag indicating whether the
corresponding page is in real memory or not. If it is in real memory,
the page table entry will contain the real memory address at which the
page is stored. When a reference is made to a page by the hardware, if
the page table entry for the page indicates that it is not currently in
real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
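The translation and page-fault behavior just described can be modeled in a few lines, assuming 4 KiB pages and a dictionary standing in for the hardware page table (the structure is a toy, not any real MMU format):

```python
PAGE_SIZE = 4096  # 4 KiB pages, as on most contemporary systems

# Toy page table: virtual page number -> (present flag, real frame number)
page_table = {0: (True, 7), 1: (False, None), 2: (True, 3)}

class PageFault(Exception):
    """Stands in for the exception that invokes the paging supervisor."""

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)  # split into page + offset
    present, frame = page_table.get(page, (False, None))
    if not present:
        raise PageFault(f"page {page} is not in real memory")
    return frame * PAGE_SIZE + offset                  # real address

print(hex(translate(0x2010)))  # page 2 maps to frame 3: 0x3010
```

Accessing an address on page 1 would raise `PageFault`, which in a real system transfers control to the paging supervisor rather than to the application.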
Systems can have, e.g., one page table for the whole system,
separate page tables for each address space or process, separate page
tables for each segment; similarly, systems can have, e.g., no segment
table, one segment table for the whole system, separate segment tables
for each address space or process, separate segment tables for each region in a tree of region tables for each address space or process. If there is only one page table, different applications running at the same time
use different parts of a single range of virtual addresses. If there
are multiple page or segment tables, there are multiple virtual address
spaces and concurrent applications with separate page tables redirect to
different real addresses.
Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation.
Paging supervisor
This part of the operating system creates and manages page tables and
lists of free page frames. In order to ensure that there will be enough
free page frames to quickly resolve page faults, the system may
periodically steal allocated page frames, using a page replacement algorithm, e.g., a least recently used
(LRU) algorithm. Stolen page frames that have been modified are written
back to auxiliary storage before they are added to the free queue. On
some systems the paging supervisor is also responsible for managing
translation registers that are not automatically loaded from page
tables.
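The LRU page-stealing policy mentioned above can be sketched with an ordered map. The class is a toy, not any real kernel's implementation; real systems usually approximate LRU with reference bits:

```python
from collections import OrderedDict

class LRUFrames:
    """Toy page-frame pool that steals the least recently used frame when full."""
    def __init__(self, num_frames: int):
        self.frames = OrderedDict()  # page -> contents, least recently used first
        self.capacity = num_frames

    def reference(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)  # mark as most recently used
            return None
        victim = None
        if len(self.frames) >= self.capacity:
            victim, _ = self.frames.popitem(last=False)  # steal the LRU frame
        self.frames[page] = "loaded"
        return victim  # a modified victim would be written back first

pool = LRUFrames(3)
for p in [1, 2, 3, 1, 4]:
    victim = pool.reference(p)
print(victim)  # 2 was least recently used when page 4 arrived
```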
Typically, a page fault that cannot be resolved results in an
abnormal termination of the application. However, some systems allow the
application to have exception handlers for such errors. The paging
supervisor may handle a page fault exception in several different ways,
depending on the details:
If the virtual address is invalid, the paging supervisor treats it as an error.
If the page is valid and the page information is not loaded into the
MMU, the page information will be stored into one of the page
registers.
If the page is uninitialized, a new page frame may be assigned and cleared.
If there is a stolen page frame containing the desired page, that page frame will be reused.
For a fault due to a write attempt into a read-protected page, if it
is a copy-on-write page then a free page frame will be assigned and the
contents of the old page copied; otherwise it is treated as an error.
If the virtual address is a valid page in a memory-mapped file or a
paging file, a free page frame will be assigned and the page read in.
In most cases, there will be an update to the page table, possibly
followed by purging the Translation Lookaside Buffer (TLB), and the
system restarts the instruction that causes the exception.
If the free page frame queue is empty then the paging supervisor must free a page frame using the same page-replacement algorithm used for page stealing.
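The cases above can be sketched as a dispatcher. The entry fields here are hypothetical stand-ins for real page-table-entry bits, and the case order is simplified:

```python
def resolve_fault(entry: dict) -> str:
    """Toy dispatch over the page-fault cases described above (fields hypothetical)."""
    if not entry.get("valid"):
        return "error: invalid virtual address"
    if entry.get("stolen_frame_intact"):
        return "reuse stolen frame"      # contents still in memory, no I/O needed
    if entry.get("write_to_readonly"):
        return ("copy page" if entry.get("copy_on_write")
                else "error: protection fault")
    if not entry.get("initialized"):
        return "assign cleared frame"    # fresh zeroed page frame
    return "read page from backing store"  # memory-mapped or paging file

print(resolve_fault({"valid": True, "write_to_readonly": True,
                     "copy_on_write": True}))  # copy page
```

In every non-error case the supervisor would then update the page table, possibly purge the TLB, and restart the faulting instruction, as the text describes.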
Pinned pages
Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as those for I/O completion and page faults.
If the pages containing these pointers or the code that they invoke
were pageable, interrupt-handling would become far more complex and
time-consuming, particularly in the case of page fault interruptions.
Hence, some part of the page table structures is not pageable.
Some pages may be pinned for short periods of time, others may be
pinned for long periods of time, and still others may need to be
permanently pinned. For example:
The paging supervisor code and drivers for secondary storage
devices on which pages reside must be permanently pinned, as otherwise
paging would not even work because the necessary code would not be
available.
Timing-dependent components may be pinned to avoid variable paging delays.
Data buffers that are accessed directly by peripheral devices that use direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress because such devices and the buses
to which they are attached expect to find data buffers located at
physical memory addresses; regardless of whether the bus has a memory management unit for I/O,
transfers cannot be stopped if a page fault occurs and then restarted
when the page fault has been processed. For example, the data could come
from a measurement sensor, and real-time data that is lost
because of a page fault cannot be recovered.
In IBM's operating systems for System/370
and successor systems, the term is "fixed", and such pages may be
long-term fixed, or may be short-term fixed, or may be unfixed (i.e.,
pageable). System control structures are often long-term fixed (measured
in wall-clock time, i.e., time measured in seconds, rather than time
measured in fractions of one second) whereas I/O buffers are usually
short-term fixed (usually measured in significantly less than wall-clock
time, possibly for tens of milliseconds). Indeed, the OS has a special
facility for "fast fixing" these short-term fixed data buffers (fixing
which is performed without resorting to a time-consuming Supervisor Call instruction).
Multics used the term "wired". OpenVMS and Windows
refer to pages temporarily made nonpageable (as for I/O buffers) as
"locked", and simply "nonpageable" for those that are never pageable.
The Single UNIX Specification also uses the term "locked" in the specification for mlock(), as do the mlock() man pages on many Unix-like systems.
Virtual-real operation
In OS/VS1
and similar OSes, some parts of systems memory are managed in
"virtual-real" mode, called "V=R". In this mode every virtual address
corresponds to the same real address. This mode is used for interrupt
mechanisms, for the paging supervisor and page tables in older systems,
and for application programs using non-standard I/O management. For
example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and
virtual-fixed).
Thrashing
When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an unsuitably large amount of
time transferring pages to and from a backing store, hence slowing down
useful work. A task's working set
is the minimum set of pages that should be in memory in order for it to
make useful progress. Thrashing occurs when there is insufficient
memory available to store the working sets of all active programs.
Adding real memory is the simplest response, but improving application
design, scheduling, and memory usage can help. Another solution is to
reduce the number of active tasks on the system. This reduces demand on
real memory by swapping out the entire working set of one or more
processes.
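The thrashing condition above admits a back-of-the-envelope check: compare the sum of active working sets against real memory. Figures are illustrative:

```python
REAL_MEMORY_PAGES = 1000  # illustrative size of real memory, in pages

def will_thrash(working_sets: dict) -> bool:
    """Thrashing is likely when the active working sets exceed real memory."""
    return sum(working_sets.values()) > REAL_MEMORY_PAGES

active = {"editor": 300, "compiler": 450, "browser": 400}
print(will_thrash(active))  # True: 1150 pages demanded, 1000 available
active.pop("browser")       # swap out one task's entire working set
print(will_thrash(active))  # False: 750 pages now fit
```

Removing one task entirely, rather than shrinking every task a little, is exactly the "reduce the number of active tasks" remedy in the text.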
A system thrashing is often a result of a sudden spike in page demand from a small number of running programs. Swap-token is a lightweight and dynamic thrashing protection mechanism. The basic
idea is to set a token in the system, which is randomly given to a
process that has page faults when thrashing happens. The process that
has the token is given a privilege to allocate more physical memory
pages to build its working set, which is expected to quickly finish its
execution and to release the memory pages to other processes. A time
stamp is used to hand over the token from one process to the next. The first version of
swap-token was implemented in Linux 2.6. The second version, called preempt swap-token, is also in Linux 2.6. In this updated swap-token implementation, a priority counter is set
for each process to track the number of swapped-out pages. The token is
always given to the process with the highest priority, i.e., the highest
number of swapped-out pages. The length of the time stamp is not a constant
but is determined by the priority: the more swapped-out
pages a process has, the longer its time stamp will be.
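The preempt swap-token policy can be sketched as picking the faulting process with the most swapped-out pages and scaling its holding time with that count. The base time and function below are illustrative, not the Linux implementation:

```python
def grant_token(faulting: dict, base_ms: int = 100):
    """faulting maps process name -> number of swapped-out pages (its priority)."""
    winner = max(faulting, key=faulting.get)      # highest swap-out count wins
    hold_time_ms = base_ms * faulting[winner]     # longer time stamp = higher priority
    return winner, hold_time_ms

print(grant_token({"proc-a": 3, "proc-b": 9, "proc-c": 5}))  # ('proc-b', 900)
```

While `proc-b` holds the token it may grow its working set unimpeded, finish, and release its pages to the others.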
Segmented virtual memory
Some systems, such as the Burroughs B5500 and the current Unisys MCP systems, use segmentation instead of paging, dividing virtual address spaces
into variable-length segments. Using segmentation matches the allocated
memory blocks to the logical needs and requests of the programs, rather
than the physical view of a computer, although pages themselves are an
artificial division in memory. The designers of the B5000 would have
regarded the arbitrary, fixed size of pages as Procrustean, an argument they later applied to the exact data sizes supported in the B1700.
In the Burroughs and Unisys systems, each memory segment is described by a master descriptor
which is a single absolute descriptor which may be referenced by other
relative (copy) descriptors, effecting sharing either within a process
or between processes. Descriptors are central to the working of virtual
memory in MCP systems. Descriptors contain not only the address of a
segment, but the segment length and status in virtual memory indicated
by the 'p-bit' or 'presence bit' which indicates if the address is to a
segment in main memory or to a secondary-storage block. When a
non-resident segment (p-bit is off) is accessed, an interrupt occurs to
load the segment from secondary storage at the given address, or if the
address itself is 0 then allocate a new block. In the latter case, the
length field in the descriptor is used to allocate a segment of that
length.
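The p-bit mechanism described above can be modelled with a short Python sketch. The field names and the `memory_load` callback are illustrative stand-ins, not actual MCP structures; the sketch only shows the two cases the text describes: loading a non-resident segment from its secondary-storage address, or allocating a fresh block sized by the descriptor's length field when the address is 0.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    """Sketch of an MCP-style master descriptor (field names are illustrative)."""
    length: int            # segment length, used to size a fresh allocation
    present: bool = False  # the 'p-bit': is the segment in main memory?
    address: int = 0       # secondary-storage block address; 0 = none allocated yet
    data: list = None      # in-memory segment contents once present

def access(desc, offset, memory_load):
    """Read a word of a segment; a clear p-bit triggers a presence
    fault, handled here by loading or allocating the segment."""
    if not desc.present:
        if desc.address == 0:
            # No backing block: allocate a new segment of the given length.
            desc.data = [0] * desc.length
        else:
            # Load the segment from secondary storage (loader is a stand-in).
            desc.data = memory_load(desc.address, desc.length)
        desc.present = True  # set the p-bit
    return desc.data[offset]

seg = Descriptor(length=8)
access(seg, 3, memory_load=lambda addr, n: [0] * n)  # faults, allocates, reads word 3
```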
A problem analogous to thrashing in a segmented scheme is checkerboarding, in which all free areas of memory become too small to satisfy requests for new
segments. The solution is to perform memory compaction to pack all used
segments together and create a large free block from which further
segments may be allocated. Since there is a single master descriptor for
each segment, the new block address needs to be updated in only that one
descriptor; all copies refer to the master.
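A minimal Python sketch of this compaction step follows. The dict-based "descriptors" and the word-addressed memory are illustrative assumptions; the key property shown is the one stated above: each segment's address is updated exactly once, in its master descriptor, leaving one large free block at the top of memory.

```python
def compact(memory_size, masters):
    """Memory-compaction sketch: slide every in-use segment down to the
    low end of memory, updating only its master descriptor's address.
    Copy descriptors need no change, since they refer to the master.
    `masters` is a list of {'addr': int, 'len': int} master descriptors."""
    next_free = 0
    for desc in sorted(masters, key=lambda d: d["addr"]):
        desc["addr"] = next_free        # the single update per segment
        next_free += desc["len"]
    return memory_size - next_free      # size of the resulting free block

# Three segments scattered through a 64-word memory (hypothetical layout).
masters = [{"addr": 10, "len": 4}, {"addr": 30, "len": 6}, {"addr": 50, "len": 2}]
free_block = compact(64, masters)
print(free_block)  # → 52 contiguous words free after compaction
```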
Paging is not free from fragmentation, but the fragmentation is internal to pages (internal fragmentation).
If a requested block is smaller than a page, some space in the page is
wasted; if a block needs slightly more than a whole number of pages, a
small area of a further page is used and most of it is wasted.
Fragmentation thus becomes a problem passed on to programmers, who may
well distort their programs to fit particular page sizes. With
segmentation, the fragmentation is external to segments (external fragmentation)
and therefore a system problem, consistent with the original aim of
virtual memory: to relieve programmers of such memory considerations. In
multi-processing systems, optimal operation of the system depends on the
mix of independent processes at any time. Hybrid schemes of
segmentation and paging may be used.
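Internal fragmentation under paging is easy to quantify: each allocation is rounded up to a whole number of pages. The request sizes below are hypothetical; under segmentation the same requests would be satisfied exactly, moving the waste into the gaps between segments instead.

```python
def internal_fragmentation(block_sizes, page_size):
    """Total space wasted by rounding each requested block up to whole
    pages (a sketch of paging's internal fragmentation)."""
    waste = 0
    for size in block_sizes:
        pages = -(-size // page_size)        # ceiling division
        waste += pages * page_size - size    # unused tail of the last page
    return waste

# A 4097-byte request is the worst case: it needs two 4096-byte pages
# and wastes almost an entire page.
print(internal_fragmentation([100, 4097, 300], page_size=4096))  # → 11887
```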
The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used.
Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, segmentation providing memory protection.
In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit
linear, paged address space. Segments can be moved in and out of that
space; pages there can "page" in and out of main memory, providing two
levels of virtual memory; few if any operating systems do so, instead
using only paging. Early non-hardware-assisted x86 virtualization
solutions combined paging and segmentation because x86 paging offers
only two protection domains whereas a VMM, guest OS or guest application
stack needs three.
The difference between paging and segmentation systems is not only
about how memory is divided; segmentation is visible to user processes
as part of the memory-model semantics. Hence, instead of memory that
looks like a single large space, it is structured into multiple spaces.
This difference has important consequences; a segment is not a
page with variable length or a simple way to lengthen the address space.
Segmentation can provide a single-level memory model in which there is
no distinction between process memory and the file system: a process's
potential address space consists only of a list of segments (files)
mapped into it.
This is not the same as the mechanisms provided by calls such as mmap and Win32's
MapViewOfFile, because inter-file pointers do not work when mapping
files into semi-arbitrary places. In Multics, a file (or a segment from a
multi-segment file) is mapped into a segment in the address space, so
files are always mapped at a segment boundary. A file's linkage section
can contain pointers for which an attempt to load the pointer into a
register or make an indirect reference through it causes a trap. The
unresolved pointer contains an indication of the name of the segment to
which the pointer refers and an offset within the segment; the handler
for the trap maps the segment into the address space, puts the segment
number into the pointer, changes the tag field in the pointer so that it
no longer causes a trap, and returns to the code where the trap
occurred, re-executing the instruction that caused the trap. This eliminates the need for a linker completely and works when different processes map the same file into different places in their private address spaces.
Address space swapping
Some operating systems provide for swapping entire address spaces,
in addition to whatever facilities they have for paging and
segmentation. When this occurs, the OS writes those pages and segments
currently in real memory to swap files. In a swap-in, the OS reads back
the data from the swap files but does not automatically read back pages
that had been paged out at the time of the swap-out operation.
IBM's MVS, from OS/VS2 Release 2 through z/OS,
provides for marking an address space as unswappable; doing so does not
pin any pages in the address space. This can be done for the duration
of a job by entering the name of an eligible main program in the Program Properties Table with an unswappable flag.
In addition, privileged code can temporarily make an address space
unswappable using a SYSEVENT Supervisor Call instruction (SVC); certain changes in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.
Swapping does not necessarily require memory management hardware,
if, for example, multiple jobs are swapped in and out of the same area
of storage.