
Wednesday, July 26, 2023

Memory management

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Memory_management

Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.

Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.

In some operating systems, e.g. OS/360 and successors, memory is managed by the operating system. In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level.

Memory management within an address space is generally categorized as either manual memory management or automatic memory management.

Manual memory management

An example of external fragmentation

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.

Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, which invalidates their use for an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations. This is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "memory leaks").
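As a concrete illustration of heap allocation and release from C (a minimal sketch using the standard library allocator, not tied to any particular allocator implementation):

/* Request a block from the heap, use it, and release it so the allocator
   can reuse the space. Omitting the free() would be a memory leak; freeing
   twice or using the pointer afterwards corrupts the allocator's metadata. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1000;
    int *values = malloc(n * sizeof *values);   /* allocate from the heap */
    if (!values) return 1;                      /* allocation can fail */

    for (size_t i = 0; i < n; i++) values[i] = (int)i;
    printf("last value: %d\n", values[n - 1]);

    free(values);                               /* return the block for reuse */
    return 0;
}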

Efficiency

The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software).

Implementations

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:

Fixed-size blocks allocation

Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead this method can substantially improve performance for objects that need frequent allocation / de-allocation and is often used in video games.
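A minimal sketch of such a memory pool, assuming a statically sized arena; the names pool_init, pool_alloc, and pool_free are illustrative, not a standard API:

/* Fixed-size block (memory pool) allocator: a static arena carved into
   equal blocks threaded onto a free list. Allocation and release are both
   constant time, which is why pools suit frequent alloc/free patterns. */
#include <stdio.h>

#define BLOCK_SIZE  64
#define BLOCK_COUNT 128

typedef union block {
    union block *next;                 /* valid while the block is free */
    unsigned char payload[BLOCK_SIZE]; /* valid while the block is in use */
} block_t;

static block_t arena[BLOCK_COUNT];
static block_t *free_list;

static void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < BLOCK_COUNT; i++) {   /* thread all blocks onto the list */
        arena[i].next = free_list;
        free_list = &arena[i];
    }
}

static void *pool_alloc(void) {
    if (!free_list) return NULL;       /* pool exhausted */
    block_t *b = free_list;
    free_list = b->next;               /* pop the head of the free list */
    return b->payload;
}

static void pool_free(void *p) {
    block_t *b = (block_t *)p;         /* payload sits at offset 0 */
    b->next = free_list;               /* push back onto the free list */
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}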

Buddy blocks

In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and split. One of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the correspondingly larger-sized buddy-block list.
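The two calculations at the heart of a binary buddy allocator can be sketched as follows; the helper names and the pool offsets are illustrative, and a real allocator would add the per-size free lists and the split/merge loop described above:

/* Rounding a request up to the next power of two, and locating a block's
   "buddy" by XOR-ing its offset (from the start of the managed pool) with
   its size. A block and its buddy can be merged when both are free. */
#include <stdio.h>

static size_t next_pow2(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

static size_t buddy_of(size_t offset, size_t size) {
    return offset ^ size;
}

int main(void) {
    size_t request = 3000;
    size_t block = next_pow2(request);          /* rounds up to 4096 */
    printf("request %zu -> block of %zu bytes\n", request, block);

    /* The 4 KiB block at offset 8192 pairs with the one at offset 12288;
       if both are free they merge into an 8 KiB block at offset 8192. */
    printf("buddy of offset 8192 is %zu\n", buddy_of(8192, 4096));
    return 0;
}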

Slab allocation

This memory allocation mechanism preallocates memory chunks suitable to fit objects of a certain type or size. These chunks are called caches and the allocator only has to keep track of a list of free cache slots. Constructing an object will use any one of the free cache slots and destructing an object will add a slot back to the free cache slot list. This technique alleviates memory fragmentation and is efficient as there is no need to search for a suitable portion of memory, as any open slot will suffice.
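A toy sketch of a slab-style cache for a single object type, with illustrative names (cache_init, cache_alloc, cache_free); real slab allocators, such as the one in the Linux kernel, are considerably more elaborate and keep objects in a constructed state rather than simply zeroing them:

/* Slots for one object type are preallocated; a stack of free slot indices
   records which slots are available, so allocation never searches. */
#include <stdio.h>
#include <string.h>

#define SLOTS 32

struct conn { int fd; char peer[32]; };     /* example object type */

static struct conn slab[SLOTS];
static int free_slots[SLOTS];
static int free_top;

static void cache_init(void) {
    free_top = SLOTS;
    for (int i = 0; i < SLOTS; i++) free_slots[i] = i;
}

static struct conn *cache_alloc(void) {
    if (free_top == 0) return NULL;         /* cache exhausted */
    struct conn *obj = &slab[free_slots[--free_top]];
    memset(obj, 0, sizeof *obj);            /* "construct" the object */
    return obj;
}

static void cache_free(struct conn *obj) {
    free_slots[free_top++] = (int)(obj - slab);   /* return slot to the cache */
}

int main(void) {
    cache_init();
    struct conn *c = cache_alloc();
    c->fd = 3;
    cache_free(c);
    return 0;
}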

Stack allocation

Many Unix-like systems as well as Microsoft Windows implement a function called alloca for dynamically allocating stack memory in a way similar to the heap-based malloc. A compiler typically translates it to inlined instructions manipulating the stack pointer. Although there is no need to manually free memory allocated this way, as it is automatically released when the function that called alloca returns, there is a risk of overflow. And since alloca is an ad hoc extension seen in many systems but never in POSIX or the C standard, its behavior in case of a stack overflow is undefined.

A safer version of alloca called _malloca, which reports errors, exists on Microsoft Windows. It requires the use of _freea. gnulib provides an equivalent interface, albeit instead of throwing an SEH exception on overflow, it delegates to malloc when an overlarge size is detected. A similar feature can be emulated using manual accounting and size-checking, such as in the uses of alloca_account in glibc.
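A small, hedged example of alloca on a Unix-like system; alloca.h is the usual header there, while on Windows the analogous calls are _alloca and _malloca in malloc.h:

/* The buffer lives in the current stack frame and disappears when the
   function returns; there is no free() to call. */
#include <alloca.h>
#include <stdio.h>
#include <string.h>

static void greet(const char *name) {
    size_t len = strlen(name) + 8;
    char *buf = alloca(len);               /* freed automatically on return */
    snprintf(buf, len, "Hello, %s", name);
    puts(buf);
}                                          /* buf's storage vanishes here */

int main(void) {
    greet("world");
    /* Danger: alloca(huge_size) can silently overflow the stack; behavior
       in that case is undefined, as noted above. */
    return 0;
}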

Automatic memory management

In many programming language implementations, the runtime environment for the program automatically allocates memory in the call stack for non-static local variables of a subroutine, called automatic variables, when the subroutine is called, and automatically releases that memory when the subroutine is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other subroutines. The automatic allocation of local variables makes recursion possible, to a depth limited by available memory.
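The difference between an automatic local variable and one declared to retain its value between invocations can be seen in a short C example (illustrative only):

#include <stdio.h>

void counter(void) {
    int automatic = 0;          /* new instance on the stack at every call */
    static int persistent = 0;  /* retains its value between invocations */
    automatic++;
    persistent++;
    printf("automatic=%d persistent=%d\n", automatic, persistent);
}

int main(void) {
    counter();   /* prints automatic=1 persistent=1 */
    counter();   /* prints automatic=1 persistent=2 */
    return 0;
}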

Garbage collection

Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable in a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, garbage collection does require memory resources of its own, and can compete with the application program for processor time.
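As one concrete example of garbage-collected allocation from C, the following sketch assumes the Boehm-Demers-Weiser conservative collector (libgc) is installed and linked with -lgc; it is a third-party library, not part of the C standard:

/* Blocks obtained with GC_MALLOC are never freed explicitly; the collector
   reclaims those that become unreachable. */
#include <gc.h>
#include <stdio.h>

int main(void) {
    GC_INIT();
    for (int i = 0; i < 1000000; i++) {
        int *p = GC_MALLOC(1024 * sizeof *p);  /* no matching free() */
        p[0] = i;                              /* block becomes garbage on the
                                                  next loop iteration */
    }
    printf("collected heap size: %lu bytes\n",
           (unsigned long)GC_get_heap_size());
    return 0;
}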

Systems with virtual memory

Virtual memory is a method of decoupling the memory organization from the physical hardware. The applications operate on memory via virtual addresses. Each attempt by the application to access a particular virtual memory address results in the virtual memory address being translated to an actual physical address. In this way the addition of virtual memory enables granular control over memory systems and methods of access.

In virtual memory systems the operating system limits how a process can access the memory. This feature, called memory protection, can be used to disallow a process to read or write to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.

Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
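A minimal POSIX shared-memory sketch using shm_open and mmap; the region name "/demo_region" and the one-page size are placeholders, and in a real IPC arrangement a second process would map the same name and coordinate access with a semaphore or similar:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_region";            /* hypothetical name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from process A");   /* visible to any process that maps
                                            the same region name */
    printf("%s\n", p);

    munmap(p, 4096);
    close(fd);
    shm_unlink(name);                    /* remove the region when done */
    return 0;
}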

Memory is usually classified by access rate into primary storage and secondary storage. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.

Memory management in OS/360 and successors

IBM System/360 does not support virtual memory. Memory isolation of jobs is optionally accomplished using protection keys, assigning the storage for each job a different key: 0 for the supervisor, or 1–15 for jobs. Memory management in OS/360 is a supervisor function. Storage is requested using the GETMAIN macro and freed using the FREEMAIN macro, which result in a call to the supervisor (SVC) to perform the operation.

In OS/360 the details vary depending on how the system is generated, e.g., for PCP, MFT, MVT.

In OS/360 MVT, suballocation within a job's region or the shared System Queue Area (SQA) is based on subpools, areas a multiple of 2 KB in size—the size of an area protected by a protection key. Subpools are numbered 0–255. Within a region subpools are assigned either the job's storage protection or the supervisor's key, key 0. Subpools 0–127 receive the job's key. Initially only subpool zero is created, and all user storage requests are satisfied from subpool 0, unless another is specified in the memory request. Subpools 250–255 are created by memory requests by the supervisor on behalf of the job. Most of these are assigned key 0, although a few get the key of the job. Subpool numbers are also relevant in MFT, although the details are much simpler. MFT uses fixed partitions redefinable by the operator instead of dynamic regions and PCP has only a single partition.

Each subpool is mapped by a list of control blocks identifying allocated and free memory blocks within the subpool. Memory is allocated by finding a free area of sufficient size, or by allocating additional blocks in the subpool, up to the region size of the job. It is possible to free all or part of an allocated memory area.

The details for OS/VS1 are similar to those for MFT and for MVT; the details for OS/VS2 are similar to those for MVT, except that the page size is 4 KiB. For both OS/VS1 and OS/VS2 the shared System Queue Area (SQA) is nonpageable.

In MVS the address space includes an additional pageable shared area, the Common Storage Area (CSA), and an additional private area, the System Work Area (SWA). Also, storage keys 0–7 are all reserved for use by privileged code.

VM (operating system)

From Wikipedia, the free encyclopedia
The default login screen on VM/370 Release 6.

VM (often: VM/CMS) is a family of IBM virtual machine operating systems used on IBM mainframes System/370, System/390, zSeries, System z and compatible systems, including the Hercules emulator for personal computers.

The following versions are known:

Virtual Machine Facility/370
VM/370, released in 1972, is a System/370 reimplementation of the earlier CP/CMS operating system.
VM/370 Basic System Extensions Program Product
VM/BSE (BSEPP) is an enhancement to VM/370 that adds support for more devices (such as 3370-type fixed-block-architecture DASD drives), improvements to the CMS environment (such as an improved editor), and some stability enhancements to CP.
VM/370 System Extensions Program Product
VM/SE (SEPP) is an enhancement to VM/370 that includes the facilities of VM/BSE, as well as a few additional fixes and features.
Virtual Machine/System Product
VM/SP, a milestone version, replaces VM/370, VM/BSE and VM/SE. Release 1 added EXEC2 and XEDIT System Product Editor; Release 3 added REXX; Release 6 added the shared filesystem.
Virtual Machine/System Product High Performance Option 
VM/SP HPO adds additional device support and functionality to VM/SP, and allows certain S/370 machines that can utilize more than 16 MB of real storage to do so, up to 64 MB. This version was intended for users that would be running multiple S/370 guests at once.
Virtual Machine/Extended Architecture Migration Aid 
VM/XA MA is intended to ease the migration from MVS/370 to MVS/XA by allowing both to run concurrently on the same processor complex.
Virtual Machine/Extended Architecture System Facility 
VM/XA SF is an upgraded VM/XA MA with improved functionality and performance.
Virtual Machine/Extended Architecture System Product 
VM/XA SP is an upgraded VM/XA MA with improved functionality and performance, offered as a replacement for VM/SP HPO on machines supporting S/370-XA. It includes a version of CMS that can run in either S/370 or S/370-XA mode.
Virtual Machine/Enterprise Systems Architecture 
VM/ESA provides the facilities of VM/SP, VM/SP HPO and VM/XA SP. VM/ESA version 1 can run in S/370, ESA/370 or ESA/390 mode; it does not support S/370 XA mode. Version 2 only runs in ESA/390 mode. The S/370-capable versions of VM/ESA were actually their own separate version from the ESA/390 versions of VM/ESA, as the S/370 versions are based on the older VM/SP HPO codebase, and the ESA/390 versions are based on the newer VM/XA codebase.
z/VM, the last version still widely used as one of the main full virtualization solutions for the mainframe market. z/VM 4.4 was the last version that could run in ESA/390 mode; subsequent versions only run in z/Architecture mode.

The CMS in the name refers to the Conversational Monitor System, a component of the product that is a single-user operating system that runs in a virtual machine and provides conversational time-sharing in VM.

Overview

The heart of the VM architecture is the Control Program or hypervisor abbreviated CP, VM-CP and sometimes, ambiguously, VM. It runs on the physical hardware, and creates the virtual machine environment. VM-CP provides full virtualization of the physical machine – including all I/O and other privileged operations. It performs the system's resource-sharing, including device management, dispatching, virtual storage management, and other traditional operating system tasks. Each VM user is provided with a separate virtual machine having its own address space, virtual devices, etc., and which is capable of running any software that could be run on a stand-alone machine. A given VM mainframe typically runs hundreds or thousands of virtual machine instances. VM-CP began life as CP-370, a reimplementation of CP-67, itself a reimplementation of CP-40.

Running within each virtual machine is another operating system, a guest operating system. This might be:

  • CMS (Conversational Monitor System, renamed from the Cambridge Monitor System of CP/CMS). Most virtual machines run CMS, a lightweight, single-user operating system. Its interactive environment is comparable to that of a single-user PC, including a file system, programming services, device access, and command-line processing. (While an earlier version of CMS was uncharitably described as "CP/M on a mainframe", the comparison is an anachronism; the author of CP/M, Gary Kildall, was an experienced CMS user.)
  • GCS (Group Control System), which provides a limited simulation of the MVS API. IBM originally provided GCS in order to run VTAM without a service OS/VS1 virtual machine and VTAM Communications Network Application (VCNA). RSCS V2 also ran under GCS.
  • A mainstream operating system. IBM's mainstream operating systems (e.g., the MVS and DOS/VSE families, OS/VS1, TSS/370, or another layer of VM/370 itself (see below)) can be loaded and run without modification. The VM hypervisor treats guest operating systems as application programs with exceptional privileges – it prevents them from directly using privileged instructions (those which would let applications take over the whole system or significant parts of it), but simulates privileged instructions on their behalf. Most mainframe operating systems terminate a normal application which tries to usurp the operating system's privileges. The VM hypervisor can simulate several types of console terminals for the guest operating system, such as the hardcopy line-mode 3215, the graphical 3270 family, and the integrated console on newer System/390 and System Z machines. Other users can then access running virtual machines using the DIAL command at the logon screen, which will connect their terminal to the first available emulated 3270 device, or the first available 2703 device if the user is DIALing from a typewriter terminal.
  • Another copy of VM. A second level instance of VM can be fully virtualized inside a virtual machine. This is how VM development and testing is done (a second-level VM can potentially implement a different virtualization of the hardware). This technique was used to develop S/370 software before S/370 hardware was available, and it has continued to play a role in new hardware development at IBM. The literature cites practical examples of virtualization five levels deep. Levels of VM below the top are also treated as applications but with exceptional privileges.
  • A copy of the mainframe version of AIX or Linux. In the mainframe environment, these operating systems often run under VM, and are handled like other guest operating systems. (They can also run as 'native' operating systems on the bare hardware.) There was also the short-lived IX/370, as well as S/370 and S/390 versions of AIX (AIX/370 and AIX/ESA).
  • A specialized VM subsystem. Several non-CMS systems run within VM-CP virtual machines, providing services to CMS users such as spooling, interprocess communications, specialized device support, and networking. They operate behind the scenes, extending the services available to CMS without adding to the VM-CP control program. By running in separate virtual machines, they receive the same security and reliability protections as other VM users. Examples include:
    • RSCS (Remote Spooling and Communication Subsystem, aka VNET) – communication and information transfer facilities between virtual machines and other systems
    • RACF (Resource Access Control Facility) — a security system
    • Shared File System (SFS), which organizes shared files in a directory tree (the servers are commonly named "VMSERVx")
    • VTAM (Virtual Telecommunications Access Method) – a facility that provides support for a Systems Network Architecture network
    • PVM (VM/Pass-Through Facility) – a facility that provides remote access to other VM systems
    • TCPIP, SMTP, FTPSERVE, PORTMAP, VMNFS – a set of service machines that provide TCP/IP networking to VM/CMS
    • Db2 Server for VM – a SQL database system, the servers are often named similarly to "SQLMACH" and "SQLMSTR"
    • DIRMAINT – A simplified user directory management system (the directory is a listing of every account on the system, including virtual hardware configuration, user passwords, and minidisks).
    • MUMPS/VM — an implementation of the MUMPS database and programming language which could run as guest on VM/370. MUMPS/VM was introduced in 1987 and discontinued in 1991.
  • A user-written or modified operating system, such as National CSS's CSS or Boston University's VPS/VM.

Hypervisor interface

IBM coined the term hypervisor for the 360/65 and later used it for the DIAG handler of CP-67.

The Diagnose instruction ('83'x—no mnemonic) is a privileged instruction originally intended by IBM to perform "built-in diagnostic functions, or other model-dependent functions." IBM repurposed DIAG for "communication between a virtual machine and CP." The instruction contains two four-bit register numbers, called Rx and Ry, which can "contain operand storage addresses or return codes passed to the DIAGNOSE interface," and a two-byte code "that CP uses to determine what DIAGNOSE function to perform." A few of the available diagnose functions are listed below.

Hexadecimal code Function
0000 Store Extended-Identification Code
0004 Examine Real Storage
0008 Virtual Console Function—Execute a CP command
0018 Standard DASD I/O
0020 General I/O—Execute any valid CCW chain on a tape or disk device
003C Update the VM/370 directory
0058 3270 Virtual Console Interface—perform full-screen I/O on an IBM 3270 terminal
0060 Determine Virtual Machine Storage Size
0068 Virtual Machine Communication Facility (VMCF)

CMS use of DIAGNOSE

At one time, CMS was capable of running on a bare machine, as a true operating system (though such a configuration would be unusual). It now runs only as a guest OS under VM. This is because CMS relies on a hypervisor interface to VM-CP, to perform file system operations and request other VM services. This paravirtualization interface:

  • Provides a fast path to VM-CP, to avoid the overhead of full simulation.
  • Was first developed as a performance improvement for CP/CMS release 2.1, an important early milestone in CP's efficiency.
  • Uses a non-virtualized, model-dependent machine instruction as a signal between CMS and CP: DIAG (diagnose).

Minidisks

CMS starting up after the user MAINT (system administrator) has logged in.

CMS and other operating systems often have DASD requirements much smaller than the sizes of actual volumes. For this reason CP allows an installation to define virtual disks of any size up to the capacity of the device. For CKD volumes, a minidisk must be defined in full cylinders. A minidisk has the same attributes as the underlying real disk, except that it is usually smaller and the beginning of each minidisk is mapped to cylinder or block 0. The minidisk may be accessed using the same channel programs as the real disk.

A minidisk that has been initialized with a CMS file system is referred to as a CMS minidisk, although CMS is not the only system that can use them.

It is common practice to define full volume minidisks for use by such guest operating systems as z/OS instead of using DEDICATE to assign the volume to a specific virtual machine. In addition, "full-pack links" are often defined for every DASD on the system, and are owned by the MAINT userid. These are used for backing up the system using the DASD Dump/Restore program, where the entire contents of a DASD are written to tape (or another DASD) exactly.

The CMS editor on VM/370, editing a COBOL program source file.

Shared File System

VM/SP Release 6 introduced the Shared File System (SFS), which vastly improved CMS file storage capabilities. The CMS minidisk file system does not support directories (folders) at all; SFS does. SFS also introduces more granular security: with CMS minidisks, the system can be configured to allow or deny users read-only or read-write access to a disk, but individual files cannot carry their own permissions. SFS removes this limitation and also vastly improves performance.

The SFS is provided by service virtual machines. On a modern VM system, there are usually three that are required: VMSERVR, the "recovery machine" that does not actually serve any files; VMSERVS, the server for the VMSYS filepool; and VMSERVU, the server for the VMSYSU (user) filepool. The file pool server machines own several minidisks, usually including a CMS A-disk (virtual device address 191, containing the file pool configuration files), a control disk, a log disk, and any number of data disks that actually store user files.

Invoking the System/360 COBOL compiler on VM/370 CMS, then loading and running the program.

With modern VM versions, most of the system can be installed to SFS, leaving as minidisks only those absolutely necessary for the system to start up and those owned by the filepool server machines.

An example of a non-CMS guest operating system running under VM/370: DOS/VS Release 34. The DOS/VS system is now prompting the operator to enter a supervisor name to continue loading.

If a user account is configured to only use SFS (and does not own any minidisks), the user's A-disk will be FILEPOOL:USERID. and any subsequent directories that the user creates will be FILEPOOL:USERID.DIR1.DIR2.DIR3 where the equivalent UNIX file path is /dir1/dir2/dir3. SFS directories can have much more granular access controls when compared to minidisks (which, as mentioned above, can often only have a read password, a write password, and a multi-write password). SFS directories also solve the issues that may arise when two users write to the same CMS minidisk at the same time, which may cause disk corruption (as the CMS VM performing the writes may be unaware that another CMS instance is also writing to the minidisk).

The file pool server machines also serve a closely related filesystem: the Byte File System. BFS is used to store files on a UNIX-style filesystem. Its primary use is for the VM OpenExtensions POSIX environment for CMS. The CMS user virtual machines themselves communicate with the SFS server virtual machines through the IUCV mechanism.

History

The early history of VM is described in the articles CP/CMS and History of CP/CMS. VM/370 is a reimplementation of CP/CMS, and was made available in 1972 as part of IBM's System/370 Advanced Function announcement (which added virtual memory hardware and operating systems to the System/370 series). Early releases of VM through VM/370 Release 6 continued in open source through 1981, and today are considered to be in the public domain. This policy ended in 1977 with the chargeable VM/SE and VM/BSE upgrades and in 1980 with VM/System Product (VM/SP). However, IBM continued providing updates in source form for existing code for many years, although the upgrades to all but the free base required a license. As with CP-67, privileged instructions in a virtual machine cause a program interrupt, and CP simulated the behavior of the privileged instruction.

OS/VS1 starting under VM/370.

VM remained an important platform within IBM, used for operating system development and time-sharing use; but for customers it remained IBM's "other operating system". The OS and DOS families remained IBM's strategic products, and customers were not encouraged to run VM. Those that did formed close working relationships, continuing the community-support model of early CP/CMS users. In the meantime, the system struggled with political infighting within IBM over what resources should be available to the project, as compared with other IBM efforts. A basic problem with the system was seen at IBM's field sales level: VM/CMS demonstrably reduced the amount of hardware needed to support a given number of time-sharing users. IBM was, after all, in the business of selling computer systems.

Melinda Varian provides this fascinating quote, illustrating VM's unexpected success:

The marketing forecasts for VM/370 predicted that no more than one 168 would ever run VM during the entire life of the product. In fact, the first 168 delivered to a customer ran only CP and CMS. Ten years later, ten percent of the large processors being shipped from Poughkeepsie would be destined to run VM, as would a very substantial portion of the mid-range machines that were built in Endicott. Before fifteen years had passed, there would be more VM licenses than MVS licenses.

Using DASD Dump/Restore (DDR) to back up a VM/370 system.

A PC DOS version that runs CMS on the XT/370 (and later on the AT/370) is called VM/PC. VM/PC 1.1 was based on VM/SP release 3. When IBM introduced the P/370 and P/390 processor cards, a PC could now run full VM systems, including VM/370, VM/SP, VM/XA, and VM/ESA (these cards were fully compatible with S/370 and S/390 mainframes, and could run any S/370 operating system from the 31-bit era, e.g., MVS/ESA, VSE/ESA).

In addition to the base VM/SP releases, IBM also introduced VM/SP HPO (High Performance Option). This add-on (which is installed over the base VM/SP release) improved several key system facilities, including allowing the usage of more than 16 MB of storage (RAM) on supported models (such as the IBM 4381). With VM/SP HPO installed, the new limit was 64 MB; however, a single user (or virtual machine) could not use more than 16 MB. The functions of the spool filesystem were also improved, allowing 9900 spool files to be created per user, rather than 9900 for the whole system. The architecture of the spool filesystem was also enhanced: each spool file now had a unique user ID associated with it, and reader file control blocks were now held in virtual storage. The system could also be configured to deny certain users access to the vector facility (by means of user directory entries).

Releases of VM since VM/SP Release 1 supported multiprocessor systems. System/370 versions of VM (such as VM/SP and VM/SP HPO) supported a maximum of two processors, with the system operating in either UP (uniprocessor) mode, MP (multiprocessor) mode, or AP (attached processor) mode. AP mode is the same as MP mode, except the second processor lacks I/O capability. System/370-XA releases of VM (such as VM/XA) supported more. System/390 releases (such as VM/ESA) almost removed the limit entirely, and some modern z/VM systems can have as many as 80 processors. The per-VM limit for defined processors is 64.

When IBM introduced the System/370 Extended Architecture on the 3081, customers were faced with the need to run a production MVS/370 system while testing MVS/XA on the same machine. IBM's solution was VM/XA Migration Aid, which used the new Start Interpretive Execution (SIE) instruction to run the virtual machine. SIE automatically handled some privileged instructions and returned to CP for cases that it couldn't handle. The Processor Resource/System Manager (PR/SM) of the later 3090 also used SIE. There were several VM/XA products before it was eventually supplanted by VM/ESA and z/VM.

In addition to RSCS networking, IBM also provided users with VTAM networking. ACF/VTAM for VM was fully compatible with ACF/VTAM on MVS and VSE. Like RSCS, VTAM on VM ran under the specialized GCS operating system. However, VM also supported TCP/IP networking. In the late 1980s, IBM produced a TCP/IP stack for VM/SP and VM/XA. The stack supported IPv4 networks, and a variety of network interface systems (such as inter-mainframe channel-to-channel links, or a specialized IBM RT PC that would relay traffic out to a Token Ring or Ethernet network). The stack provided support for Telnet connections, from either simple line-mode terminal emulators or VT100-compatible emulators, or proper IBM 3270 terminal emulators. The stack also provided an FTP server. IBM also produced an optional NFS server for VM; early versions were rather primitive, but modern versions are much more advanced.

There was also a fourth networking option, known as VM/Pass-Through Facility (or more commonly called, PVM). PVM, like VTAM, allowed for connections to remote VM/CMS systems, as well as other IBM systems. If two VM/CMS nodes were linked together over a channel-to-channel link or bisync link (possibly using a dialup modem or leased line), a user could remotely connect to either system by entering "DIAL PVM" on the VM login screen, then entering the system node name (or choosing it from a list of available nodes). Alternatively, a user running CMS could use the PASSTHRU program that was installed alongside PVM, allowing for quick access to remote systems without having to log out of the user's session. PVM also supported accessing non-VM systems, by utilizing a 3x74 emulation technique. Later releases of PVM also featured a component that could accept connections from a SNA network.

VM was also the cornerstone operating system of BITNET, as the RSCS system available for VM provided a simple network that was easy to implement, and somewhat reliable. VM sites were interlinked by means of an RSCS VM on each VM system communicating with one another, and users could send and receive messages, files, and batch jobs through RSCS. The "NOTE" command used XEDIT to display a dialog to create an email, from which the user could send it. If the user specified an address in the form of user at node, the email file would be delivered to RSCS, which would then deliver it to the target user on the target system. If the site had TCP/IP installed, RSCS could work with the SMTP service machine to deliver notes (emails) to remote systems, as well as receive them. If the user specified user at some.host.name, the NOTE program would deliver the email to the SMTP service machine, which would then route it out to the destination site on the Internet.

VM's role changed within IBM when hardware evolution led to significant changes in processor architecture. Backward compatibility remained a cornerstone of the IBM mainframe family, which still uses the basic instruction set introduced with the original System/360; but the need for efficient use of the 64-bit zSeries made the VM approach much more attractive. VM was also utilized in data centers converting from DOS/VSE to MVS and is useful when running mainframe AIX and Linux, platforms that were to become increasingly important. The current z/VM platform has finally achieved the recognition within IBM that VM users long felt it deserved. Some z/VM sites run thousands of simultaneous virtual machine users on a single system. z/VM was first released in October 2000 and remains in active use and development.

IBM and third parties have offered many applications and tools that run under VM. Examples include RAMIS, FOCUS, SPSS, NOMAD, DB2, REXX, RACF, and OfficeVision. Current VM offerings run the gamut of mainframe applications, including HTTP servers, database managers, analysis tools, engineering packages, and financial systems.

CP commands

As of release 6, the VM/370 Control Program has a number of commands for General Users, concerned with defining and controlling the user's virtual machine. The lower-case portion of each command name shown below is optional.

Command Description
#CP Allows the user to issue a CP command from a command environment, or any other virtual machine after pressing the break key (defaults to PA1)
ADSTOP Sets an address stop to halt the virtual machine at a specific instruction
ATTN Causes an attention interruption allowing CP to take control in a command environment
Begin Continue or resume execution of the user's virtual machine, optionally at a specified address
CHange Alter attributes of a spool file or files. For example, the output class or the name of the file can be changed, or printer-specific attributes set
Close Closes an open printer, punch, reader, or console file and releases it to the spooling system
COUPLE Connect a virtual channel-to-channel adapter (CTCA) to another. Also used to connect simulated QDIO Ethernet cards to a virtual switch.
CP Execute a CP command in a CMS environment
DEFine Alter the current virtual machine configuration. Add virtual devices or change available storage size
DETach Remove a virtual device or channel from the current configuration
DIAL Connect your terminal at the logon screen to a logged-on multi-access virtual machine's simulated 3270 or typewriter terminals
DISConn Disconnect your terminal while allowing your virtual machine to continue running
Display Display virtual machine storage or (virtual) hardware registers
DUMP Print a snapshot dump of the current virtual machine on the virtual spooled printer
ECHO Set the virtual machine to echo typed lines
EXTernal Cause an external interrupt to the virtual machine
INDicate Display current system load or your resource usage
Ipl IPL (boot) an operating system on your virtual machine
LINK Attach a device from another virtual machine, if that machine's definition allows sharing
LOADVFCB Specify a forms control buffer (FCB) for a virtual printer
LOGoff, LOGout Terminate execution of the current virtual machine and disconnect from the system
Logon, Login Sign on to the system
Message, MSG Send a one-line message to the system operator or another user
NOTReady Cause a virtual device to appear not ready
ORDer Reorder closed spool files by ID or class
PURge Delete closed spool files for a device by class, ID, or ALL
Query Display status information for your virtual machine, or the message of the day, or number or names of logged-in users
READY Cause a device end interruption for a device
REQuest Cause an interrupt on your virtual console
RESET Clear all pending interrupts for a device
REWind Rewind a real (non-virtual) magnetic tape unit
SET Set various attributes for your virtual machine, including messaging or terminal function keys
SLeep Place your virtual machine in a dormant state indefinitely or for a specified period of time
SMsg Send a one-line special message to another virtual machine (usually used to control the operation of the virtual machine; commonly used with RSCS)
SPool Set options for a spooled virtual device (printer, reader, or punch)
STore Alter the contents of registers or storage of your virtual machine
SYStem Reset or restart your virtual machine or clear storage
TAg Set a tag associated with a spooled device or file. The tag is usually used by VM's Remote Spooling Communications Subsystem (RSCS) to identify the destination of a file
TERMinal Set characteristics of your terminal
TRace Start or stop tracing of specified virtual machine activities
TRANsfer Transfer a spool file to or from another user
VMDUMP Dump your virtual machine in a format readable by the Interactive Problem Control System (IPCS) program product

OpenEdition Extensions

Starting with VM/ESA Version 2, IBM introduced the chargeable optional feature OpenEdition for VM/ESA Shell and Utilities Feature, which provides POSIX compatibility for CMS. The stand-out feature was a UNIX shell for CMS. The C compiler for this UNIX environment is provided by either C/370 or C for VM/ESA. Neither the CMS filesystem nor the standard VM Shared File System has any support for UNIX-style files and paths; instead, the Byte File System is used. Once a BFS extent is created in an SFS file pool, the user can mount it with OPENVM MOUNT /../VMBFS:fileservername:filepoolname /path/to/mount/point. The user must also mount the root filesystem, which is done with OPENVM MOUNT /../VMBFS:VMSYS:ROOT/ /; a shell can then be started with OPENVM SHELL. Unlike the normal SFS, access to BFS filesystems is controlled by POSIX permissions (with chmod and chown).

Starting with z/VM Version 3, IBM integrated OpenEdition into z/VM and renamed it OpenExtensions. OpenEdition and OpenExtensions provide POSIX.2 compliance to CMS. Programs compiled to run under the OpenExtensions shell are stored in the same format as standard CMS executable modules. Full-screen visual editors such as vi are unavailable, as 3270 terminals cannot support them; users can use ed or XEDIT instead.

VM mascot

In the early 1980s, the VM group within SHARE (the IBM user group) sought a mascot or logo for the community to adopt. This was in part a response to IBM's MVS users selecting the turkey as a mascot (chosen, according to legend, by the MVS Performance Group in the early days of MVS, when its performance was a sore topic). In 1983, the teddy bear became VM's de facto mascot at SHARE 60, when teddy bear stickers were attached to the nametags of "cuddlier oldtimers" to flag them for newcomers as "friendly if approached". The bears were a hit and soon appeared widely. Bears were awarded to inductees of the "Order of the Knights of VM", individuals who made "useful contributions" to the community.

Criticism

While VM was relatively light-weight (when compared to its counterparts, such as MVS), VM was somewhat unstable in its early days. It was considered quite a feat to keep a VM/370 system up for more than a week. Users also criticized the CMS file system, noting that other operating systems in the mid-1980s had directories, symbolic links, and other key features; CMS had none of these until 1988 when VM/SP release 6 came out, which introduced the Shared File System and alleviated these issues.

Some users also noted that VM OpenEdition was somewhat "unnecessary."

Memory paging

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Memory_paging

In computer operating systems, memory paging (or swapping on some Unix-like systems) is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

For simplicity, main memory is called "RAM" (an acronym of random-access memory) and secondary storage is called "disk" (a shorthand for hard disk drive, drum memory or solid-state drive, etc.), but as with many aspects of computing, the concepts are independent of the technology used.

Depending on the memory model, paged memory functionality is usually hardwired into a CPU/MCU by using a Memory Management Unit (MMU) or Memory Protection Unit (MPU), and separately enabled by privileged system code in the operating system's kernel. In CPUs implementing the x86 instruction set architecture (ISA), for instance, memory paging is enabled via the CR0 control register.

History

In the 1960s, swapping was an early virtual memory technique. An entire program or entire segment would be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would be swapped in (or rolled in). A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in.

A program might include multiple overlays that occupy the same memory at different times. Overlays are not a method of paging RAM to disk but merely of minimizing the program's RAM use. Subsequent architectures used memory segmentation, and individual program segments became the units exchanged between disk and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to be contiguous when resident in RAM, requiring additional computation and movement to remedy fragmentation.

Ferranti's Atlas, together with the Atlas Supervisor developed at the University of Manchester (1962), was the first system to implement memory paging. Subsequent early machines, and their operating systems, supporting paging include the IBM M44/44X and its MOS operating system (1964), the SDS 940 and the Berkeley Timesharing System (1966), a modified IBM System/360 Model 40 and the CP-40 operating system (1967), the IBM System/360 Model 67 and operating systems such as TSS/360 and CP/CMS (1967), the RCA 70/46 and the Time Sharing Operating System (1967), the GE 645 and Multics (1969), and the PDP-10 with added BBN-designed paging hardware and the TENEX operating system (1969).

Those machines, and subsequent machines supporting memory paging, use either a set of page address registers or in-memory page tables to allow the processor to operate on arbitrary pages anywhere in RAM as a seemingly contiguous logical address space. These pages became the units exchanged between disk and RAM.

Page faults

When a process tries to reference a page not currently mapped to a page frame in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system. The operating system must:

  1. Determine whether a stolen page frame still contains an unmodified copy of the page; if so, use that page frame.
  2. Otherwise, obtain an empty page frame in RAM to use as a container for the data, and:
    • Determine whether the page was ever initialized
    • If so, determine the location of the data on disk.
    • Load the required data into the available page frame.
  3. Update the page table to refer to the new page frame.
  4. Return control to the program, transparently retrying the instruction that caused the page fault.

When all page frames are in use, the operating system must select a page frame to reuse for the page the program now needs. If the evicted page frame was dynamically allocated by a program to hold data, or if a program modified it since it was read into RAM (in other words, if it has become "dirty"), it must be written out to disk before being freed. If a program later references the evicted page, another page fault occurs and the page must be read back into RAM.

The method the operating system uses to select the page frame to reuse, which is its page replacement algorithm, is important to efficiency. The operating system predicts the page frame least likely to be needed soon, often through the least recently used (LRU) algorithm or an algorithm based on the program's working set. To further increase responsiveness, paging systems may predict which pages will be needed soon, preemptively loading them into RAM before a program references them, and may steal page frames from pages that have been unreferenced for a long time, making them available. Some systems clear new pages to avoid data leaks that compromise security; some set them to installation defined or random values to aid debugging.
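A toy simulation of LRU page replacement, counting page faults for a short reference string with three frames; the reference string and frame count are arbitrary examples, not data from any real system:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 2, 5, 1, 2, 3};   /* page reference string */
    int nrefs = sizeof refs / sizeof refs[0];
    int frame[FRAMES], last_use[FRAMES];
    int faults = 0, time = 0;

    for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* all frames empty */

    for (int r = 0; r < nrefs; r++, time++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[r]) hit = i;
        if (hit >= 0) { last_use[hit] = time; continue; }   /* page already resident */

        faults++;                                  /* page fault */
        int victim = 0;                            /* pick an empty frame, or the
                                                      least recently used one */
        for (int i = 0; i < FRAMES; i++) {
            if (frame[i] == -1) { victim = i; break; }
            if (last_use[i] < last_use[victim]) victim = i;
        }
        frame[victim] = refs[r];
        last_use[victim] = time;
    }
    printf("%d faults for %d references\n", faults, nrefs);
    return 0;
}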

Page fetching techniques

Demand paging
When pure demand paging is used, pages are loaded only when they are referenced. A program from a memory-mapped file begins execution with none of its pages in RAM. As the program takes page faults, the operating system copies the needed pages into RAM from wherever the data resides, e.g., a memory-mapped file, the paging file, or a swap partition.
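Demand paging is what makes memory-mapped file access lazy. The sketch below, for a Unix-like system, maps a hypothetical file "data.bin" and touches one byte per page; each first touch triggers a page fault that the kernel services by reading that page from the file:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);          /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    fstat(fd, &st);

    /* No file data is read yet; only the mapping is set up. */
    unsigned char *p = mmap(NULL, (size_t)st.st_size, PROT_READ,
                            MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long sum = 0;
    long page = sysconf(_SC_PAGESIZE);
    for (off_t i = 0; i < st.st_size; i += page)
        sum += p[i];          /* first access to each page faults it in */

    printf("sampled sum: %ld\n", sum);
    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}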

Anticipatory paging
Some systems use only demand paging—waiting until a page is actually requested before loading it into RAM.
Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM, before that page is requested. (This is often in combination with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon, and pre-writing them out to storage).
When a page fault occurs, anticipatory paging systems will not only bring in the referenced page, but also other pages that are likely to be referenced soon. A simple anticipatory paging algorithm will bring in the next few consecutive pages even though they are not yet needed (a prediction using locality of reference); this is analogous to a prefetch input queue in a CPU. Swap prefetching will prefetch recently swapped-out pages if there are enough free pages for them.[7]
If a program ends, the operating system may delay freeing its pages, in case the user runs the same program again.

Page replacement techniques

Free page queue, stealing, and reclamation
The free page queue is a list of page frames that are available for assignment. Preventing this queue from being empty minimizes the computing necessary to service a page fault. Some operating systems periodically look for pages that have not been recently referenced and then free the page frame and add it to the free page queue, a process known as "page stealing". Some operating systems support page reclamation; if a program commits a page fault by referencing a page that was stolen, the operating system detects this and restores the page frame without having to read the contents back into RAM.
Pre-cleaning
The operating system may periodically pre-clean dirty pages: write modified pages back to disk even though they might be further modified. This minimizes the amount of cleaning needed to obtain new page frames at the moment a new program starts or a new data file is opened, and improves responsiveness. (Unix operating systems periodically use sync to pre-clean all dirty pages; Windows operating systems use "modified page writer" threads.)
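On a Unix-like system the same idea can be requested explicitly: msync(MS_ASYNC) schedules write-back of a mapping's dirty pages, and sync() asks the kernel to flush all dirty buffers. The sketch below (error checking omitted, file name illustrative) only hints at what periodic pre-cleaning does automatically:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("log.dat", O_RDWR | O_CREAT, 0644);   /* placeholder file */
    ftruncate(fd, 4096);
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    memcpy(p, "dirty data", 10);   /* the mapped page is now "dirty" */

    msync(p, 4096, MS_ASYNC);      /* schedule write-back without waiting */
    sync();                        /* flush all dirty buffers system-wide */

    munmap(p, 4096);
    close(fd);
    return 0;
}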

Thrashing

After completing initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed are called the working set.

When the working set is a small percentage of the system's total number of pages, virtual memory systems work most efficiently and an insignificant amount of computing is spent resolving page faults. As the working set grows, resolving page faults remains manageable until the growth reaches a critical point. Then faults go up dramatically and the time spent resolving them overwhelms time spent on the computing the program was written to do. This condition is referred to as thrashing. Thrashing can occur when a program works with huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from disk. "Thrashing" is also used in contexts other than virtual memory systems; for example, to describe cache issues in computing or silly window syndrome in networking.

A worst case might occur on VAX processors. A single MOVL crossing a page boundary could have a source operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and a destination operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and the source and destination could both cross page boundaries. This single instruction references ten pages; if not all are in RAM, each will cause a page fault. As each fault occurs the operating system needs to go through the extensive memory management routines perhaps causing multiple I/Os which might include writing other process pages to disk and reading pages of the active process from disk. If the operating system could not allocate ten pages to this program, then remedying the page fault would discard another page the instruction needs, and any restart of the instruction would fault again.

To decrease excessive paging and resolve thrashing problems, a user can increase the number of pages available per program, either by running fewer programs concurrently or increasing the amount of RAM in the computer.

Sharing

In multi-programming or in a multi-user environment, many users may execute the same program, written so that its code and data are in separate pages. To minimize RAM use, all users share a single copy of the program. Each process's page table is set up so that the pages that address code point to the single shared copy, while the pages that address data point to different physical pages for each process.

Different programs might also use the same libraries. To save space, only one copy of the shared library is loaded into physical memory. Programs which use the same library have virtual addresses that map to the same pages (which contain the library's code and data). If a program modifies pages of the library, the operating system applies copy-on-write, so additional physical memory is allocated only when a page is actually written.
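The copy-on-write mechanism itself is easiest to see with fork() on a Unix-like system: parent and child initially share physical pages, and a private copy is made only when one of them writes. This is a sketch of the mechanism, not of shared-library loading specifically:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t size = 64 * 1024 * 1024;
    char *buf = malloc(size);
    memset(buf, 'A', size);        /* pages are resident and, after fork(),
                                      shared with the child copy-on-write */
    pid_t pid = fork();
    if (pid == 0) {
        buf[0] = 'B';              /* first write: the kernel copies just this
                                      one page for the child */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees '%c'\n", buf[0]);   /* prints 'A' */
    free(buf);
    return 0;
}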

Shared memory is an efficient means of communication between programs. Programs can share pages in memory, and then write and read to exchange data.

Implementations

Ferranti Atlas

The first computer to support paging was the supercomputer Atlas, jointly developed by Ferranti, the University of Manchester and Plessey in 1963. The machine had an associative (content-addressable) memory with one entry for each 512-word page. The Supervisor handled non-equivalence interruptions and managed the transfer of pages between core and drum in order to provide a one-level store to programs.

Microsoft Windows

Windows 3.x and Windows 9x

Paging has been a feature of Microsoft Windows since Windows 3.0 in 1990. Windows 3.x creates a hidden file named 386SPART.PAR or WIN386.SWP for use as a swap file. It is generally found in the root directory, but it may appear elsewhere (typically in the WINDOWS directory). Its size depends on how much swap space the system has (a setting selected by the user under Control Panel → Enhanced under "Virtual Memory"). If the user moves or deletes this file, a blue screen will appear the next time Windows is started, with the error message "The permanent swap file is corrupt". The user will be prompted to choose whether or not to delete the file (even if it does not exist).

Windows 95, Windows 98 and Windows Me use a similar file, and the settings for it are located under Control Panel → System → Performance tab → Virtual Memory. Windows automatically sets the size of the page file to start at 1.5× the size of physical memory, and expand up to 3× physical memory if necessary. If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than default.

Windows NT

The file used for paging in the Windows NT family is pagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for page files. It is required, however, for the boot partition (i.e., the drive containing the Windows directory) to have a page file on it if the system is configured to write either kernel or full memory dumps after a Blue Screen of Death. Windows uses the paging file as temporary storage for the memory dump. When the system is rebooted, Windows copies the memory dump from the page file to a separate file and frees the space that was used in the page file.

Fragmentation

In the default configuration of Windows, the page file is allowed to expand beyond its initial allocation when necessary. If this happens gradually, it can become heavily fragmented which can potentially cause performance problems. The common advice given to avoid this is to set a single "locked" page file size so that Windows will not expand it. However, the page file only expands when it has been filled, which, in its default configuration, is 150% of the total amount of physical memory. Thus the total demand for page file-backed virtual memory must exceed 250% of the computer's physical memory before the page file will expand.

The fragmentation of the page file that occurs when it expands is temporary. As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the page file is back to its original state.

Locking a page file size can be problematic if a Windows application requests more memory than the total size of physical memory and the page file, leading to failed requests to allocate memory that may cause applications and system processes to fail. Also, the page file is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. However, a large page file generally allows the use of memory-heavy applications, with no penalties besides using more disk space. While a fragmented page file may not be an issue by itself, fragmentation of a variable size page file will over time create several fragmented blocks on the drive, causing other files to become fragmented. For this reason, a fixed-size contiguous page file is better, providing that the size allocated is large enough to accommodate the needs of all applications.

The required disk space may be easily allocated on systems with more recent specifications (i.e. a system with 3 GB of memory having a 6 GB fixed-size page file on a 750 GB disk drive, or a system with 6 GB of memory and a 16 GB fixed-size page file and 2 TB of disk space). In both examples, the system uses about 0.8% of the disk space with the page file pre-extended to its maximum.

Defragmenting the page file is also occasionally recommended to improve performance when a Windows system is chronically using much more memory than its total physical memory. This view ignores the fact that, aside from the temporary results of expansion, the page file does not become fragmented over time. In general, performance concerns related to page file access are much more effectively dealt with by adding more physical memory.

Unix and Unix-like systems

Unix systems, and other Unix-like operating systems, use the term "swap" to describe the act of substituting disk space for RAM when physical RAM is full. In some of those systems, it is common to dedicate an entire partition of a hard disk to swapping. These partitions are called swap partitions. Many systems have an entire hard drive dedicated to swapping, separate from the data drive(s), containing only a swap partition. A hard drive dedicated to swapping is called a "swap drive" or a "scratch drive" or a "scratch disk". Some of those systems only support swapping to a swap partition; others also support swapping to files.

Linux

The Linux kernel supports a virtually unlimited number of swap backends (devices or files), and also supports assignment of backend priorities. When the kernel swaps pages out of physical memory, it uses the highest-priority backend with available free space. If multiple swap backends are assigned the same priority, they are used in a round-robin fashion (which is somewhat similar to RAID 0 storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.
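
Which backends are active, and at what priority, can be inspected by reading /proc/swaps; a minimal sketch (Linux only; the file's columns are Filename, Type, Size, Used, Priority, with sizes in KiB):

    # List active swap backends and their priorities by reading /proc/swaps.
    with open("/proc/swaps") as f:
        entries = f.read().splitlines()[1:]          # first line is the column header

    for entry in entries:
        name, kind, size_kb, used_kb, priority = entry.split()[:5]
        print(f"{name}: type={kind}, size={size_kb} KiB, used={used_kb} KiB, priority={priority}")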

Swap files and partitions

From the end-user perspective, swap files in versions 2.6.x and later of the Linux kernel are virtually as fast as swap partitions; the limitation is that swap files should be contiguously allocated on their underlying file systems. To increase performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead. When residing on HDDs, which are rotational magnetic media devices, one benefit of using swap partitions is the ability to place them on contiguous HDD areas that provide higher data throughput or faster seek time. However, the administrative flexibility of swap files can outweigh certain advantages of swap partitions. For example, a swap file can be placed on any mounted file system, can be set to any desired size, and can be added or changed as needed. Swap partitions are not as flexible; they cannot be enlarged without using partitioning or volume management tools, which introduce various complexities and potential downtimes.
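
Because swap files can be added or changed as needed, a new one can be created and enabled at run time with the standard mkswap and swapon utilities; the sketch below drives them from Python (the path, size, and priority are illustrative choices, and root privileges are required):

    # Illustrative sketch: create, format, and enable a 1 GiB swap file on Linux.
    import os, subprocess

    swap_path = "/swapfile-example"                  # arbitrary path for this sketch
    size_bytes = 1 * 1024 ** 3                       # 1 GiB

    with open(swap_path, "wb") as f:
        os.fchmod(f.fileno(), 0o600)                 # swap files must not be world-readable
        chunk = b"\0" * (1024 * 1024)
        for _ in range(size_bytes // len(chunk)):
            f.write(chunk)                           # write real blocks so the file has no holes

    subprocess.run(["mkswap", swap_path], check=True)               # write the swap signature
    subprocess.run(["swapon", "-p", "10", swap_path], check=True)   # enable it with priority 10

The file can later be disabled again with swapoff and simply deleted.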

Swappiness

Swappiness is a Linux kernel parameter that controls the relative weight given to swapping out runtime memory, as opposed to dropping pages from the system page cache, whenever a memory allocation request cannot be met from free memory. Swappiness can be set to a value from 0 to 200. A low value causes the kernel to prefer to evict pages from the page cache, while a higher value causes the kernel to prefer to swap out "cold" memory pages. The default value is 60. Setting it higher can cause high latency if cold pages need to be swapped back in (for example, when interacting with a program that had been idle), while setting it lower (even to 0) may cause high latency when files that had been evicted from the cache need to be read again, but will make interactive programs more responsive, as they will be less likely to need to swap back cold pages. Swapping can also slow down HDDs further because it involves a lot of random writes, while SSDs do not have this problem. The default value works well in most workloads, but desktops and interactive systems may want to lower the setting, while batch processing and less interactive systems may want to increase it.
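
The current value can be read from (and, with root privileges, written to) /proc/sys/vm/swappiness; a minimal sketch:

    # Read the current swappiness value; optionally lower it (writing requires root).
    SWAPPINESS = "/proc/sys/vm/swappiness"

    with open(SWAPPINESS) as f:
        print("current swappiness:", f.read().strip())

    # Lowering the value; 10 is an arbitrary choice for an interactive desktop:
    # with open(SWAPPINESS, "w") as f:
    #     f.write("10")

A persistent setting is usually applied through the vm.swappiness sysctl (for example in /etc/sysctl.conf or a file under /etc/sysctl.d) rather than by writing to /proc directly.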

Swap death

When the system memory is highly insufficient for the current tasks and a large portion of memory activity goes through a slow swap, the system can become practically unable to execute any task, even if the CPU is idle. When every process is waiting on the swap, the system is considered to be in swap death.

Swap death can happen due to incorrectly configured memory overcommitment.

The original description of the "swapping to death" problem relates to the X server. If code or data used by the X server to respond to a keystroke is not in main memory, then when the user enters a keystroke, the server will take one or more page faults, requiring those pages to be read from swap before the keystroke can be processed, slowing the response to it. If those pages do not remain in memory, they will have to be faulted in again to handle the next keystroke, making the system practically unresponsive even if it is actually executing other tasks normally.

macOS

macOS uses multiple swap files. The default (and Apple-recommended) installation places them on the root partition, though it is possible to place them instead on a separate partition or device.

AmigaOS 4

AmigaOS 4.0 introduced a new system for allocating RAM and defragmenting physical memory. It still uses a flat shared address space that cannot be defragmented. The system is based on the slab allocation method and on paging memory, which allows swapping. Paging was implemented in AmigaOS 4.1, but the system may lock up if all physical memory is used up. Swap memory can be activated and deactivated at any moment, allowing the user to choose to use only physical RAM.

Performance

The backing store for a virtual memory operating system is typically many orders of magnitude slower than RAM. Additionally, using mechanical storage devices introduces a delay of several milliseconds for a hard disk. Therefore, it is desirable to reduce or eliminate swapping, where practical. Some operating systems offer settings to influence the kernel's decisions.

  • Linux offers the /proc/sys/vm/swappiness parameter, which changes the balance between swapping out runtime memory, as opposed to dropping pages from the system page cache.
  • Windows 2000, XP, and Vista offer the DisablePagingExecutive registry setting, which controls whether kernel-mode code and data can be eligible for paging out (a read-only example of inspecting this setting follows this list).
  • Mainframe computers frequently used head-per-track disk drives or drums for page and swap storage to eliminate seek time, and several technologies to have multiple concurrent requests to the same device in order to reduce rotational latency.
  • Flash memory has a finite number of erase-write cycles (see limitations of flash memory), and the smallest amount of data that can be erased at once might be very large (128 KiB for an Intel X25-M SSD), seldom coinciding with the page size. Therefore, flash memory may wear out quickly if used as swap space under tight memory conditions. On the positive side, flash memory is practically delayless compared to hard disks, and is non-volatile, unlike RAM chips. Schemes like ReadyBoost and Intel Turbo Memory are made to exploit these characteristics.
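
Continuing the DisablePagingExecutive item above: the value lives in the registry under the Memory Management key and can be read from Python with the standard winreg module. This is a read-only sketch; changing the value requires administrator rights and generally a reboot to take effect.

    # Read the DisablePagingExecutive value on Windows.
    import winreg

    key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _ = winreg.QueryValueEx(key, "DisablePagingExecutive")
        print("DisablePagingExecutive =", value)   # 0: kernel code/data may be paged, 1: kept resident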

Many Unix-like operating systems (for example AIX, Linux, and Solaris) allow using multiple storage devices for swap space in parallel, to increase performance.

Swap space size

In some older virtual memory operating systems, space in swap backing store is reserved when programs allocate memory for runtime data. Operating system vendors typically issue guidelines about how much swap space should be allocated.

Addressing limits on 32-bit hardware

Paging is one way of allowing the size of the addresses used by a process, which is the process's "virtual address space" or "logical address space", to be different from the amount of main memory actually installed on a particular computer, which is the physical address space.
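
This decoupling rests on splitting each virtual address into a page number, which is translated through the page tables, and an offset within that page. A minimal sketch of the split, assuming 4 KiB pages and a made-up sample address:

    # Split a virtual address into (virtual page number, offset) for 4 KiB pages.
    PAGE_SIZE = 4096                                 # assumed page size
    OFFSET_BITS = PAGE_SIZE.bit_length() - 1         # 12 bits of offset within a page

    def split_address(vaddr: int) -> tuple[int, int]:
        """Return (virtual page number, offset within page) for a virtual address."""
        return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

    vpn, offset = split_address(0x7F3A1234)          # sample address, for illustration
    print(f"virtual page {vpn:#x}, offset {offset:#x}")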

Main memory smaller than virtual memory

In most systems, the size of a process's virtual address space is much larger than the available main memory. For example:

  • The address bus that connects the CPU to main memory may be limited. The i386SX CPU's 32-bit internal addresses can address 4 GB, but it has only 24 pins connected to the address bus, limiting installed physical memory to 16 MB (the arithmetic is spelled out after this list). There may be other hardware restrictions on the maximum amount of RAM that can be installed.
  • The maximum memory might not be installed because of cost, because the model's standard configuration omits it, or because the buyer did not believe it would be advantageous.
  • Sometimes not all internal addresses can be used for memory anyway, because the hardware architecture may reserve large regions for I/O or other features.
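
The bus-width limits mentioned in the first item are easy to check:

    # Address-space sizes implied by the address widths mentioned above.
    print(f"24 address pins:  2**24 bytes = {2**24 // 2**20} MB")   # i386SX external bus
    print(f"32-bit addresses: 2**32 bytes = {2**32 // 2**30} GB")   # full internal address space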

Main memory the same size as virtual memory

A computer with true n-bit addressing may have 2^n addressable units of RAM installed. An example is a 32-bit x86 processor with 4 GB and without Physical Address Extension (PAE). In this case, the processor is able to address all the RAM installed and no more.

However, even in this case, paging can be used to create a virtual memory of over 4 GB. For instance, many programs may be running concurrently. Together, they may require more than 4 GB, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM.

Although the processor in this example cannot address RAM beyond 4 GB, the operating system may provide services to programs that envision a larger memory, such as files that can grow beyond the limit of installed RAM. The operating system lets a program manipulate data in the file arbitrarily, using paging to bring parts of the file into RAM when necessary.

Main memory larger than virtual address space

A few computers have a main memory larger than the virtual address space of a process, such as the Magic-1, some PDP-11 machines, and some systems using 32-bit x86 processors with Physical Address Extension. This nullifies a significant advantage of paging, since a single process cannot use more main memory than the amount of its virtual address space. Such systems often use paging techniques to obtain secondary benefits:

  • The "extra memory" can be used in the page cache to cache frequently used files and metadata, such as directory information, from secondary storage.
  • If the processor and operating system support multiple virtual address spaces, the "extra memory" can be used to run more processes. Paging allows the cumulative total of virtual address spaces to exceed physical main memory.
  • A process can store data in memory-mapped files on memory-backed file systems, such as the tmpfs file system or file systems on a RAM drive, and map files into and out of the address space as needed (a short sketch follows this list).
  • A set of processes may still depend upon the enhanced security features page-based isolation may bring to a multitasking environment.
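
As a follow-up to the memory-mapped-file item above, a minimal sketch of mapping a file on a tmpfs mount into the address space. The /dev/shm path is an assumption; it is a common tmpfs mount point on Linux.

    # Map a file on a tmpfs mount into the process's address space and use it.
    import mmap, os

    path = "/dev/shm/mmap-example"                   # assumed tmpfs mount point
    size = 4096

    with open(path, "w+b") as f:
        f.truncate(size)                             # the file needs a size before it can be mapped
        with mmap.mmap(f.fileno(), size) as m:       # map it into this process's address space
            m[:5] = b"hello"                         # writes land in memory-backed (tmpfs) pages
            print(m[:5])                             # read back through the mapping

    os.remove(path)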

The size of the cumulative total of virtual address spaces is still limited by the amount of secondary storage available.
