
Memory Management and Free Space Management


Ayush Prashar

Date: 11th November, 2024


Contents

    Agenda:

    • Logical Address
    • Physical Address
    • MMU
    • Memory mapping and Protection
    • Memory Allocation Methods
    • Free Space Management

    Memory Management Techniques | Contiguous Memory Allocation

    In a multiprogramming environment, the CPU is constantly kept busy by having multiple processes loaded in main memory. These processes exist in a Ready Queue and are scheduled for execution to improve CPU utilization and make the computer responsive to users. However, keeping multiple processes in memory simultaneously requires careful memory management to prevent conflicts and to ensure each process runs smoothly.

    To understand this, let’s go over the terms "logical address" and "physical address" and explore how the operating system handles memory through these address spaces.

    Logical Address

    A Logical Address, also referred to as a virtual address, is an address generated by the CPU when a program runs, representing the addresses used by a process to reference instructions and data. In modern computing, logical addresses are integral to virtual memory systems and provide a layer of abstraction, allowing each process to operate in an isolated memory space, independent of the actual, physical memory. 

    Characteristics of Logical Address

    1. Definition: A logical address is essentially an identifier created by the CPU to represent the memory locations a process uses to access instructions and data. During execution, a program generates logical addresses to perform various memory operations. These addresses are unique within the program’s address space and form a crucial part of the virtual memory system. Logical addresses are not the actual physical locations where data is stored; instead, they are symbolic references that simplify memory management and enhance security by isolating each process’s memory usage.
    2. Accessibility: Logical addresses allow a process to operate in its own address space without direct interaction with the main memory’s physical layout.
      • User-level Processes: The logical address space is directly accessible to user-level processes, which only need to deal with these addresses to perform computations or access data.
      • Abstraction: Since user-level programs interact with logical addresses instead of physical ones, this abstraction simplifies development by providing a consistent view of memory, irrespective of the actual physical memory layout.
    3. Together, these properties ensure that processes can operate independently within their logical address spaces and do not interfere with each other’s memory allocations.
    4. Indirect Mapping: Logical addresses are not directly mapped to physical memory locations. Instead, they must go through a translation process to access data stored in main memory.
      • Translation: The operating system uses a hardware component called the Memory Management Unit (MMU) to map each logical address to a corresponding physical address in main memory.
      • Isolation: By using this indirect mapping, processes are isolated from each other, as each logical address space is mapped to a unique area in physical memory. This mapping ensures each process operates in its own "virtual" memory without accessing or modifying other processes' memory directly.
    5. Existence: Unlike physical addresses, which represent actual locations in main memory (such as RAM), logical addresses are abstract and do not exist physically in the memory hardware.
      • Virtual Representation: Logical addresses are pointers within a virtual memory system. They are conceptual addresses created by the CPU during execution and give each process a private, virtual view of memory.
      • Virtual Memory System: The existence of logical addresses as virtual addresses enables the use of memory in ways that extend beyond the available physical memory. For instance, a system with 4 GB of physical RAM can create logical address spaces that allow processes to utilize virtual memory far exceeding this limit.
    6. Address Space: The Logical Address Space of a program is the entire range of logical addresses the program generates during its execution. This address space provides:
      • Isolation: Each process has a unique logical address space, which isolates it from other processes and provides a consistent memory view across executions.
      • Larger Than Physical Memory: Virtual memory allows each process to have an address space larger than the actual physical memory, so the system can accommodate more processes and more extensive data structures without being constrained by the physical memory size.
    7. Logical address spaces also enable swapping in operating systems, where parts of a process’s memory can be temporarily stored on the disk and reloaded as needed, thus expanding the system's effective memory capacity.
    8. Range: The range of logical addresses depends on the memory architecture of the system:
      • 32-bit Logical Addresses: Logical addresses can range from 0 up to 2^32 − 1, or 4,294,967,295. This provides a theoretical maximum of 4 GB of logical address space for each process, regardless of the actual physical memory available.
      • 64-bit Logical Addresses: In a 64-bit architecture, logical addresses can range up to 2^64 − 1, vastly expanding the addressable memory space.
    9. This large range allows complex applications to allocate and access extensive data structures without worrying about the constraints of physical memory, as only active parts of the memory are loaded into physical RAM, while the rest can remain on disk.
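    To make the range arithmetic above concrete, here is a small Python sketch (the function name is illustrative) that computes the highest address and total size of an n-bit address space:

```python
def address_range(bits: int) -> tuple[int, int]:
    """Return (highest address, total addressable locations) for an n-bit address."""
    total = 2 ** bits
    return total - 1, total

highest, total = address_range(32)
print(highest)           # 4294967295 -> a 4 GiB logical address space
print(total // 2 ** 30)  # 4 (GiB)
```

    The same call with `bits=64` shows why 64-bit logical address spaces are, for practical purposes, never the limiting factor.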

    Logical Address in Virtual Memory Systems

    In modern systems, logical addresses are integral to virtual memory, a system that allows processes to use a broader memory space than is physically available. Virtual memory systems work by storing parts of the process's address space on a disk and only loading necessary portions into RAM. Here’s how logical addresses fit into this setup:

    1. Page Tables and Segmentation: The MMU relies on structures like page tables and segmentation tables to keep track of the mappings from logical to physical addresses.
    2. Swapping and Paging: Logical addresses enable swapping, where inactive memory pages are saved to disk to free up physical memory. When these pages are needed, the system retrieves them from disk, updating the mappings in the page table as necessary.
    3. Efficient Memory Utilization: Virtual memory allows multiple processes to share physical memory, increasing system efficiency. By mapping logical addresses to physical memory, the operating system can allocate memory to processes dynamically and prevent memory fragmentation.
    4. Security and Isolation: Each process’s logical address space is isolated, so processes cannot directly access each other’s memory. This ensures that errors or malicious code in one process do not affect others, providing a secure computing environment.
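    The page-table lookup and page-fault behavior described above can be sketched in a few lines of Python (the page-to-frame mappings and the 4 KiB page size are illustrative):

```python
PAGE_SIZE = 4096

# Toy page table: logical page -> physical frame, with None standing in
# for a page that has been swapped out to disk.
page_table = {0: 7, 1: None, 2: 3}

def translate(logical_addr: int) -> int:
    """Translate a logical address via the page table, faulting on swapped-out pages."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # In a real OS the fault handler would load the page from disk
        # and update the page table before retrying the access.
        raise RuntimeError("page fault: page must be loaded from disk")
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))  # page 2 -> frame 3 -> 12388
```

    A lookup into page 1 would raise the simulated page fault, mirroring the swap-in path described in point 2.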

    Benefits of Logical Addressing

    • Simplified Programming Model: Logical addresses create an isolated and consistent address space for each program, so developers can write code without considering the underlying physical memory layout.
    • Process Isolation and Security: Logical addresses isolate each process’s memory, preventing interference and potential security breaches.
    • Efficient Multitasking: In a multiprogramming environment, logical addressing enables the system to load multiple processes simultaneously without requiring them to fit within the available physical memory.
    • Support for Virtual Memory: Logical addressing allows systems to use virtual memory, making it possible to run larger applications and multiple processes even on systems with limited physical memory.

    Physical Address

    The Physical Address is the actual address in main memory (RAM) where data and instructions are stored. When a program operates, it generates logical addresses, but to interact with actual memory, these logical addresses need to be translated into physical addresses by the operating system and hardware. This translation is essential for ensuring data is stored and accessed from the correct locations in memory. Physical addresses are the true addresses that the memory hardware uses, representing the exact location within RAM where specific data or instructions reside.

    Characteristics of Physical Address

    1. Definition: A physical address is the actual location in the computer's main memory (RAM) where specific data or instructions are stored. Unlike logical addresses, which are generated by the CPU for each program, physical addresses correspond to actual memory cells within the hardware. Each physical address is unique within the RAM and maps directly to a location in the physical memory hardware. Physical addresses are crucial in ensuring that when a process needs data or instructions, it can retrieve them from a specific, verifiable location. This contrasts with logical addresses, which are abstract representations and do not correspond to physical memory until mapped.
    2. Accessibility: Physical addresses are not directly accessible by user-level processes. Instead, a process generates logical addresses, which are translated by the Memory Management Unit (MMU) into physical addresses.
      • User-Level Restrictions: User processes can access their memory indirectly, as the operating system and MMU control direct access to physical addresses. This restriction enhances security and prevents processes from interfering with each other’s data.
      • Controlled by the OS: Only system-level processes have control over physical addresses, as they manage the allocation and translation of logical to physical addresses. This setup prevents memory overlaps and unauthorized memory access.
    3. Memory Unit: Physical addresses correspond to specific locations within the memory hardware unit, typically RAM. They represent the real "layout" of memory, where each address points to a tangible place within the physical memory, directly corresponding to a specific row, column, or segment within the RAM.
      • Direct Hardware Correlation: Each physical address directly ties to a hardware memory cell, providing a permanent reference to a precise location.
      • Efficient Retrieval: Since physical addresses are fixed within the RAM, the system can retrieve data quickly and accurately, optimizing memory access times for the CPU.
    4. Translation by the Memory Management Unit (MMU): The MMU is a specialized hardware component responsible for converting logical addresses generated by a program into physical addresses that the RAM uses to store or retrieve data. The MMU performs the following tasks during translation:
      • Address Mapping: When a program generates a logical address, the MMU translates it into a physical address by adding the logical address, which acts as an offset, to the base register value (unique for each process).
      • Process Isolation: The MMU also ensures that processes operate within their own allocated memory space by mapping logical addresses to distinct physical address spaces, protecting the memory integrity between processes.
      • Access Control: The MMU manages access controls, enforcing permissions that prevent processes from directly accessing unauthorized memory locations.
    5. Physical Address Space: The Physical Address Space is the set of all physical addresses allocated to a process, representing where its instructions and data actually reside in RAM.
      • Distinct from Logical Address Space: While logical address space is defined by the virtual memory seen by a process, the physical address space is the actual, tangible memory layout in the RAM.
      • Size Constraints: Physical address space is limited by the actual RAM size, meaning that while logical address space can theoretically be larger (using virtual memory), the physical address space is limited by the hardware capacity of the system.
    6. For example, in a system with 16 GB of RAM, the physical address space is limited to addresses within that 16 GB, regardless of how large the logical address space might appear to the processes due to virtual memory.
    7. Range: Physical addresses span a range determined by a base address and a maximum value, specific to each process.
      • Base Register (R): The base register value (R) is a unique identifier for each process, defining where the physical address space for that process begins within the RAM.
      • Offset Calculation: The range of physical addresses for a process then runs from R+0 to R+max, where max is the maximum size of the allocated space.
    8. This range ensures that each process is allocated its own space in memory, which remains isolated from other processes, preventing memory overlap and interference.
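    The base-plus-limit scheme just described can be sketched directly; the base value R = 14000 and the limit of 3000 below are illustrative example values:

```python
def to_physical(logical: int, base: int, limit: int) -> int:
    """Check a logical address against the limit register, then relocate by the base."""
    if not 0 <= logical < limit:
        # In hardware this raises a trap that the OS handles,
        # typically by terminating the offending process.
        raise MemoryError("address outside process bounds (trap to OS)")
    return base + logical

R = 14000      # base (relocation) register for this process
LIMIT = 3000   # process may use logical addresses 0..2999

print(to_physical(346, R, LIMIT))  # 14346
```

    An access such as `to_physical(3500, R, LIMIT)` fails the limit check and traps, which is exactly the protection behavior the range guarantees.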

    Diagram: Logical Address and Physical Address

    Understanding Physical Addresses in Virtual Memory Systems

    In modern operating systems, virtual memory allows each process to operate as if it has access to a large, contiguous address space. However, this address space is an illusion; the data is physically located within specific addresses in RAM. Here’s how physical addresses operate within this virtual memory framework:

    1. Page Tables and Physical Memory Mapping: To manage memory efficiently, the OS uses page tables, which store mappings between logical (or virtual) and physical addresses. Each entry in a page table corresponds to a page frame in physical memory, where the actual data is stored.
    2. Swapping and Memory Management: In systems with virtual memory, when the physical memory is fully utilized, inactive pages in RAM can be swapped to a storage drive (e.g., SSD or HDD). The OS keeps track of these pages, and when they are needed, swaps them back into RAM, reassigning physical addresses as required. This mechanism expands the effective memory capacity and allows more processes to run concurrently, even if they exceed the available physical RAM.
    3. Efficient Memory Access with Physical Addresses: By mapping logical addresses to precise physical addresses, the OS can optimize memory access times, ensuring that the CPU retrieves data quickly. Physical addresses maintain data integrity, as they ensure processes only access their designated memory cells.
    4. Process Protection and Security: Physical addresses play a key role in process isolation, preventing processes from directly accessing each other’s memory. The OS and MMU enforce strict boundaries by allocating each process a separate region in the physical address space. This isolation protects data privacy and prevents accidental or intentional interference between processes.

    Benefits of Physical Addressing

    • Accurate Memory Access: Physical addresses provide exact locations in memory, ensuring that data is stored and retrieved from a specific place in RAM.
    • Process Security and Isolation: Since physical addresses are controlled by the OS, each process can only access its allocated memory region, protecting memory integrity and preventing unauthorized access.
    • Optimized Performance: Physical addressing in virtual memory systems allows the OS to optimize memory usage, making efficient use of available RAM and enabling faster data access.
    • Support for Virtual Memory Operations: By translating logical addresses into physical ones, the OS supports virtual memory’s flexible use, allowing more processes to run by swapping inactive data to disk storage.

    Memory Management Unit (MMU)

    The Memory Management Unit (MMU) is a hardware component within the CPU that manages the translation of logical (or virtual) addresses to physical addresses in real time. It plays a crucial role in memory management, enabling the operating system to effectively allocate memory resources while maintaining security and efficiency. The MMU allows each process to function within its own logical address space, isolating processes from each other and providing a layer of memory protection. This component is especially important in systems that use virtual memory, where processes might need more memory than is physically available, as the MMU enables efficient memory utilization without processes knowing the specifics of physical memory locations.

    Key Functions of the MMU

    The MMU performs several key functions that make it essential to modern memory management systems. These include Address Translation, Memory Protection, and Relocation.

    1. Address Translation

    The MMU translates logical addresses generated by the CPU into physical addresses where data and instructions are stored in RAM. Address translation is critical in systems with virtual memory, allowing processes to access memory addresses as if they each have their own contiguous memory space, even though they share the same physical memory.

    • Process of Translation: When a program executes an instruction that references memory (such as loading data), it uses a logical address. The MMU takes this logical address and translates it into a physical address using a page table. A page table is a data structure that holds the mappings of logical addresses (or virtual pages) to physical addresses (or page frames).
    • Runtime Translation: This address translation happens in real time, meaning it occurs every time the program makes a memory access request. The CPU generates the logical address, and the MMU quickly translates it before the memory is accessed.
    • Segmentation and Paging: The MMU may use segmentation and/or paging to perform address translation. In segmentation, memory is divided into segments that correspond to logical sections of a program (like code, data, stack). In paging, memory is divided into equal-sized pages, which makes memory allocation more flexible and minimizes fragmentation. The MMU maintains mappings for both segments and pages, which allows for a granular level of control over memory allocation.
    • Transparency to Users and Developers: With the MMU handling address translation, developers and end-users don’t need to manage physical memory locations directly. This abstraction simplifies programming and system management since each process only needs to manage its logical address space.

    2. Memory Protection

    The MMU enforces memory protection, ensuring that each process operates within its own memory region and cannot access another process’s memory directly. This isolation is fundamental for security and system stability, as it protects data integrity and prevents interference between processes.

    • Access Control Mechanisms: The MMU provides access control for different memory regions. It can mark specific pages or segments as read-only, write-only, executable, or non-executable based on permissions set by the operating system. This feature helps prevent errors like buffer overflows or unauthorized access to sensitive data.
    • Process Isolation: Memory protection isolates each process, so it can only access its own address space. The MMU restricts each process to a designated area of memory and enforces boundaries, preventing one process from reading or writing data in another process’s address space.
    • Exception Handling: If a process attempts to access a restricted memory area (e.g., a segment that belongs to another process), the MMU triggers a memory access violation exception. This exception is caught by the operating system, which can then terminate or restrict the offending process, protecting the rest of the system from potential errors or malicious actions.
    • Kernel Mode vs. User Mode: The MMU also helps enforce kernel mode and user mode distinctions. In kernel mode, the operating system has unrestricted access to all memory, while in user mode, user processes are restricted to their own address spaces. The MMU ensures that user-mode processes cannot access kernel memory, providing an additional layer of security.
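    The per-page permissions the MMU enforces can be modeled with simple permission bits; the pages and flag layout below are illustrative, not any particular architecture's encoding:

```python
# Permission bits, one per kind of access.
READ, WRITE, EXEC = 0b100, 0b010, 0b001

# Toy per-page permissions set by the OS.
perms = {
    0: READ | EXEC,   # code page: readable and executable, never written
    1: READ | WRITE,  # data page: readable and writable
}

def check_access(page: int, requested: int) -> None:
    """Raise a fault unless every requested bit is granted for the page."""
    if perms.get(page, 0) & requested != requested:
        raise PermissionError(f"access violation on page {page}")

check_access(1, WRITE)      # allowed: writing to the data page
try:
    check_access(0, WRITE)  # writing to a code page triggers a fault
except PermissionError as e:
    print(e)
```

    The raised exception plays the role of the memory access violation that the OS catches, as described in the Exception Handling point above.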

    3. Relocation

    Relocation allows the operating system to move a process within main memory without affecting its execution. This flexibility improves memory utilization by enabling efficient allocation and reallocation of memory regions, depending on the needs of the system.

    • Logical Independence: When a process is loaded into memory, it generates logical addresses that are mapped to physical addresses by the MMU. This mapping enables the process to be “relocated” within memory without changing its logical address space. As long as the MMU updates the mapping, the program can continue running seamlessly in a new memory location.
    • Efficient Memory Utilization: Memory allocation is dynamic, meaning processes can grow or shrink in memory as needed. The MMU allows the OS to move processes around in physical memory, filling in gaps that might otherwise lead to fragmentation. Relocation enables compaction, where small, free memory segments are combined into larger, contiguous blocks, freeing up space for other processes.
    • Swapping: In a virtual memory system, inactive processes can be swapped out of main memory and stored on disk, freeing up physical memory for active processes. When a swapped-out process is brought back into main memory, it may be loaded at a different physical address than before. The MMU updates its mappings to reflect the new physical location, making the process’s logical address space unaffected by these changes.
    • Load-Time and Execution-Time Binding: Relocation is often categorized by when the address binding occurs. Load-time binding happens when a process is initially loaded into memory, and execution-time binding is managed by the MMU as the process runs. Execution-time binding, as handled by the MMU, allows relocation to be dynamic, enabling real-time adjustments based on the memory needs of the system.
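    A tiny sketch of execution-time relocation: when the OS moves a process, only the MMU's base mapping changes, while the process's logical addresses stay valid (all values illustrative):

```python
class Process:
    """Minimal stand-in for a process plus its MMU base mapping."""

    def __init__(self, base: int):
        self.base = base  # where the process currently sits in RAM

    def physical(self, logical: int) -> int:
        return self.base + logical

p = Process(base=8000)
addr_before = p.physical(100)  # 8100

p.base = 20000                 # OS relocates the process, MMU mapping updated
addr_after = p.physical(100)   # 20100 -- same logical address 100

print(addr_before, addr_after)
```

    The program keeps using logical address 100 throughout; only the translation changed, which is what makes swapping a process back in at a different physical location transparent.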

    How the MMU Enhances System Performance

    The MMU plays a critical role in managing system performance, enabling faster data access, efficient memory allocation, and support for multiprogramming. Its functions in address translation, memory protection, and relocation contribute directly to the overall efficiency and stability of the system.

    1. Optimized Address Translation: The MMU uses Translation Lookaside Buffers (TLBs) to speed up the address translation process. TLBs are fast caches within the MMU that store recently translated addresses, allowing the system to quickly retrieve frequently accessed memory locations.
    2. Reduced Fragmentation: Through dynamic relocation and efficient allocation, the MMU reduces fragmentation within physical memory. This compaction increases the availability of contiguous memory, minimizing wasted space and allowing the system to support more processes simultaneously.
    3. Support for Multiprogramming: The MMU allows multiple processes to share the same physical memory by mapping each process’s logical address space to distinct regions of physical memory. This capability supports multitasking, enabling several programs to run concurrently without risk of interference or data corruption.
    4. Enhanced Security and Stability: By enforcing memory protection, the MMU prevents unauthorized access and helps contain errors within their processes. This containment limits the potential impact of malicious or faulty programs, maintaining system stability and security.
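    The TLB mentioned in point 1 behaves like a small cache in front of the page table; here is a minimal FIFO-evicting sketch (the capacity and page-table contents are illustrative):

```python
# Toy page table and a tiny TLB in front of it.
page_table = {n: n + 100 for n in range(1024)}
tlb: dict[int, int] = {}
TLB_CAPACITY = 4

def lookup(page: int) -> tuple[int, bool]:
    """Return (frame, hit): hit is True when the TLB already held the mapping."""
    if page in tlb:
        return tlb[page], True      # fast path: no page-table walk
    frame = page_table[page]        # slow path: walk the page table
    if len(tlb) >= TLB_CAPACITY:
        tlb.pop(next(iter(tlb)))    # evict the oldest entry (FIFO order)
    tlb[page] = frame
    return frame, False

print(lookup(5))  # (105, False) -- miss, fills the TLB
print(lookup(5))  # (105, True)  -- hit on the second access
```

    Real TLBs are fully associative hardware caches with more sophisticated replacement, but the hit/miss behavior is the same: repeated accesses to the same pages skip the expensive translation.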

    Importance in a Multiprogramming Environment

    In a multiprogramming environment, multiple processes are loaded and executed in main memory simultaneously, allowing the CPU to switch between them quickly and efficiently. This design enhances system responsiveness and optimizes CPU utilization by reducing idle times, as the CPU can continue executing other processes when one process is waiting for resources or input/output operations to complete. Efficient memory management is crucial in a multiprogramming system, and this is achieved through logical and physical addressing, which enable the operating system (OS) to dynamically allocate memory resources across multiple processes while ensuring security, stability, and optimized performance.

    Below is an in-depth look at how logical and physical addressing support memory optimization, process isolation, and efficient memory management in a multiprogramming environment:

    1. Optimizing Memory Utilization

    • Efficient Allocation with Logical Addressing: In a multiprogramming environment, each process is assigned its own logical address space. This abstraction means that processes can operate independently, with each accessing only its assigned logical addresses. Since the processes only interact with their logical addresses (not physical ones), the OS can allocate physical memory as needed, maximizing usage without processes interfering with each other.
    • Flexible Memory Sharing: Logical and physical addressing allows for memory to be dynamically allocated and shared across processes without compromising memory integrity. If two processes need access to the same data (e.g., shared libraries or resources), the OS can map their logical addresses to the same physical memory location, optimizing memory usage.
    • Memory Segmentation and Paging: The OS often divides memory into segments or pages to manage resources effectively. Segmentation allows for memory allocation based on logical sections (code, data, stack), while paging divides memory into fixed-size units that can be easily swapped in and out of main memory. Logical addresses simplify this process, as each process’s memory can be allocated in non-contiguous physical locations if needed, thus reducing memory fragmentation and improving utilization.
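    Flexible memory sharing can be pictured as two page tables pointing different logical pages at the same physical frame; the frame and page numbers below are illustrative:

```python
# One physical frame holds a shared library; both processes map it.
SHARED_LIB_FRAME = 42

page_table_A = {0: 7,  5: SHARED_LIB_FRAME}  # process A maps it at page 5
page_table_B = {0: 19, 2: SHARED_LIB_FRAME}  # process B maps it at page 2

# Different logical pages in each process, but one physical copy in RAM:
print(page_table_A[5] == page_table_B[2])  # True
```

    Each process still sees the library at its own logical address, yet only one copy occupies physical memory, which is how shared libraries save RAM in practice.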

    2. Enhancing Process Isolation

    • Logical Address Space Isolation: By giving each process its own logical address space, the OS ensures that processes operate within their own memory boundaries. This isolation is critical because it prevents processes from accessing or modifying the memory space of other processes, whether intentionally or accidentally. Logical addressing ensures that each process only “sees” its allocated memory, creating an isolated environment that enhances security and stability.
    • Memory Protection through the MMU: The Memory Management Unit (MMU) uses logical-to-physical address translation to enforce access control. It restricts each process to its own memory range, meaning any attempt to access a non-allocated or protected memory area (like another process’s memory) will result in a memory access violation, triggering an exception. This process isolation, enabled by logical and physical addressing, protects against unauthorized access and mitigates potential errors or malicious interference.
    • Kernel and User Mode Separation: Multiprogramming systems typically operate with kernel mode and user mode permissions. Logical addressing helps enforce these permission levels by restricting user-mode processes from accessing kernel memory. The MMU’s address translation and protection mechanisms prevent user processes from directly interacting with kernel addresses, thereby maintaining system stability and protecting critical OS components.

    3. Enabling Efficient Memory Management

    • Real-Time Address Translation: Logical addresses must be translated to physical addresses in real-time, a process managed by the MMU. This address translation enables the OS to relocate processes as needed without affecting the process's logical address space. When memory is needed for high-priority processes, the OS can dynamically move lower-priority processes within physical memory, ensuring that active processes have the resources they require.
    • Swapping and Paging for Process Relocation: In systems with virtual memory, paging and swapping are used to optimize memory usage by moving less active or inactive processes to disk (secondary storage) and loading active ones into main memory. When a process is swapped back into main memory, it might not return to the same physical address; however, the logical address space remains unchanged, as the MMU updates mappings to reflect the new location. This flexibility minimizes idle memory and allows the system to handle a higher volume of processes without exhausting physical memory.
    • Reducing Memory Fragmentation: Logical addressing combined with segmentation and paging enables the OS to compact memory by relocating processes or dividing them into smaller, manageable sections. This approach reduces external fragmentation (gaps of free memory between processes), allowing the OS to allocate memory in contiguous or non-contiguous blocks as needed, ensuring that memory is used as efficiently as possible.
    • Multitasking Support: With logical addresses abstracting each process’s memory, the OS can perform context switching smoothly, switching from one process to another without needing to reconfigure memory allocations manually. This capability is essential for multitasking, where multiple processes are managed simultaneously without loss of data integrity or efficiency.

    Memory Mapping and Protection Mechanisms

    The operating system (OS) uses a series of memory mapping and protection mechanisms to manage and secure memory allocation for multiple processes, ensuring that each process operates independently within its own memory space and cannot interfere with others. These mechanisms include Virtual Address Space (VAS), relocation and limit registers, and the Memory Management Unit (MMU). Here’s a detailed breakdown of how each of these mechanisms contributes to memory isolation and protection:

    Virtual Address Space (VAS)

    The Virtual Address Space (VAS) is a logical abstraction created by the OS to provide each process with an independent view of memory, giving the impression that the process has exclusive access to memory.

    • VAS refers to the set of logical addresses that a process can use, which are distinct from physical addresses in RAM. For example, when a process accesses memory, it uses logical addresses that are later translated into actual physical addresses.
    • This abstraction allows processes to operate as though they have their own memory, despite sharing the physical memory with other processes.
    • Process Isolation: Each process operates within its unique virtual address space, which effectively isolates processes from each other and prevents one process from accessing another’s memory.
    • System Stability and Security: By isolating memory access, VAS helps to prevent errors in one process from corrupting the memory of another process or the OS, improving overall system stability and security.

    Separating Memory Spaces with Relocation and Limit Registers

    Relocation and limit registers are essential tools used by the OS and MMU to define the permissible memory range for each process, restricting access to only a specific region in physical memory.

    • Relocation Register (Base Address):
      • The relocation register holds the smallest physical address that a process can access, often referred to as the base address (R).
      • This register essentially shifts the logical address space of a process to start at a specific physical address in RAM, making sure that each process operates within its designated memory boundaries.
    • Limit Register:
      • The limit register defines the range of logical addresses that a process is allowed to use.
      • The limit register ensures that each process can access only a certain amount of memory, protecting the memory space from accidental or malicious access beyond the defined range.
    • Address Check:
      • Each logical address generated by a process is compared against the limit register’s value.
      • Validation: If the logical address falls within the limit range, it is mapped to the corresponding physical address and granted access. However, if the logical address exceeds this limit, an exception is raised, denying the process access to prevent unauthorized memory access.

    This combination of relocation and limit registers is crucial for isolating memory spaces for individual processes, safeguarding the OS, and protecting user data.
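    The address check described above can be sketched in Python. The register values below are illustrative; a real system performs this comparison in MMU hardware on every memory access:

```python
def translate(logical_addr, relocation, limit):
    """Map a logical address to a physical address, or trap.

    relocation -- base physical address loaded for the running process
    limit      -- number of logical addresses the process may use
    """
    if logical_addr < 0 or logical_addr >= limit:
        # Hardware would raise an addressing exception (trap) here.
        raise MemoryError(f"logical address {logical_addr} exceeds limit {limit}")
    return relocation + logical_addr

# Example: a process loaded at physical address 14000 with a 3000-byte limit.
print(translate(346, 14000, 3000))   # 14346
```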

    Dynamic Mapping with the Memory Management Unit (MMU)

    The Memory Management Unit (MMU) is a hardware component responsible for the dynamic translation of logical addresses into physical addresses.

    • Address Translation:
      • The MMU performs real-time translation of logical addresses into physical addresses by adding the relocation register’s value (the base address) to each logical address.
      • This process allows each process to operate as if it has access to a contiguous block of memory, while in reality, the MMU maps these addresses to distinct physical locations in RAM.
    • Loading Relocation and Limit Registers:
      • During a context switch (when the CPU shifts from executing one process to another), the CPU scheduler selects a process for execution, and the dispatcher loads the relocation and limit registers with values specific to that process.
      • This setup, unique to each process, ensures that each process’s memory access is restricted to its assigned range, promoting memory isolation and preventing processes from accidentally or intentionally accessing unauthorized memory areas.
    • Protection of OS and User Data:
      • Every address generated by the CPU is checked by the MMU against the relocation and limit registers. This systematic check prevents processes from accessing memory regions that are either reserved for the OS or allocated to other processes.
      • This system-level protection maintains process isolation, secures OS data, and protects sensitive information from unauthorized access.
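    A rough sketch of how a dispatcher might reload the MMU's registers on a context switch. The PCB contents and structure here are invented for illustration; the point is that the same logical address maps to different physical locations depending on which process is running:

```python
# Hypothetical per-process register values stored in each process's PCB.
pcbs = {
    "P1": {"relocation": 10000, "limit": 4000},
    "P2": {"relocation": 30000, "limit": 2000},
}

mmu = {"relocation": 0, "limit": 0}  # the MMU's current registers

def dispatch(pid):
    """On a context switch, load that process's values into the MMU."""
    mmu["relocation"] = pcbs[pid]["relocation"]
    mmu["limit"] = pcbs[pid]["limit"]

def access(logical_addr):
    """Translate an access by the currently running process."""
    if not 0 <= logical_addr < mmu["limit"]:
        raise MemoryError("trap: address outside process limit")
    return mmu["relocation"] + logical_addr

dispatch("P1")
print(access(100))   # 10100 -- same logical address...
dispatch("P2")
print(access(100))   # 30100 -- ...maps elsewhere after the switch
```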

    Error Handling for Unauthorized Access

    To further enforce memory protection, the OS has mechanisms to trap and handle attempts at unauthorized memory access.

    • User Mode Restrictions:
      • Processes in user mode are restricted from accessing critical memory areas, such as OS memory and the memory allocated to other user processes.
      • Any attempt by a user-mode process to access restricted areas results in a trap, where the OS detects and intercepts the unauthorized access attempt.
    • Trap and Fatal Error Handling:
      • When an illegal memory access attempt is detected, the OS treats it as a fatal error for the offending process.
      • Process Termination: The OS typically terminates the process to prevent potential security threats or system instability. This strict enforcement ensures that no process can modify or interfere with another process’s memory or with the OS itself.

    Address Translation and Memory Allocation Methods

    Address Translation and Memory Allocation Methods are essential components of memory management in operating systems, influencing how efficiently memory is allocated and how processes access memory. The OS can allocate memory either in contiguous blocks or non-contiguous blocks, depending on the allocation strategy and available memory. 

    Physical Memory Allocation Methods

    The OS primarily uses two methods for memory allocation: contiguous and non-contiguous allocation.

    Contiguous Allocation

    • In this method, each process occupies a single, continuous block of memory. The OS reserves a large enough block of adjacent memory addresses to hold the process in one go.
    • Advantages:
      • Memory management and tracking are straightforward, as each process occupies a single block.
      • Suitable for smaller systems or systems where simplicity and speed are prioritized.
    • Disadvantages:
      • Often leads to fragmentation issues, especially as processes are loaded and removed over time.

    Non-Contiguous Allocation

    • Here, processes are allocated in non-contiguous memory blocks. This is often used in systems that utilize virtual memory or paging, where processes do not need to occupy one continuous block in physical memory. We will learn about Paging in further classes.
    • Advantages:
      • Allows better memory utilization as processes can be spread across multiple available blocks.
      • Reduces fragmentation issues by distributing memory based on availability.

    Contiguous Memory Allocation 

    In contiguous memory allocation, each process is allocated a single block in physical memory. While this method simplifies memory tracking, it can create fragmentation challenges as processes of varying sizes are loaded and removed. Two main approaches to contiguous allocation are fixed partitioning and dynamic partitioning.

    Fixed Partitioning

    Fixed partitioning is one of the earliest forms of memory management, where memory is divided into fixed-sized partitions, each allocated for a specific process.

    Partition Structure:

    • Equal-Sized Partitions: All partitions are of the same size, which simplifies allocation but can lead to significant inefficiencies if most processes do not match the partition size.
    • Variable-Sized Partitions: Partitions can be of different sizes, allowing the OS to match larger partitions with larger processes and smaller partitions with smaller processes. This reduces some inefficiencies but still has limitations.

    Once a partition is occupied by a process, no other process can use that space, even if part of the partition remains unused.

    Limitations of Fixed Partitioning:

    1. Internal Fragmentation:
      • Internal fragmentation occurs when the process size is smaller than the partition size. The unused portion of memory in that partition is wasted.
      • For example, if a partition is 100 KB but the process only requires 60 KB, the remaining 40 KB goes unused, leading to inefficient memory use.
    2. External Fragmentation:
      • Over time, as processes are allocated and deallocated, memory may become fragmented into small, non-contiguous blocks. While the total free space may be sufficient, the lack of a large enough contiguous block prevents new, larger processes from being allocated.
      • This “scattering” of memory can be problematic, as free memory exists but isn’t contiguous.
    3. Limitations on Process Size:
      • Fixed partitioning imposes a restriction on the maximum process size. If a process exceeds the size of the largest available partition, it cannot be loaded into memory, regardless of the total free memory available.
    4. Low Degree of Multiprogramming:
      • Since partition sizes are fixed, the OS can only support a limited number of processes concurrently, regardless of the actual memory requirements of each process.
      • As a result, this limits the OS’s ability to handle multiple processes efficiently and reduces the system's responsiveness to user demands.
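    To make the internal-fragmentation arithmetic from point 1 concrete, here is a small calculation. The partition and process sizes are made up for illustration:

```python
PARTITION_SIZE = 100                # KB, equal-sized fixed partitions
process_sizes = [60, 90, 100, 35]   # KB, one process per partition

# Internal fragmentation: unused space left inside each occupied partition.
wasted = sum(PARTITION_SIZE - p for p in process_sizes)
print(wasted)  # 115 KB wasted out of 400 KB allocated
```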

    Dynamic Partitioning

    Dynamic partitioning is a more adaptable memory allocation technique where partitions are created to fit each process’s specific size requirements. This means that partitions are created dynamically as processes are loaded into memory.

    Partition Creation and Sizing:

    • Each partition is allocated based on the exact memory requirements of the process being loaded, minimizing wasted space.
    • Unlike fixed partitioning, where the partition sizes are predetermined, dynamic partitioning allows for flexible partition sizes, accommodating the varying needs of processes.

    Advantages of Dynamic Partitioning:

    1. No Internal Fragmentation:
      • Since partitions are created to match the exact size of each process, there is no unused space within the partition. This eliminates internal fragmentation, making memory allocation much more efficient.
    2. No Limit on Process Size:
      • Dynamic partitioning allows for more flexibility with process sizes, as partitions are created on demand. A process is no longer confined to fit within a fixed partition; it can be as large as the available memory allows.
    3. Better Degree of Multiprogramming:
      • The system can support a higher number of processes at any time, as partitions are created to match each process’s needs, allowing more efficient memory use. This increases the degree of multiprogramming, making the system more responsive and able to handle a larger workload.

    Limitation of Dynamic Partitioning:

    • External Fragmentation:
      • While dynamic partitioning solves the issue of internal fragmentation, it still suffers from external fragmentation. As processes are allocated and freed over time, memory gaps or “holes” may form in physical memory.
      • Although there may be enough total memory for a new process, these gaps may prevent the allocation due to a lack of contiguous space.
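    A tiny numeric illustration of this failure mode, with made-up hole sizes: the free memory totals more than the request, yet no single hole is large enough to satisfy it.

```python
holes = [30, 25, 20]   # KB, free blocks scattered through physical memory
request = 50           # KB of contiguous space a new process needs

total_free = sum(holes)                # 75 KB -- plenty in total...
can_allocate = max(holes) >= request   # ...but the largest hole is only 30 KB

print(total_free, can_allocate)   # 75 False
```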

    Free Space Management

    Defragmentation/Compaction

    When using dynamic partitioning for memory allocation, external fragmentation can become a significant issue. External fragmentation occurs when free memory spaces are scattered throughout physical memory, making it difficult to allocate contiguous memory for larger processes. Compaction, or defragmentation, helps address this.

    How Compaction Works:

    • Goal: Gather all free memory into a single contiguous block, allowing larger processes to be allocated.
    • Process:
      1. The OS rearranges loaded partitions so that all occupied memory regions are placed contiguously.
      2. Once the active memory blocks are adjacent, the remaining free memory naturally forms a single contiguous space.
      3. This makes it possible to allocate large blocks of memory to new processes.
    • Example: Imagine a scenario where three processes, each taking up different memory blocks, are scattered in memory, with smaller free spaces between them. By using compaction, these processes can be reorganized to occupy contiguous memory spaces, and all the free space can be merged into a single large block.
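    The compaction process above can be sketched as a small function that slides every allocated block down to the lowest available address, leaving one contiguous hole at the top. Block sizes and process names here are illustrative:

```python
def compact(blocks, memory_size):
    """Slide allocated blocks to low addresses; return the new layout.

    blocks -- list of (pid, size) for allocated regions, in address order
    Returns (relocated, free_start): relocated is [(pid, new_start, size)],
    and everything from free_start up to memory_size is one free hole.
    """
    relocated, addr = [], 0
    for pid, size in blocks:
        relocated.append((pid, addr, size))
        addr += size
    return relocated, addr

# Three processes originally scattered through a 500 KB memory with gaps.
layout, free_start = compact([("P1", 100), ("P2", 50), ("P3", 120)], 500)
print(layout)            # [('P1', 0, 100), ('P2', 100, 50), ('P3', 150, 120)]
print(500 - free_start)  # 230 KB of free memory, now in one contiguous block
```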

    Advantages of Compaction:

    • Enables Allocation for Larger Processes: With all free memory blocks consolidated, the OS can satisfy memory requests for larger processes.
    • Improves Memory Utilization: Reduces the impact of fragmentation, enabling the OS to use memory more efficiently.

    Disadvantages of Compaction:

    • Reduces System Efficiency: Rearranging memory takes time, which can slow down system performance while compaction is being performed.
    • Complex Implementation: Requires the OS to track and adjust memory locations for all active processes during compaction, which can increase overhead.

    How Free Space is Stored/Represented in the OS

    Operating Systems manage free memory spaces using data structures like a free list, often implemented as a linked list. The free list keeps track of all the unallocated memory blocks, known as holes.

    Free List:

    • Structure: Each node in the linked list represents a free memory block (hole). Nodes contain information about the size of the free block and a pointer to the next free block in memory.
    • Benefits: A linked list allows the OS to efficiently track scattered free blocks, which can be merged or split as needed during allocation or deallocation.
    • Operations:
      • Add: When a process deallocates memory, the free space is added as a new node or merged with adjacent free blocks.
      • Remove: When allocating memory, the OS finds an appropriate hole and removes or adjusts it in the free list based on the algorithm used (e.g., First Fit, Best Fit).
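    A minimal sketch of the Add operation described above, representing each hole as a (start, size) pair and coalescing adjacent holes on deallocation. A real allocator would use an actual linked structure rather than a Python list:

```python
def free_block(free_list, start, size):
    """Insert a freed block into the free list, merging adjacent holes.

    free_list -- list of (start, size) holes; returned sorted and coalesced
    """
    free_list.append((start, size))
    free_list.sort()
    merged = [free_list[0]]
    for s, sz in free_list[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:          # adjacent holes: merge into one
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    return merged

holes = [(0, 100), (300, 50)]
holes = free_block(holes, 100, 200)  # the freed block bridges the gap
print(holes)  # [(0, 350)] -- three holes coalesced into one
```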

    Memory Allocation Strategies for Satisfying Requests of Size n

    When a process requests memory, the OS must decide how to find and allocate an appropriately sized free block from the free list. Several strategies exist, each with its own advantages and drawbacks:

    First Fit

    • The OS searches from the beginning of the free list and allocates the first hole that is large enough to accommodate the process.
    • Advantages:
      • Simple to Implement: Easy to set up, as it only requires finding the first suitable hole.
      • Efficient: Usually faster than other methods since it stops as soon as a big enough hole is found.
    • Disadvantages:
      • Potential Fragmentation: Leaves behind potentially usable, smaller free spaces, leading to external fragmentation over time.

    Next Fit

    • Similar to First Fit, but instead of always starting from the beginning of the free list, the search begins from the location of the last allocated hole.
    • Advantages:
      • Improves Search Time in Some Cases: Reduces search time by not revisiting the beginning of the list.
      • Reduces Overhead: Like First Fit, it is simple and efficient to implement.
    • Disadvantages:
      • Limited Optimization over First Fit: Since it is only a slight variation of First Fit, it still suffers from external fragmentation.

    Best Fit

    • Best Fit searches the entire free list and selects the smallest free block that can accommodate the process’s request.
    • Advantages:
      • Less Wasted Space per Allocation: By choosing the smallest hole that fits the request, it leaves the smallest possible leftover hole, reducing wasted memory per allocation.
    • Disadvantages:
      • Time-Consuming: Best Fit requires scanning the entire list to find the smallest suitable block, increasing time complexity.
      • Increased External Fragmentation: Tends to create many small, unusable free blocks over time, leading to severe external fragmentation.

    Worst Fit

    • The OS allocates the largest available hole that can satisfy the request, aiming to leave larger blocks available for future allocations.
    • Advantages:
      • Larger Remaining Blocks: Leaves behind bigger chunks of memory, which may accommodate more substantial future processes.
    • Disadvantages:
      • Inefficient Time Complexity: Requires scanning the entire free list to find the largest hole, which can be slow.
      • Higher Chance of Fragmentation: While it aims to leave larger blocks, it may create small unusable spaces, especially for large requests.
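    The four strategies above can be compared side by side with a small sketch. The hole sizes are illustrative; for Next Fit, the `last` parameter stands in for the position where the previous search ended:

```python
def allocate(holes, n, strategy="first", last=0):
    """Pick a hole index for a request of size n, or None if none fits.

    holes -- hole sizes in address order
    last  -- for Next Fit, index where the previous search stopped
    """
    if strategy == "next":
        # Resume from the last position, wrapping around the list.
        order = list(range(last, len(holes))) + list(range(last))
    else:
        order = range(len(holes))
    fitting = [i for i in order if holes[i] >= n]
    if not fitting:
        return None
    if strategy in ("first", "next"):
        return fitting[0]                             # first big-enough hole
    if strategy == "best":
        return min(fitting, key=lambda i: holes[i])   # smallest suitable hole
    return max(fitting, key=lambda i: holes[i])       # worst fit: largest hole

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))          # 1 -> the 500 KB hole
print(allocate(holes, 212, "next", last=2))   # 3 -> first fit from index 2
print(allocate(holes, 212, "best"))           # 3 -> 300 KB, smallest leftover
print(allocate(holes, 212, "worst"))          # 4 -> the 600 KB hole
```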

    Comparison of Allocation Strategies

    | Strategy | How It Works | Pros | Cons |
    |---|---|---|---|
    | First Fit | Allocates the first sufficient hole | Fast, simple | Leaves behind small fragments |
    | Next Fit | Like First Fit, but resumes the search from the last allocated hole | Slight optimization over First Fit | Still has potential fragmentation issues |
    | Best Fit | Allocates the smallest suitable hole | Leaves the smallest leftover hole | Slow; creates many small external fragments |
    | Worst Fit | Allocates the largest suitable hole | Leaves larger holes for future use | Inefficient; may increase fragmentation |