Hardware

Figure: the memory hierarchy, comparing speed, size, and cost.

  • CPU registers: very fast, very small, very expensive
  • Cache: very fast, small, expensive
  • RAM: fast, large, inexpensive
  • Hard disk: slow, very large, very inexpensive
  • Off-line storage: very slow, potentially huge, least expensive

Random Access Memory (RAM) can be treated as an array of bytes, with each byte having a unique index known as a physical address. RAM is a contiguous memory region, i.e. it generally occupies a single interval of consecutive addresses.
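One way to picture this is as a flat byte array indexed by physical address; a minimal C sketch (the array name and size are illustrative, not from the source):

    #include <stdint.h>
    #include <stddef.h>

    /* Model physical memory as a flat array of bytes: a physical address
     * is simply an index into this array. */
    #define RAM_SIZE (64 * 1024)   /* hypothetical 64 KB of RAM */
    static uint8_t ram[RAM_SIZE];

    static uint8_t read_byte(size_t phys_addr)             { return ram[phys_addr]; }
    static void    write_byte(size_t phys_addr, uint8_t v) { ram[phys_addr] = v; }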

Summary

There are two types of data in a process; the memory holding them can grow or shrink during execution.

  • transient data (valid only for a limited duration, e.g. function parameters and local variables)
  • persistent data (valid for the whole execution, e.g. global variables)

The operating system handles the following memory-related tasks:

  • allocate memory space to new processes
  • manage memory space for processes
  • protect the memory spaces of processes from each other
  • provide memory-related system calls to processes
  • manage memory space for its own internal use

Without Memory Abstraction

The general idea is that the program sees physical memory directly, and accesses and modifies its contents using physical addresses.

Example

When executing an instruction like MOV REG1, 1000, the computer moves the contents of physical memory location 1000 into REG1.

Pros

  • Simple

In this scenario, it is still possible to run multiple programs at the same time: save the entire contents of memory to a disk file, then bring in and run the next program (swapping).

Another solution would be to

  • divide memory into small blocks
  • assign a protection key to each block
  • trap any attempt by a running process to access memory whose protection key differs from the key in the Program Status Word (PSW)

This solution would prevent user processes from interfering with one another, and with the operating system itself. However, an issue could still occur. Imagine two 4 KB programs:

Figure: Program A and Program B are each compiled for addresses 0 to 4096 and both contain lw $1, [4]. When Program B is loaded at physical address 8000 (while Program A occupies the low addresses), Program B's lw $1, [4] still refers to physical address 4 and reads the wrong value, from Program A's memory.

As seen, when Program B is loaded into memory, it refers to the wrong value: address 4 in the first program's region instead of address 4 in its own.

Address Relocation

To fix this issue, we can recalculate the memory references during loading:

Figure: the same two programs, but this time Program B's lw $1, [4] is recalculated during loading to lw $1, [8004], since B is loaded at physical address 8000.
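A minimal sketch of what the loader does during relocation, assuming it is given a relocation table listing which words of the program image contain absolute addresses (the names here are illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* Patch every word named in the relocation table by adding the load
     * address, so references compiled for address 0 now point into the
     * region the program was actually loaded at. */
    static void relocate(uint32_t *image, const size_t *reloc_words,
                         size_t n_relocs, uint32_t load_addr)
    {
        for (size_t i = 0; i < n_relocs; i++)
            image[reloc_words[i]] += load_addr;  /* e.g. 4 becomes 8004 at load address 8000 */
    }

This also hints at the second con below: the loader can only patch references if it can tell which words are addresses rather than ordinary integers.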

Cons

  • Slow loading time
  • Challenge in distinguishing memory references from normal integer constants

Base + Limit Register

In this fix, two registers are used:

  • base register
  • limit register

Base register

Base of all memory references.

At compile time, all memory references are compiled as offsets from this register; at load time, the base register is initialised to the starting address of the process's memory space.

Limit register

Indicates the range of the current process's memory space.

Every memory access is checked against the limit to protect the integrity of the memory spaces.

Figure: with base and limit registers, Program A runs with BASE = 0 and LIMIT = 5000, while Program B runs with BASE = 8000 and LIMIT = 13000; each program's lw $1, [4] is translated at run time to lw $1, [BASE + 4], so Program B accesses physical address 8004.

Cons

Accessing an address requires additional computation on every memory access: the physical address is obtained as base + logical address, and the result must be checked against the limit.
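A minimal sketch of the check and translation performed on every access, assuming (as in the figure above) that the limit register holds the end address of the process's memory region; the function names are illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Per-process relocation registers, reloaded by the OS on every context switch. */
    static uint32_t base  = 8000;    /* start of the process's memory region */
    static uint32_t limit = 13000;   /* end of the process's memory region   */

    /* Every access pays one addition (base + offset) and one comparison (against limit). */
    static uint32_t translate(uint32_t logical)
    {
        uint32_t physical = base + logical;
        if (physical >= limit) {                 /* outside the process's region */
            fprintf(stderr, "memory violation at logical address %u\n", logical);
            exit(1);
        }
        return physical;
    }

For Program B in the figure, translate(4) yields 8004, while any logical address of 5000 or more is rejected.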

With Memory Abstraction

Logical Address

Motivation

Embedding actual physical addresses in a program is a bad idea.

Instead of using the physical address space directly, each process is given a logical address space: the process's own view of its memory. This allows each process to have a self-contained, independent space.

In this scenario, a logical address is generally not equal to the physical address, so a mapping between the two is needed.

Contiguous Memory Management

Note that a process must be in memory during execution.

Thus, the following holds:

Each process occupies a contiguous memory region.

This means that a process cannot occupy non-contiguous memory regions; for example, Process A from 0 - 4095, Process B from 4096 - 8191, and Process A again from 8192 - 12287 is not allowed.

Physical memory is assumed to be large enough to contain one or more processes with their complete memory spaces.

With regard to multitasking, multiple processes must be allowed in physical memory at the same time so that the OS can switch between them.

When the physical memory is full, memory should be freed up by

  • removing terminated processes
  • swapping blocked processes to secondary storage (swapping)

Memory Partitioning

Memory partition

The contiguous memory region allocated to a single process.

The partition can be allocated using two different allocation schemes:

  • Fixed-size partition
  • Variable-size partition
Fixed-Size:
  • Fixed number of partitions
  • Process occupies one of the partitions

Variable-Size:
  • Partitions created based on the actual size of the process
  • The OS keeps track of occupied and free memory regions, splitting and merging them when necessary

Fixed Partitioning

Pros

  • Easy to manage
  • Fast allocation (no choosing needed)

Cons

  • Partition size needs to be large enough to contain the largest process

  • Small processes waste memory space (internal fragmentation)

Internal fragmentation

Left-over space when a process does not occupy its whole partition.

Figure: memory divided into fixed partitions (OS, Process A, Process B, Partition 3); the unused space inside an occupied partition is marked as internal fragmentation.

Dynamic Partitioning

Pros

  • Flexible
  • No internal fragmentation

Cons

  • More information to maintain
  • Takes time to locate an appropriate free region

While dynamic partitioning avoids internal fragmentation, a large number of holes can appear in memory as processes are created, terminated, and swapped. This is called external fragmentation; merging the holes, however, is possible.

Figure: dynamically partitioned memory holding the OS, Process A, and Process B, with the remaining region left as free memory.

Allocation

Assuming the OS maintains a list of partitions and holes, an algorithm is used to locate a hole of size M ≥ N for a program of size N.

The algorithms generally

  • Search for a hole of size M ≥ N
    • First-fit, best-fit, worst-fit
  • Split the hole into N and M - N, where N becomes the new partition and M - N is the left-over space (creating a new, smaller hole)

The search variants are as follows:

  • First-fit takes the first hole that is large enough
  • Best-fit finds the smallest hole that is large enough
  • Worst-fit finds the largest hole
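A minimal sketch of the three strategies over a free list, assuming the OS keeps holes as a singly linked list of (start, size) records; the structure and function names are illustrative:

    #include <stddef.h>

    typedef struct hole {
        size_t start;            /* start address of the hole */
        size_t size;             /* size of the hole          */
        struct hole *next;
    } hole_t;

    typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } strategy_t;

    /* Return a hole of size >= n according to the chosen strategy, or NULL if none fits. */
    static hole_t *find_hole(hole_t *holes, size_t n, strategy_t s)
    {
        hole_t *chosen = NULL;
        for (hole_t *h = holes; h != NULL; h = h->next) {
            if (h->size < n)
                continue;                        /* too small to hold the request   */
            if (s == FIRST_FIT)
                return h;                        /* stop at the first one that fits */
            if (chosen == NULL ||
                (s == BEST_FIT  && h->size < chosen->size) ||
                (s == WORST_FIT && h->size > chosen->size))
                chosen = h;
        }
        return chosen;
    }

    /* Split the chosen hole: the first n bytes become the new partition of size N,
     * and the remaining M - N bytes stay behind as a smaller hole. */
    static size_t take_from_hole(hole_t *h, size_t n)
    {
        size_t partition_start = h->start;
        h->start += n;
        h->size  -= n;
        return partition_start;
    }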

Merging and Compaction

When an occupied partition is freed, it is merged with adjacent holes where possible.

Similarly, compaction can be used: occupied partitions are moved around to consolidate the holes. However, compaction cannot be invoked too frequently as it is time-consuming.
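Continuing the free-list sketch above (and additionally using <stdlib.h> for malloc/free), a sketch of how freeing a partition merges it with adjacent holes, assuming the hole list is kept sorted by start address:

    #include <stdlib.h>

    /* Insert the freed partition [start, start + size) back into the sorted
     * hole list, merging with the following and preceding holes if they touch. */
    static void free_partition(hole_t **holes, size_t start, size_t size)
    {
        hole_t *prev = NULL, *cur = *holes;
        while (cur != NULL && cur->start < start) { prev = cur; cur = cur->next; }

        hole_t *h = malloc(sizeof *h);
        h->start = start; h->size = size; h->next = cur;
        if (prev != NULL) prev->next = h; else *holes = h;

        /* merge with the following hole if adjacent */
        if (cur != NULL && h->start + h->size == cur->start) {
            h->size += cur->size; h->next = cur->next; free(cur);
        }
        /* merge with the preceding hole if adjacent */
        if (prev != NULL && prev->start + prev->size == h->start) {
            prev->size += h->size; prev->next = h->next; free(h);
        }
    }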

Buddy System

Pros

  • efficient partition splitting
  • efficient location of a closely matching free partition
  • efficient partition de-allocation and coalescing

The main idea is that a free block is repeatedly split in half to meet a request, with each split producing a pair of sibling blocks (buddy blocks). When both buddy blocks are free again, they are merged back into the larger block.

Implementation

Keep an array A[0...K], where 2^K is the largest allocatable block size.

  • Each array element A[j] is a linked list keeping track of free blocks of size 2^j
  • Each free block is identified just by its starting address

Allocation Algorithm

To allocate a block of size N,

  1. Find the smallest S such that 2^S ≥ N
  2. Access A[S] for a free block.
    • If a free block exists, remove it from the free-block list and allocate it
    • Else,
      • find the smallest R from S+1 to K such that A[R] has a free block
      • for j = R-1 down to S, repeatedly split the block and insert one half (the buddy) into A[j], then allocate the remaining block of size 2^S
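A minimal sketch of the allocation path in C, assuming the free lists A[0...K] hold the starting addresses of free blocks of size 2^j and that the pool initially contains one free block of size 2^K at address 0 (all names and sizes are illustrative):

    #include <stddef.h>
    #include <stdlib.h>

    #define K 10                        /* largest allocatable block is 2^K bytes (illustrative) */

    typedef struct node { size_t addr; struct node *next; } node_t;
    static node_t *A[K + 1];            /* A[j]: free blocks of size 2^j                          */
                                        /* start-up: push(K, 0) makes the whole pool one free block */

    static void push(int j, size_t addr)      /* insert a free block into A[j]     */
    {
        node_t *n = malloc(sizeof *n);
        n->addr = addr; n->next = A[j]; A[j] = n;
    }

    static int pop(int j, size_t *addr)       /* take any free block out of A[j]   */
    {
        if (A[j] == NULL) return 0;
        node_t *n = A[j]; A[j] = n->next;
        *addr = n->addr; free(n);
        return 1;
    }

    /* Allocate a block of size n: find the smallest S with 2^S >= n, then split a
     * larger free block down to that size if A[S] is empty.  Returns the block's
     * starting address, or (size_t)-1 if no block is large enough. */
    static size_t buddy_alloc(size_t n)
    {
        int S = 0;
        while (((size_t)1 << S) < n) S++;     /* smallest S such that 2^S >= n     */

        int R = S;
        while (R <= K && A[R] == NULL) R++;   /* smallest R >= S with a free block */
        if (R > K) return (size_t)-1;

        size_t addr;
        pop(R, &addr);
        for (int j = R - 1; j >= S; j--)      /* split repeatedly: keep the lower half,   */
            push(j, addr + ((size_t)1 << j)); /* put the upper half (its buddy) into A[j] */
        return addr;
    }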

To free a block B,

  1. Check A[S], where 2^S is the size of B
  2. If the buddy of B exists in A[S],
    • Remove B and its buddy from their lists
    • Merge them to get a larger block B'
    • Iterate back to step 1 with B'
  3. Else, insert B into the list in A[S]
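Continuing the sketch above, the free path looks for the buddy in A[S] and merges while possible; when the pool starts at address 0, the buddy of a block at address a with size 2^S sits at a XOR 2^S, since the two halves of a split differ only in that bit:

    /* Remove the block starting at addr from A[j]; returns 1 if it was found. */
    static int remove_block(int j, size_t addr)
    {
        for (node_t **p = &A[j]; *p != NULL; p = &(*p)->next) {
            if ((*p)->addr == addr) {
                node_t *n = *p; *p = n->next; free(n);
                return 1;
            }
        }
        return 0;
    }

    /* Free the block of size 2^S starting at addr: while its buddy is also free,
     * remove the buddy and merge the pair into a block twice the size. */
    static void buddy_free(size_t addr, int S)
    {
        while (S < K) {
            size_t buddy = addr ^ ((size_t)1 << S);  /* sibling from the same split       */
            if (!remove_block(S, buddy))
                break;                               /* buddy not free: stop merging      */
            if (buddy < addr) addr = buddy;          /* merged block starts at lower addr */
            S++;
        }
        push(S, addr);                               /* insert the (possibly merged) block */
    }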