Wednesday, February 23, 2011

3.0 : MEMORY MANAGEMENT


 

3.0: Elaborate on Memory Management Concept.
-         Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.
-         Virtual Memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.
-         Garbage collection is the automated allocation and deallocation of computer memory resources for a program. It is generally implemented at the programming language level and stands in contrast to manual memory management, the explicit allocation and deallocation of memory by the programmer. Region-based memory management is an efficient variant of explicit memory management that can deallocate large groups of objects simultaneously.




3.1 : Elaborate on Virtual Memory Implementation
-         The use of virtual memory addressing (such as paging or segmentation) means that the kernel (the central component of most computer operating systems) can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.

-         If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. Under UNIX this kind of interrupt is referred to as a page fault.

(Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it were one continuous chunk of memory, called virtual memory.)
-         When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.

-         In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.

3.1.1: Paging
-         In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.

-         Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).
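The page-number/offset split described above can be sketched in Python. The page size and page-table contents here are illustrative assumptions, not values from the text:

```python
# Sketch of paging address translation (hypothetical 4 KB pages).
PAGE_SIZE = 4096  # bytes per page (an assumed, common size)

# A toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    """Split a virtual address into page number and offset,
    then map the page to its physical frame."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # The page is not resident: this is where a page fault occurs.
        raise LookupError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8192 + 4 = 8196
```

Because only the page number is remapped, the frames backing a process can sit anywhere in physical memory, which is exactly why paging allows a noncontiguous physical address space.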

3.1.2: Segmentation
-         In computing, memory segmentation is one of the most common ways to achieve memory protection; another common one is paging. In a computer system using segmentation, an instruction operand that refers to a memory location includes a value that identifies a segment and an offset within that segment. A segment has a set of permissions, and a length, associated with it. If the currently running process is allowed by the permissions to make the type of reference to memory that it is attempting to make, and the offset within the segment is within the range specified by the length of the segment, the reference is permitted; otherwise, a hardware exception is raised.

-         Moreover, as well as its set of permissions and length, a segment also has associated with it information indicating where the segment is located in memory. It may also have a flag indicating whether the segment is present in main memory or not; if the segment is not present in main memory, an exception is raised, and the operating system will read the segment into memory from secondary storage. The information indicating where the segment is located in memory might be the address of the first location in the segment, or might be the address of a page table for the segment, if the segmentation is implemented with paging. In the first case, if a reference to a location within a segment is made, the offset within the segment will be added to the address of the first location in the segment to give the address in memory of the referred-to item; in the second case, the offset of the segment is translated to a memory address using the page table.


-         In most systems in which a segment doesn't have a page table associated with it, the address of the first location in the segment is an address in main memory; in those systems, no paging is done. In the Intel 80386 and later, that address can either be an address in main memory, if paging is not enabled, or an address in a paged "linear" address space, if paging is enabled.

-         A memory management unit (MMU) is responsible for translating a segment and offset within that segment into a memory address, and for performing checks to make sure the translation can be done and that the reference to that segment and offset is permitted.
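The MMU checks described above can be sketched as follows. The segment table, base addresses, lengths, and permission sets are invented for illustration:

```python
# Sketch of segment-based translation with permission and bounds checks.
class Segment:
    def __init__(self, base, length, perms):
        self.base = base      # address of the first location in memory
        self.length = length  # segment length
        self.perms = perms    # e.g. {"read", "write"}

# A toy segment table (all values are assumptions).
segments = {0: Segment(base=1000, length=200, perms={"read", "write"}),
            1: Segment(base=4000, length=100, perms={"read"})}

def access(seg_id, offset, op):
    """Translate (segment, offset) to a memory address, raising the
    equivalent of a hardware exception on a bad reference."""
    seg = segments[seg_id]
    if op not in seg.perms:
        raise PermissionError("protection fault: %s not permitted" % op)
    if offset >= seg.length:
        raise IndexError("segment bounds exceeded")
    return seg.base + offset  # offset added to the segment's first location

print(access(0, 10, "write"))  # -> 1010
```

This models the first case in the text, where the segment's location is the address of its first word; with paging enabled, the final line would instead consult the segment's page table.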

3.2 : Explain on memory relocation policy.
- In systems with virtual memory, programs in memory must be able to reside in different parts of memory at different times. This is because when a program is swapped back into memory after being swapped out for a while, it cannot always be placed in the same location. The virtual memory management unit must also deal with concurrency. Memory management in the operating system should therefore be able to relocate programs in memory and adjust memory references and addresses in the program's code so that they always point to the right location in memory.
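One classic way to achieve this relocation is a base (relocation) register: program addresses stay relative, and the hardware adds the base at every reference, so moving the program only requires updating the base. A minimal sketch (the load addresses are invented):

```python
# Sketch of relocation with a base register: the program's logical
# addresses never change; only the base does when the program moves.
def make_translator(base):
    """Return a function mapping logical addresses to physical ones."""
    return lambda logical_addr: base + logical_addr

translate = make_translator(base=30000)   # program loaded at 30000
print(translate(120))                     # -> 30120

# After being swapped out and back in, the program may land elsewhere:
translate = make_translator(base=52000)
print(translate(120))                     # same code, now -> 52120
```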

3.3 : Allocation of memory (best fit, worst fit, first fit).

Best fit:
-         The allocator places a process in the smallest block of unallocated memory in which it will fit.
Problems:
-         It requires an expensive search of the entire free list to find the best hole.
-         More importantly, it leads to the creation of lots of little holes that are not big enough to satisfy any requests. This situation is called fragmentation, and is a problem for all memory-management strategies, although it is particularly bad for best-fit.
-         Solution: One way to avoid making little holes is to give the client a bigger block than it asked for. For example, we might round all requests up to the next larger multiple of 64 bytes. That doesn't make the fragmentation go away; it just hides it.
-         Unusable space in the form of holes is called external fragmentation.

Worst fit:
-         The memory manager places the process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the hole created as a result of external fragmentation.

First fit:
-         Another strategy is first fit, which simply scans the free list until a large enough hole is found. Despite the name, first fit is generally better than best fit because it leads to less fragmentation.
Problems:
-         Small holes tend to accumulate near the beginning of the free list, making the memory allocator search farther and farther each time.

-         The first-fit approach tends to fragment the blocks near the beginning of the list without considering blocks further down the list. Next fit is a variant of the first-fit strategy. The problem of small holes accumulating is solved by the next-fit algorithm, which starts each search where the last one left off, wrapping around to the beginning when the end of the list is reached (a form of one-way elevator).
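The three placement strategies can be sketched over a free list of (start, size) holes. The hole sizes below are illustrative assumptions:

```python
# Sketch of best-fit, worst-fit, and first-fit placement over a free list.
# Each hole is (start, size); the values are invented for illustration.
free_list = [(0, 100), (200, 500), (900, 250)]

def best_fit(holes, request):
    fits = [h for h in holes if h[1] >= request]
    return min(fits, key=lambda h: h[1]) if fits else None  # smallest hole

def worst_fit(holes, request):
    fits = [h for h in holes if h[1] >= request]
    return max(fits, key=lambda h: h[1]) if fits else None  # largest hole

def first_fit(holes, request):
    for h in holes:            # scan until a large enough hole is found
        if h[1] >= request:
            return h
    return None

print(best_fit(free_list, 220))   # (900, 250): leaves a tiny 30-byte hole
print(worst_fit(free_list, 220))  # (200, 500): leaves the largest hole
print(first_fit(free_list, 220))  # (200, 500): first hole that fits
```

Note how best fit leaves a 30-byte sliver, illustrating why it tends to create small unusable holes, while first fit stops scanning as soon as any hole fits.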


3.4: Page replacement in paging systems - Least Recently Used (LRU), First In First Out (FIFO)

Least Recently Used (LRU):
-         Removes page least recently accessed
-         Efficiency
-         Causes either decrease in or same number of interrupts
-         Slightly better (compared to FIFO): 8/11 or 73%
-         LRU is a stack algorithm removal policy
-         Increasing main memory will cause either a decrease in or the same number of page interrupts
-         Does not experience FIFO anomaly

Two variations:
-         Clock replacement technique
-         Paced according to the computer’s clock cycle
-         Bit-shifting technique
-         Uses 8-bit reference byte and bit-shifting technique
-         Tracks usage of each page currently in memory
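The bit-shifting technique (often called aging) can be sketched as follows: on each timer tick, every resident page's 8-bit reference byte is shifted right, and the page's reference bit is placed in the high-order position. The page names and tick sequence are illustrative:

```python
# Sketch of the bit-shifting (aging) LRU approximation.
REF_BITS = 8

def tick(ref_bytes, referenced):
    """ref_bytes: page -> 8-bit counter; referenced: the set of pages
    touched during the last interval."""
    for page in ref_bytes:
        ref_bytes[page] >>= 1                    # age every page
        if page in referenced:
            ref_bytes[page] |= 1 << (REF_BITS - 1)  # set high-order bit

ages = {"A": 0, "B": 0}
tick(ages, {"A"})   # A: 1000_0000, B: 0000_0000
tick(ages, {"B"})   # A: 0100_0000, B: 1000_0000

# The page with the smallest counter was used least recently.
victim = min(ages, key=ages.get)
print(victim)  # -> A
```

The single byte per page makes this cheap to maintain while still ordering pages roughly by recency of use.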

First In First Out (FIFO):
-         Removes page in memory the longest
-         Efficiency
-         Ratio of page interrupts to page requests
-         FIFO example: not so good
-         Efficiency is 9/11 or 82%

FIFO anomaly:
-         More memory does not lead to better performance
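Both policies, and the FIFO (Belady's) anomaly, can be sketched with a small simulator. The reference string below is the classic example that triggers the anomaly; the interrupt counts feed the interrupts-to-requests efficiency ratio described above:

```python
# Sketch comparing FIFO and LRU page replacement on a reference string.
from collections import OrderedDict

def fifo_faults(refs, frames):
    queue, faults = [], 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)          # evict the page resident longest
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    recent, faults = OrderedDict(), 0
    for page in refs:
        if page in recent:
            recent.move_to_end(page)  # mark as freshly used
        else:
            faults += 1
            if len(recent) == frames:
                recent.popitem(last=False)  # evict least recently used
            recent[page] = True
    return faults

# The classic reference string exhibiting the FIFO anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 interrupts
print(fifo_faults(refs, 4))  # 10 interrupts: more memory, worse result
print(lru_faults(refs, 3))   # 10 interrupts
```

LRU, being a stack algorithm, can never show this anomaly: adding frames keeps the fault count the same or lowers it, which is the property the LRU bullets above describe.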