Functions of OS
The Operating System is responsible for the following functions:
1) Scheduling (allocating CPU time to programs)
2) Memory management (allocating memory space to programs)
3) Synchronization (avoiding conflicts when programs compete for the same resource at the same time)
4) Protection (avoiding conflicts over memory)
The basic idea of scheduling: utilize a task's I/O wait time by running another task on the CPU.
Switching from one thread to another (A -> B) can occur when:
a) A is complete
b) A is blocked on IO
c) Timer interrupt forcefully preempts A
(Figure: How the scheduler works)
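Cases (a) and (c) above can be illustrated with a minimal round-robin sketch (task names, burst lengths, and the quantum here are hypothetical): a task leaves the queue when it completes, and is otherwise preempted by the timer and requeued at the back.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin scheduling. `tasks` maps name -> remaining CPU
    units; a task leaves the queue when complete (case a) and is otherwise
    preempted by the timer after `quantum` units (case c)."""
    queue = deque(tasks.items())
    order = []                            # history of (task, units run)
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        order.append((name, ran))
        if remaining > ran:               # timer preemption: requeue at the back
            queue.append((name, remaining - ran))
    return order

# Hypothetical workload: A needs 5 units, B needs 2, quantum = 2
print(round_robin({"A": 5, "B": 2}, 2))
# → [('A', 2), ('B', 2), ('A', 2), ('A', 1)]
```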
Synchronization is needed because processes exchange information through a shared region of memory. However, this shared region cannot be accessed in an arbitrary manner; it must be accessed in the correct sequence. Imagine a bank balance being updated by two different processes: access must be coordinated among the threads.
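The bank-balance hazard can be sketched with two threads doing a read-modify-write on a shared counter (the names and iteration count are hypothetical; in CPython the GIL makes the lost update intermittent, so the unlocked result varies from run to run):

```python
import threading

def transfer_demo(locked):
    """Two threads each add 1 to a shared balance many times. Without the
    lock, the read and the write can interleave and updates get lost."""
    state = {"balance": 0}
    lock = threading.Lock()
    iters = 100_000

    def deposit():
        for _ in range(iters):
            if locked:
                with lock:                    # critical region
                    state["balance"] += 1
            else:
                tmp = state["balance"]        # read ...
                state["balance"] = tmp + 1    # ... write: may overwrite a peer's update

    threads = [threading.Thread(target=deposit) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["balance"]

print(transfer_demo(locked=True))    # → 200000, always
print(transfer_demo(locked=False))   # often less than 200000 (lost updates)
```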
The part of the program where shared memory is accessed is called the critical region. Critical regions must be executed under mutual exclusion:
1) No two processes may be simultaneously inside the same critical region
2) No process running outside its critical region may block another process
3) No process should have to wait forever to enter its critical section
4) No assumptions may be made about the speed or number of CPUs
These requirements are called Mutual Exclusion, Progress, Bounded Waiting, and Speed Independence.
Approaches to Mutual Exclusion
a) Disabling interrupts in the critical section
b) Lock variables
c) Strict alternation
None of these is satisfactory: lock variables and strict alternation busy-wait, and disabling interrupts is unsafe to entrust to user processes (and does not work on multiprocessors).
So a sleep-and-wakeup mechanism is used instead: rather than busy waiting, the process is put on a queue and woken when it may proceed. This leads to the concept of semaphores.
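The sleep-and-wakeup idea can be sketched with a counting semaphore (the names here are illustrative): the consumer blocks on `acquire` instead of spinning, and the producer's `release` wakes it.

```python
import threading

sem = threading.Semaphore(0)   # counting semaphore, initially 0
results = []

def consumer():
    sem.acquire()              # sleeps on a queue instead of busy waiting
    results.append("woke up")

def producer():
    results.append("work ready")
    sem.release()              # wakes one sleeping waiter

c = threading.Thread(target=consumer)
c.start()
producer()
c.join()
print(results)  # → ['work ready', 'woke up']
```

The ordering is deterministic here: the consumer cannot proceed past `acquire` until the producer has already recorded its work and called `release`.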
Memory is divided into segments so that sharing of code/data is easy and protection is possible.
Segmentation is good in the sense that it gives the flexibility to load programs anywhere in memory. However, it still suffers from external fragmentation,
i.e., there may be holes in memory that are not large enough to be allocated to any process.
To avoid this, paging is introduced. With paging, the entire memory is divided into fixed-size pages. Though there is no wastage due to external fragmentation, there may be wastage within a page; this is called internal fragmentation.
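A minimal sketch of the page/offset split, assuming 4 KiB pages and a hypothetical page table mapping virtual pages to physical frames:

```python
PAGE_SIZE = 4096  # assume 4 KiB pages

def translate(virtual_addr, page_table):
    """Split a virtual address into (page number, offset) and map the page
    number to a physical frame via the page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]          # a missing key would be a page fault
    return frame * PAGE_SIZE + offset

# Hypothetical page table: virtual page 0 -> frame 5, page 1 -> frame 2
table = {0: 5, 1: 2}
print(translate(100, table))    # page 0, offset 100 -> 5*4096 + 100 = 20580
print(translate(4097, table))   # page 1, offset 1   -> 2*4096 + 1  = 8193
```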
In principle this is similar to (fully associative) caching: the pages that are referenced are loaded into RAM, while the others are kept on disk (demand paging).
Now, what happens when there is no free space left in RAM? One of the resident pages has to be evicted. This leads to the concept of page replacement algorithms.
The easiest method is FIFO: the page that was loaded earliest is evicted. But what if that page is a frequently accessed one? The policy is not good. Belady showed that under FIFO, increasing the memory size does not necessarily reduce page faults. Though this happens only in rare cases, we cannot simply add memory and expect page faults to drop. (Belady's anomaly does not occur in stack algorithms such as LRU.)
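Belady's anomaly can be reproduced with a small FIFO sketch; the reference string below is the classic example in which four frames cause more faults than three:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement with `frames` physical frames."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest-loaded page
            memory.append(page)
    return faults

# Classic reference string: under FIFO, more frames yield *more* faults
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # → 9
print(fifo_faults(refs, 4))  # → 10
```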
File System and Disk Operations
Operating Systems – Summary of Algorithms
* Banker's. Algorithm used for deadlock avoidance.
* Page replacement. Selecting the victim page under low memory conditions.
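The Banker's entry above can be sketched as its core safety check (the resource vectors in the example are hypothetical): the system is safe if some order lets every process acquire its maximum need and finish.

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True if some execution order lets every process
    finish. `available` is the free-resource vector; `allocation` and
    `maximum` hold one vector per process, all of the same length."""
    need = [[m - a for m, a in zip(maxi, alloc)]
            for maxi, alloc in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # pretend process i runs to completion and releases its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical single-resource examples
print(is_safe([3], allocation=[[5], [2]], maximum=[[10], [4]]))  # → True
print(is_safe([1], allocation=[[5], [2]], maximum=[[10], [4]]))  # → False
```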
Disk scheduling algorithms.
* Elevator. Disk scheduling algorithm that works like an elevator.
* Shortest seek first. Disk scheduling algorithm to reduce seek time.
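The two disk scheduling entries above can be sketched as ordering functions over pending track requests (the head position and request tracks below are hypothetical):

```python
def sstf(start, requests):
    """Shortest-seek-first: always service the pending request closest
    to the current head position."""
    pending = list(requests)
    head, order = start, []
    while pending:
        nxt = min(pending, key=lambda track: abs(track - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def elevator(start, requests):
    """SCAN (elevator), sweeping toward higher tracks first: service requests
    at or above the head in ascending order, then the rest descending."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

# Head at track 50, pending requests at tracks 10, 60, 45, 70
print(sstf(50, [10, 60, 45, 70]))      # → [45, 60, 70, 10]
print(elevator(50, [10, 60, 45, 70]))  # → [60, 70, 45, 10]
```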
Process synchronisation algorithms.
* Peterson. Allows two processes to share a single-use resource without conflict, using shared memory for communication. Can be generalized.
* Lamport's Bakery algorithm. Provides mutual exclusion for an arbitrary number of processes: each takes a numbered ticket, as in a bakery, and enters in ticket order.
* Dekker. Another two-process mutual exclusion algorithm, like Peterson's, and historically the first known correct one.
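Peterson's algorithm can be sketched deterministically by modelling each process as a generator, where every `yield` is a point at which a scheduler may switch processes (the strict-alternation scheduler and the names here are illustrative, not the textbook presentation):

```python
def peterson(me, flag, turn, log):
    """One process as a generator: every yield is a possible switch point."""
    other = 1 - me
    flag[me] = True                       # announce intent to enter
    yield
    turn[0] = other                       # politely yield the tie-break
    yield
    while flag[other] and turn[0] == other:
        yield                             # busy wait (Peterson's spins)
    log.append(("enter", me))             # critical section
    yield
    log.append(("exit", me))
    flag[me] = False                      # exit protocol

def run_alternating():
    """Strictly alternate one atomic step of each process until both finish."""
    flag, turn, log = [False, False], [0], []
    live = [peterson(0, flag, turn, log), peterson(1, flag, turn, log)]
    while live:
        for p in list(live):
            try:
                next(p)
            except StopIteration:
                live.remove(p)
    return log

print(run_alternating())
# → [('enter', 0), ('exit', 0), ('enter', 1), ('exit', 1)]
```

Even though both processes announce intent at nearly the same step, the log shows the critical sections never overlap: process 1 spins until process 0 clears its flag.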
Process scheduling algorithms.
* Round-Robin scheduling. The simplest algorithm; assigns equal time slices to each process, without priorities.
* Shortest job next (or first). Executes next the waiting process with the smallest execution time; non-preemptive.
* Shortest remaining time. A preemptive version of shortest job next: the running process is preempted, not terminated, when a job with a smaller remaining time arrives.
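The non-preemptive shortest-job-next policy can be sketched as follows (the job names, arrival times, and burst lengths are hypothetical):

```python
def sjf(jobs):
    """Non-preemptive shortest-job-next: repeatedly run the already-arrived
    job with the smallest burst. `jobs` is a list of (name, arrival, burst)."""
    time, order = 0, []
    pending = sorted(jobs, key=lambda j: j[1])   # sorted by arrival time
    while pending:
        ready = [j for j in pending if j[1] <= time]
        if not ready:                            # idle until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # smallest burst wins
        pending.remove(job)
        time += job[2]                           # run it to completion
        order.append(job[0])
    return order

# A arrives first and runs to completion; then C (burst 1) jumps ahead of B
jobs = [("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]
print(sjf(jobs))  # → ['A', 'C', 'B']
```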