OS

References

Functions of OS

The operating system is responsible for the following functions:

1) Scheduling (allocating CPU time to programs)

2) Memory management (allocating memory space to programs)

3) Synchronization (avoiding simultaneous conflicting accesses to a shared resource)

4) Protection (preventing processes from interfering with each other's memory)

CS Course

http://www.iu.hio.no/~mark/os/

Quick Intro to various topics

http://elsoc.wikia.com/wiki/Scheduling

http://elsoc.wikia.com/wiki/Concurrency_And_Synchronization

http://elsoc.wikia.com/wiki/Memory_Management

http://elsoc.wikia.com/wiki/Virtual_Memory

Hands on tutorials

http://www.osdever.net/tutorials/index

Scheduling Algo

The basic idea is to utilize the time a task spends waiting on I/O by scheduling another task.

Switching from one thread to another (A → B) can occur when

a) A completes

b) A blocks on I/O

c) a timer interrupt forcefully preempts A
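Case (c) is what a round-robin scheduler does on every time slice. A minimal sketch (the process names, burst times, and quantum are made up for the example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts: dict mapping process name -> remaining CPU burst time.
    Returns the order in which processes finish.
    """
    ready = deque(bursts.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(name)                      # case (a): completes
        else:
            ready.append((name, remaining - quantum))  # case (c): preempted
    return finished

# A needs 5 time units, B needs 2, C needs 8; quantum = 3
print(round_robin({"A": 5, "B": 2, "C": 8}, 3))  # ['B', 'A', 'C']
```

B finishes first because its whole burst fits in one quantum, while A and C keep cycling back to the end of the ready queue.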

How the scheduler works

http://www.ibm.com/developerworks/linux/library/l-scheduler/

Synchronization

Synchronization is needed because processes exchange information through a shared region of memory. However, this shared region cannot be accessed in an arbitrary manner; accesses must happen in the correct sequence. Imagine a bank balance being updated by two different processes: access must be coordinated among the threads.

The part of the program where shared memory is accessed is called the critical region. Critical regions must be executed under mutual exclusion:

1) No two processes may be simultaneously inside the same critical region

2) No process running outside its critical region may block another process

3) No process should wait forever to enter its critical region

4) No assumptions may be made about the speed or number of CPUs

These conditions are called Mutual Exclusion, Progress, Bounded Waiting, and Speed Independence.

Approaches to Mutual Exclusion

Some approaches:

a) Disabling interrupts in the critical section

b) Lock variables

c) Strict alternation

None of these is satisfactory, largely because they rely on busy waiting (and disabling interrupts is unsafe to allow in user programs).

So instead, use a sleep-and-wake mechanism: rather than busy waiting, the process is put on a queue and suspended until the resource is free. This leads to the concept of semaphores.
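The sleep-and-wake idea can be sketched with Python's `threading.Semaphore`, whose `acquire`/`release` correspond to the classic down/up operations. The shared counter stands in for the bank-balance example above:

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore guarding the critical region

def deposit(times):
    global counter
    for _ in range(times):
        sem.acquire()          # down(): sleep if another thread is inside
        counter += 1           # critical region: the shared "bank balance"
        sem.release()          # up(): wake one sleeping thread, if any

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 20000 -- no updates are lost
```

Without the semaphore, the two read-modify-write sequences could interleave and some increments would be lost.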

Memory Management

Memory is divided into segments so that sharing of code/data between processes is easy and protection is possible.

Segmentation is good in the sense that it gives flexibility to load programs anywhere in memory. However, it still suffers from external fragmentation:

i.e., there may be holes in memory that are not large enough to be allocated to any process.

To avoid this, paging is introduced. With paging, the entire memory is divided into fixed-size pages. Though there is no wastage due to external fragmentation, there may be wastage within a page; this is called internal fragmentation.
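Under paging, a virtual address splits into a page number and an offset, and the page table maps pages to physical frames. A toy sketch, assuming 4 KB pages and a made-up flat page table:

```python
PAGE_SIZE = 4096  # 4 KB pages, so the offset occupies the low 12 bits

def translate(vaddr, page_table):
    """Translate a virtual address via a flat page table (illustrative only)."""
    page = vaddr // PAGE_SIZE       # which virtual page the address falls in
    offset = vaddr % PAGE_SIZE      # position within that page
    frame = page_table[page]        # page table: virtual page -> physical frame
    return frame * PAGE_SIZE + offset

# Hypothetical mapping: page 0 -> frame 5, page 1 -> frame 2
table = {0: 5, 1: 2}
print(hex(translate(0x1ABC, table)))   # page 1, offset 0xABC -> 0x2abc
```

Because every frame is the same fixed size, any free frame can hold any page, which is exactly why external fragmentation disappears.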

In principle it is similar to (fully associative) caching: the pages that are referenced are loaded into RAM, while the others are kept on disk (demand paging).

Now what happens when there is no space left in RAM? One of the pages has to be evicted. This leads to the concept of page replacement algorithms.

The easiest method is FIFO: the page that was loaded earliest is evicted. But what if that page is the one being accessed frequently? The policy is not good. Belady showed that under FIFO, increasing the memory size does not necessarily reduce the number of page faults. Though this happens only in rare cases, it means we cannot simply add memory and expect page faults to drop. (Belady's anomaly does not occur in stack algorithms such as LRU.)
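Belady's anomaly is easy to reproduce with his classic reference string: under FIFO, going from 3 frames to 4 frames *increases* the fault count.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO page replacement."""
    memory = deque()                 # pages in memory, oldest first
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()     # evict the page loaded earliest
            memory.append(page)
    return faults

# Belady's classic reference string
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more memory, more faults
```

With 4 frames, pages survive long enough that the later re-references to 1 and 2 miss, which is exactly the counter-intuitive behaviour Belady described.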

File System and Disk Operations

https://en.wikipedia.org/wiki/Inode_pointer_structure

http://www.cs.odu.edu/~cs471w/spring11/lectures/MassStorage.htm

http://www.slideshare.net/J.T.A.JONES/disk-scheduling

http://web.archive.org/web/20080606005055/http://www.dcs.ed.ac.uk/home/stg/pub/D/disk.html

Operating Systems – Summary of Algorithms


* Banker's algorithm. Used for deadlock avoidance.
* Page replacement. Selects the victim page under low-memory conditions.

Disk scheduling algorithms.
* Elevator. Disk scheduling algorithm that works like an elevator.
* Shortest seek first. Disk scheduling algorithm to reduce seek time.
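Shortest seek first can be sketched in a few lines: always service the pending request closest to the current head position. The head position and request queue below are hypothetical:

```python
def sstf(head, requests):
    """Shortest-seek-time-first: service the closest pending cylinder each time."""
    pending = list(requests)
    order, total_seek = [], 0
    while pending:
        nearest = min(pending, key=lambda cyl: abs(cyl - head))
        total_seek += abs(nearest - head)   # distance the arm travels
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, total_seek

# Head at cylinder 53, hypothetical request queue
order, seek = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order, seek)   # [65, 67, 37, 14, 98, 122, 124, 183] 236
```

SSTF reduces total seek distance compared with servicing requests in arrival order, but it can starve requests far from the head; the elevator algorithm avoids that by sweeping in one direction at a time.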

Process synchronisation algorithms.
* Peterson. Allows two processes to share a single-use resource without conflict, using only shared memory for communication. Can be generalized to more processes.
* Lamport's bakery algorithm. Provides mutual exclusion for multiple threads, improving robustness against failures.
* Dekker. Another concurrent-programming algorithm for two processes, like Peterson's.
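A sketch of Peterson's algorithm for two threads. This demo relies on CPython's GIL giving sequentially consistent memory; on real hardware the same code would need memory barriers. The counter and iteration count are made up for the example:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait demo runs fast

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to wait
counter = 0

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:
            pass                              # busy-wait until safe to enter
        counter += 1                          # critical section
        flag[i] = False                       # leave the critical section

threads = [threading.Thread(target=worker, args=(i, 300)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 600: mutual exclusion means no increment is lost
```

Note the busy waiting in the `while` loop: Peterson's algorithm guarantees mutual exclusion, progress, and bounded waiting, but a waiting thread still spins, which is why the sleep-and-wake semaphore approach above is preferred in practice.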

Scheduling algorithms
* Round-Robin scheduling. Simplest algorithm, assigns time slices to each process without priority.
* Shortest job next (or first). Runs the waiting process with the smallest execution time next; non-preemptive.
* Shortest remaining time. A preemptive version of shortest job next that suspends the running process when a job with a shorter remaining time arrives.
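Shortest job next is easy to work through by hand for jobs that all arrive together: sort by burst time and accumulate. The job names and burst times here are hypothetical:

```python
def sjn_waiting_times(bursts):
    """Non-preemptive shortest-job-next for jobs all arriving at time 0.

    Returns {job: waiting_time}. Running short jobs first minimizes
    the average waiting time.
    """
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock   # a job waits until every shorter job is done
        clock += burst
    return waits

# Hypothetical bursts: A=6, B=8, C=7, D=3
waits = sjn_waiting_times({"A": 6, "B": 8, "C": 7, "D": 3})
print(waits, sum(waits.values()) / len(waits))   # average wait = 7.0
```

Running order is D, A, C, B; scheduling the longest job first instead would roughly double the average wait, which is why SJN is provably optimal for this metric.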

