Operating System Notes for Interviews

Satej Bidvai

Posted on December 12, 2022

These are just for revision and are not exhaustive. You should learn the concepts in depth before following the notes! Do let me know if I have missed any topics :)

Overview

Kernel v/s OS

  • Kernel: The core component of the OS; a bridge between the software and the hardware of the system.
  • OS: System software that provides an interface between the user and the hardware so that the computer can be operated easily.

Types of OS

  1. Batch Operating System
    • An operator takes similar jobs and groups them into batches.
    • The next job gets assigned only after the previous job is executed.
  2. Multiprogramming
    • When the current job is waiting for I/O, another process is assigned to the CPU.
    • The CPU is never idle.
  3. Multitasking / Time-Sharing
    • Fixed time slice for different processes.
    • Quick response time.
  4. Distributed Operating System
    • Remote access is enabled within the network.
    • User has access to files that are not on their own system.
  5. Real-Time Operating System
    • Tasks must be completed within strict deadlines.
    • e.g., air traffic control systems

Process Management

Overview

Types of Processing

  1. Preemptive: The CPU is allocated to a process for a limited time.
  2. Non-Preemptive: The CPU is allocated to a process until it terminates or waits for I/O.

Types of Schedulers

  1. Long Term Schedulers:
    • Loads processes from secondary memory to ready queue.
  2. Short Term Scheduler:
    • Assigns a process from ready queue to the CPU.
    • Selects processes according to a scheduling algorithm.
  3. Mid Term Scheduler
    • Swaps processes from main memory to secondary memory.
    • It can later swap a process back in to resume its execution.

System call: fork()

  • Creates a child process
  • For ‘n’ fork() calls: 2^n - 1 child processes are created
  • Returns an integer:
    • 0: returned in the child process
    • +ve: returned in the parent process (the value is the child's PID)
    • -ve: error; no process was created
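
A minimal sketch of these return values, using Python's `os.fork` wrapper around the system call (POSIX systems only; the pipe is just an illustration device so the child can report back):

```python
import os

r, w = os.pipe()                  # pipe so the child can send a message back
pid = os.fork()

if pid == 0:                      # 0 -> we are in the child process
    os.write(w, b"child")
    os._exit(0)                   # exit here so the child never runs parent-only code
else:                             # positive -> parent; pid is the child's PID
    os.waitpid(pid, 0)            # reap the child (prevents a zombie)
    msg = os.read(r, 5)           # b"child"
```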

Context Switching

  • Saving the context of one process and loading another process.
  • Allows multiple processes to share CPU.
  • Interrupts cause CPU to switch to a kernel routine from the current task. (Switch done by OS)

Process v/s Threads

| Processes | User Level Threads |
| --- | --- |
| Created via system calls | No system calls required |
| Own memory space | Use the memory of the parent process |
| Do not share data or code | Share the same copy of code and data |
| Can be divided into multiple threads | Smallest unit of execution |
| Blocking one process does not affect the others | Blocking a thread blocks the entire process |
| Slow context switching | Fast context switching |

User Level v/s Kernel Level Threads

| User Level Threads | Kernel Level Threads |
| --- | --- |
| Managed by a user-level library | Managed by the OS |
| Fast context switching | Slow context switching |
| Blocking a thread blocks the entire process | Blocking a thread has no effect on other threads |

CPU Scheduling

Key terms

  1. Throughput: The number of processes that are completed per time unit.
  2. Turnaround Time: Sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
  3. Arrival Time: Time at which a process arrives in the ready queue.
  4. Waiting Time: Sum of periods spent waiting in the ready queue.
  5. Burst Time: Time required for a process to get executed on the CPU.
  6. Completion Time: Time at which a process completes its execution.
  7. Response Time: Time from submission of a request till the first response is produced.

Turn Around Time = Completion Time - Arrival Time

Waiting Time = Turn Around Time - Burst Time
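
A worked example of the two formulas, using three hypothetical processes whose arrival, burst, and completion times are made up for illustration:

```python
# Each tuple: (arrival time, burst time, completion time) for a hypothetical
# non-preemptive schedule.
procs = {"P1": (0, 4, 4), "P2": (1, 3, 7), "P3": (2, 1, 8)}

results = {}
for name, (arrival, burst, completion) in procs.items():
    tat = completion - arrival    # Turnaround Time = Completion - Arrival
    wt = tat - burst              # Waiting Time = Turnaround - Burst
    results[name] = (tat, wt)

print(results)  # {'P1': (4, 0), 'P2': (6, 3), 'P3': (6, 5)}
```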

Scheduling Algorithms

  1. First Come First Serve (FCFS)
    • The process that requests CPU first is allocated the CPU first (FIFO).
    • Non preemptive.
    • Avg. waiting time varies substantially with the burst times of the processes.
    • Not ideal for time-sharing systems.
  2. Shortest Job First Scheduling (SJFS)
    • Shortest Next CPU Burst Algorithm
    • When the CPU is available, the process that has the smallest CPU burst is assigned first.
    • Preemptive or non-preemptive.
    • Preemptive SJFS is also called Shortest Remaining Time First Scheduling.
    • Problem: there is no way to know the length of the next CPU burst, so, although optimal, SJFS cannot truly be implemented at the level of short-term scheduling.
    • In practice, the next CPU burst is predicted to be similar in length to the previous ones (e.g., by exponential averaging of the measured lengths).

  3. Priority Scheduling
    • A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
    • Equal-priority processes are scheduled in FCFS order.
    • Preemptive or non-preemptive.
    • Problem of indefinite blocking (starvation).
      • Solution: Aging (gradually increasing the priority of processes that wait in the system for a long time).
  4. Round Robin Scheduling (RR)
    • Similar to FCFS scheduling, but preemption is added to switch between processes.
    • The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
  5. Multilevel Queue Scheduling
    • Partitions the ready queue into several separate queues.
    • Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
    • Each queue has its own scheduling algorithm.
    • In addition, there must be scheduling among the queues, commonly implemented as fixed-priority preemptive scheduling.
  6. Multilevel Feedback Queue Scheduling
    • Allows a process to move between queues.
    • Separates processes according to the characteristics of their CPU bursts.
    • If a process uses too much CPU time, it is moved to a lower-priority queue.
    • This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
    • In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
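
The Round Robin idea can be sketched as a small simulation (a toy model: all processes are assumed to arrive at time 0, and context-switch overhead is ignored):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR over {name: burst_time}; returns {name: completion_time}."""
    queue = deque(bursts.items())      # FIFO ready queue
    completion = {}
    t = 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)       # run for at most one time quantum
        t += run
        if left > run:
            queue.append((name, left - run))   # preempted: back of the queue
        else:
            completion[name] = t               # finished within this quantum
    return completion

print(round_robin({"P1": 5, "P2": 3}, 2))  # {'P2': 7, 'P1': 8}
```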

Process Synchronization

Interprocess Communication (IPC)

Types of Processes

  1. Independent Process: Do not affect / get affected by other processes.
  2. Cooperating Processes: Affect and get affected by other processes.

Shared Memory

  • Processes exchange information by reading and writing to a shared region.
  • The shared region resides in the address space of the process that creates it.
  • Other processes that want to communicate must attach this region to their own address space.

Message Passing

  • Communication takes place by exchanging messages.
  • Types of Messages
    • Fixed: Easy system level implementation.
    • Variable: Complex system level implementation.
  • Direct Communication
    1. Explicitly name the sender or receiver.
      • send(P, message)
      • receive(Q, message)
    2. Explicitly naming only the receiver.
      • send(P, message)
      • receive(id, message) : id is set to name of process with which it is communicating
  • Indirect Communication
    • Messages are sent and received from a mailbox.
    • Mailbox may be owned by a process or the OS.
    • send(A, message) : Send to mailbox A
    • receive(A, message) : Receive from mailbox A
  • Synchronous Communication
    • Blocking Send: Sending process is blocked until message is received by the mailbox.
    • Blocking Receive: Receiver is blocked until the message is available.
  • Asynchronous Communication
    • Non-blocking Send: Sending process sends the message and resumes operation.
    • Non-blocking Receive: Receiver retrieves a valid message or NULL.
  • Buffers (Queue) can be implemented in 3 ways:
    1. Zero capacity: Blocking send
    2. Bounded capacity: Blocking send if queue is full
    3. Unbounded capacity: Sender is never blocked

Synchronization

Terms

  1. Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place.

Critical Section Problem

Critical section is a code segment where the shared data can be accessed.

Requirements for solution:

  1. Mutual exclusion: Only one process can access the critical section at any time.
  2. Progress: If a process is not using the critical section, then it should not stop any other process from accessing it.
  3. Bounded waiting: Each process must have a limited waiting time. It should not wait endlessly to access the critical section.

Solution:

  1. Lock Variable
    • Executed in user mode.
    • Also applicable for more than two processes.
    • No guarantee of mutual exclusion (the test and the set of lock are two separate, non-atomic steps).
while (lock == 1);   // busy-wait while another process holds the lock
lock = 1;            // acquire; the test above and this set can interleave between processes

// Critical section

lock = 0;            // release
  2. Test and Set Lock
    • Satisfies mutual exclusion.
    • Does not satisfy bounded waiting.
while (TestAndSet(&lock));   // spin until TestAndSet returns the old value false

// Critical section

lock = false;                // release
// Executed atomically by the hardware
boolean TestAndSet(boolean *target) {
        boolean rv = *target;   // save the old value
        *target = true;         // set the lock
        return rv;              // false means the caller acquired the lock
}
  3. Turn Variable
    • Runs in user mode.
    • Applicable only for two processes.
    • Strict Alternation (the two processes take alternate turns to execute the critical section).

// Process P0
while (turn != 0);   // wait until it is P0's turn
// Critical section.
turn = 1;            // hand the turn to P1

// Process P1
while (turn != 1);   // wait until it is P1's turn
// Critical section.
turn = 0;            // hand the turn to P0
  4. Semaphores

    Integer variable used by cooperative processes for synchronization.

    1. Counting Semaphore

      • -∞ to +∞
      • Value of semaphore < 0: No. of processes in suspend list
      • Value of semaphore ≥ 0: No. of processes that can enter critical section

        // Entry Code: P / Wait / Down 
        Wait(Semaphore S) {
                --S;
                if (S < 0) {
                    // Put process in suspend list
                }
        }
        
        // Exit Code: V / Release / Up / Signal 
        Signal(Semaphore S) {
                ++S;
                if (S <= 0) {
                    // Wake one process in suspend list
                }
        }
        
    2. Binary Semaphore (Mutex)

      • 0 / 1
      // Entry Code: P / Wait / Down 
      Wait(Semaphore S) {
              if (S == 1) {
                      S = 0;
              } else {
                  // Put process in suspend list
              }
      }
      
      // Exit Code: V / Release / Up / Signal 
      Signal(Semaphore S) {
              if (<Suspend List is empty>) {
                      S = 1;
              } else {
                      // Wake one process in suspend list     
              }
      }
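
As a sketch of the counting-semaphore idea, `threading.Semaphore` implements the wait/signal pair as `acquire()`/`release()`; the `peak` counter here is an illustration device (not part of the semaphore) recording how many threads were ever inside at once:

```python
import threading

sem = threading.Semaphore(2)      # at most 2 threads in the critical section
guard = threading.Lock()          # protects the two counters below
inside, peak = 0, 0

def worker():
    global inside, peak
    with sem:                     # wait(S) on entry
        with guard:
            inside += 1
            peak = max(peak, inside)
        with guard:
            inside -= 1
                                  # signal(S) on leaving the with-block

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                       # never exceeds 2
```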
      

Deadlock

Necessary Conditions

  1. Mutual Exclusion - At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource.
  2. No Preemption - A resource can be released only voluntarily by the process holding it, after that process has completed its task.
  3. Hold & Wait - A process holding at least one resource is waiting to acquire additional resources held by other processes.
  4. Circular Wait - A set of waiting processes {P1, ..., Pn} exists such that each Pi is waiting for a resource held by the next, and Pn is waiting for a resource held by P1.
  • In a single-instance RAG (Resource Allocation Graph):
    • Cycle ⇒ Deadlock.
    • No cycle ⇒ No deadlock.

Methods of prevention

  1. Deadlock Ignorance (Ostrich Method) - Ignore the problem and pretend that deadlocks never occur in the system.
  2. Deadlock Prevention - At least one of the 4 necessary conditions is prevented.

    • Mutual Exclusion: Make resources shareable
    • No preemption: Allow preemption
    • Hold and Wait:
      • Must guarantee that whenever a process requests a resource, it does not hold any other resources.
      • Require process to request and be allocated all its resources before it begins execution.
    • Circular Wait: Impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.

      Eg. A process cannot request resource no. 2 after resource no. 3.

  3. Deadlock Avoidance - A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock.

    Disadvantages:

    • Low device Utilisation
    • Reduced system throughput
  4. Deadlock Detection - Deadlock situation may occur. The system may provide:

    • An algorithm that examines the state of the system to determine whether a deadlock has occurred.
    • An algorithm to recover from the deadlock.
    • Banker's Algorithm (columns: Allocation, Max, Need = Max − Allocation)

    Deadlock-free condition (n processes, R total instances of a resource, process i demanding at most di instances):

Total Resources + Total Processes > Total Demand
R > (d1 - 1) + (d2 - 1) + ... + (dn - 1)
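
A quick check of this condition with made-up demand values:

```python
def deadlock_free(resources, demands):
    """True if R > (d1 - 1) + ... + (dn - 1), i.e. R + n > total demand."""
    return resources > sum(d - 1 for d in demands)

# 3 processes, each demanding up to 2 instances of the resource:
print(deadlock_free(4, [2, 2, 2]))   # True: 4 > 3, so deadlock is impossible
print(deadlock_free(3, [2, 2, 2]))   # False: 3 > 3 fails, so deadlock is possible
```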

Memory Management

Terms

  1. Degree of Multiprogramming - The number of processes currently in memory.
  2. Internal Fragmentation - The memory allocated to a process may be slightly larger than the requested memory; the difference between the two is internal fragmentation: unused memory that is internal to a partition.
  3. External fragmentation - It exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes.

Memory Management Techniques

Contiguous Partitioning

  1. Fixed Partitioning (Static)
    • The number of partitions is fixed.
    • Size of each partition may/may not be same.
    • Spanning is not allowed.
    • Problems:
      • Limit in process size.
      • Limit in degree of multiprogramming.
      • Internal Fragmentation: Diff. between allocated and requested memory.
      • External Fragmentation: Process’s memory request cannot be fulfilled because the memory offered is in a non-contiguous manner.
  2. Variable Partitioning (Dynamic)
    • No internal fragmentation.
    • No limit in degree of multiprogramming.
    • No limit in process size.
    • Problems:
      • External Fragmentation - Must use compaction to shift processes so they are contiguous and all free memory is in one block.
      • Allocation/Deallocation is complex.
    • Dynamic Partitioning Placement Algorithm
      1. First Fit - Allocate the first hole that is big enough.
      2. Next Fit - Same as first fit but start search from last allocated hole.
      3. Best Fit - Allocate the smallest hole that is big enough.
      4. Worst Fit - Allocate the biggest hole.
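
The four placement algorithms can be sketched over a hypothetical list of free holes (sizes in KB, in memory order); each function returns the index of the chosen hole:

```python
holes = [100, 500, 200, 300, 600]   # hypothetical free-hole sizes, in memory order

def first_fit(holes, request):
    # First hole (scanning from the start) that is big enough.
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None

def next_fit(holes, request, start):
    # Like first fit, but the scan starts from the last allocated hole.
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= request:
            return i
    return None

def best_fit(holes, request):
    # Smallest hole that is big enough.
    fits = [i for i, h in enumerate(holes) if h >= request]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, request):
    # Biggest hole.
    fits = [i for i, h in enumerate(holes) if h >= request]
    return max(fits, key=lambda i: holes[i]) if fits else None

print(first_fit(holes, 212), next_fit(holes, 212, 2),
      best_fit(holes, 212), worst_fit(holes, 212))   # 1 3 3 4
```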

Paging

Memory-management technique that retrieves processes from secondary storage into main memory in fixed-size blocks (pages), allowing a process's physical address space to be non-contiguous.

Logical Address = Page No. + Offset (within the page)
Physical Address = Frame No. + Offset (within the frame)

  • Frame Size = Page Size
  • No. of entries in page table = No. of Pages
  • Page Table Entry fields:

    | Frame No. | Valid (1) / Invalid (0) | Protection (RWX) | Reference (0/1) | Caching | Dirty (Modified) |
  • Inverted Paging - each entry stores:

    | Frame No. | Page No. | Process ID |
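
A sketch of logical-to-physical translation under paging, assuming a hypothetical 1 KB page size and a made-up page table:

```python
PAGE_SIZE = 1024                     # hypothetical 1 KB pages (= frame size)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (made up)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split into page no. + offset
    frame = page_table[page]                    # page-table lookup
    return frame * PAGE_SIZE + offset           # frame base + same offset

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2054
```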

Some Important Topics

Thrashing

The CPU spends more time on swapping and paging activity than on actual execution.

Segmentation v/s Paging

| Paging | Segmentation |
| --- | --- |
| Invisible to the programmer | Visible to the programmer |
| Fixed page size | Segment sizes may vary |
| Faster | Slower |
| OS maintains a list of free frames | OS maintains a list of free holes |
| Internal fragmentation | External fragmentation |

Overlays

  • Running a program that is bigger than the size of the physical memory by keeping only those instructions and data that are needed at any given time.
  • Divide the program into modules in such a way that not all modules need to be in the memory at the same time.
  • Advantages
    • Reduce memory requirement
    • Reduce time requirement
  • Disadvantages
    • The overlay map must be specified by the programmer
    • Overlaid modules must be completely disjoint

Important Terms

  1. Zombie Process
    • A process whose execution is completed but which still has an entry in the process table.
    • A child process becomes a zombie until its parent reads its exit status (e.g., via wait()), after which it is removed from the process table.
  2. Orphan Process: A process whose parent process no longer exists, i.e., the parent finished or terminated without waiting for its child process to terminate.
  3. Cascading Termination: If the parent process exits or terminates, its child processes are also terminated.
  4. Starvation
    • When a process has not been able to get the required resources it needs for progress with its execution for a long period of time.
    • Low priority processes get blocked and only high priority processes are executed.
  5. Aging: To overcome starvation, increase the priority of processes that wait in the system for resources for a long period of time.
  6. Virtual Memory: Storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory.
