Processes in Operating System

hrishi2710

Hrishikesh kumar

Posted on May 31, 2019


Here we will discuss the concept of processes in an operating system. For the basics of operating systems, you can visit the following article:

So, let's get going with the concept of processes.

A process is a program in execution. It is not just the program code: it also includes the current activity, represented by the program counter and the contents of the CPU's registers, along with the process stack, data section, and heap.

Process concept

Process state

A process can be in one of 5 states at any time, as follows:

  1. New - the process is being created and is about to be admitted for execution.
  2. Running - the processor is executing the process's instructions.
  3. Waiting - the process is waiting for some event to occur, say input from a device or completion of an I/O request.
  4. Ready - the process is in the ready queue, waiting to be assigned to the CPU.
  5. Terminated - execution is over and the corresponding resources and memory are de-allocated.

Process state diagram: [image]

Process Control Block (PCB)

A process control block consists of:

  1. Process state:- as illustrated above.
  2. Program counter:- a pointer to the address of the next instruction to be executed.
  3. CPU registers:- the saved contents of the CPU registers for this process.
  4. CPU-scheduling information:- data such as the process priority and scheduling-queue pointers, used to decide when to send the process for execution and when to keep it waiting.
  5. Memory-management information:- data such as the values of the base and limit registers and the page or segment tables used by the process.
  6. Accounting information:- data such as the time taken to run and the memory used by the process.
  7. I/O status information:- the list of I/O devices allocated to the process and the files it has open.
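
To make the list more concrete, here is a rough sketch of what a PCB could look like as a C struct. The field names and sizes are invented for illustration; a real kernel's PCB (for example, Linux's task_struct) is far larger and organised differently.

    /* A simplified, hypothetical PCB sketch -- not any real kernel's layout. */

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;                 /* process identifier                     */
        proc_state_t   state;               /* one of the five states above           */
        unsigned long  program_counter;     /* address of the next instruction        */
        unsigned long  registers[16];       /* saved CPU register contents            */
        int            priority;            /* CPU-scheduling information             */
        unsigned long  mem_base, mem_limit; /* memory-management information          */
        unsigned long  cpu_time_used;       /* accounting information                 */
        int            open_files[16];      /* I/O status: descriptors of open files  */
        struct pcb    *next;                /* link used by the scheduling queues     */
    } pcb_t;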

Threads

Within a process, several threads (not to be confused with child processes) can run simultaneously; a thread is a smaller unit of execution that shares the process's code, data, and other resources. For example, a word-processing program can take input from the keyboard and run the spell checker at the same time. These are 2 threads in a single process.
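
As a minimal sketch of the word-processor example using POSIX threads, the two functions below (read_keyboard and run_spellcheck are made-up names) run as separate threads inside one process and share its address space:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the two activities in the example. */
    static void *read_keyboard(void *arg)  { (void)arg; puts("reading keystrokes..."); return NULL; }
    static void *run_spellcheck(void *arg) { (void)arg; puts("checking spelling...");  return NULL; }

    int main(void) {
        pthread_t t1, t2;

        /* Both threads live inside the same process and share its memory. */
        pthread_create(&t1, NULL, read_keyboard, NULL);
        pthread_create(&t2, NULL, run_spellcheck, NULL);

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

On Linux this is typically compiled with gcc -pthread.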

Process scheduling

Process scheduling diagram: [image]

Any incoming program (or job) is first placed in the job queue, and it is moved to the ready queue when it is ready for execution. The ready queue is basically a linked list: its header holds the address of the first PCB (process control block), and its tail pointer holds that of the last PCB waiting to be sent to the CPU for execution.

Once its turn for execution comes, the scheduler dispatches the process to the CPU, where it is executed. If the job is purely computational, it runs to completion and the process is terminated; but if it requires I/O, or it creates a child process whose execution has to complete before the process can finish, the process waits for the time being and another process is allocated to the CPU in the meantime.

Once the interrupt, I/O request, or child process finishes, the process is queued in the ready queue again, and it is terminated after it has finished its remaining processing on the CPU.
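
As a toy illustration of the ready queue described above, here is a sketch of a linked list of (greatly simplified) PCBs with head and tail pointers, plus a dispatch function standing in for the short-term scheduler; all names are made up for the example:

    #include <stdio.h>

    /* Minimal PCB for this sketch: just a pid and a link pointer. */
    typedef struct pcb {
        int pid;
        struct pcb *next;
    } pcb_t;

    /* The ready queue keeps pointers to the first and last PCB. */
    typedef struct {
        pcb_t *head;
        pcb_t *tail;
    } ready_queue_t;

    static void enqueue(ready_queue_t *q, pcb_t *p) {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    /* Dispatch: take the PCB at the head of the ready queue. */
    static pcb_t *dispatch(ready_queue_t *q) {
        pcb_t *p = q->head;
        if (p) {
            q->head = p->next;
            if (!q->head) q->tail = NULL;
        }
        return p;
    }

    int main(void) {
        ready_queue_t rq = { NULL, NULL };
        pcb_t a = { 1, NULL }, b = { 2, NULL };

        enqueue(&rq, &a);                  /* jobs admitted to the ready queue      */
        enqueue(&rq, &b);
        pcb_t *running = dispatch(&rq);    /* short-term scheduler picks the head   */
        printf("dispatching pid %d\n", running->pid);
        return 0;
    }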

Scheduler

There are two kinds of schedulers: the job scheduler (long-term scheduler) and the CPU scheduler (short-term scheduler).

The job scheduler decides which jobs are admitted to the ready queue for execution. It comes into play only when the ready queue has space, i.e. when an existing process completes execution and is terminated, so it has comparatively more time for its work; hence the name long-term scheduler.

The CPU scheduler has to be faster, as it dispatches a program from the ready queue to the CPU for execution. It is invoked much more frequently, because executing a program (or part of it) takes far less time; hence the name short-term scheduler.

Most processes can be divided into two categories: I/O bound and CPU bound. I/O-bound processes require less computational work and more input from outside, whereas CPU-bound processes tend to be purely computational.

The schedulers collectively have to select a mix of both kinds of programs so that resources are used efficiently. If mostly I/O-bound processes are admitted, the I/O queues fill up and the CPU sits idle, because the existing processes are all waiting for I/O. On the other hand, if mostly CPU-bound processes are dispatched, the I/O queues stay empty and the devices go unused, again wasting resources.

There is also one more category of scheduler, used in many operating systems nowadays: the medium-term scheduler. Processes are still admitted to the ready queue as before, but when a running process has to be suspended, for example while it waits on an interrupt or I/O, the medium-term scheduler swaps it out, and later swaps it back in so that it can be dispatched again once the wait is over.

Context switch

A context switch happens whenever the CPU is switched from one process to another, for example when an interrupt occurs: the current state (or context) of the running process is saved (known as a state save or context save), and once the interrupt is over and the process is resumed, its state is restored (a state restore).

Operations on processes

Process creation

A process can create a child process, which in turn can create child processes of its own; a tree is used to maintain the relationships between them. Every process created is identified by an ID known as the process ID (pid).

2 types of process execution:

  1. Parent and child are concurrently executed.
  2. Parent waits for the child process to finish execution.

2 types of address space used:

  1. Child process is a duplicate of the parent, using the same resources.
  2. Child process acts as a new program, using its own resources.
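
A minimal UNIX sketch showing both lists in action: fork() creates a child that starts as a duplicate of the parent's address space, and the parent here chooses to wait() for it (the child could instead call exec() to run a new program with its own resources):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                 /* create a child process              */

        if (pid == 0) {
            /* Child: a duplicate of the parent's address space. It could instead  */
            /* load a brand-new program with an exec() call, e.g. execlp().        */
            printf("child  pid=%d\n", getpid());
        } else if (pid > 0) {
            /* Parent: here it chooses to wait for the child to finish.            */
            wait(NULL);
            printf("parent pid=%d, child %d has terminated\n", getpid(), pid);
        } else {
            perror("fork");                 /* fork failed                          */
            return 1;
        }
        return 0;
    }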

Process termination

A process is terminated when its execution is finished or when another process forces it to terminate.

In some programs, termination of the parent process causes all its child processes to terminate. This is known as cascading termination.

Once a child process has terminated, it has to wait for its parent to collect its exit status through a wait() call; until then, the child is called a zombie process. If instead the parent terminates while the child has not, the child is known as an orphan process. In Linux and UNIX, orphans are re-parented to the init process, which then calls wait() on them when they terminate.
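
A small sketch of how a zombie appears in practice: the child exits immediately, but until the parent calls wait() the child's entry remains in the process table (it shows up as defunct in ps). The 30-second sleep is just an arbitrary window for observing this:

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();

        if (pid == 0) {
            exit(0);        /* child terminates immediately                          */
        } else if (pid > 0) {
            sleep(30);      /* during this window the child is a zombie (try `ps`)   */
            wait(NULL);     /* parent finally reaps it; the zombie entry disappears   */
        }
        return 0;
    }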

Interprocess communication(IPC)

There are two types of processes with respect to communication: independent processes, which do not require communication, and cooperating processes, which do.

Communication can be done in 2 ways (or models):

  1. Shared memory model
  2. Message passing model

Shared memory model

To illustrate this model, let's take the simple producer-consumer problem, where the producer produces information to be consumed by the consumer. For both processes to run concurrently, there has to be a buffer of items in shared memory which can be filled by the producer and emptied by the consumer.

Two types of buffer can be used for this. The unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new ones. The bounded buffer assumes a fixed buffer size; in this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
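
Here is a sketch of the bounded buffer as a circular array with in and out indices, in the spirit of the classic textbook formulation. For simplicity the "shared memory" is just a pair of functions in one file, and BUFFER_SIZE is an arbitrary choice; a real solution would place the buffer in a shared-memory segment and make the producer and consumer actually wait rather than return an error:

    #include <stdio.h>

    #define BUFFER_SIZE 8                 /* bounded buffer: fixed capacity          */

    static int buffer[BUFFER_SIZE];
    static int in  = 0;                   /* next free slot (used by the producer)   */
    static int out = 0;                   /* next full slot (used by the consumer)   */

    /* Producer: must wait (here: return an error) if the buffer is full.
     * One slot is left unused so that "full" and "empty" can be told apart. */
    static int produce(int item) {
        if ((in + 1) % BUFFER_SIZE == out)
            return -1;                    /* full -- producer would have to wait     */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        return 0;
    }

    /* Consumer: must wait (here: return an error) if the buffer is empty. */
    static int consume(int *item) {
        if (in == out)
            return -1;                    /* empty -- consumer would have to wait    */
        *item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return 0;
    }

    int main(void) {
        int v;
        produce(42);
        if (consume(&v) == 0)
            printf("consumed %d\n", v);
        return 0;
    }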

Message passing model

There must exist a communication link in order to pass messages between two processes. The issues related to this are:

Naming

Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.

Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the message. It can be defined as:

  • send(P, message) - Send a message to process P.
  • receive(Q, message) - Receive a message from process Q.

There is a sense of symmetry in this scheme, as both processes must know the name of the other. There is an asymmetric variation in which the sender names the recipient, but the recipient does not need to know the name of the sender. It can be illustrated as:

  • send(P, message) - Send a message to process P
  • receive(id, message) - Receive a message from any process. The variable id is set to the name of the process with which communication has taken place.

Under indirect communication, the messages are sent to a mailbox or port from where the recipient of the message can get it. It can be illustrated as:

  • send (A, message) - Send a message to mailbox A.
  • receive (A, message) - Receive a message from mailbox A.

So, here it's clear that for two processes to communicate, there should be a shared mailbox. Also, unlike direct communication, where only two processes can communicate, here more than two processes can take part, with different processes receiving messages from the same mailbox.

Now, let's say processes P, Q, and R all share the same mailbox, where P sends and both Q and R receive. Which process will receive the message sent by P, and how is the conflict avoided? It can be decided by the following methods:

  • Allow a link to be associated with two processes at most.
  • Allow at most one process at a time to execute a receive() operation.
  • Let the system select one process arbitrarily (either Q or R, but not both), for example rotating among receivers in round-robin fashion.
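
POSIX message queues behave much like the mailbox scheme: any process that opens the same named queue can send to it or receive from it. Below is a minimal sender sketch; the queue name /demo_mailbox and the message are made up, and a receiver would call mq_receive() on the same name (link with -lrt on older Linux systems):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

        /* Open (or create) the shared "mailbox". Any process using the same   */
        /* name can receive from it with mq_receive().                         */
        mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_WRONLY, 0644, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);    /* send(A, message)            */
        mq_close(mq);
        return 0;
    }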

Synchronisation

Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous:

  • Blocking send - The sending process is blocked until the message is received by the process or mailbox.
  • Nonblocking send - The sending process sends the message and resumes the operation.
  • Blocking receive - The receiver blocks until a message is available.
  • Nonblocking receive - The receiver retrieves either a valid message or a null.

A combination of send and receive calls can be used.

Buffering

Whether communication is direct or indirect, messages exchanged by processes reside in a temporary queue. Such queues can be implemented in 3 ways.

  1. Zero capacity - The queue has a maximum length of zero, so no message can wait in it; the sender must block until the message is received.
  2. Bounded capacity - The queue has a maximum length of n, so the sender blocks only when n messages have filled the queue.
  3. Unbounded capacity - The queue is potentially infinite, so the sender never blocks.

Communication in client-server systems

Apart from the above IPC mechanisms, three other models can be used in client-server systems: sockets, remote procedure calls (RPC), and pipes.

Sockets

Sockets are the endpoints of communication; a pair of communicating processes needs a pair of sockets, one for each. A socket is identified by an IP address followed by a port number. For example, 162.0.0.1:80 refers to the machine with IP address 162.0.0.1, listening on port 80.

Servers implementing specific services (like FTP or HTTP) listen on well-known ports (FTP on port 21, HTTP on port 80); all ports numbered below 1024 are well known. Sockets allow only an unstructured stream of bytes to be exchanged, which is why they are considered a low-level form of communication; the responsibility for structuring the data lies with the client or server application.
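
As a sketch of a socket as an endpoint, here is a bare-bones TCP client that connects to the address and port from the example above and writes an unstructured byte stream (the HTTP request line hints at how structuring is left to the application); error handling is kept minimal:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);        /* one endpoint of the pair */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in server;
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port   = htons(80);                          /* well-known port   */
        inet_pton(AF_INET, "162.0.0.1", &server.sin_addr);      /* IP address        */

        if (connect(fd, (struct sockaddr *)&server, sizeof(server)) == 0) {
            /* The connection carries an unstructured stream of bytes; structuring  */
            /* them (e.g. as HTTP) is up to the application.                        */
            const char *req = "GET / HTTP/1.0\r\n\r\n";
            write(fd, req, strlen(req));
        }
        close(fd);
        return 0;
    }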

Remote Procedure Calls(RPC)

RPC is loosely based on the message-passing model: a message containing all the necessary information (such as the port being listened on and the parameters for the function) is sent to the system with the listening port. The message is received there, and if there is any output to be returned, it is sent back in the same manner.

One issue that needs to be dealt with is the difference in data representation on the client and server machines. Consider the representation of 32-bit integers: some systems (known as big-endian) store the most significant byte first, while other systems (known as little-endian) store the least significant byte first. To resolve the differences, many RPC systems define a machine-independent representation of data; one such representation is known as external data representation (XDR). On the client or server side, the XDR data can be converted to the machine's own representation.
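
The byte-order problem can be seen with the standard htonl()/ntohl() helpers, which convert a 32-bit integer between the host's order and a fixed network (big-endian) order; this is a much simpler cousin of what XDR does for whole messages:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t host  = 0x12345678;          /* value in this machine's byte order   */
        uint32_t wire  = htonl(host);         /* machine-independent (big-endian)     */
        uint32_t again = ntohl(wire);         /* back to host order on receipt        */

        printf("host=0x%x wire=0x%x decoded=0x%x\n", host, wire, again);
        return 0;
    }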

Another issue is how the client and server learn each other's ports, given that they do not share memory. There are two approaches to this:

  1. The information is predetermined, i.e. the same port is always used. But then the port cannot be changed after compilation.
  2. The port information can be obtained dynamically, by sending a request for it and getting the answer from the other side.

Pipes

Pipes act as a conduit allowing two processes to communicate with each other.
The following issues arise while implementing a pipe:

  • Is the pipe bidirectional or unidirectional, depending on whether message flow is allowed in both directions or only one?
  • If two-way communication is allowed, is it half duplex (a message may travel only one way at a time) or full duplex (both ways at once)?
  • Must there be a relationship (such as parent-child) between the communicating processes?
  • Can the pipes communicate over a network, or only on the same machine?

2 types of pipes are discussed here:

Ordinary pipes

Also known as anonymous pipes on Windows. An ordinary pipe is unidirectional and requires a parent-child relationship between the communicating processes, which means this kind of pipe is used for communication on the same machine. Once the communication is over, the pipe ceases to exist.
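
A minimal UNIX sketch of an ordinary pipe: pipe() returns two file descriptors, fork() lets the child inherit them (hence the required parent-child relationship), data flows one way from the write end to the read end, and the pipe disappears once both ends are closed:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }   /* fd[0]=read, fd[1]=write */

        if (fork() == 0) {                  /* child: the reader                     */
            char buf[64];
            close(fd[1]);                   /* close the unused write end            */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
            close(fd[0]);
        } else {                            /* parent: the writer                    */
            close(fd[0]);                   /* close the unused read end             */
            const char *msg = "hello through the pipe";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }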

Named pipes

These are bidirectional pipes, and they can be used for communication between any two processes (unlike ordinary pipes, which connect only a parent and its child). A named pipe still exists after a particular communication is over and can be used by other processes afterwards.
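
A named pipe (FIFO) on UNIX is created with mkfifo() and appears as a name in the file system, so unrelated processes can open the same path; the path /tmp/demo_fifo below is arbitrary, and a separate process would open it for reading:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/demo_fifo";       /* arbitrary filesystem name       */
        mkfifo(path, 0666);                        /* create the named pipe           */

        /* Writer side: open() blocks until some other process opens the same  */
        /* path for reading.                                                    */
        int fd = open(path, O_WRONLY);
        if (fd >= 0) {
            const char *msg = "hello over the FIFO";
            write(fd, msg, strlen(msg));
            close(fd);
        }
        /* Unlike an ordinary pipe, the FIFO persists until it is unlink()ed.  */
        return 0;
    }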

Hopefully the basic concept of processes is clearer after this.
