Interprocess Communication and Synchronization

Introduction

In this article, we introduce interprocess communication (IPC), a mechanism that lets processes synchronize and communicate with each other. After reading this article you will understand:

  • Why do we need interprocess communication (IPC)
    [Ans: Synchronization & Communication]
  • Two approaches for IPC
    [Ans: Shared memory & Message passing]
  • Why do we need synchronization for shared memory methods
    [Ans: Race condition]
  • Methods for shared memory synchronization
    [Ans: Mutex, Semaphore, Monitor]
  • Differences among shared memory synchronization methods
  • What message passing methods do we have
    [Ans: In Linux: pipe, message queue, socket…]
  • How to categorize message passing methods
    [Ans: Blocking, Addressing, Queuing]

Let me show you a mind map for interprocess communication. It also shows the structure of this article.

Why Interprocess Communication

Cooperating processes frequently need to communicate with each other to make sure work gets done correctly. Sometimes the correctness of the result depends on the order in which the cooperating processes execute; in that scenario we need to enforce synchronization to get correct results. Sometimes the execution of one process requires the result of another process; in that scenario we need a communication mechanism that lets those processes talk to each other.

Two Approaches for IPC

This is where IPC comes to the rescue. There are two approaches to IPC: one is shared memory, and the other is message passing. We address the details of each in the following sections.

Shared Memory IPC

Cooperating processes may communicate by sharing the same piece of memory: one process writes data to the shared memory, and another process reads it from the same memory directly. When doing so, you need to be careful; otherwise, the result may not match your expectations. To understand why, we first explain the concept of a race condition.

Race Condition

A race condition occurs when multiple processes or threads read and write the same piece of shared memory, and the final result depends on the order in which their instructions execute.

Example:
Suppose process 1 and process 2 share the same data item counter; a race condition may happen as follows:
Race Condition 1
Each statement can be broken down into three corresponding assembly instructions:
Race Condition2
When a context switch happens in the middle of those three instructions, unexpected results may occur (try it!). This demonstrates the so-called race condition.
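If you want to try it, here is a minimal C sketch that reproduces the race with two POSIX threads (threads share one address space, which makes the effect easy to observe; the counter name and the iteration count are just for illustration):

    /* race.c - two threads increment a shared counter with no synchronization.
     * Build: gcc race.c -o race -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                /* the shared data item */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                      /* load, add, store: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the printed value is usually smaller,
         * because increments from the two threads interleave. */
        printf("counter = %ld\n", counter);
        return 0;
    }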

Critical Section Problem

To solve the race condition problem, we define a problem called the critical section problem.
Definition:
A critical section is the part of a program where shared memory is accessed.
Definition:
Critical section problem: find a way to arrange the cooperating processes so that no two processes are ever in their critical sections at the same time.

Solution Criteria

To satisfy the requirements of the critical section problem, as well as other concerns such as fairness and efficiency, a solution should fulfill the three criteria below:

  1. Mutual Exclusion: No two processes can enter their critical sections at the same time.
  2. Progress: If no process is executing in its critical section and some processes wish to enter, then one of the waiting processes must be allowed to enter; the decision cannot be postponed indefinitely.
  3. Bounded Waiting: There is a bound on how long a process waits before it may enter its critical section; no process waits forever.

Hardware Support for Shared Memory Synchronization

We introduce two kinds of hardware support for shared memory synchronization. They can be used to implement software synchronization mechanisms such as mutexes and semaphores. You may wonder what the hardware actually provides for these software mechanisms: the answer is atomicity. The following hardware operations all execute as single atomic steps.

Test and Set

Definition:
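A minimal C-style sketch of the conventional test-and-set semantics; the whole function is assumed to behave as one indivisible step (on real hardware it is a single instruction or a compiler builtin):

    /* Returns the old value of *target and unconditionally sets it to 1.
     * The hardware guarantees that no other processor can interleave
     * between the read and the write. */
    int test_and_set(int *target)
    {
        int old = *target;
        *target = 1;
        return old;
    }

    /* Spin-lock style usage:
     *   while (test_and_set(&lock)) ;    // wait until we observed 0
     *   ...critical section...
     *   lock = 0;                        // release
     */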

Notes: Test and set is supported by hardware, which ensures that it runs atomically.

Compare and Swap

Definition:
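A minimal C-style sketch of the conventional compare-and-swap semantics; again, the whole function is assumed to execute atomically:

    /* Writes new_value into *value only if *value currently equals expected,
     * and returns the value that was observed so the caller can tell
     * whether the swap happened. */
    int compare_and_swap(int *value, int expected, int new_value)
    {
        int old = *value;
        if (old == expected)
            *value = new_value;
        return old;
    }

    /* Spin-lock style usage:
     *   while (compare_and_swap(&lock, 0, 1) != 0) ;   // acquire
     *   ...critical section...
     *   lock = 0;                                      // release
     */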

Notes: Compare and swap is supported by hardware, which ensures that it runs atomically.

Software Solutions for Shared Memory Synchronization

As a reminder, this article does not explain the details of these algorithms; it relates them to each other. We divide them into two groups, primitive and advanced. Advanced algorithms are built on primitive algorithms/mechanisms as well as on hardware operations like test and set. There are four primitive algorithms/mechanisms: lock variables, taking turns, Peterson's algorithm and the bakery algorithm; and three advanced mechanisms: mutex, semaphore and monitor. We illustrate them one by one in the following sections, but first, here are the relationships among them:
Synchronization Algorithm Relationship

Now, let’s define them one by one:

Lock Variable Mechanism

Definition:
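A minimal sketch of the lock variable idea (identifier names are illustrative). Because the check and the set are two separate steps, a context switch between them can let both processes enter, so this mechanism alone does not guarantee mutual exclusion; that is exactly the gap the hardware operations above close:

    int lock = 0;                  /* shared: 0 = free, 1 = taken */

    void enter_critical(void)
    {
        while (lock == 1)
            ;                      /* busy-wait while someone holds it */
                                   /* a context switch HERE breaks the scheme */
        lock = 1;                  /* claim the lock */
    }

    void leave_critical(void)
    {
        lock = 0;                  /* release */
    }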

Taking Turns Mechanism

Definition:
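A minimal sketch of taking turns (strict alternation) between two processes with ids 0 and 1 (names are illustrative). Mutual exclusion holds, but progress does not: a process that stays outside its critical section still blocks the other one from re-entering:

    int turn = 0;                      /* shared: whose turn it is */

    void enter_critical(int self)      /* self is 0 or 1 */
    {
        while (turn != self)
            ;                          /* busy-wait for our turn */
    }

    void leave_critical(int self)
    {
        turn = 1 - self;               /* hand the turn to the other process */
    }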

Peterson’s Algorithm

Definition:
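A minimal sketch of Peterson's algorithm for two processes with ids 0 and 1 (names are illustrative; it assumes reads and writes of the shared variables are not reordered):

    int interested[2] = {0, 0};        /* shared: who wants to enter */
    int turn = 0;                      /* shared: tie-breaker */

    void enter_critical(int self)
    {
        int other = 1 - self;
        interested[self] = 1;          /* announce intent */
        turn = other;                  /* politely give the tie to the other */
        while (interested[other] && turn == other)
            ;                          /* wait only while the other is
                                          interested and holds the turn */
    }

    void leave_critical(int self)
    {
        interested[self] = 0;          /* no longer interested */
    }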

Bakery Algorithm

Definition:
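A minimal sketch of Lamport's bakery algorithm for N processes (N and the helper max_ticket() are illustrative). Each process takes a ticket larger than every ticket it can see, then waits for all processes holding smaller tickets, with ties broken by process id:

    #define N 8                        /* number of cooperating processes */

    int choosing[N];                   /* shared: process i is picking a ticket */
    int number[N];                     /* shared: ticket numbers, 0 = not competing */

    static int max_ticket(void)
    {
        int m = 0;
        for (int i = 0; i < N; i++)
            if (number[i] > m) m = number[i];
        return m;
    }

    void enter_critical(int self)
    {
        choosing[self] = 1;
        number[self] = 1 + max_ticket();       /* take the next ticket */
        choosing[self] = 0;
        for (int j = 0; j < N; j++) {
            while (choosing[j])
                ;                              /* wait until j has its ticket */
            while (number[j] != 0 &&
                   (number[j] < number[self] ||
                    (number[j] == number[self] && j < self)))
                ;                              /* defer to smaller tickets */
        }
    }

    void leave_critical(int self)
    {
        number[self] = 0;                      /* discard the ticket */
    }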

Mutex

Definition:
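A minimal sketch of a spin-style mutex whose entry and exit sections are built on the test-and-set operation sketched earlier (the function names follow the note below; the struct name is illustrative):

    int test_and_set(int *target);     /* atomic, as defined above */

    struct mutex { int locked; };      /* 0 = free, 1 = held */

    void entry_section(struct mutex *m)        /* a.k.a. lock / acquire */
    {
        while (test_and_set(&m->locked))
            ;                                  /* spin until we flip 0 -> 1 */
    }

    void exit_section(struct mutex *m)         /* a.k.a. unlock / release */
    {
        m->locked = 0;
    }

    /* Usage:
     *   entry_section(&m);
     *   ...critical section...
     *   exit_section(&m);
     */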

Notes: The two operations(entry_section() and exit_section()) must be atomic, which can be implemented by test and set or compare and swap.

Semaphore

Definition:
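A minimal sketch of a counting semaphore with the classic wait()/signal() pair (the names sem_wait_op/sem_signal_op are chosen to avoid clashing with the POSIX API; a busy-wait stands in for the queue of blocked processes a real kernel would keep):

    typedef struct { volatile int value; } semaphore;

    void sem_wait_op(semaphore *s)     /* also called P() or down(); must be atomic */
    {
        while (s->value <= 0)
            ;                          /* wait until a resource is available */
        s->value--;                    /* take one resource */
    }

    void sem_signal_op(semaphore *s)   /* also called V() or up(); must be atomic */
    {
        s->value++;                    /* release one resource / wake a waiter */
    }

    /* A semaphore initialised to 1 behaves like a mutex; initialised to N it
     * admits at most N processes into the guarded section at once. */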

Notes: The two operations must be atomic, which can be implemented by test and set or compare and swap.

Monitor

Definition:
A monitor is an abstract data type that provides mutually exclusive operations: the monitor construct ensures that only one process at a time is active within the monitor. Monitors are typically provided by a programming language or runtime (for example, Java's synchronized methods), so programmers do not need to implement the mutual exclusion themselves.
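C has no monitor construct, but the same idea can be emulated with a mutex that guards every operation plus condition variables for waiting. A minimal sketch of a small counter "monitor" (type and function names are illustrative):

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;        /* only one thread active inside at a time */
        pthread_cond_t  nonzero;     /* condition: counter > 0 */
        int             counter;
    } counter_monitor;               /* init with pthread_mutex_init/cond_init */

    void monitor_increment(counter_monitor *m)
    {
        pthread_mutex_lock(&m->lock);        /* enter the monitor */
        m->counter++;
        pthread_cond_signal(&m->nonzero);    /* wake one waiter, if any */
        pthread_mutex_unlock(&m->lock);      /* leave the monitor */
    }

    void monitor_decrement(counter_monitor *m)
    {
        pthread_mutex_lock(&m->lock);
        while (m->counter == 0)
            pthread_cond_wait(&m->nonzero, &m->lock);  /* releases the lock
                                                          while waiting */
        m->counter--;
        pthread_mutex_unlock(&m->lock);
    }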

Putting Them All Together

Putting all of the synchronization algorithms/mechanisms together:
Synchronization Algorithm Properties

The following table shows the properties of those software synchronization algorithms above:
Synchronization Algorithm Properties

Message Passing IPC

Shared memory IPC forces the communicating processes to handle many synchronization constraints themselves. With message passing IPC we do not need to care as much about synchronization (we still do, just not as much). Message passing IPC methods can be categorized along three dimensions: blocking, addressing, and queuing.

Blocking

The blocking property describes how the sending and receiving processes are synchronized; a small Linux sketch of the receiver-side behaviour follows the lists below.

Sender:

  • Blocking: The sending process is blocked until the message is received.
  • Nonblocking: Send and go away.

Receiver:

  • Blocking: If no message is waiting, the receiving process is blocked until one arrives.
  • Nonblocking: Try to receive, and go away.
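Here is the small Linux sketch mentioned above: the read end of a pipe behaves as a blocking receive by default, and setting O_NONBLOCK turns it into a non-blocking one that returns EAGAIN when no message is waiting:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64];

        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        /* make the read end non-blocking */
        int flags = fcntl(fds[0], F_GETFL);
        fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);

        ssize_t n = read(fds[0], buf, sizeof buf);   /* nothing written yet */
        if (n == -1 && errno == EAGAIN)
            printf("non-blocking receive: no message, try again later\n");

        write(fds[1], "hello", 5);                   /* now a message exists */
        n = read(fds[0], buf, sizeof buf);
        printf("received %zd bytes\n", n);
        return 0;
    }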

Addressing

The addressing property describes the destination of a message. A message may be delivered to the receiving process directly, or indirectly to a mailbox from which the receiver collects it later.

Queuing

The queuing property describes the receiving order of messages on the receiver side. It may be FIFO or priority based.
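One way to see indirect addressing and priority queuing together is a POSIX message queue: messages go to a named mailbox rather than to a specific process, and the receiver always gets the highest-priority message first. The queue name /demo_mq and the attribute values below are illustrative (link with -lrt on older glibc):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        mq_send(mq, "low priority",  13, 1);      /* priority 1 */
        mq_send(mq, "high priority", 14, 9);      /* priority 9 */

        char buf[64];
        unsigned prio;
        mq_receive(mq, buf, sizeof buf, &prio);   /* "high priority" comes out first */
        printf("got \"%s\" (priority %u)\n", buf, prio);

        mq_close(mq);
        mq_unlink("/demo_mq");
        return 0;
    }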

Linux Message Passing IPC

On a Linux system there are five basic types of message passing IPC: anonymous pipes, named pipes, signals, message queues and sockets. The following table compares them along the three properties discussed above.
Linux IPC Methods Properties
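As a concrete starting point, here is a minimal anonymous-pipe example in which a parent process sends a string to its child; by default the child's read() blocks until the parent writes (the message text is illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                    /* fds[0] = read end, fds[1] = write end */
        char buf[32];

        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {             /* child: the receiver */
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);  /* blocking receive */
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            return 0;
        }

        close(fds[0]);                 /* parent: the sender */
        write(fds[1], "hello, child", strlen("hello, child"));
        close(fds[1]);
        wait(NULL);
        return 0;
    }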

Conclusion

Cooperating processes need IPC for two reasons: synchronization and communication. Shared memory IPC is direct but exposes race conditions, so access to the shared data must be synchronized with mechanisms such as mutexes, semaphores and monitors, which in turn rely on atomic hardware operations like test and set and compare and swap. Message passing IPC leaves most of that synchronization to the operating system and can be characterized by its blocking, addressing and queuing behaviour; on Linux it is available through pipes, signals, message queues and sockets.