
Note for Distributed System - DS By Nagendra Babu

  • Distributed System - DS
  • Note
  • Jawaharlal Nehru Technological University Anantapur (JNTU) College of Engineering (CEP), Pulivendula, Pulivendula, Andhra Pradesh, India - JNTUACEP
  • 13 Topics
  • Uploaded 1 year ago

DISTRIBUTED DEADLOCK

Abstract: A deadlock is a condition in a system where a process cannot proceed because it needs a resource held by another process, while it itself holds a resource that the other process needs. In a system of processes that communicate only with a single central agent, deadlock can be detected easily because the central agent has complete information about every process. Deadlock detection is more difficult in systems with no such central agent, where processes may communicate directly with one another. If we could assume that message communication is instantaneous, or place certain restrictions on message delays, deadlock detection would become simpler. However, the only realistic general assumption we can make is that message delays are arbitrary but finite.

Deadlock is a fundamental problem in distributed systems. A process may request resources in any order, which may not be known a priori, and a process can request a resource while holding others. If the sequence of resource allocations to processes is not controlled, deadlocks can occur. In short, a deadlock is a state in which a set of processes request resources that are held by other processes in the same set.

Types of Deadlock: Two types of deadlock can be considered:

  • Communication Deadlock
  • Resource Deadlock

Communication deadlock occurs when process A is trying to send a message to process B, which is trying to send a message to process C, which is trying to send a message to A. Resource deadlock occurs when processes are trying to get exclusive access to devices, files, locks, servers, or other resources. We will not differentiate between these types of deadlock, since communication channels can be treated as resources without loss of generality.

Conditions for Deadlock: Four conditions have to be met for a deadlock to occur in a system:

1. Mutual exclusion: A resource can be held by at most one process.
2. Hold and wait: Processes that already hold resources can wait for another resource.


3. Non-preemption: A resource, once granted, cannot be taken away.
4. Circular wait: Two or more processes are waiting for resources held by one of the other processes.

Resource allocation can be represented by a directed graph: an edge R1 → P1 means that resource R1 is allocated to process P1, and an edge P1 → R1 means that resource R1 is requested by process P1. Deadlock is present when the graph has a cycle. An example is shown in Figure 1.

Figure 1: Deadlock.

Wait-for Graph: The state of the system can be modeled by a directed graph called a wait-for graph (WFG). In a WFG, nodes are processes, and there is a directed edge from node P1 to node P2 if P1 is blocked and waiting for P2 to release some resource. A system is deadlocked if and only if there exists a directed cycle or knot in the WFG. Figure 2 shows a WFG in which process P11 of site 1 has an edge to process P21 of site 1, and P32 of site 2 is waiting for a resource currently held by P21. At the same time, process P32 is waiting on process P33 to release a resource. If P21 is also waiting on process P11, then processes P11, P32 and P21 form a cycle, and all four processes are involved in a deadlock, depending upon the request model.
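The deadlock test on a WFG reduces to cycle detection, which can be sketched as a depth-first search. The dictionary encoding and process names below are illustrative assumptions, not part of the original notes:

```python
# Minimal sketch: detect deadlock by finding a cycle in a wait-for graph.
# Nodes are processes; an edge P -> Q means P is blocked waiting on Q.

def find_cycle(wfg):
    """Return a list of processes forming a cycle, or None if none exists."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wfg}
    stack = []                            # current DFS path

    def dfs(p):
        color[p] = GREY
        stack.append(p)
        for q in wfg.get(p, []):
            if color.get(q, WHITE) == GREY:            # back edge: cycle found
                return stack[stack.index(q):] + [q]
            if color.get(q, WHITE) == WHITE:
                cycle = dfs(q)
                if cycle:
                    return cycle
        stack.pop()
        color[p] = BLACK
        return None

    for p in list(wfg):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# A small WFG in the spirit of Figure 2: three processes waiting in a ring.
wfg = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(find_cycle(wfg))    # -> ['P1', 'P2', 'P3', 'P1']
```

Running the same function on a WFG without a cycle returns None, i.e. no deadlock.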


Figure 2: A Wait-for-Graph.

Handling Deadlocks in Distributed Systems: Deadlocks in distributed systems are similar to deadlocks in centralized systems. In a centralized system, a single operating system oversees resource allocation and can tell whether deadlocks are present. With distributed processes and resources it becomes harder to detect, avoid, and prevent deadlocks. Several strategies can be used to handle deadlocks:

  • Ignore: We can ignore the problem. This is one of the most popular solutions.
  • Detect: We can allow deadlocks to occur, detect that a deadlock is present in the system, and then deal with it.
  • Prevent: We can place constraints on resource allocation to make deadlocks structurally impossible.
  • Avoid: We can grant resource requests carefully, denying any allocation that could lead to a deadlock.
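One common prevention constraint is to break the circular-wait condition by imposing a global total order on resources and always acquiring them in that order. The class and helper names below are hypothetical, a sketch rather than a standard API:

```python
# Minimal sketch of deadlock prevention via global lock ordering.
# Every resource gets a rank; processes must acquire in ascending rank,
# so no cycle of waits can ever form (circular wait is impossible).
import threading

class RankedLock:
    _counter = 0                          # global rank source (assumption)

    def __init__(self, name):
        self.name = name
        RankedLock._counter += 1
        self.rank = RankedLock._counter   # position in the global total order
        self._lock = threading.Lock()

def acquire_in_order(*locks):
    """Acquire all locks in ascending rank, regardless of argument order."""
    ordered = sorted(locks, key=lambda l: l.rank)
    for l in ordered:
        l._lock.acquire()
    return ordered

def release(locks):
    for l in reversed(locks):
        l._lock.release()

r = RankedLock("R")
s = RankedLock("S")
held = acquire_in_order(s, r)             # always takes R before S
print([l.name for l in held])             # -> ['R', 'S']
release(held)
```

Because every process takes R before S, the scenario where one process holds S and waits for R while another holds R and waits for S cannot arise.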


Deadlock avoidance is rarely used in practice (in either distributed or centralized systems), because the avoidance algorithm needs to know resource usage requirements in advance in order to schedule allocations safely. The first of the remaining strategies, ignoring the problem, is trivially simple. The other two are described in detail below.

Deadlock Detection: General methods for preventing or avoiding deadlocks can be difficult to find, whereas detecting a deadlock condition is generally easier. When a deadlock is detected, it has to be broken. This is traditionally done by killing one or more processes that contribute to the deadlock. Unfortunately, this can lead to annoyed users. When a deadlock is detected in a system based on atomic transactions, it is resolved by aborting one or more transactions. Since transactions are designed to withstand being aborted, the consequences of killing a process in a transactional system are less severe.

Centralized: Centralized deadlock detection attempts to imitate the non-distributed algorithm through a central coordinator. Each machine maintains a resource graph for its own processes and resources, and a central coordinator maintains the resource graph for the entire system: the union of the individual graphs. If the coordinator detects a cycle, it kills off one process to break the deadlock. In the non-distributed case, all the information on resource usage lives on one system and the graph can be constructed there. In the distributed case, the individual subgraphs have to be propagated to the central coordinator. A message can be sent each time an arc is added or deleted; if optimization is needed, a list of added and deleted arcs can be sent periodically to reduce the overall number of messages.

Here is an example. Suppose machine A has a process P0, which holds resource S and wants resource R, which is held by P1. The local graph on A is shown in Figure 3. Another machine, B, has a process P2, which holds resource T and wants resource S. Its local graph is shown in Figure 4. Both machines send their graphs to the central coordinator, which maintains the union (Figure 5). All is well: there are no cycles and hence no deadlock. Now two events occur. Process P1 releases resource R and asks machine B for resource T. Two messages are sent to the coordinator:

Message 1 (from machine A): "releasing R"
Message 2 (from machine B): "waiting for T"

This should cause no problem, since there is no deadlock. However, if message 2 arrives first, the coordinator constructs a graph containing a cycle (Figure 6) and detects a deadlock. Such a condition is known as false deadlock. One way to fix this is to use Lamport's algorithm to impose a global time ordering on all messages. Alternatively, if the coordinator suspects deadlock, it can send a reliable message to every machine asking whether it has any release messages that have not yet been processed.
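The false-deadlock scenario can be simulated by keeping the coordinator's union graph as a set of directed edges and checking for a cycle after each message. The edge encoding ("R held by P" as an edge R → P, "P wants R" as P → R) and all names are illustrative assumptions:

```python
# Sketch of the false-deadlock scenario: the coordinator applies messages
# in arrival order, and a reordered pair of messages creates a phantom cycle.

def has_cycle(edges):
    """DFS-based cycle check over a set of directed edges (a, b)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    visiting, done = set(), set()

    def dfs(n):
        visiting.add(n)
        for m in graph[n]:
            if m in visiting or (m not in done and dfs(m)):
                return True
        visiting.discard(n)
        done.add(n)
        return False

    return any(n not in done and dfs(n) for n in graph)

# Initial union graph: P0 holds S and wants R; P1 holds R; P2 holds T
# and wants S.  No cycle, so no deadlock.
union = {("S", "P0"), ("P0", "R"), ("R", "P1"), ("T", "P2"), ("P2", "S")}
assert not has_cycle(union)

# Message 2 ("P1 waiting for T") is applied before message 1
# ("P1 releasing R"), so the coordinator briefly sees a cycle.
union.add(("P1", "T"))                # message 2 applied first
print(has_cycle(union))               # -> True: a false deadlock
union.discard(("R", "P1"))            # message 1 finally applied
print(has_cycle(union))               # -> False: no real deadlock existed
```

Applying the messages in the order they were sent would never have produced a cycle, which is exactly why a global time ordering on messages removes the problem.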
