Distributed Computing

by Santanu Prasad Sahoo
Institute: Biju Patnaik University of Technology | Specialization: Computer Science Engineering


Syllabus

Distributed Computing
Lecture: 4 Hrs / Week
Practical: 3 Hrs / Week
One paper: 100 Marks / 3 Hrs duration
Term work: 25 Marks

1. Fundamentals
Evolution of Distributed Computing Systems, System Models, Issues in Design of Distributed Systems, Distributed Computing Environment, Web-based Distributed Model, Computer Networks Related to Distributed Systems and Web-based Protocols.

2. Message Passing
Inter-process Communication, Desirable Features of Good Message-Passing Systems, Issues in IPC by Message Passing, Synchronization, Buffering, Multidatagram Messages, Encoding and Decoding of Message Data, Process Addressing, Failure Handling, Group Communication.

3. Remote Procedure Calls
The RPC Model, Transparency of RPC, Implementing RPC Mechanism, Stub Generation, RPC Messages, Marshaling Arguments and Results, Server Management, Communication Protocols for RPCs, Complicated RPCs, Client-Server Binding, Exception Handling, Security, Some Special Types of RPCs, Lightweight RPC, Optimization for Better Performance.

4. Distributed Shared Memory
Design and Implementation Issues of DSM, Granularity, Structure of Shared-Memory Space, Consistency Models, Replacement Strategy, Thrashing, Other Approaches to DSM, Advantages of DSM.

5. Synchronization
Clock Synchronization, Event Ordering, Mutual Exclusion, Election Algorithms.
6. Resource and Process Management
Desirable Features of a Good Global Scheduling Algorithm, Task Assignment Approach, Load Balancing Approach, Load Sharing Approach, Process Migration, Threads, Processor Allocation, Real-Time Distributed Systems.

7. Distributed File Systems
Desirable Features of a Good Distributed File System, File Models, File-Accessing Models, File-Sharing Semantics, File-Caching Schemes, File Replication, Fault Tolerance, Design Principles, Sun's Network File System, Andrew File System, Comparison of NFS and AFS.

8. Naming
Desirable Features of a Good Naming System, Fundamental Terminologies and Concepts, System-Oriented Names, Name Caches, Naming and Security, DCE Directory Services.

9. Case Studies
Mach and Chorus (keep case studies as tutorials).

Term work / Practical: Each candidate will submit assignments based on the above syllabus; the flow charts and program listings will be submitted with the internal test paper.

References:
1. Pradeep K. Sinha: Distributed Operating Systems: Concepts and Design, PHI.
2. Andrew S. Tanenbaum: Distributed Operating Systems, Pearson Education.
3. Andrew S. Tanenbaum, Maarten van Steen: Distributed Systems: Principles and Paradigms, Pearson Education.
4. George Coulouris, Jean Dollimore, Tim Kindberg: Distributed Systems: Concepts and Design.
1 FUNDAMENTALS

Unit Structure:
1.1 What is a Distributed Computing System
1.2 Evolution of Distributed Computing Systems
1.3 Distributed Computing System Models

1.1 WHAT IS A DISTRIBUTED COMPUTING SYSTEM

Over the past two decades, advancements in microelectronic technology have resulted in the availability of fast, inexpensive processors, and advancements in communication technology have resulted in the availability of cost-effective and highly efficient computer networks. The net result of the advancements in these two technologies is that the price-performance ratio has now changed to favor the use of interconnected, multiple processors in place of a single, high-speed processor.

Computer architectures consisting of interconnected, multiple processors are basically of two types:

1. Tightly coupled systems: In these systems, there is a single system-wide primary memory (address space) that is shared by all the processors [Fig. 1.1(a)]. If any processor writes, for example, the value 100 to the memory location x, any other processor subsequently reading from location x will get the value 100. Therefore, in these systems, any communication between the processors usually takes place through the shared memory.

2. Loosely coupled systems: In these systems, the processors do not share memory, and each processor has its own local memory [Fig. 1.1(b)]. If a processor writes the value 100 to the memory location x, this write operation will only change the contents of its local memory and will not affect the memory of any other processor. In these systems, all physical communication between the processors is done by passing messages across the network that interconnects the processors.
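The contrast above can be made concrete with a small sketch. The following Python snippet is an illustration added to these notes (it is not from the original text): it mimics the tightly coupled case by letting two processes share a single memory location x, so a write of 100 by one process is visible to a read by the other. The process names, the sleep duration, and the location name x are arbitrary choices.

# Minimal sketch of shared-memory communication (tightly coupled case).
# Uses only the Python standard library; illustrative, not from the notes.
from multiprocessing import Process, Value
import time

def writer(x):
    x.value = 100                      # "processor" 1 writes 100 to location x

def reader(x):
    time.sleep(0.5)                    # give the writer time to run first
    print("reader sees x =", x.value)  # prints 100: the write is visible to all

if __name__ == "__main__":
    x = Value("i", 0)                  # single system-wide shared location x
    p1 = Process(target=writer, args=(x,))
    p2 = Process(target=reader, args=(x,))
    p1.start(); p2.start()
    p1.join(); p2.join()

In the loosely coupled case no such shared location exists; each processor would hold its own copy of x, and the value 100 could only reach another processor inside a message sent over the network.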
[Fig. 1.1 Difference between tightly coupled and loosely coupled multiprocessor systems: (a) a tightly coupled multiprocessor system, with CPUs connected through interconnection hardware to a system-wide shared memory; (b) a loosely coupled multiprocessor system, with CPUs, each having its own local memory, connected by a communication network]

• Tightly coupled systems are referred to as parallel processing systems, and loosely coupled systems are referred to as distributed computing systems, or simply distributed systems.

• In contrast to tightly coupled systems, the processors of distributed computing systems can be located far from each other to cover a wider geographical area. Furthermore, in tightly coupled systems, the number of processors that can be usefully deployed is usually small and limited by the bandwidth of the shared memory. This is not the case with distributed computing systems, which are more freely expandable and can have an almost unlimited number of processors.

• In short, a distributed computing system is basically a collection of processors interconnected by a communication network in which each processor has its own local memory and other peripherals, and the communication between any two processors takes place by message passing over the communication network.
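As a loose illustration of the last point (again an added sketch, not part of the original notes), the snippet below passes a single message between two endpoints over a TCP connection on localhost. The port number 50007 and the message text are arbitrary choices; the point is that no memory is shared, so the receiver only learns the value because it arrives in a message.

# Minimal sketch of message passing between two "processors" over a network.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007        # arbitrary local endpoint for the demo

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            msg = conn.recv(1024)      # receive the message from the peer
            print("server received:", msg.decode())

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b"x = 100")          # the value travels only inside a message

if __name__ == "__main__":
    t = threading.Thread(target=server)
    t.start()
    time.sleep(0.2)                    # crude wait so the server is listening
    client()
    t.join()

In a real distributed system the two endpoints would run on different machines, but the structure is the same: processors with private memories cooperating purely by exchanging messages.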
