
Note for Operating Systems - OS by KTU Topper

APJ Abdul Kalam Technological University - KTU



Memory Management

Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes, where each word or byte has its own address. It provides fast storage that the CPU can access directly; for a program to be executed, it must be in main memory. An operating system does the following activities for memory management:

● Keeps track of primary memory, i.e., which parts are in use and by whom, and which parts are free.
● In multiprogramming, decides which process gets memory, when, and how much.
● Allocates memory when a process requests it.
● De-allocates memory when a process no longer needs it or has terminated.

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how long. This function is called process scheduling. An operating system does the following activities for processor management:

● Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
● Allocates the processor (CPU) to a process.
● De-allocates the processor when a process no longer requires it.

Device Management

An operating system manages device communication via the devices' respective drivers. It does the following activities for device management:

● Keeps track of all devices. The program responsible for this task is known as the I/O controller.
● Decides which process gets a device, when, and for how long.
● Allocates devices in an efficient way.
● De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. An operating system does the following activities for file management:

● Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
● Decides who gets the resources.
● Allocates the resources.


● De-allocates the resources.

Other Important Activities

Following are some other important activities that an operating system performs:

● Security − By means of passwords and similar techniques, it prevents unauthorized access to programs and data.
● Control over system performance − Records delays between a request for a service and the system's response.
● Job accounting − Keeps track of the time and resources used by various jobs and users.
● Error-detecting aids − Produces dumps, traces, error messages, and other debugging and error-detecting aids.
● Coordination between other software and users − Coordinates and assigns compilers, interpreters, assemblers, and other software to the various users of the computer system.

1.2 Single Processor, Multiprocessor and Clustered Systems - Overview

Computer architecture refers to the design or construction of a computer. A computer system may be organized in different ways: some computer systems have a single processor and others have multiple processors. Computer systems are therefore categorized as follows:

1. Single Processor Systems
2. Multiprocessor Systems
3. Clustered Systems

Single Processor Systems

Some computers, such as microcomputers, use only one processor. On a single processor system there is only one CPU that performs all the activities in the computer system. However, most of these systems have other special-purpose processors, such as I/O processors that move data rapidly among the different components of the system. These special-purpose processors execute only a limited set of system programs and do not run user programs. So we define a single processor system as a system that has only one general-purpose CPU.

Multiprocessor Systems

Some systems have two or more processors. These systems are also known as parallel systems or tightly coupled systems. Mostly the processors of


these systems share a common system bus (electronic path), memory, and peripheral (input/output) devices. These systems are fast at data processing and can execute more than one program simultaneously on different processors. This type of processing is known as multiprocessing. Multiprocessor systems are further divided into two types:

i. Asymmetric Multiprocessing Systems (AMP)
ii. Symmetric Multiprocessing Systems (SMP)

Asymmetric Multiprocessing Systems

A multiprocessing system in which each processor is assigned a specific task is known as an asymmetric multiprocessing system. In this system there exists a master-slave relationship: one processor is defined as the master and the others are slaves. The master processor controls the system and also defines the tasks of the slave processors, while each slave processor executes its predefined task.

Symmetric Multiprocessing Systems

In a symmetric multiprocessing system, each processor performs all types of tasks within the operating system. All processors are peers; no master-slave relationship exists. In SMP systems many programs can run simultaneously, but I/O must be controlled to ensure that data reaches the appropriate processor, because all the processors share the same memory.

Clustered Systems

Clustered systems are another form of multiprocessor system. A clustered system also contains multiple processors, but it differs from the tightly coupled multiprocessor systems above: it is composed of multiple individual systems connected together. In a clustered system the individual systems, or computers, share the same storage and are linked together via a local area network. A special type of software, known as cluster software, runs on the nodes to control the system. Other forms of clustered system include parallel clusters and clustering over a wide-area network. In a parallel cluster, multiple hosts can access the same data on the shared storage. Many operating systems provide this facility, but special software is also designed to run on a parallel cluster to share data.

1.3 Kernel Data Structures - Operating Systems used in different computing environments


Kernel Data Structures

The operating system must keep a lot of information about the current state of the system. As things happen within the system, these data structures must be changed to reflect the current reality. For example, a new process might be created when a user logs onto the system. The kernel must create a data structure representing the new process and link it with the data structures representing all of the other processes in the system.

Mostly these data structures exist in physical memory and are accessible only by the kernel and its subsystems. Data structures contain data and pointers: addresses of other data structures or the addresses of routines. Taken all together, the data structures used by the Linux kernel can look very confusing. However, every data structure has a purpose, and although some are used by several kernel subsystems, they are simpler than they appear at first sight. Understanding the Linux kernel hinges on understanding its data structures and the use that the various functions within the kernel make of them. This section bases its description of the Linux kernel on its data structures. It talks about each kernel subsystem in terms of its algorithms, which are its methods of getting things done, and their usage of the kernel's data structures.

Linked Lists

Linux uses a number of software engineering techniques to link together its data structures. On many occasions it uses linked, or chained, data structures. If each data structure describes a single instance or occurrence of something, for example a process or a network device, the kernel must be able to find all of the instances. In a linked list, a root pointer contains the address of the first data structure, or element, in the list, and each subsequent data structure contains a pointer to the next element in the list. The last element's next pointer is 0, or NULL, to show that it is the end of the list.
In a doubly linked list, each element contains both a pointer to the next element in the list and a pointer to the previous element. Using doubly linked lists makes it easier to add or remove elements from the middle of a list, although you do need more memory accesses. This is a typical operating system trade-off: memory accesses versus CPU cycles.

Hash Tables

Linked lists are handy ways of tying data structures together, but navigating linked lists can be inefficient. If you were searching for a particular element, you might easily have to look at the whole list before you find the one you need. Linux uses another technique, hashing, to get around this restriction. A hash table is an array, or vector, of pointers. An array, or vector, is simply a set of things coming one after another in memory; a bookshelf could be said to be an array of books. Arrays are accessed by an index, which is an offset into the array's associated area in memory. Taking the bookshelf analogy a little further, you could describe each book by its position on the shelf; you might ask for the 5th book.

A hash table is an array of pointers to data structures, and its index is derived from information in those data structures. If you had data structures describing the population of a village, you could use a person's age as an index. To find a particular person's data, you could use their age as an index into the population hash table and then follow the pointer to the data structure.
