
Notes for Advanced Data Structures (ADS), by JNTU Heroes

Jawaharlal Nehru Technological University Anantapur (JNTUA) College of Engineering, Pulivendula, Andhra Pradesh, India


1.2 ROLE OF DATA STRUCTURES IN COMPUTATION

Makes computations faster:
• Faster is better. (Another way to make computations faster is to use parallel or distributed computation.)

Three basic computation steps (Computation = sequence of computation steps, from external input to external output):
(1) Locate/access data-values (the inputs to a step).
(2) Compute a value (the output of a step).
(3) Store the new value.

Program = Algorithm + Data Structure + Implementation.
• Algorithm
  − The basic method; it determines which data-items are computed.
  − Also the order in which those data-items are computed (and hence the order of read/write data-access operations).
• Data structure
  − Supports efficient read/write of the data-items used/computed.

Total time = time to access/store data + time to compute data.
Efficient algorithm = good method + good data structures (+ good implementation).

Questions:
• What is an efficient program?
• What determines the speed of an algorithm?
• A program must also solve a "problem". Which of the three parts (algorithm, data structure, implementation) embodies this?


1.3 ALGORITHM OR METHOD vs. DATA STRUCTURE

Problem: Compute the average of three numbers.

Two methods:
(1) aver = (x + y + z)/3.
(2) aver = (x/3) + (y/3) + (z/3).
• Method (1) is superior to Method (2): two fewer division operations.
• Both methods access the data in the same order: 〈x, y, z, aver〉.
• Any improvement due to the data structure applies equally well to both methods.

Data structures:
(a) Three variables x, y, z.
(b) An array nums[0..2].
  − (b) is inferior to (a) because accessing an array item takes more time than accessing a simple variable. (To access nums[i], the executable code has to compute its address addr(nums[i]) = addr(nums[0]) + i*sizeof(int), which involves 1 addition and 1 multiplication.)
  − When there is a large number of data-items, naming each data-item individually is not practical.
  − Individually named data-items are also unsuitable when a varying number of data-items is involved (in particular, when they are passed as parameters to a function).

A poor implementation of (1), with the same 2 additions and 1 division but 2 additional assignments:

a = x + y;    // uses 2 additional assignments
b = a + z;
aver = b/3;


1.4 LIMITS OF EFFICIENCY

Hardware limit:
• Physical limits of time (speed of electrons) and space (layout of circuits). This limit is independent of the computation problem. Going from 5 mips (millions of instructions per second) to 10 mips is an improvement by a factor of 2. One nanosecond = 10^(−9) second (one billionth of a second); 10 mips = 100 ns/instruction.

Software limit:
• Limitless in a way, except for the inherent nature of the problem; that is, the limit is problem dependent.

Sorting Algorithm A1: O(n log n) time
Sorting Algorithm A2: O(n^2) time    (n = number of items sorted)

A1 is an improvement over A2 by the factor

    n^2 / (n log n) = n / log n → ∞  as n → ∞.

• O(n log n) is the efficiency limit for comparison-based sorting algorithms.
