Type: Note
Institute: I K G Punjab Technical University
Specialization: Computer Science Engineering

Probabilistic analysis of algorithms
In the analysis of algorithms, probabilistic analysis is an approach to estimating
the computational complexity of an algorithm or a computational problem. It starts from an
assumption about a probability distribution over the set of all possible inputs. This assumption is
then used to design an efficient algorithm or to derive the complexity of a known algorithm.
This approach is not the same as that of probabilistic algorithms, but the two may be combined.
For non-probabilistic, more specifically deterministic, algorithms, the most common types of
complexity estimates are the average-case (expected) time complexity and the almost-always
complexity. To obtain the average-case complexity, given an input distribution,
the expected running time of the algorithm is evaluated, whereas for the almost-always complexity
estimate, one shows that the algorithm admits a given complexity bound that almost
surely holds.
In probabilistic analysis of probabilistic (randomized) algorithms, the distributions or averaging
over all possible choices in randomized steps are also taken into account, in addition to the
input distributions.
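As a small sketch of average-case analysis under an assumed input distribution, the snippet below empirically estimates the expected number of comparisons made by a successful linear search when the target is drawn uniformly from the array; the function names and the trial count are illustrative choices, not part of any standard API.

```python
import random

def linear_search_comparisons(arr, target):
    """Return the number of comparisons linear search makes to find target."""
    for i, x in enumerate(arr):
        if x == target:
            return i + 1
    return len(arr)

def average_comparisons(n, trials=10000):
    """Empirically estimate the expected comparison count for a successful
    search, assuming the target is uniformly distributed over the array."""
    arr = list(range(n))
    total = 0
    for _ in range(trials):
        target = random.randrange(n)  # the assumed input distribution
        total += linear_search_comparisons(arr, target)
    return total / trials

# Under the uniform assumption, theory predicts (n + 1) / 2 comparisons.
print(average_comparisons(100))  # close to 50.5
```

Changing the distribution (say, making small indices more likely) changes the average-case estimate, which is exactly the sensitivity this kind of analysis makes explicit.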
Amortized analysis
In computer science, amortized analysis is a method for analyzing a given algorithm's time
complexity, or how much of a resource, especially time or memory in the context of computer
programs, it takes to execute. The motivation for amortized analysis is that looking at the
worst-case run time per operation can be too pessimistic.[1]
While certain operations for a given algorithm may have a significant cost in resources, other
operations may not be as costly. Amortized analysis considers both the costly and less costly
operations together over the whole series of operations of the algorithm. This may include
accounting for different types of input, length of the input, and other factors that affect its
performance.
The method requires knowledge of which series of operations are possible. This is most
commonly the case with data structures, which have state that persists between operations. The
basic idea is that a worst case operation can alter the state in such a way that the worst case
cannot occur again for a long time, thus "amortizing" its cost.
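The classic example of a worst-case operation that "cannot occur again for a long time" is the resize of a doubling dynamic array. The sketch below (a toy class written for this note, not a library type) counts the elements moved by resizes to show the total stays linear in the number of appends.

```python
class DynamicArray:
    """Append-only array that doubles capacity when full; the occasional
    costly resize is amortized over many cheap appends."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0  # total elements moved by resizes

    def append(self, value):
        if self.size == self.capacity:        # worst-case operation: resize
            self.capacity *= 2                # doubling resets the state, so the
            new_data = [None] * self.capacity # next resize is far in the future
            new_data[:self.size] = self.data
            self.data = new_data
            self.copies += self.size
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
n = 1024
for i in range(n):
    arr.append(i)
# Resizes copied 1 + 2 + 4 + ... + 512 = 1023 elements in total, so the
# amortized cost per append is O(1) even though a single resize is O(n).
print(arr.copies)  # 1023
```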

There are generally three methods for performing amortized analysis: the aggregate method, the
accounting method, and the potential method. All of these give the same answers, and their usage
difference is primarily circumstantial and due to individual preference.[3]
• Aggregate analysis determines the upper bound T(n) on the total cost of a sequence
of n operations, then calculates the amortized cost to be T(n) / n.[3]
• The accounting method determines the individual cost of each operation, combining its
immediate execution time and its influence on the running time of future operations. Usually,
many short-running operations accumulate a "debt" of unfavorable state in small increments,
while rare long-running operations decrease it drastically.[3]
• The potential method is like the accounting method, but overcharges operations early to
compensate for undercharges later.
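These methods can be illustrated on a binary counter, a standard textbook example (the function below is written for this note). The accounting view: charge 2 per increment, depositing one credit on each bit set to 1; every later 1-to-0 flip is paid from that credit, so n increments flip fewer than 2n bits in total.

```python
def increment(counter):
    """Increment a binary counter stored as a list of bits (LSB first).
    Returns the number of bit flips this increment performed."""
    flips = 0
    i = 0
    while i < len(counter) and counter[i] == 1:
        counter[i] = 0   # a 1 -> 0 flip, paid by the credit deposited
        flips += 1       # when this bit was last set to 1
        i += 1
    if i == len(counter):
        counter.append(1)
    else:
        counter[i] = 1   # the single 0 -> 1 flip, plus one credit banked
    flips += 1
    return flips

counter = []
n = 1000
total_flips = sum(increment(counter) for _ in range(n))
# Aggregate bound: fewer than 2n flips for n increments, so the amortized
# cost per increment is O(1) even though one increment can flip O(log n) bits.
print(total_flips)  # 1994, which is below 2 * 1000
```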
Internal sort
An internal sort is any data sorting process that takes place entirely within the main memory of
a computer. This is possible whenever the data to be sorted is small enough to be held entirely in
main memory. For sorting larger datasets, it may be necessary to hold only a chunk of data in
memory at a time, since it will not all fit. The rest of the data is normally held on some larger but
slower medium, such as a hard disk. Any reading or writing of data to and from this slower medium
can slow the sorting process considerably. This issue has implications for different sort
algorithms.
Some common internal sorting algorithms include:
1. Bubble Sort
2. Insertion Sort
3. Quick Sort
4. Heap Sort
5. Radix Sort
6. Selection Sort
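As one representative of the internal sorts listed above, here is a minimal insertion sort sketch; it operates entirely on an in-memory list, which is what makes it an internal sort.

```python
def insertion_sort(arr):
    """In-place insertion sort: grows a sorted prefix one element at a time.
    Runs entirely in main memory, as any internal sort does."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # drop key into its slot
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```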
External sorting methods are applied when the number of data elements to be sorted is
too large to fit in main memory. These methods involve as much external processing as processing
in the CPU, and they require auxiliary storage. An external sorting algorithm has to deal with
loading and unloading chunks of data in an optimal manner.
Counting sort

In computer science, counting sort is an algorithm for sorting a collection of objects according
to keys that are small integers; that is, it is an integer sorting algorithm. It operates by counting
the number of objects that have each distinct key value, and using arithmetic on those counts to
determine the positions of each key value in the output sequence. Its running time is linear in the
number of items and the difference between the maximum and minimum key values, so it is only
suitable for direct use in situations where the variation in keys is not significantly greater than the
number of items. However, it is often used as a subroutine in another sorting algorithm, radix
sort, that can handle larger keys more efficiently.
Because counting sort uses key values as indexes into an array, it is not a comparison sort, and
the Ω(n log n) lower bound for comparison sorting does not apply to it.[1] Bucket sort may be
used for many of the same tasks as counting sort, with a similar time analysis; however,
compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of
preallocated memory to hold the sets of items within each bucket, whereas counting sort instead
stores a single number (the count of items) per bucket.
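A minimal counting sort sketch follows; it assumes the keys are the values themselves (no attached objects, so it is the simple counting variant rather than the stable prefix-sum variant used inside radix sort). The array of counts has one slot per key in the range from the minimum to the maximum key.

```python
def counting_sort(keys):
    """Sort a list of small integers in O(n + k) time,
    where k is the spread between the min and max key."""
    if not keys:
        return []
    lo, hi = min(keys), max(keys)
    counts = [0] * (hi - lo + 1)   # one counter per possible key value
    for key in keys:               # count occurrences of each key
        counts[key - lo] += 1
    result = []
    for offset, count in enumerate(counts):   # emit each key count times
        result.extend([lo + offset] * count)
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

Note that the `counts` array is sized by the key range, which is why the method degrades when keys vary far more widely than the number of items.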
Bin sort:
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of
an array into a number of buckets. Each bucket is then sorted individually, either using a
different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is
a distribution sort, and is a cousin of radix sort in the most to least significant digit flavor. Bucket
sort is a generalization of pigeonhole sort. Bucket sort can be implemented with comparisons and
therefore can also be considered a comparison sort algorithm. The computational
complexity estimates involve the number of buckets.
Bucket sort works as follows:
1. Set up an array of initially empty "buckets".
2. Scatter: Go over the original array, putting each object in its bucket.
3. Sort each non-empty bucket.
4. Gather: Visit the buckets in order and put all elements back into the original array.
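The four steps above can be sketched as follows, assuming for simplicity that the inputs are floats uniformly spread over [0, 1) so that `int(v * num_buckets)` is a valid bucket index; the bucket count is an illustrative parameter.

```python
def bucket_sort(values, num_buckets=10):
    """Bucket sort for floats in [0, 1): scatter into buckets,
    sort each bucket, then gather in order."""
    buckets = [[] for _ in range(num_buckets)]       # 1. empty buckets
    for v in values:
        buckets[int(v * num_buckets)].append(v)      # 2. scatter
    for bucket in buckets:
        bucket.sort()                                # 3. sort each bucket
    return [v for bucket in buckets for v in bucket] # 4. gather

print(bucket_sort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]))
# [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]
```

Step 3 uses the built-in sort here; replacing it with a recursive bucket sort on each bucket gives the recursive flavor mentioned above.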

AVL tree:
In computer science, an AVL tree (Georgy Adelson-Velsky and Evgenii Landis' tree, named
after the inventors) is a self-balancing binary search tree. It was the first such data structure to be
invented.[2] In an AVL tree, the heights of the two child subtrees of any node differ by at most
one; if at any time they differ by more than one, rebalancing is done to restore this property.
Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases,
where n is the number of nodes in the tree prior to the operation. Insertions and deletions may
require the tree to be rebalanced by one or more tree rotations.
The AVL tree is named after its two Soviet inventors, Georgy Adelson-Velsky and Evgenii
Landis, who published it in their 1962 paper "An algorithm for the organization of
information".[3]
AVL trees are often compared with red-black trees because both support the same set of
operations and take O(log n) time for the basic operations. For lookup-intensive applications,
AVL trees are faster than red-black trees because they are more rigidly balanced.[4] Similar to
red-black trees, AVL trees are height-balanced. Both are, in general, neither weight-balanced
nor μ-balanced;[5] that is, sibling nodes can have hugely differing numbers of descendants.
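The height-balance property described above can be checked directly. The sketch below (a bare node class plus a recursive checker, not a full AVL implementation with rotations) tests whether every node's child subtrees differ in height by at most one.

```python
class Node:
    """Minimal binary-tree node for illustrating the AVL property."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    """Height of a subtree; an empty tree has height -1."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def is_avl(node):
    """True if every node's child subtree heights differ by at most one."""
    if node is None:
        return True
    if abs(height(node.left) - height(node.right)) > 1:
        return False
    return is_avl(node.left) and is_avl(node.right)

balanced = Node(2, Node(1), Node(3))            # child heights differ by 0
skewed = Node(1, None, Node(2, None, Node(3)))  # right chain of three nodes
print(is_avl(balanced), is_avl(skewed))  # True False
```

A real AVL tree would restore this property after each insertion or deletion with the rotations mentioned above rather than merely detecting the violation.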
