Knowledge in DAA (Design and Analysis of Algorithms)

Selection Sort

The selection sort improves on the bubble sort by making only one swap for each pass through the list. To do this, a selection sort searches for the smallest (or largest, depending on the sort order) value as it makes a pass and, after completing the pass, places it in its proper location. As with a bubble sort, after the first pass the smallest item is in the right place; after the second pass, the next smallest is in place. This process continues and requires n-1 passes to sort n items, since the final item must be in place after the (n-1)st pass.

ALGORITHM: SELECTION SORT (A)
1. k ← length [A]
2. for j ← 1 to k-1
3.     smallest ← j
4.     for i ← j + 1 to k
5.         if A[i] < A[smallest]
6.             then smallest ← i
7.     exchange (A[j], A[smallest])

Analysis:
Input: n elements are given.
Output: the number of comparisons required to sort them.
Logic: If we have n elements in selection sort, then n-1 passes are required to produce a sorted array.
In pass 1: n-1 comparisons are required
In pass 2: n-2 comparisons are required
In pass 3: n-3 comparisons are required
...
In pass n-1: 1 comparison is required
Total comparisons: T(n) = (n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2 = O(n²)
Therefore the complexity is of order n².

Example: Sort the following array using selection sort: A = (7, 4, 3, 6, 5).

A = 7 4 3 6 5

1st iteration: smallest = 7; 4 < 7, smallest = 4; 3 < 4, smallest = 3; 6 > 3, smallest = 3; 5 > 3, smallest = 3. Swap 7 and 3:
3 4 7 6 5

2nd iteration: smallest = 4; 4 < 7, smallest = 4; 4 < 6, smallest = 4; 4 < 5, smallest = 4. No swap:
3 4 7 6 5

3rd iteration: smallest = 7; 6 < 7, smallest = 6; 5 < 6, smallest = 5. Swap 7 and 5:
3 4 5 6 7

4th iteration: smallest = 6; 6 < 7, smallest = 6. No swap:
3 4 5 6 7

Finally, the sorted list is: 3 4 5 6 7
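The selection-sort pseudocode above can be sketched in Python as follows (a minimal illustration; the function name and the 0-based indexing are my own, not from the text):

```python
def selection_sort(a):
    """Sort list a in place: one swap per pass, as described above."""
    n = len(a)
    for j in range(n - 1):
        smallest = j
        # scan the unsorted tail for the smallest remaining element
        for i in range(j + 1, n):
            if a[i] < a[smallest]:
                smallest = i
        # place it in its proper position with a single swap
        a[j], a[smallest] = a[smallest], a[j]
    return a

print(selection_sort([7, 4, 3, 6, 5]))  # → [3, 4, 5, 6, 7]
```

Running it on the example array (7, 4, 3, 6, 5) reproduces the trace above.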

Insertion Sort

Insertion sort is a very simple method to sort numbers in increasing or decreasing order. It has various advantages:

It is simple to implement.
It is efficient on small datasets.
It is stable (does not change the relative order of elements with equal keys).
It is in-place (only requires a constant amount O(1) of extra memory space).
It is an online algorithm, in that it can sort a list as it receives it.

ALGORITHM: INSERTION SORT (A)
1. for k ← 2 to length [A]
2.     do key ← A[k]
3.     i ← k-1
4.     while i > 0 and A[i] > key
5.         do A[i+1] ← A[i]
6.         i ← i-1
7.     A[i+1] ← key

Analysis:
Input: n elements are given.
Output: the number of comparisons required to sort them.
Logic: If we have n elements in insertion sort, then n-1 passes are required to produce a sorted array.
In pass 1: no comparison is required
In pass 2: 1 comparison is required
In pass 3: 2 comparisons are required
...
In pass n: n-1 comparisons are required
Total comparisons: T(n) = 1 + 2 + 3 + ... + (n-1) = n(n-1)/2 = O(n²)
Therefore the complexity is of order n².

Example: Illustrate the operation of INSERTION SORT on the array A = (4, 15, 7, 18, 16).

Solution:

A = 4 15 7 18 16

For k = 2 to 5:

k = 2: key = A[2] = 15, i = 1. While i > 0 and A[1] > 15: condition false, so no change.

k = 3: key = A[3] = 7, i = 2. While i > 0 and A[2] > key: condition true, so A[3] ← A[2], i.e.
4 15 15 18 16
and i = 1. While i > 0 and A[1] > key: condition false, so no further shift. Then A[2] ← key = 7, that is
4 7 15 18 16

k = 4: key = A[4] = 18, i = 3. While 3 > 0 and A[3] > 18: condition false, no change.

k = 5: key = A[5] = 16, i = 4. While 4 > 0 and A[4] > 16: condition true, so A[5] ← A[4] = 18 and i = 3. While 3 > 0 and A[3] > 16: condition false. So A[4] ← 16.

The sorted array is: 4 7 15 16 18
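The insertion-sort pseudocode above can be sketched in Python like this (illustrative only; it uses 0-based list indices instead of the 1-based pseudocode):

```python
def insertion_sort(a):
    """Sort list a in place: shift larger elements right,
    then drop each key into its proper slot."""
    for k in range(1, len(a)):
        key = a[k]
        i = k - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]   # shift right
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([4, 15, 7, 18, 16]))  # → [4, 7, 15, 16, 18]
```

Because elements are only shifted past strictly greater values, equal keys keep their original order, which is why insertion sort is stable.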

Divide and Conquer Introduction

Divide and Conquer is an algorithmic pattern. The design is to take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps:

Divide: Break the original problem into a set of subproblems.
Conquer: Solve every subproblem individually, recursively.
Combine: Put together the solutions of the subproblems to get the solution to the whole problem.

Examples: The following computer algorithms are based on the Divide & Conquer approach:

Maximum and Minimum Problem
Binary Search
Sorting (merge sort, quick sort)
Tower of Hanoi

Fundamentals of the Divide & Conquer strategy:

There are two fundamentals of the Divide & Conquer strategy:

1. Relational Formula: the recurrence we derive from the given technique. After generating the formula we apply the D&C strategy, i.e. we break the problem recursively and solve the resulting subproblems.

2. Stopping Condition: when we break the problem using the Divide & Conquer strategy, we need to know how long to keep dividing. The condition at which we stop the recursion steps of D&C is called the stopping condition.

Max - Min Problem

Problem: Analyze the algorithm to find the maximum and minimum elements of an array.

ALGORITHM: MAX-MIN ELEMENT (a[])
max ← a[1]
min ← a[1]
for i ← 2 to n do
    if a[i] > max then max ← a[i]
    if a[i] < min then min ← a[i]
return (max, min)

Analysis:

Method 1: if we apply this general approach to an array of size n, the number of comparisons required is 2n-2.

Method 2: in the divide and conquer approach, we divide the problem into subproblems, find the max and min of each half, and then compare the max of one half only with the max of the other half, and likewise the min with the min.

Let n be the number of items in the array and let T(n) be the number of comparisons required by the algorithm on an array of size n. We divide into two subproblems of size n/2 each, and the 2 in the recurrence accounts for comparing minimum with minimum and maximum with maximum, as in the example above:

T(n) = 2T(n/2) + 2    → Eq. (i)
T(2) = 1, the number of comparisons required for two elements/items. (Time is measured in units of the number of comparisons.)    → Eq. (ii)

Applying the same procedure recursively on each subproblem (the recursion uses the stopping condition n/2ⁱ = 2 to terminate):

T(n) = 2T(n/2) + 2
     = 2[2T(n/4) + 2] + 2 = 4T(n/4) + 4 + 2
     = 2ⁱ T(n/2ⁱ) + 2(2ⁱ - 1)

Recursion stops when n/2ⁱ = 2, i.e. 2ⁱ = n/2. Putting Eq. (ii) into the expansion:

T(n) = (n/2)·1 + 2(n/2 - 1) = n/2 + n - 2 = 3n/2 - 2

Number of comparisons required by the divide and conquer algorithm on n elements = 3n/2 - 2.
Number of comparisons required by the general approach on n elements = (n-1) + (n-1) = 2n - 2.

From this example we can see how to reduce the number of comparisons by using this technique.

Example: suppose we have an array of 8 elements.
Method 1 requires 2n-2 = (2×8)-2 = 14 comparisons.
Method 2 requires 3n/2 - 2 = (3×8)/2 - 2 = 10 comparisons.

It is evident that we can reduce the number of comparisons (complexity) by using a proper technique.
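A sketch of Method 2 in Python, with an explicit comparison counter so the 3n/2 - 2 bound can be checked (the function name and the sample data are illustrative, not from the text):

```python
def max_min(a, lo, hi):
    """Return (maximum, minimum, comparisons) for a[lo..hi]
    using the divide and conquer scheme described above."""
    if lo == hi:                       # one element: no comparison
        return a[lo], a[lo], 0
    if hi == lo + 1:                   # two elements: one comparison
        return (a[hi], a[lo], 1) if a[lo] < a[hi] else (a[lo], a[hi], 1)
    mid = (lo + hi) // 2
    max1, min1, c1 = max_min(a, lo, mid)
    max2, min2, c2 = max_min(a, mid + 1, hi)
    # two more comparisons: max with max, min with min
    return max(max1, max2), min(min1, min2), c1 + c2 + 2

items = [5, 2, 9, 1, 7, 3, 8, 4]       # n = 8
mx, mn, comps = max_min(items, 0, len(items) - 1)
print(mx, mn, comps)                    # → 9 1 10, i.e. 3n/2 - 2 comparisons
```

For n = 8 the counter comes out to 10, matching the 3n/2 - 2 figure computed above, versus 14 for the general approach.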

Binary Search

In the binary search technique, we search for an element in a sorted array by recursively dividing the search interval in half.

1. Firstly, we take the whole array as the interval.
2. If the pivot element (the item to be searched) is less than the item in the middle of the interval, we discard the second half of the list and recursively repeat the process for the first half of the list by calculating the new middle and last elements.
3. If the pivot element (the item to be searched) is greater than the item in the middle of the interval, we discard the first half of the list and work recursively on the second half by calculating the new beginning and middle elements.
4. Repeat until the value is found or the interval is empty.

Analysis:
Input: an array A of size n, already sorted in ascending or descending order.
Output: the number of comparisons required to search for an item in the sorted array of size n.
Logic: Let T(n) = number of comparisons of an item with n elements in a sorted array.

Set BEG = 1 and END = n.
Find mid = ⌊(BEG + END)/2⌋.
Compare the search item with the mid item.

Case 1: item = A[mid]; then LOC = mid. This is the best case, and T(n) = 1.
Case 2: item ≠ A[mid]; then we split the array into two equal parts of size n/2, again find the midpoint of the selected half of the array, and compare it with the search element. Repeat the same process until the search element is found.

T(n) = T(n/2) + 1    ...... (Equation 1)

{one comparison of the search element with the mid element, plus the time to search the selected half of the array}

Expanding Equation 1 gives T(n) = T(n/2ⁱ) + i. At the end only one element is left, so the recursion stops when n/2ⁱ = 1, i.e. i = log₂ n; the last term of the equation is the single comparison on that one element and is equal to 1. Hence T(n) = log₂ n + 1 = O(log n).
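The steps above can be sketched as an iterative Python function (an illustrative sketch with 0-based indices; -1 as the "not found" value is my convention):

```python
def binary_search(a, item):
    """Return the index of item in sorted list a, or -1 if absent.
    Each loop iteration halves the remaining interval."""
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == item:
            return mid
        elif item < a[mid]:
            end = mid - 1      # discard the second half
        else:
            beg = mid + 1      # discard the first half
    return -1

a = [3, 4, 5, 6, 7, 15, 18]
print(binary_search(a, 15))  # → 5
print(binary_search(a, 8))   # → -1
```

The loop runs at most ⌊log₂ n⌋ + 1 times, matching the O(log n) analysis.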

Merge Sort

Merge sort closely follows the divide and conquer paradigm. Conceptually, it works as follows:

Divide: Divide the unsorted list into two sublists of about half the size.
Conquer: Sort each of the two sublists recursively until we reach lists of length 1, which are returned as-is.
Combine: Join the two sorted sublists back into one sorted list.

The main purpose is to sort the unsorted list in nondecreasing order.

ALGORITHM: MERGE-SORT (A, p, r)
1. if p < r
2.     then q ← ⌊(p + r)/2⌋
3.     MERGE-SORT (A, p, q)
4.     MERGE-SORT (A, q+1, r)
5.     MERGE (A, p, q, r)

The following figure illustrates the dividing (splitting) procedure.

FUNCTION: MERGE (A, p, q, r)
1. n1 ← q - p + 1
2. n2 ← r - q
3. create arrays L[1...n1+1] and R[1...n2+1]
4. for i ← 1 to n1
5.     do L[i] ← A[p + i - 1]
6. for j ← 1 to n2
7.     do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13.     do if L[i] ≤ R[j]
14.         then A[k] ← L[i]
15.             i ← i + 1
16.         else A[k] ← R[j]
17.             j ← j + 1

In this method, we split the given list into two halves and recursively apply merge sort to each half. We get many sorted sublists, and at the end we combine the sorted sublists.

Analysis of Merge Sort:

Let T(n) be the total time taken by merge sort.
Sorting the two halves takes at most 2T(n/2) time.
When we merge the sorted lists, we make at most n-1 comparisons, because the last element left over just needs to be copied into the combined list with no comparison.

So the relational formula becomes T(n) = 2T(n/2) + n - 1. But we ignore the '-1', because the leftover element still takes some time to be copied into the merged list.

So T(n) = 2T(n/2) + n    .........(equation 1)

Note: stopping condition T(1) = 0, because at the end only one element is left, which needs only to be copied, and there is no comparison.

Expanding equation 1 by repeated substitution:

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
     = 2ⁱ T(n/2ⁱ) + i·n

From the stopping condition, the recursion ends when n/2ⁱ = 1, i.e. n = 2ⁱ. Applying log on both sides: log n = i log 2, so i = log₂ n. Substituting back:

T(n) = n·T(1) + n log₂ n = n log₂ n

Hence the complexity of merge sort is O(n log n).
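The divide/conquer/combine steps above can be sketched in Python; for clarity this sketch returns a new list rather than merging in place with ∞ sentinels as the pseudocode does (the function name and sample data are illustrative):

```python
def merge_sort(a):
    """Return a sorted copy of a: divide in half, recurse, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # combine: repeatedly take the smaller head element
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])    # at most one of these
    out.extend(right[j:])   # tails is non-empty
    return out

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```

Taking equal keys from the left sublist first (the `<=` in the merge) is what makes merge sort stable.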

Tower of Hanoi

1. The Tower of Hanoi is a classic problem in which you try to move all the disks from one peg to another peg using only three pegs.
2. Initially, all of the disks are stacked on top of each other, with larger disks under the smaller disks.
3. You may move the disks to any of the three pegs as you attempt to relocate all of them, but you cannot place a larger disk over a smaller disk, and only one disk can be transferred at a time.

This problem can be easily solved by a Divide & Conquer algorithm. As illustrated above, for 3 disks all the disks can be transferred from peg A to peg C in 7 steps, given the conditions:

Only one disk is shifted at a time.
A smaller disk may be placed on a larger disk, but never a larger disk on a smaller one.

Let T(n) be the total number of moves taken to move n disks from peg A to peg C:

Moving the top n-1 disks from the first peg to the second peg can be done in T(n-1) steps.
Moving the largest disk from the first peg to the third peg requires 1 step.
Recursively moving the n-1 disks from the second peg to the third peg requires another T(n-1) steps.

So the total is T(n) = T(n-1) + 1 + T(n-1), and the relational formula for the Tower of Hanoi is:

T(n) = 2T(n-1) + 1

Expanding repeatedly, we get

T(n) = 2T(n-1) + 1
     = 4T(n-2) + 2 + 1
     = 2ⁿ⁻¹ T(1) + 2ⁿ⁻² + ... + 2 + 1

With T(1) = 1, this is a geometric progression series with common ratio r = 2 and first term a = 1 (2⁰), so

T(n) = 2ⁿ - 1

This equation gives the required complexity of the Tower of Hanoi technique when we have to move n disks from one peg to another.

T(3) = 2³ - 1 = 8 - 1 = 7
[As in the concept above, we showed that there will be 7 steps; now this is proved by the general equation.]

Program for the Tower of Hanoi:

#include <stdio.h>

void towers(int num, char frompeg, char topeg, char auxpeg)
{
    if (num == 1)
    {
        printf("\n Move disk 1 from peg %c to peg %c", frompeg, topeg);
        return;
    }
    /* move n-1 disks out of the way, move the largest, move them back */
    towers(num - 1, frompeg, auxpeg, topeg);
    printf("\n Move disk %d from peg %c to peg %c", num, frompeg, topeg);
    towers(num - 1, auxpeg, topeg, frompeg);
}

int main()
{
    int num;
    printf("Enter the number of disks : ");
    scanf("%d", &num);
    printf("The sequence of moves involved in the Tower of Hanoi are :\n");
    towers(num, 'A', 'C', 'B');
    return 0;
}

Output:
Enter the number of disks: 3
The sequence of moves involved in the Tower of Hanoi is:
Move disk 1 from peg A to peg C
Move disk 2 from peg A to peg B
Move disk 1 from peg C to peg B
Move disk 3 from peg A to peg C
Move disk 1 from peg B to peg A
Move disk 2 from peg B to peg C
Move disk 1 from peg A to peg C

Binary Heap

A binary heap is an array object that can be viewed as a complete binary tree. Each node of the binary tree corresponds to an element of the array.

length[A] is the number of elements in the array.
heap-size[A] is the number of elements of the heap stored within array A.

The root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child can be computed:

PARENT (i)
    return ⌊i/2⌋
LEFT (i)
    return 2i
RIGHT (i)
    return 2i + 1

Representation of the array in the figure above:

The index of 20 is 1.
To find the index of the left child, we calculate 1*2 = 2. This takes us (correctly) to the 14.
Now we go right, so we calculate 2*2+1 = 5. This takes us (again, correctly) to the 6.
Now, 4's index is 7; to go to its parent we calculate ⌊7/2⌋ = 3, which takes us to the 17.

Heap Property:

A binary heap can be classified as a max heap or a min heap.

1. Max heap: in a max heap, for every node i other than the root, the value of the node is less than or equal to the value of its parent:

A[PARENT(i)] ≥ A[i]

Thus, the highest element in a max heap is stored at the root. Following is an example of a max heap.

2. Min heap: in a min heap, the value of every node other than the root is greater than or equal to the value of its parent:

A[PARENT(i)] ≤ A[i]

Heapify Method:

1. Maintaining the heap property: Heapify is a procedure for manipulating the heap data structure. It is given an array A and an index i into the array. The subtrees rooted at the children of A[i] are heaps, but node A[i] itself may violate the heap property, i.e. A[i] < A[2i] or A[i] < A[2i+1]. The procedure Heapify moves A[i] down within the tree rooted at i so that it becomes a heap.

MAX-HEAPIFY (A, i)
1. l ← LEFT (i)
2. r ← RIGHT (i)
3. if l ≤ heap-size [A] and A[l] > A[i]
4.     then largest ← l
5.     else largest ← i
6. if r ≤ heap-size [A] and A[r] > A[largest]
7.     then largest ← r
8. if largest ≠ i
9.     then exchange A[i] ↔ A[largest]
10.     MAX-HEAPIFY (A, largest)

Analysis: The maximum number of levels an element can move is Θ(log n). At each level we do a simple comparison, which is O(1). The total time for heapify is thus O(log n).

Building a Heap:

BUILD-HEAP (array A, int n)
1. for i ← n/2 down to 1
2.     do
3.     HEAPIFY (A, i, n)

HEAP-SORT ALGORITHM:

HEAP-SORT (A)
1. BUILD-MAX-HEAP (A)
2. for i ← length [A] down to 2
3.     do exchange A[1] ↔ A[i]
4.     heap-size [A] ← heap-size [A] - 1
5.     MAX-HEAPIFY (A, 1)
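The heapify and heap-sort pseudocode above can be sketched in Python. Note this sketch uses 0-based list indices, so the children of node i are 2i+1 and 2i+2 rather than 2i and 2i+1 (the function names and sample data are illustrative):

```python
def max_heapify(a, i, heap_size):
    """Sink a[i] until the subtree rooted at i is a max heap."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = l if l < heap_size and a[l] > a[i] else i
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # BUILD-MAX-HEAP
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]       # move root to its final slot
        max_heapify(a, 0, end)            # restore the heap on the prefix
    return a

print(heap_sort([20, 14, 17, 8, 6, 9, 4]))  # → [4, 6, 8, 9, 14, 17, 20]
```

Build-heap runs heapify on every internal node, and each of the n-1 extractions costs O(log n), giving the overall O(n log n) bound.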

Quick Sort

Quick sort is an algorithm of the Divide & Conquer type.

Divide: Rearrange the elements and split the array into two subarrays around a pivot element, such that each element in the left subarray is less than or equal to the pivot and each element in the right subarray is greater than the pivot.
Conquer: Recursively sort the two subarrays.
Combine: Combine the already sorted subarrays (nothing more needs to be done, since they are sorted in place).

Algorithm:

QUICKSORT (array A, int m, int n)
1. if (n > m)
2.     then
3.     i ← a random index from [m, n]
4.     swap A[i] with A[m]
5.     o ← PARTITION (A, m, n)
6.     QUICKSORT (A, m, o - 1)
7.     QUICKSORT (A, o + 1, n)

Partition Algorithm:

The partition algorithm rearranges the subarray in place.

PARTITION (array A, int m, int n)
1. x ← A[m]
2. o ← m
3. for p ← m + 1 to n
4.     do if (A[p] < x)
5.         then o ← o + 1
6.         swap A[o] with A[p]
7. swap A[m] with A[o]
8. return o

Figure: execution trace of the partition algorithm.

Example of Quick Sort:

44 33 11 55 77 90 40 60 99 22 88

Let 44 be the pivot element, and scan from right to left.

Compare 44 with the right-side elements; if a right-side element is smaller than 44, swap them. As 22 is smaller than 44, swap them:

22 33 11 55 77 90 40 60 99 44 88

Now compare 44 with the left-side elements; a left-side element must be greater than 44 to swap. As 55 is greater than 44, swap them:

22 33 11 44 77 90 40 60 99 55 88

Recursively repeat steps 1 and 2 until we get two lists, one to the left of the pivot element 44 and one to the right. As 40 is smaller than 44, swap them:

22 33 11 40 77 90 44 60 99 55 88

Swap 44 with 77:

22 33 11 40 44 90 77 60 99 55 88

Now the elements on the right side and left side of 44 are respectively greater than and smaller than 44, and we get two sublists. These sublists are sorted by the same process as above, and the two sorted sublists placed side by side, merged, give the final SORTED LIST.

Worst Case Analysis: This is the case when the items are already in sorted form and we try to sort them again. This takes a lot of time.

Equation: T(n) = T(1) + T(n-1) + n

T(1) is the time taken by the pivot element.
T(n-1) is the time taken by the remaining elements except the pivot element.
n is the number of comparisons required for the pivot to identify its exact position: if we compare the pivot element with every other element, a pass makes n comparisons when there are n items.

Relational Formula for the Worst Case:

T(n) = T(1) + T(n-1) + n
     = T(1) + [T(1) + T(n-2) + (n-1)] + n
     = 2T(1) + T(n-2) + (n-1) + n
     ...
     = (n-1)T(1) + T(n-(n-1)) + (n-(n-2)) + (n-(n-3)) + ... + (n-1) + n

Note: for making T(n-k) become T(1) we put k = n-1; correspondingly the added terms run 2, 3, 4, ..., n.

T(n) = (n-1)T(1) + T(1) + 2 + 3 + 4 + ... + n
     = (n-1)T(1) + T(1) + (1 + 2 + 3 + ... + n) - 1   [adding 1 and subtracting 1 to make an AP series]
     = (n-1)T(1) + T(1) + n(n+1)/2 - 1

Stopping condition: T(1) = 0, because at the end only one element is left and no comparison is required.

T(n) = (n-1)(0) + 0 + n(n+1)/2 - 1

Hence the worst case complexity of quick sort is T(n) = O(n²).

Randomized Quick Sort [Average Case]:

Generally, we take the first element of the list as the pivot element. In the average case, the number of possible choices of pivot element is equal to the number of items, each equally likely.

Let the total time taken be T(n). For example, in a given list p1, p2, p3, p4 ... pn:

If p1 is the pivot, we have two lists, costing T(0) and T(n-1).
If p2 is the pivot, we have two lists, costing T(1) and T(n-2).
If p3 is the pivot, we have two lists, costing T(2) and T(n-3).

So in general, if we take the k-th element to be the pivot element, the two lists cost T(k-1) and T(n-k). The pivot element performs n+1 comparisons, and since we are doing the average case we average over all pivots. So the relational formula for randomized quick sort is:

T(n) = (n+1) + (1/n) [T(0) + T(1) + T(2) + ... + T(n-1) + T(n-2) + T(n-3) + ... + T(0)]
     = (n+1) + (2/n) [T(0) + T(1) + T(2) + ... + T(n-2) + T(n-1)]

n T(n) = n(n+1) + 2 [T(0) + T(1) + T(2) + ... + T(n-1)]    ........eq 1

Put n = n-1 in eq 1:

(n-1) T(n-1) = (n-1)n + 2 [T(0) + T(1) + T(2) + ... + T(n-2)]    ......eq 2

From eq 1 and eq 2:

n T(n) - (n-1) T(n-1) = n(n+1) - n(n-1) + 2 [T(0) + ... + T(n-2) + T(n-1)] - 2 [T(0) + ... + T(n-2)]
n T(n) - (n-1) T(n-1) = n[n+1-n+1] + 2T(n-1)
n T(n) = [2 + (n-1)] T(n-1) + 2n
n T(n) = (n+1) T(n-1) + 2n    ......eq 3

Dividing eq 3 through by n(n+1):

T(n)/(n+1) = T(n-1)/n + 2/(n+1)

Putting n = n-1, n-2, n-3, ... in this equation in turn and substituting each result back into the previous one, the recurrence telescopes:

T(n)/(n+1) = T(1)/2 + 2 [1/3 + 1/4 + ... + 1/(n+1)]

The bracketed sum is a harmonic series of order log n (multiplying and dividing the last term by 2 does not change the order), so

T(n) = O(n log n)

which is the average case complexity of quick sort for sorting n elements.

3. Quick Sort [Best Case]: In any sorting, the best case is the only case in which we make no comparison between elements; that happens only when we have a single element to sort.
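The QUICKSORT and PARTITION pseudocode above can be sketched in Python (0-based indices; the random pivot choice matches the randomized variant discussed in the average-case analysis):

```python
import random

def partition(a, m, n):
    """Partition a[m..n] around pivot a[m], as in the PARTITION
    pseudocode above; return the pivot's final index."""
    x = a[m]
    o = m
    for p in range(m + 1, n + 1):
        if a[p] < x:
            o += 1
            a[o], a[p] = a[p], a[o]
    a[m], a[o] = a[o], a[m]    # drop the pivot between the two halves
    return o

def quicksort(a, m, n):
    if n > m:
        i = random.randint(m, n)     # randomized pivot choice
        a[i], a[m] = a[m], a[i]
        o = partition(a, m, n)
        quicksort(a, m, o - 1)
        quicksort(a, o + 1, n)
    return a

data = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
print(quicksort(data, 0, len(data) - 1))
# → [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]
```

Randomizing the pivot makes the already-sorted worst case described above vanishingly unlikely, giving the O(n log n) average behaviour.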

Stable Sorting

A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in the sorted output as they appear in the unsorted input array.

Some sorting algorithms are stable by nature, such as insertion sort, merge sort, and bubble sort; others, such as quick sort and heap sort, are not stable.

Another definition of stable sorting: a stable sort is one which preserves the original order of the input set wherever the comparison algorithm does not distinguish between two or more items. A stable sort guarantees that the original order of data having the same rank is preserved in the output.

In-Place Sorting Algorithm:

An in-place sorting algorithm directly modifies the list that it receives as input instead of creating a new list that is then modified. Such an algorithm uses only a small amount of extra space to manipulate the input set. In other words, the output is placed in the correct position while the algorithm is still executing, which means that the input will be overwritten by the desired output at run time.

An in-place sorting algorithm updates the input only through replacement or swapping of elements. An algorithm which is not in-place is sometimes called out-of-place. An in-place algorithm may use only a constant amount of extra space, counting everything including function calls and pointers; usually this space is O(log n).

Note:
Bubble sort, insertion sort, and selection sort are in-place sorting algorithms, because only swapping of elements within the input array is required.
Bubble sort and insertion sort can be implemented as stable algorithms, but selection sort cannot (without significant modifications).
Merge sort is a stable algorithm but not an in-place algorithm: it requires extra array storage.
Quick sort is not stable but is an in-place algorithm.
Heap sort is an in-place algorithm but is not stable.
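Stability is easy to see with records that share a key. Python's built-in sorted() (Timsort, a merge-sort variant) is documented to be stable, so it serves as a quick demonstration; the records here are made up for illustration:

```python
# Records with equal keys (the number); the letter records input order.
records = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]

# A stable sort keeps ties in their original input order:
stable = sorted(records, key=lambda r: r[0])
print(stable)  # → [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```

An unstable sort would be allowed to emit (2, 'c') before (2, 'a'), since it only compares the numeric keys.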

Counting Sort

Counting sort is a linear time sorting algorithm which works faster by not making comparisons. It assumes that the numbers to be sorted are in the range 1 to k, where k is small.

The basic idea is to determine the "rank" of each number in the final sorted array.

Counting sort uses three arrays:

A[1..n] holds the initial input.
B[1..n] holds the sorted output.
C[1..k] is an array of integers; C[x] ends up as the rank of x in A, where x ∈ [1, k].

First, C[x] is made to count the number of elements of A equal to x:
Initialize C to zero.
For each j from 1 to n, increment C[A[j]] by 1.
After forming prefix sums we set B[C[A[j]]] = A[j]; if there are duplicates, we decrement C[A[j]] after each copy.

COUNTING-SORT (array A, array B, int k)
1. for i ← 1 to k
2.     do C[i] ← 0                      [θ(k) times]
3. for j ← 1 to length [A]
4.     do C[A[j]] ← C[A[j]] + 1         [θ(n) times]
5. // C[i] now contains the number of elements equal to i
6. for i ← 2 to k
7.     do C[i] ← C[i] + C[i-1]          [θ(k) times]
8. // C[i] now contains the number of elements ≤ i
9. for j ← length [A] down to 1         [θ(n) times]
10.    do B[C[A[j]]] ← A[j]
11.    C[A[j]] ← C[A[j]] - 1

Explanation:

Steps 1-2: the first for loop initializes the array C to 0. Whether the loop variable starts at 1 or at 0 depends on the minimum value occurring in the input array A; basically, we start i at the minimum value in A.
Steps 3-4: this for loop inspects each input element. If the value of an input element is i, we increment C[i]. Thus, after step 4, C[i] holds the number of input elements equal to i for each integer i = 1, 2, ..., k.
Steps 6-7: this for loop determines, for each i = 1, 2, ..., k, how many input elements are less than or equal to i.
Steps 9-11: this for loop places each element A[j] into its correct sorted position in the output array B. For each A[j], the value C[A[j]] is the correct final position of A[j] in the output array B, since there are C[A[j]] elements less than or equal to A[j]. Because elements might not be distinct, we decrement C[A[j]] each time we place a value A[j] into array B; decrementing C[A[j]] causes the next input element with a value equal to A[j] to go to the position immediately before A[j] in the output array.

Analysis of Running Time:

The loop of steps 1-2 takes θ(k) time.
The loop of steps 3-4 takes θ(n) time.
The loop of steps 6-7 takes θ(k) time.
The loop of steps 9-11 takes θ(n) time.
The overall time is θ(k+n).

Note: Counting sort has the important property that it is stable: numbers with the same value appear in the output array in the same order as they do in the input array. Counting sort is used as a subroutine in radix sort.

Example: Illustrate the operation of counting sort on the array

A = (7, 1, 3, 1, 2, 4, 5, 7, 2, 4, 3)

Solution:

Fig: Initial A and C arrays.

For j = 1 to 11:
j=1: increment C[A[1]] = C[7].  Fig: A[1] = 7 processed.
j=2: increment C[1].  Fig: A[2] = 1 processed.
j=3: increment C[3].  Fig: A[3] = 3 processed.
j=4: increment C[1].  Fig: A[4] = 1 processed.
j=5: increment C[2].  Fig: A[5] = 2 processed.
... and similarly for j = 6 to 11.

The updated C is: C = (2, 2, 2, 2, 1, 0, 2).  Fig: C now contains a count of the elements of A.

Note: the items of A are scanned one by one; each item is used as a position in C, and the count at that position records how many times the item has been seen, increasing by 1 each time the same item occurs again.

Now the for loop for i = 2 to 7 executes the statement C[i] = C[i] + C[i-1]. Applying it as i goes from 2 up to 7, C is updated as follows:

C[2] = C[2] + C[1] = 2 + 2 = 4
C[3] = C[3] + C[2] = 2 + 4 = 6
C[4] = C[4] + C[3] = 2 + 6 = 8
C[5] = C[5] + C[4] = 1 + 8 = 9
C[6] = C[6] + C[5] = 0 + 9 = 9
C[7] = C[7] + C[6] = 2 + 9 = 11

Thus the updated C is: C = (2, 4, 6, 8, 9, 9, 11).  Fig: C set to rank each number of A.

Now we will find the new array B. Two statements apply:

B[C[A[j]]] ← A[j]
C[A[j]] ← C[A[j]] - 1

We decrease the counters one by one by 1, scanning the elements of A from the last position; each element of A becomes a position in C.

For j ← 11 down to 1:

Step 1:
B[C[A[11]]] = A[11]    C[A[11]] = C[A[11]] - 1
B[C[3]] = 3            C[3] = C[3] - 1
B[6] = 3               C[3] = 5
Fig: A[11] placed in output array B.

Step 2:
B[C[A[10]]] = A[10]    C[A[10]] = C[A[10]] - 1
B[C[4]] = 4            C[4] = C[4] - 1
B[8] = 4               C[4] = 7
Fig: A[10] placed in output array B.

Step 3:
B[C[A[9]]] = A[9]      C[A[9]] = C[A[9]] - 1
B[C[2]] = 2            C[2] = C[2] - 1
B[4] = 2               C[2] = 3
Fig: A[9] placed in output array B.

Step 4:
B[C[A[8]]] = A[8]      C[A[8]] = C[A[8]] - 1
B[C[7]] = 7            C[7] = C[7] - 1
B[11] = 7              C[7] = 10
Fig: A[8] placed in output array B.

Step 5:
B[C[A[7]]] = A[7]      C[A[7]] = C[A[7]] - 1
B[C[5]] = 5            C[5] = C[5] - 1
B[9] = 5               C[5] = 8
Fig: A[7] placed in output array B.

Step 6:
B[C[A[6]]] = A[6]      C[A[6]] = C[A[6]] - 1
B[C[4]] = 4            C[4] = C[4] - 1
B[7] = 4               C[4] = 6
Fig: A[6] placed in output array B.

Step 7:
B[C[A[5]]] = A[5]      C[A[5]] = C[A[5]] - 1
B[C[2]] = 2            C[2] = C[2] - 1
B[3] = 2               C[2] = 2

Continuing in the same way through step 11 fills the remaining positions, and B holds the sorted array (1, 1, 2, 2, 3, 3, 4, 4, 5, 7, 7).
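The COUNTING-SORT pseudocode above can be sketched in Python; the values stay 1-based (1..k) while the lists use Python's 0-based indexing, hence the `- 1` when writing into B (the function name is illustrative):

```python
def counting_sort(A, k):
    """Stable counting sort of list A whose values lie in 1..k."""
    n = len(A)
    C = [0] * (k + 1)            # C[1..k]; index 0 unused
    for x in A:                  # count occurrences of each value
        C[x] += 1
    for i in range(2, k + 1):    # prefix sums: C[i] = #elements <= i
        C[i] += C[i - 1]
    B = [0] * n
    for j in range(n - 1, -1, -1):   # scan from the end, for stability
        B[C[A[j]] - 1] = A[j]        # C gives the 1-based rank
        C[A[j]] -= 1
    return B

A = [7, 1, 3, 1, 2, 4, 5, 7, 2, 4, 3]
print(counting_sort(A, 7))  # → [1, 1, 2, 2, 3, 3, 4, 4, 5, 7, 7]
```

On the worked example above, the intermediate C arrays produced by this code match the counts (2, 2, 2, 2, 1, 0, 2) and ranks (2, 4, 6, 8, 9, 9, 11) computed by hand.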

Bucket Sort

Bucket sort runs in linear time on average. Like counting sort, bucket sort is fast because it assumes something about the input: that it is generated by a random process that distributes elements uniformly over the interval [0, 1).

To sort n input numbers, bucket sort:

1. Partitions [0, 1) into n non-overlapping intervals called buckets.
2. Puts each input number into its bucket.
3. Sorts each bucket using a simple algorithm, e.g. insertion sort.
4. Concatenates the sorted buckets.

Bucket sort assumes that the input is an n element array A and that each element A[i] in the array satisfies 0 ≤ A[i] < 1. The code depends upon an auxiliary array B[0...n-1] of linked lists (buckets) and assumes that there is a mechanism for maintaining such lists.

BUCKET-SORT (A)
1. n ← length [A]
2. for i ← 1 to n
3.     do insert A[i] into list B[⌊n·A[i]⌋]
4. for i ← 0 to n-1
5.     do sort list B[i] with insertion sort
6. concatenate the lists B[0], B[1], ..., B[n-1] together in order

Example: Illustrate the operation of BUCKET-SORT on the array

A = (0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68)

Solution:

Fig: Bucket sort: step 1, placing keys in bins in sorted order.
Fig: Bucket sort: step 2, concatenate the lists.
Fig: Bucket sort: the final sorted sequence.
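The BUCKET-SORT pseudocode above can be sketched in Python, using plain lists as the buckets and the insertion sort from earlier in this chapter to sort each bucket (the function names and the reuse of the example data are illustrative):

```python
def insertion_sort(b):
    """Insertion-sort a single (small) bucket in place."""
    for k in range(1, len(b)):
        key = b[k]
        i = k - 1
        while i >= 0 and b[i] > key:
            b[i + 1] = b[i]
            i -= 1
        b[i + 1] = key
    return b

def bucket_sort(A):
    """Sort values in [0, 1): scatter into n buckets by floor(n*x),
    sort each bucket, then concatenate."""
    n = len(A)
    B = [[] for _ in range(n)]
    for x in A:
        B[int(n * x)].append(x)    # bucket index = floor(n * x)
    out = []
    for bucket in B:
        out.extend(insertion_sort(bucket))
    return out

A = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
print(bucket_sort(A))
# → [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]
```

With uniformly distributed input, each bucket holds O(1) elements on average, so the per-bucket insertion sorts cost O(n) in total and the whole algorithm runs in linear expected time.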