Knowledge in DAA

Need of Algorithm

1. To understand the basic idea of the problem.
2. To find an approach to solve the problem.
3. To improve the efficiency of existing techniques.
4. To understand the basic principles of designing algorithms.
5. To compare the performance of the algorithm with respect to other techniques.
6. It is the best method of description without describing the implementation detail.

Complexity of Algorithm

It is very convenient to classify algorithms based on the relative amount of time or space they require, and to specify the growth of the time/space requirement as a function of the input size.

Time Complexity: The running time of a program as a function of the size of the input.

Space Complexity: Some forms of analysis can be done based on how much space an algorithm needs to complete its task. This space complexity analysis was critical in the early days of computing, when storage space on the computer was limited. Under this view, algorithms are divided into those that need extra space to do their work and those that work in place. Nowadays the problem of space rarely arises, because space on the computer (internal or external) is usually plentiful.

Broadly, we perform the following types of analysis:

Worst-case: f(n) defined by the maximum number of steps taken on any instance of size n.
Best-case: f(n) defined by the minimum number of steps taken on any instance of size n.
Average-case: f(n) defined by the average number of steps taken on any instance of size n.
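As an illustration of these three cases (a sketch added here, not part of the original text), a hypothetical linear search makes 1 comparison in the best case, n in the worst case, and roughly (n+1)/2 on average over random target positions:

```python
def linear_search_comparisons(data, target):
    """Return the number of comparisons a linear search performs."""
    comparisons = 0
    for value in data:
        comparisons += 1
        if value == target:
            break
    return comparisons

data = [10, 20, 30, 40, 50]
best = linear_search_comparisons(data, 10)    # target is first: 1 comparison
worst = linear_search_comparisons(data, 99)   # target absent: n comparisons
```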

Algorithm Design Techniques

1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the divide & conquer technique involve three steps:

Divide the original problem into a set of subproblems.
Solve every subproblem individually, recursively.
Combine the solutions of the subproblems (top level) into a solution of the whole original problem.

2. Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values, which are required either to be maximized or minimized (known as the objective), subject to some constraints or conditions. A greedy algorithm always makes the choice (greedy criterion) that looks best at the moment, in order to optimize a given objective. The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal.

3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine their solutions to obtain solutions for bigger problems. This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic Programming is frequently applied to optimization problems.

4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem which cannot be bounded has to be divided into at least two new restricted subproblems. Branch and Bound algorithms are methods for global optimization in non-convex problems. Branch and Bound algorithms can be slow; in the worst case they require effort that grows exponentially with problem size, but in some cases we are lucky and the method converges with much less effort.

5. Randomized Algorithms: A randomized algorithm is an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation. In other words, it uses a random number at least once during the computation to make a decision.

6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.
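The three divide-and-conquer steps above can be sketched on a simple task (an illustration added here, not from the original text): finding the maximum of a list by splitting it, solving each half recursively, and combining the two answers.

```python
def max_divide_conquer(arr, lo, hi):
    """Find the maximum of arr[lo..hi] by divide and conquer."""
    if lo == hi:                                       # base case: one element
        return arr[lo]
    mid = (lo + hi) // 2                               # divide
    left_max = max_divide_conquer(arr, lo, mid)        # conquer left half
    right_max = max_divide_conquer(arr, mid + 1, hi)   # conquer right half
    return left_max if left_max > right_max else right_max  # combine

result = max_divide_conquer([3, 7, 2, 9, 4], 0, 4)
```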

Asymptotic Analysis of algorithms (Growth of function)

Asymptotic analysis is a technique of representing limiting behavior. The methodology has applications across science. In computer science, it is used in the analysis of algorithms to consider the performance of algorithms when applied to very large input datasets.

Resources for an algorithm are usually expressed as a function of the input. Often this function is messy and complicated to work with. To study function growth efficiently, we reduce the function down to its important part. Let f(n) = an² + bn + c. In this function, the n² term dominates the function when n gets sufficiently large. Dominant terms are what we are interested in when reducing a function; here we ignore all constants and coefficients and look at the highest-order term with respect to n.

Asymptotic notation: The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).

The simplest example is a function f(n) = n² + 3n; the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is then said to be "asymptotically equivalent to n² as n → ∞", written symbolically as f(n) ~ n².

Asymptotic notations are used to write the fastest and slowest possible running times for an algorithm. These are also referred to as "best case" and "worst case" scenarios respectively. In asymptotic notation, we derive the complexity with respect to the size of the input (for example, in terms of n). These notations are important because, without computing the exact cost of running the algorithm, we can estimate its complexity.

Why is Asymptotic Notation Important?

1. It gives a simple characterization of an algorithm's efficiency.
2. It allows the comparison of the performances of various algorithms.

Asymptotic Notations: Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to express the running time complexity of an algorithm:

1. Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's running time. It is a measure of the longest amount of time an algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
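To make the big-oh definition concrete, the following sketch (an illustration added here, not part of the original text) checks numerically that f(n) = n² + 3n satisfies the definition for O(n²) with witness constants c = 2 and n0 = 3:

```python
def f(n):
    return n * n + 3 * n   # f(n) = n^2 + 3n

def g(n):
    return n * n           # candidate bound g(n) = n^2

c, n0 = 2, 3  # witness constants for the big-oh definition
# f(n) <= c*g(n) should hold for every n >= n0
holds = all(f(n) <= c * g(n) for n in range(n0, 1000))
```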

Analyzing Algorithm Control Structure

To analyze a programming code or algorithm, we must notice that each instruction affects the overall performance of the algorithm, and therefore each instruction must be analyzed separately to analyze overall performance. However, there are some algorithm control structures which are present in every programming code and have a specific asymptotic analysis.

Some algorithm control structures are:

1. Sequencing
2. If-then-else
3. for loop
4. While loop

1. Sequencing: Suppose our algorithm consists of two parts, A and B. A takes time tA and B takes time tB for computation. The total computation time is tA + tB, according to the sequence rule. According to the maximum rule, this computation time is θ(max(tA, tB)).

Example: Suppose tA = O(n) and tB = θ(n²). Then the total computation time can be calculated as

Computation Time = tA + tB = θ(max(tA, tB)) = θ(max(O(n), θ(n²))) = θ(n²)

Complexity of for loop:

3. for loop: Suppose the outer loop executes N times and, every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times, and the total complexity of the two loops is O(N * M).

Consider the following loop:

for i ← 1 to n
{
    P(i)
}

If the computation time ti for P(i) varies as a function of i, then the total computation time for the loop is given not by a multiplication but by a sum, i.e. the loop takes total time

t(n) = Σ (i = 1 to n) ti

If the algorithm consists of nested "for" loops, such as

for i ← 1 to n
{
    for j ← 1 to n
    {
        P(i, j)
    }
}

then the total computation time is the double sum of the times for P(i, j).

Example: Consider the following "for" loop and calculate its total computation time:

for i ← 2 to n-1
{
    for j ← 3 to i
    {
        Sum ← Sum + A[i][j]
    }
}

Solution: The inner loop body executes (i - 2) times for each value of i, so the total computation time is

Σ (i = 2 to n-1) (i - 2) = 0 + 1 + ... + (n-3) = (n-2)(n-3)/2 = O(n²)

4. While loop: The simple technique for analyzing a while loop is to determine a function of the variables involved whose value decreases each time around. Secondly, for the loop to terminate, it is necessary that this value remain a positive integer. By keeping track of how many times the value of the function decreases, one can obtain the number of repetitions of the loop. The other approach for analyzing "while" loops is to treat them as recursive algorithms.

Algorithm (finding the largest element of DATA[1..N]):

1. [Initialize] Set K := 1, LOC := 1 and MAX := DATA[1]
2. Repeat steps 3 and 4 while K ≤ N
3.     If MAX < DATA[K], then: Set LOC := K and MAX := DATA[K]
4.     Set K := K + 1
   [End of step 2 loop]
5. Write: LOC, MAX
6. EXIT

Example: Show that the running time of the algorithm arrayMax, which computes the maximum element in an array of n integers, is O(n).

Solution:

arrayMax(A, n)
1. currentMax ← A[0]
2. for i ← 1 to n-1
3.     do if currentMax < A[i]
4.         then currentMax ← A[i]
5. return currentMax

The number of primitive operations t(n) executed by this algorithm is

at least: 2 + 1 + n + 4(n-1) + 1 = 5n
at most:  2 + 1 + n + 6(n-1) + 1 = 7n - 2

The best case T(n) = 5n occurs when A[0] is the maximum element. The worst case T(n) = 7n - 2 occurs when the elements are sorted in increasing order. We may, therefore, apply the big-oh definition with c = 7 and n0 = 1 and conclude that the running time of this algorithm is O(n).
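The arrayMax pseudocode above can be sketched in Python (an illustration added here): a single left-to-right scan with one comparison per element, hence O(n) time.

```python
def array_max(A):
    """Return the maximum element of A, scanning left to right (O(n))."""
    current_max = A[0]
    for i in range(1, len(A)):
        if current_max < A[i]:     # one comparison per remaining element
            current_max = A[i]
    return current_max

m = array_max([7, 2, 9, 4, 9, 1])
```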

Recurrence Relation

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence. For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence

T(n) = θ(1)              if n = 1
T(n) = 2T(n/2) + θ(n)    if n > 1

There are four methods for solving recurrences:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

Substitution Method:

The Substitution Method consists of two main steps:

1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the guess is correct.

Example 1: Solve the following equation by the Substitution Method.

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution: For T(n) = O(log n), we have to show that, for some constant c,

T(n) ≤ c log n

Put this in the given recurrence equation:

T(n) ≤ c log(n/2) + 1
     = c log n - c log 2 + 1
     ≤ c log n   for c ≥ 1

Thus T(n) = O(log n).

Example 2: Consider the recurrence

T(n) = 2T(n/2) + n,   n > 1

Find an asymptotic bound on T.

Solution: We guess the solution is O(n log n). Thus, for some constant c,

T(n) ≤ c n log n

Put this in the given recurrence equation:

T(n) ≤ 2c(n/2) log(n/2) + n
     = cn log n - cn log 2 + n
     = cn log n - n(c log 2 - 1)
     ≤ cn log n   for c ≥ 1

Thus T(n) = O(n log n).
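As a quick numerical sanity check of the guess in Example 2 (an illustration added here, not part of the original text), we can evaluate the recurrence directly, assuming T(1) = 1, and confirm that T(n) stays below c·n·log₂n for c = 2 at powers of two:

```python
import math

def T(n):
    """Evaluate T(n) = 2T(n/2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

c = 2
# the guessed bound T(n) <= c * n * log2(n) should hold for n >= 2
ok = all(T(n) <= c * n * math.log2(n) for n in (2, 4, 8, 16, 256, 1024))
```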

Iteration Methods

The Iteration Method means expanding the recurrence and expressing it as a summation of terms of n and the initial condition.

Example 1: Consider the recurrence

T(n) = 1          if n = 1
T(n) = 2T(n-1)    if n > 1

Solution:

T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2²T(n-2)
     = 2²[2T(n-3)] = 2³T(n-3)
     = 2³[2T(n-4)] = 2⁴T(n-4)    (Eq. 1)

Repeating the procedure i times:

T(n) = 2^i T(n-i)

Put n - i = 1, i.e. i = n - 1, in (Eq. 1):

T(n) = 2^(n-1) T(1)
     = 2^(n-1) · 1    {T(1) = 1 ..... given}
     = 2^(n-1)

Example 2: Consider the recurrence

T(n) = T(n-1) + 1   and   T(1) = θ(1)

Solution:

T(n) = T(n-1) + 1
     = [T(n-2) + 1] + 1 = T(n-2) + 2
     = [T(n-3) + 1] + 2 = T(n-3) + 3
     = T(n-k) + k

where k = n - 1, so T(n-k) = T(1) = θ(1). Hence

T(n) = θ(1) + (n-1) = θ(n)
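The closed form obtained by iteration in Example 1 can be checked against a direct recursive evaluation (a small sketch added here as an illustration):

```python
def T(n):
    """Evaluate T(n) = 2T(n-1) with T(1) = 1 by direct recursion."""
    return 1 if n == 1 else 2 * T(n - 1)

# the iteration method gives the closed form T(n) = 2^(n-1)
matches = all(T(n) == 2 ** (n - 1) for n in range(1, 15))
```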

Recursion Tree Method

1. The Recursion Tree Method is a pictorial representation of the iteration method, in the form of a tree whose nodes are expanded at each level.
2. In general, we consider the second term of the recurrence as the root.
3. It is useful when the divide & conquer technique is used.
4. It is sometimes difficult to come up with a good guess. In a recursion tree, each node represents the cost of a single subproblem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost of all levels of the recursion.
6. A recursion tree is best used to generate a good guess, which can then be verified by the Substitution Method.

Example 1: Consider T(n) = 2T(n/2) + n². We have to obtain the asymptotic bound using the recursion tree method.

Solution: The recursion tree for the above recurrence is shown in the figure.

Example 2: Consider the following recurrence: T(n) = 4T(n/2) + n. Obtain the asymptotic bound using the recursion tree method.

Solution: The recursion tree for the above recurrence is shown in the figure.

Example 3: Consider the following recurrence. Obtain the asymptotic bound using the recursion tree method.

Solution: The given recurrence has the recursion tree shown in the figure. When we add the values across the levels of the recursion tree, we get a value of n for every level. The longest path from the root to a leaf determines the height of the tree.
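For Example 1 above, the per-level costs form a geometric series: level i has 2^i subproblems of size n/2^i, contributing 2^i·(n/2^i)² = n²/2^i. The following sketch (an illustration added here, not part of the original text) sums the levels and shows the total stays below 2n², which suggests the guess T(n) = O(n²) for verification by substitution:

```python
def level_costs(n, levels):
    """Per-level costs of the tree for T(n) = 2T(n/2) + n^2:
    level i has 2^i nodes, each costing (n/2^i)^2."""
    return [(2 ** i) * (n / 2 ** i) ** 2 for i in range(levels)]

n = 1024
total = sum(level_costs(n, 10))  # 10 levels for n = 1024
bounded = total <= 2 * n * n     # geometric series sums to < 2n^2
```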

Master Method

The Master Method is used for solving recurrences of the form

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is a function, and T(n) is defined on the non-negative integers. In the analysis of a recursive algorithm, the constants and function take on the following significance:

n is the size of the problem.
a is the number of subproblems in the recursion.
n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
f(n) is the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.

It is not always possible to bound the function exactly as required, so we distinguish three cases which tell us what kind of bound we can apply to the function.

Master Theorem: It is possible to obtain an asymptotically tight bound in these three cases:

Case 1: If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then it follows that:

T(n) = Θ(n^(log_b a))

Example: T(n) = 8T(n/2) + 1000n². Apply the master theorem to it.

Solution: Compare T(n) = 8T(n/2) + 1000n² with T(n) = aT(n/b) + f(n):

a = 8, b = 2, f(n) = 1000n², log_b a = log₂8 = 3

Put all the values in f(n) = O(n^(log_b a - ε)):

1000n² = O(n^(3-ε))

If we choose ε = 1, we get 1000n² = O(n^(3-1)) = O(n²). Since this equation holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the conclusion:

T(n) = Θ(n^(log_b a))

Therefore: T(n) = Θ(n³)

Case 2: If it is true, for some constant k ≥ 0, that:

f(n) = Θ(n^(log_b a) · log^k n), then it follows that: T(n) = Θ(n^(log_b a) · log^(k+1) n)

Example: T(n) = 2T(n/2) + 10n. Solve the recurrence by using the master method.

Compare the given problem with T(n) = aT(n/b) + f(n):

a = 2, b = 2, k = 0, f(n) = 10n, log_b a = log₂2 = 1

Put all the values in f(n) = Θ(n^(log_b a) · log^k n); we get

10n = Θ(n¹) = Θ(n), which is true.

Therefore: T(n) = Θ(n^(log_b a) · log^(k+1) n) = Θ(n log n)

Case 3: If it is true that f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and it is also true that a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then:

T(n) = Θ(f(n))

Example: Solve the recurrence relation:

T(n) = 2T(n/2) + n²

Solution: Compare the given problem with T(n) = aT(n/b) + f(n):

a = 2, b = 2, f(n) = n², log_b a = log₂2 = 1

Put all the values in f(n) = Ω(n^(log_b a + ε)) ..... (Eq. 1)

If we insert all the values in (Eq. 1), we get n² = Ω(n^(1+ε)); put ε = 1, and the equality holds:

n² = Ω(n^(1+1)) = Ω(n²)

Now we also check the second condition:

2·(n/2)² = n²/2 ≤ c·n²

If we choose c = 1/2, this is true for all n ≥ 1. So it follows:

T(n) = Θ(f(n)) = Θ(n²)
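The three cases above can be sketched as a small classifier for recurrences whose driving function is a plain polynomial, f(n) = Θ(n^d). This is a simplified illustration added here, not part of the original text: it covers only polynomial f(n) (with k = 0 in case 2), where the case-3 regularity condition always holds.

```python
import math

def master_case(a, b, d):
    """Classify T(n) = aT(n/b) + Θ(n^d) under the master theorem.

    crit = log_b(a) is the critical exponent; returns the bound as a string.
    """
    crit = math.log(a, b)
    if abs(d - crit) < 1e-9:              # case 2 (k = 0): same growth rate
        return f"Θ(n^{d:g} log n)"
    if d < crit:                          # case 1: f grows slower than n^crit
        return f"Θ(n^{crit:g})"
    return f"Θ(n^{d:g})"                  # case 3: f dominates

bound1 = master_case(8, 2, 2)   # T(n) = 8T(n/2) + 1000n^2
bound2 = master_case(2, 2, 1)   # T(n) = 2T(n/2) + 10n
bound3 = master_case(2, 2, 2)   # T(n) = 2T(n/2) + n^2
```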

Bubble Sort

Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which means the list is sorted. This is the easiest method among all sorting algorithms.

BUBBLE SORT (A)
1. for i ← 1 to length[A]
2.     for k ← length[A] down to i+1
3.         if A[k] < A[k-1]
4.             exchange(A[k], A[k-1])

Analysis:

Input: n elements are given.
Output: the number of comparisons required to perform the sorting.

Logic: If we have n elements in bubble sort, then n-1 passes are required to obtain a sorted array.

In pass 1: n-1 comparisons are required
In pass 2: n-2 comparisons are required
In pass 3: n-3 comparisons are required
...
In pass n-1: 1 comparison is required

Total comparisons: T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n²)

Therefore the complexity is of order n².
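The pseudocode above can be sketched in Python as follows (an illustration added here; the early-exit flag reflects the "repeated until no swaps are needed" description):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for k in range(n - 1, i, -1):     # bubble the smallest item toward the front
            if a[k] < a[k - 1]:
                a[k], a[k - 1] = a[k - 1], a[k]
                swapped = True
        if not swapped:                   # no swaps in this pass: already sorted
            break
    return a

result = bubble_sort([5, 1, 4, 2, 8])
```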