LEC-2: ALGORITHM EFFICIENCY & COMPLEXITY
WHAT IS AN ALGORITHM?
• An algorithm is a set of instructions designed to perform a specific task.
• A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.
• An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
• An algorithm is thus a sequence of computational steps that transform the input into the output.
HOW TO WRITE AN ALGORITHM
• Indexing starts at 1.
• No variable declarations.
• No semicolons.
• Assignment statement: a ← 3
• Comparison: if a = 3
• Repetition (loop) structures: for a ← 1 to N or while a ← 1 to N
• When a statement is continued from one line to another within a structure, indent the continued line(s).
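The conventions above can be illustrated with a short pseudocode sketch (the algorithm and its name are an example of my own, not from the slides):

```text
Algorithm SumList(A, N)
// Input: list A of N numbers; Output: their sum
1. s ← 0
2. for i ← 1 to N
3.     s ← s + A[i]
4. return s
```

Note the 1-based index, the ← assignment, the absence of declarations and semicolons, and the indentation of the loop body.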
EFFICIENCY (ALGORITHMIC COMPLEXITY)
• Properties of algorithms: correctness, determinism, efficiency.
• Algorithmic complexity: how many steps the algorithm takes on a given input instance, found by simply executing it on that input.
• Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. Efficiency of an algorithm can be measured in terms of:
  − Execution time (time complexity)
  − The amount of memory required (space complexity)
EFFICIENCY
Which measure is more important? Time complexity comparisons are more interesting than space complexity comparisons.
Time complexity: a measure of the amount of time required to execute an algorithm.
Factors that should not affect time complexity analysis:
• The programming language chosen to implement the algorithm
• The quality of the compiler
• The speed of the computer on which the algorithm is to be executed
(TIME) EFFICIENCY OF AN ALGORITHM
• Worst-case efficiency: the maximum number of steps the algorithm can take for any collection of data values.
• Best-case efficiency: the minimum number of steps the algorithm can take for any collection of data values.
• Average-case efficiency: the efficiency averaged over all possible inputs.
Example: searching for an element in a list
• Best case: the item is at the beginning
• Worst case: the item is at the end
• Average case: somewhere in between
If the input has size n, efficiency will be a function of n.
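The search example can be made concrete with a small Java sketch that counts comparisons (the class and method names are my own, not from the slides):

```java
public class Search {
    // Linear search: returns the index of key in a, or -1 if absent.
    // count[0] accumulates the number of element comparisons made.
    static int find(int[] a, int key, int[] count) {
        for (int i = 0; i < a.length; i++) {
            count[0]++;
            if (a[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 4};
        int[] c = {0};
        find(a, 7, c);                 // best case: item at the beginning
        System.out.println(c[0]);      // 1 comparison
        c[0] = 0;
        find(a, 4, c);                 // worst case: item at the end
        System.out.println(c[0]);      // n = 4 comparisons
    }
}
```

For a list of size n, the comparison count ranges from 1 (best case) to n (worst case), which is exactly the "function of n" the slide refers to.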
MEASURING EFFICIENCY
Simplified analysis can be based on:
• Number of arithmetic operations performed
• Number of comparisons made
• Number of times through a critical loop
• Number of array elements accessed
• etc.
Example: three algorithms for computing the sum 1 + 2 + . . . + n for an integer n > 0.
MEASURING EFFICIENCY Java code for algorithms
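The slide's code itself was not preserved in this text, but a common version of three such algorithms looks like the following Java sketch (a constant-operation formula, a linear loop, and a quadratic double loop; the names sumA/sumB/sumC are my own):

```java
public class SumThree {
    // Algorithm A: closed-form formula n(n+1)/2 -- constant number of operations
    static int sumA(int n) {
        return n * (n + 1) / 2;
    }

    // Algorithm B: single loop -- about n additions
    static int sumB(int n) {
        int s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    // Algorithm C: adds 1 repeatedly, i times for each i -- about n^2/2 additions
    static int sumC(int n) {
        int s = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++) s += 1;
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumA(100)); // 5050
        System.out.println(sumB(100)); // 5050
        System.out.println(sumC(100)); // 5050
    }
}
```

All three produce the same answer; they differ only in how the number of basic operations grows with n, which is the point of the comparison.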
MEASURING EFFICIENCY The number of basic operations required by the algorithms
ANALYSIS OF SUM (2): A Simple Example

// Input: int A[N], array of N integers
// Output: Sum of all numbers in array A
int Sum(int A[], int N)
{
1    int s = 0;
2    for (int i = 0;
3         i < N;
4         i++)
5    {
6        s = s + A[i];
7    }
8    return s;
}

Lines 1, 2, 8: execute once.
Lines 3, 4, 5, 6, 7: execute once per iteration of the for loop, N iterations.
Total: 5N + 3, so the complexity function of the algorithm is f(N) = 5N + 3.
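A runnable Java version of the Sum routine analyzed above (the class name is my own):

```java
public class SumArray {
    // Java version of the C-style Sum(A, N) above:
    // the initialization and return run once; the loop test,
    // increment, and body run once per iteration, N iterations.
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i < a.length; i++) {
            s = s + a[i];
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{2, 4, 6})); // 12
    }
}
```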
Asymptotic Notation
− The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, …}. Such notations are convenient for describing the worst-case running-time function T(n), which usually is defined only on integer input sizes.
− We will use asymptotic notation primarily to describe the running times of algorithms.
Order of growth
− The running time of an algorithm increases with the size of the input, in the limit as the size of the input increases without bound.
Growth of a function
− How quickly the function grows as we increase the input size (the value of n).
Big “O” Notation
Definition: a function f(n) is O(g(n)) if there exist constants c and n0 such that for all n >= n0: f(n) <= c · g(n).
− The notation is often confusing: f = O(g) is read "f is big-oh of g."
− Generally, when we see a statement of the form f(n) = O(g(n)):
  − f(n) is the formula that tells us exactly how many operations the function/algorithm in question performs when the problem size is n.
  − g(n) acts as an upper bound for f(n): within a constant factor, the number of operations required by the function is no worse than g(n).
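As a worked instance of the definition, take the complexity function f(N) = 5N + 3 from the Sum analysis above:

```latex
f(n) = 5n + 3 \le 5n + n = 6n \quad \text{for all } n \ge 3
```

So choosing c = 6 and n0 = 3 satisfies f(n) <= c · g(n) with g(n) = n, and therefore f(n) = O(n).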
Big “O” Notation Why is this useful? –We want out algorithms to scalable. Often, we write program and test them on relatively small inputs. Yet, we expect a user to run our program with larger inputs. Running-time analysis helps us predict how efficient our program will be in the `real world'.
Big “Ω” Notation
Definition: a function f(n) is Ω(g(n)) if there exist constants c and n0 such that for all n >= n0: f(n) >= c · g(n).
− The notation is often confusing: f = Ω(g) is read "f is big-omega of g."
− Generally, when we see a statement of the form f(n) = Ω(g(n)), g(n) acts as a lower bound for f(n): within a constant factor, the number of operations required is at least g(n).
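The same complexity function f(N) = 5N + 3 from the Sum analysis also satisfies the Ω definition:

```latex
f(n) = 5n + 3 \ge 5n \quad \text{for all } n \ge 1
```

So choosing c = 5 and n0 = 1 satisfies f(n) >= c · g(n) with g(n) = n, and therefore f(n) = Ω(n).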
Big “Θ” Notation
Definition: a function f(n) is Θ(g(n)) if there exist constants c1, c2 and n0 such that for all n >= n0: c1 · g(n) <= f(n) <= c2 · g(n); that is, f(n) is both O(g(n)) and Ω(g(n)).

Editor's Notes
• #3: "computational procedure": 1. A method of computing. 2. The act or process of computing. 3. The result of computing.