Al-Balqa Applied University
Prof. Oqeili Saleh Lectures
Prince Abdullah bin Ghazi Faculty
Of Communication and Information Technology
Department of Computer Science
Efficiency of Algorithms
Space Efficiency
For large quantities of data, the space/memory used should be analyzed.
The major components of space are:
• Instruction space
• Data space
• Run-time stack space
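The difference between constant and growing run-time stack space can be sketched in Python. This is an illustrative example, not from the lecture: a recursive sum pushes one stack frame per element, while the iterative version uses a fixed number of locals.

```python
def sum_recursive(a):
    """Recursive sum: each call adds a frame to the run-time stack,
    so stack space grows linearly with len(a)."""
    if not a:
        return 0
    return a[0] + sum_recursive(a[1:])

def sum_iterative(a):
    """Iterative sum: a fixed set of local variables,
    so stack space is constant regardless of len(a)."""
    total = 0
    for x in a:
        total += x
    return total

data = list(range(100))
print(sum_recursive(data) == sum_iterative(data))  # True, both 4950
```

Both compute the same result, but the recursive version would exhaust the stack for very large inputs (Python raises RecursionError near its recursion limit).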
Analysis of Time Complexity
Running time depends upon:
Compiler used
Read/write speed of memory and disk
Machine architecture (32-bit vs. 64-bit)
Input size (rate of growth of time)
When analyzing for time complexity we can take two approaches:
1. Estimation of running time – Random Access Machine (a hypothetical model machine)
By analysis of the code we can do:
a. Operation counts - select operation(s) that are executed most frequently and determine
how many times each is done.
b. Step counts - determine the total number of steps, possibly lines of code, executed by
the program.
2. Order of magnitude/asymptotic categorization - gives a general idea of performance.
Random-Access Machine (RAM).
• Memory consists of an infinite array of cells.
• Each cell can store at most one data item (bit, byte, a record, ..).
• Any memory cell can be accessed in unit time.
• Instructions are executed sequentially.
• All basic instructions take unit time:
• Load/Store
• Arithmetic (e.g. +, −, ∗, /)
• Logic (e.g. >)
• Running time of an algorithm is the number of RAM instructions it
executes
RAM Model
• Single processor
• Sequential execution
• 1 time unit per operation (assignment, +, −, *, /, logical operations)
list_sum(A, n)                   Cost    # of times
{
    sum = 0                       1        1
    for i = 0 to n-1              2        n+1
        sum = sum + A[i]          2        n
    return sum                    1        1
}
T(list_sum) = 1·1 + 2·(n+1) + 2·n + 1·1 = 4n + 4
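The cost table above can be spot-checked with an instrumented Python sketch (an illustration, not part of the lecture) that counts unit operations using the same per-line costs and confirms the total is 4n + 4.

```python
def list_sum_counted(A):
    """Sum A while counting RAM-model unit operations,
    using the per-line costs from the table above."""
    total = 0
    ops = 1                  # sum = 0: one assignment
    for x in A:
        ops += 2             # loop test + increment (runs n times)
        total += x
        ops += 2             # addition + assignment in the body
    ops += 2                 # final loop test that exits the loop
    ops += 1                 # return sum
    return total, ops

total, ops = list_sum_counted([3, 1, 4, 1, 5])
print(total, ops)            # prints 14 24, and 24 = 4*5 + 4
```

For any input of length n the counter comes out to exactly 4n + 4, matching T(list_sum).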
The RAM model is not realistic:
• Memory is finite (even though we often imagine it to be infinite
when we program).
• Not all memory accesses take the same time (cache, main
memory, disk).
• Not all arithmetic operations take the same time (e.g. multiplication is more expensive than addition).
Time Efficiency – Asymptotic Notation
In asymptotic analysis, the performance of an algorithm is
evaluated in terms of the input size (we don't measure the actual
running time). We determine how the time (or space)
taken by an algorithm grows with the input size.
Three cases should be considered:
1. Worst case
2. Average case
3. Best case
Usually the worst case is considered, since it gives an upper
bound on the performance that can be expected.
The best case is rarely considered.
The average case is usually harder to determine, as it is more
data-dependent, and it is often the same as the worst case.
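The three cases can be seen concretely in linear search, a standard illustration (not taken from the lecture): the best case finds the target at the first position, while the worst case scans all n elements.

```python
def linear_search(a, target):
    """Return (index, comparisons) for target in a, or (-1, comparisons)."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

a = [7, 2, 9, 4, 5]
print(linear_search(a, 7))   # best case: (0, 1), one comparison
print(linear_search(a, 5))   # target at the end: (4, 5)
print(linear_search(a, 8))   # worst case, not found: (-1, 5), n comparisons
```

The worst case (n comparisons) is what O(n) for linear search describes; the average case depends on where targets tend to appear in the data.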
Big-O Notation
Given f(n) and g(n), functions defined for positive integers, then

f(n) = O(g(n))

if there exist positive constants c and n0 such that

f(n) ≤ c·g(n) for all n ≥ n0

The Big-O notation defines an upper bound of an
algorithm; it bounds a function only from above.
Example 1: f(n) = 4n + 12 = O(n)
Example 2: f(n) = 3n² + 3n + 10 = O(n²)
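The definition can be checked numerically for the two examples above. The sketch below (an illustration, with witness constants chosen here, not given in the lecture) verifies f(n) ≤ c·g(n) over a range of n; this is evidence for a choice of c and n0, not a proof.

```python
def is_big_o_witness(f, g, c, n0, n_max=10000):
    """Spot-check f(n) <= c*g(n) for n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example 1: f(n) = 4n + 12 is O(n) with witnesses c = 16, n0 = 1,
# since 4n + 12 <= 16n whenever n >= 1.
print(is_big_o_witness(lambda n: 4*n + 12, lambda n: n, c=16, n0=1))          # True

# Example 2: f(n) = 3n^2 + 3n + 10 is O(n^2) with c = 16, n0 = 1,
# since 3n + 10 <= 13n^2 whenever n >= 1.
print(is_big_o_witness(lambda n: 3*n*n + 3*n + 10, lambda n: n*n, c=16, n0=1))  # True
```

Any larger c (or larger n0) also works; the definition only asks for one valid pair.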
Function    Big-O        Name
1           O(1)         constant
log n       O(log n)     logarithmic
n           O(n)         linear
n log n     O(n log n)   n log n
n²          O(n²)        quadratic
n³          O(n³)        cubic
2ⁿ          O(2ⁿ)        exponential
n!          O(n!)        factorial
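Evaluating the functions in the table at a couple of input sizes (an illustrative sketch, not from the lecture) shows how quickly the classes separate.

```python
import math

# Each growth function from the table, evaluated at n = 10 and n = 20.
funcs = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
    ("n!",      lambda n: math.factorial(n)),
]

for name, f in funcs:
    print(f"{name:8} n=10: {f(10):>15,.0f}   n=20: {f(20):>22,.0f}")
```

Doubling n from 10 to 20 barely moves the low rows, squares/cubes the polynomial rows, and makes the exponential and factorial rows astronomically larger.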
Steps for finding the Big-O runtime:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in
terms of n.
3. Eliminate everything except the highest-order term.
4. Remove all constant factors.
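The four steps can be walked through on a concrete operation count, here f(n) = 3n² + 3n + 10 (the second Big-O example above); the tiny helper below is an illustration, not part of the lecture.

```python
# Steps applied to an algorithm performing f(n) = 3n^2 + 3n + 10 operations:
#   1. n is the input size.
#   2. Maximum number of operations: 3n^2 + 3n + 10.
#   3. Keep only the highest-order term: 3n^2.
#   4. Drop the constant factor: n^2, so the runtime is O(n^2).

def dominant_exponent(terms):
    """terms maps exponent -> coefficient of a polynomial count.
    Return the exponent of the highest-order term (steps 3 and 4)."""
    return max(e for e, c in terms.items() if c != 0)

# 3n^2 + 3n + 10  ->  {2: 3, 1: 3, 0: 10}  ->  exponent 2, i.e. O(n^2)
print(dominant_exponent({2: 3, 1: 3, 0: 10}))  # 2
```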
Ω Notation
Big O notation provides an asymptotic upper bound on a function,
Ω notation provides an asymptotic lower bound.
Ω notation is useful when we need a lower bound on the time complexity of an
algorithm.
Ω (g(n)) = {f(n): there exist positive constants c
and n0 such that
0 <= c*g(n) <= f(n) for all n >= n0}.
Since the best-case performance of an algorithm
is generally not useful, the Omega notation is the
least used of the three notations.
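The Ω definition can be spot-checked the same way as Big-O; the witness constants below are chosen for this illustration, not given in the lecture.

```python
def is_omega_witness(f, g, c, n0, n_max=10000):
    """Spot-check 0 <= c*g(n) <= f(n) for n0 <= n <= n_max."""
    return all(0 <= c * g(n) <= f(n) for n in range(n0, n_max + 1))

# f(n) = 3n^2 + 3n + 10 is Omega(n^2) with c = 3, n0 = 1,
# since 3n^2 <= 3n^2 + 3n + 10 for every positive n.
print(is_omega_witness(lambda n: 3*n*n + 3*n + 10, lambda n: n*n, c=3, n0=1))  # True
```

Note the inequality points the other way than in Big-O: g, scaled by c, must stay at or below f.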
Θ Notation
The Theta notation bounds a function from above and
below, so it defines the exact asymptotic behavior.
Θ(g(n)) = {f(n): there exist positive constants
c1, c2 and n0 such that:
0 <= c1*g(n) <= f(n) <= c2*g(n)
for all n >= n0}
The above definition means that if f(n) is Θ(g(n)),
then f(n) always lies between c1·g(n) and
c2·g(n) for large values of n (n ≥ n0).
A simple way to get the Theta notation of an expression is to
drop the low-order terms and ignore the leading constant.

2n³ + 7n² + 2000 = Θ(n³)

Dropping lower-order terms is always OK, since there will always be an n0
after which n³ dominates n², regardless of the constants.
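The two-sided Θ bound for 2n³ + 7n² + 2000 can also be checked numerically; the constants c1, c2, n0 below are chosen for this sketch, not given in the lecture.

```python
def is_theta_witness(f, g, c1, c2, n0, n_max=10000):
    """Spot-check c1*g(n) <= f(n) <= c2*g(n) for n0 <= n <= n_max."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n)
               for n in range(n0, n_max + 1))

# 2n^3 + 7n^2 + 2000 is Theta(n^3) with c1 = 2, c2 = 11, n0 = 10:
# the lower bound 2n^3 holds for all n, and 7n^2 + 2000 <= 9n^3 once n >= 10.
f = lambda n: 2 * n**3 + 7 * n**2 + 2000
g = lambda n: n**3
print(is_theta_witness(f, g, c1=2, c2=11, n0=10))  # True
```

A single pair of constants that squeezes f between c1·g and c2·g is all the definition requires.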
Function    Big-O        Name
1           O(1)         constant
log n       O(log n)     logarithmic
√n          O(√n)        square root
n           O(n)         linear
n log n     O(n log n)   n log n
n²          O(n²)        quadratic
n³          O(n³)        cubic
nᶜ          O(nᶜ)        polynomial
2ⁿ          O(2ⁿ)        exponential
n!          O(n!)        factorial