Commit 3093d04

Author: Flor Silvestre

First commit
1 parent 6b153c1 commit 3093d04

35 files changed: +580 -0 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions
```
@@ -102,3 +102,4 @@ venv.bak/

 # mypy
 .mypy_cache/
+local/
```
.vscode/settings.json

Lines changed: 3 additions & 0 deletions
```
@@ -0,0 +1,3 @@
+{
+  "jira-plugin.workingProject": ""
+}
```

algorithms/algorithms.md

Lines changed: 147 additions & 0 deletions
@@ -0,0 +1,147 @@
# Algorithms

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to produce the desired output.
## Categories

From the data structure point of view, the following are some important categories of algorithms:

**Search** − Algorithm to search for an item in a data structure.

**Sort** − Algorithm to sort items in a certain order.

**Insert** − Algorithm to insert an item into a data structure.

**Update** − Algorithm to update an existing item in a data structure.

**Delete** − Algorithm to delete an existing item from a data structure.
## Algorithm Complexity

Suppose X is an algorithm and n is the size of the input data. The time and space used by the algorithm X are the two main factors that decide the efficiency of X.

**Time Factor** − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.

**Space Factor** − Space is measured by counting the maximum memory space required by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms of n, the size of the input data.
### Space Complexity

The space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following two components:

* A fixed part: the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants, program size, etc.

* A variable part: the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.

The space complexity S(P) of any algorithm P is S(P) = C + S(I), where C is the fixed part and S(I) is the variable part of the algorithm, which depends on instance characteristic I. The following simple example illustrates the concept:

    Algorithm: SUM(A, B)
    Step 1 - START
    Step 2 - C ← A + B + 10
    Step 3 - Stop

Here we have three variables (A, B, and C) and one constant. Hence S(P) = 1 + 3. Now, the actual space depends on the data types of the given variables and constants, and the total is multiplied accordingly.
### Time Complexity

The time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided each step consumes constant time.

For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that T(n) grows linearly as the input size increases.
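To make the linear growth concrete, here is a minimal Python sketch (the function `linear_sum` is illustrative, not part of the original text): summing n values performs one constant-time addition per element, so the step count is exactly n.

```python
def linear_sum(values):
    """Sum a sequence while counting the constant-time steps taken."""
    total = 0
    steps = 0
    for v in values:
        total += v   # one constant-time addition per element
        steps += 1
    return total, steps

print(linear_sum(range(10)))    # (45, 10)       - n = 10 steps
print(linear_sum(range(1000)))  # (499500, 1000) - steps grow linearly with n
```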
## Big-O Notation

n: size of the inputs

![Big O](assets/big_o_table.png)
![Big O](assets/big-o-notation.jpg)
![Big O](assets/bigochart.gif)
---
59+
60+
## Greedy Algorithms
61+
62+
An algorithm is designed to achieve optimum solution for a given problem. In greedy algorithm approach, decisions are made from the given solution domain. As being greedy, the closest solution that seems to provide an optimum solution is chosen.
63+
64+
Greedy algorithms try to find a localized optimum solution, which may eventually lead to globally optimized solutions. However, generally greedy algorithms do not provide globally optimized solutions.
65+
66+
### Counting Coins

The problem is to count up to a desired value by choosing the fewest possible coins, and the greedy approach forces the algorithm to pick the largest possible coin first. If we are provided coins of ₹ 1, 2, 5 and 10 and we are asked to count ₹ 18, the greedy procedure will be:

1. Select one ₹ 10 coin; the remaining count is 8.
2. Then select one ₹ 5 coin; the remaining count is 3.
3. Then select one ₹ 2 coin; the remaining count is 1.
4. Finally, the selection of one ₹ 1 coin solves the problem.

This seems to work fine: for this count we need to pick only 4 coins. But if we change the problem slightly, the same approach may not produce the optimum result.

For a currency system with coins of value 1, 7 and 10, counting coins for the value 18 will be absolutely optimum, but for a count like 15 it may use more coins than necessary. For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, six coins in total, whereas the same problem could be solved by using only three coins (7 + 7 + 1).

Hence, we may conclude that the greedy approach picks an immediately optimized solution and may fail where global optimization is a major concern.
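A minimal sketch of this greedy procedure in Python (the helper name `greedy_count` is illustrative):

```python
def greedy_count(coins, target):
    """Repeatedly pick the largest coin that still fits into the remaining count."""
    picked = []
    remaining = target
    for coin in sorted(coins, reverse=True):
        while remaining >= coin:
            picked.append(coin)
            remaining -= coin
    return picked

print(greedy_count([1, 2, 5, 10], 18))  # [10, 5, 2, 1] - 4 coins, optimal here
print(greedy_count([1, 7, 10], 15))     # [10, 1, 1, 1, 1, 1] - 6 coins; 7 + 7 + 1 uses only 3
```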
### Examples

Many networking algorithms use the greedy approach. Here is a list of a few of them:

* Travelling Salesman Problem
* Prim's Minimal Spanning Tree Algorithm
* Kruskal's Minimal Spanning Tree Algorithm
* Dijkstra's Shortest Path Algorithm
* Graph - Map Coloring
* Graph - Vertex Cover
* Knapsack Problem
* Job Scheduling Problem
## Divide and Conquer

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those "atomic", smallest-possible sub-problems are solved, and the solutions of all sub-problems are finally merged to obtain the solution of the original problem.

Broadly, we can understand the divide-and-conquer approach as a three-step process.
### Divide/Break

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach, dividing the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.

### Conquer/Solve

This step receives many smaller sub-problems to be solved. Generally, at this level, the problems are considered "solved" on their own.

### Merge/Combine

When the smaller sub-problems are solved, this stage recursively combines them until they formulate the solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.
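As one concrete instance of the three steps, a minimal merge sort sketch in Python (divide into halves, conquer atomic sub-problems, merge sorted halves):

```python
def merge_sort(items):
    # Divide/Break: split until the sub-problem is atomic (0 or 1 element)
    if len(items) <= 1:
        return items  # Conquer/Solve: an atomic sub-problem is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge/Combine: interleave two sorted halves into one sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```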
### Examples

The following computer algorithms are based on the divide-and-conquer programming approach:

* Merge Sort
* Quick Sort
* Binary Search
* Strassen's Matrix Multiplication
* Closest Pair (points)
## Dynamic Programming

This approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, the results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.

Dynamic programming is used for problems that can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic algorithm will try to examine the results of previously solved sub-problems. The solutions of sub-problems are combined to achieve the best solution.

So we can say that:
* The problem should be divisible into smaller, overlapping sub-problems.

* An optimum solution can be achieved by using the optimum solutions of smaller sub-problems.

* Dynamic algorithms use memoization.
### Comparison

In contrast to greedy algorithms, which address local optimization, dynamic algorithms aim at an overall optimization of the problem.

In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the outputs of already solved sub-problems.
### Examples

The following computer problems can be solved using the dynamic programming approach:

* Fibonacci number series
* Knapsack problem
* Tower of Hanoi
* All-pairs shortest path by Floyd-Warshall
* Shortest path by Dijkstra
* Project scheduling
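A minimal memoization sketch in Python for the first problem in the list, the Fibonacci series (using `functools.lru_cache` as the memoization mechanism; any cache would do):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: remember outputs of already solved sub-problems
def fib(n):
    if n < 2:
        return n
    # Overlapping sub-problems: fib(n - 1) and fib(n - 2) share cached results
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```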
algorithms/assets/big-o-notation.jpg

71.5 KB (binary file)

algorithms/assets/big-o.png

337 KB (binary file)

algorithms/assets/big_o_table.png

9.11 KB (binary file)

algorithms/assets/bigochart.gif

26.1 KB (binary file)

data-structures/arrays.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
```python
from array import *


"""
Typecode  Value
b         Represents signed integer of size 1 byte
B         Represents unsigned integer of size 1 byte
c         Represents character of size 1 byte (Python 2 only)
i         Represents signed integer of size 2 bytes
I         Represents unsigned integer of size 2 bytes
f         Represents floating point of size 4 bytes
d         Represents floating point of size 8 bytes
"""
arr = array('i', [10, 20, 30, 40])  # the typecode is positional, not a keyword argument


# Array iteration
for x in arr:
    print(x)  # x is the element itself, not an index

# Insert value 60 at index 1
arr.insert(1, 60)

# Remove first occurrence of a value
arr.remove(20)

# Search for index of first occurrence of a value
arr.index(30)

# Update a value
arr[2] = 20
```
