# Table of Contents

1. [Root Finding Methods](#org97f8dc1)
    1. [Newton’s method](#org4ec5a5a)
    2. [Fixed point method](#orgd92eb51)
    3. [Secant method](#org5e86b54)
2. [Interpolation techniques](#org7879a30)
    1. [Hermite Interpolation](#org01982a3)
    2. [Lagrange Interpolation](#org1020c9c)
    3. [Newton’s Interpolation](#orgd08b2ee)
3. [Integration methods](#orgf7b000b)
    1. [Euler Method](#orge64619c)
    2. [Newton–Cotes Method](#orgb51f88e)
    3. [Predictor–Corrector Method](#org2f8adfb)
    4. [Trapezoidal method](#org4dbe660)


<a id="org97f8dc1"></a>

# Root Finding Methods


<a id="org4ec5a5a"></a>

## [Newton’s method](https://en.wikipedia.org/wiki/Newton%27s_method)

Newton’s method (also known as the Newton–Raphson method) produces successively better approximations to the roots (or zeroes) of a real-valued function. Starting from an initial guess x0, the process is repeated as $$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$
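
For illustration, a minimal Python sketch of the iteration (the function name, tolerance, and sample polynomial below are illustrative and not taken from this repository):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| is below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

# Example: root of x^2 - 2 starting from x0 = 1  ->  about 1.41421356
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```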


<a id="orgd92eb51"></a>

## [Fixed point method](https://en.wikipedia.org/wiki/Fixed-point_iteration)

Fixed-point iteration is a method of computing fixed points of iterated functions. More specifically, given a function f defined on the real numbers with real values and a point x0 in the domain of f, the fixed-point iteration is
$$ x_{n+1}=f(x_{n}),\,n=0,1,2,\dots$$
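
A minimal Python sketch of the iteration, stopping when successive iterates agree to a tolerance (the names and the example map g are illustrative):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Repeat x_{n+1} = g(x_n) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: g(x) = cos(x) has a fixed point near 0.739085 (the Dottie number)
print(fixed_point(math.cos, 1.0))
```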


<a id="org5e86b54"></a>

## [Secant method](https://en.wikipedia.org/wiki/Secant_method)

The secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. It can be thought of as a finite-difference approximation of Newton’s method.
$$ x_{n}=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}={\frac {x_{n-2}f(x_{n-1})-x_{n-1}f(x_{n-2})}{f(x_{n-1})-f(x_{n-2})}}. $$
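
A minimal Python sketch of the update rule above, starting from two initial guesses (the helper name and the sample cubic are illustrative):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Replace the derivative in Newton's method with a finite difference."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # secant line is horizontal; stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of x^3 - x - 2, near 1.5214
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```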


<a id="org7879a30"></a>

# Interpolation techniques


<a id="org01982a3"></a>

## Hermite Interpolation

Hermite interpolation is a method of interpolating data points as a polynomial function that matches not only the function values but also the derivative values at the given nodes. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences.
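
One common construction uses a divided-difference table with each node repeated twice; the repeated entries take the derivative value in place of an undefined 0/0 quotient. A Python sketch of that construction (names and sample data are illustrative):

```python
def hermite_coefficients(x, y, dy):
    """Newton-form coefficients of the Hermite polynomial via divided differences."""
    n = len(x)
    z = [x[i // 2] for i in range(2 * n)]          # each node doubled
    q = [[0.0] * (2 * n) for _ in range(2 * n)]    # divided-difference table
    for i in range(n):
        q[2 * i][0] = q[2 * i + 1][0] = y[i]
        q[2 * i + 1][1] = dy[i]                    # f[z_{2i}, z_{2i+1}] = f'(x_i)
        if i > 0:
            q[2 * i][1] = (q[2 * i][0] - q[2 * i - 1][0]) / (z[2 * i] - z[2 * i - 1])
    for j in range(2, 2 * n):
        for i in range(j, 2 * n):
            q[i][j] = (q[i][j - 1] - q[i - 1][j - 1]) / (z[i] - z[i - j])
    return z, [q[i][i] for i in range(2 * n)]

def hermite_eval(z, coeffs, t):
    """Evaluate the Newton-form polynomial sum_k c_k * prod_{m<k}(t - z_m)."""
    result, basis = 0.0, 1.0
    for c, zk in zip(coeffs, z):
        result += c * basis
        basis *= (t - zk)
    return result

# Example: values and derivatives of f(x) = x^3 at x = 0, 1, 2 reproduce x^3 exactly
z, c = hermite_coefficients([0.0, 1.0, 2.0], [0.0, 1.0, 8.0], [0.0, 3.0, 12.0])
print(hermite_eval(z, c, 1.5))   # 3.375 = 1.5**3
```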


<a id="org1020c9c"></a>

## Lagrange Interpolation

Lagrange polynomials are used for polynomial interpolation: the interpolating polynomial is written as a linear combination of basis polynomials, each of which equals 1 at one node and 0 at all the others. See [Wikipedia](https://en.wikipedia.org/wiki/Lagrange_polynomial).
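
A direct Python sketch of evaluating the Lagrange form at a point (names and the sample points are illustrative):

```python
def lagrange_eval(xs, ys, t):
    """L(t) = sum_i y_i * prod_{j != i} (t - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (t - xj) / (xi - xj)
        total += yi * basis
    return total

# Example: the parabola through (0, 0), (1, 1), (2, 4) is y = x^2
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```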


<a id="orgd08b2ee"></a>

## Newton’s Interpolation

Newton’s divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions. Divided differences is a recursive division process, and the method can be used to calculate the coefficients of the interpolation polynomial in Newton form.
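
A compact Python sketch that builds the divided-difference coefficients in place and evaluates the Newton form with nested multiplication (names and sample data are illustrative):

```python
def divided_differences(xs, ys):
    """Return the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, t):
    """Evaluate the Newton-form polynomial by Horner-like nesting."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (t - xs[k]) + coeffs[k]
    return result

# Example: the same parabola through (0, 0), (1, 1), (2, 4)
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(newton_eval(xs, divided_differences(xs, ys), 1.5))  # 2.25
```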


<a id="orgf7b000b"></a>

# Integration methods


<a id="orge64619c"></a>

## Euler Method

The Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for the numerical integration of ordinary differential equations and is the simplest Runge–Kutta method.
$$ y_{n+1} = y_{n} + h f(t_{n} , y_{n}) $$
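
A minimal Python sketch of the forward Euler step with a fixed step size (names and the test equation are illustrative):

```python
def euler(f, t0, y0, h, n_steps):
    """Advance y' = f(t, y) from (t0, y0) by n_steps steps of size h."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return t, y

# Example: y' = y with y(0) = 1; y(1) approaches e ≈ 2.71828 as h shrinks
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))
```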


<a id="orgb51f88e"></a>

## Newton–Cotes Method

Newton–Cotes formulae, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulae for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes.
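
As one concrete member of the family, a Python sketch of the composite Simpson’s rule (the closed 3-point Newton–Cotes rule); the helper name and sample integrand are illustrative:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule over [a, b] with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# Example: the integral of sin(x) over [0, pi] is exactly 2
print(composite_simpson(math.sin, 0.0, math.pi, 10))  # ≈ 2.00011
```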


<a id="org2f8adfb"></a>

## Predictor–Corrector Method

Predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations, that is, to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps (a minimal sketch of the pattern follows the list):

1. The initial *“prediction”* step starts from a function fitted to the function values and derivative values at a preceding set of points and extrapolates (“anticipates”) this function’s value at a subsequent, new point.
2. The next, *“corrector”* step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function’s value at the same subsequent point.
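
One simple instance of this pattern is Heun’s method: an explicit Euler predictor followed by a trapezoidal corrector. The pairing below is chosen purely for illustration and may differ from the repository’s implementation:

```python
def heun(f, t0, y0, h, n_steps):
    """Predict with explicit Euler, then correct with the trapezoidal rule."""
    t, y = t0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(t, y)                        # predictor step
        y = y + h / 2 * (f(t, y) + f(t + h, y_pred))    # corrector step
        t = t + h
    return t, y

# Example: y' = y with y(0) = 1; noticeably closer to e at t = 1 than plain Euler
print(heun(lambda t, y: y, 0.0, 1.0, 0.1, 10))  # ≈ 2.7141
```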


<a id="org4dbe660"></a>

## Trapezoidal method
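
The trapezoidal rule approximates a definite integral by summing the areas of trapezoids under the graph of the integrand on equally spaced subintervals:
$$ \int_{a}^{b} f(x)\,dx \approx \frac{h}{2}\left[ f(x_{0}) + 2\sum_{k=1}^{n-1} f(x_{k}) + f(x_{n}) \right], \quad h = \frac{b-a}{n} $$

A minimal Python sketch of the composite rule, assuming the quadrature form of the method (the same idea also serves as an implicit ODE scheme); the helper name and sample integrand are illustrative:

```python
def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n equal subintervals."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

# Example: the integral of x^2 over [0, 1] is 1/3
print(composite_trapezoid(lambda x: x * x, 0.0, 1.0, 100))  # ≈ 0.333350
```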