Commit ffda3da (parent 9126479): 3 files changed, +115 −368 lines.

README.md (114 additions, 0 deletions)
```
title: Numerical Analysis
author: Jishnu
```

# Table of Contents

1. [Root Finding Methods](#org770afdb)
    1. [Newton’s method](#org013f1b7)
    2. [Fixed point method](#orgde6567b)
    3. [Secant method](#org4ebbe87)
2. [Interpolation techniques](#org9e2a72e)
    1. [Hermite Interpolation](#orgd63ca7f)
    2. [Lagrange Interpolation](#org1e8da43)
    3. [Newton’s Interpolation](#orgd4f58aa)
3. [Integration methods](#org8e7e5c8)
    1. [Euler Method](#org351da1a)
    2. [Newton–Cotes Method](#org75020aa)
    3. [Predictor–Corrector Method](#orgcf8f14e)
    4. [Trapezoidal method](#orgf561a2c)

<a id="org770afdb"></a>

# Root Finding Methods

<a id="org013f1b7"></a>

## [Newton’s method](https://en.wikipedia.org/wiki/Newton%27s_method)

Newton’s method (also known as the Newton–Raphson method) finds successively better approximations to the roots (or zeroes) of a real-valued function. The iteration is repeated as

$$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$

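As a minimal sketch, the update above can be written in a few lines of Python. The example function, its derivative, the starting point, and the tolerance are illustrative choices, not from the text:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Repeat x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

# Example (illustrative): root of f(x) = x^2 - 2, i.e. sqrt(2), from x0 = 1.
root2 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Note the method needs the derivative f'(x) explicitly; when it is unavailable, the secant method below approximates it from two previous iterates.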
<a id="orgde6567b"></a>

## [Fixed point method](https://en.wikipedia.org/wiki/Fixed-point_iteration)

Fixed-point iteration is a method of computing fixed points of iterated functions. More specifically, given a function $f$ defined on the real numbers with real values, and a point $x_0$ in the domain of $f$, the fixed-point iteration is

$$ x_{n+1}=f(x_{n}),\quad n=0,1,2,\dots $$

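A short sketch of the iteration, stopping when successive iterates agree to a tolerance; the example map cos(x), the starting point, and the tolerance are illustrative assumptions:

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = f(x_n) until |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example (illustrative): x = cos(x) has a unique fixed point near 0.739.
p = fixed_point(math.cos, 1.0)
```

Convergence needs $|f'| < 1$ near the fixed point; for cos the derivative there is about 0.67 in magnitude, so the iteration contracts.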
<a id="org4ebbe87"></a>

## [Secant method](https://en.wikipedia.org/wiki/Secant_method)

The secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function $f$. It can be thought of as a finite-difference approximation of Newton’s method.

$$ x_{n}=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}={\frac {x_{n-2}f(x_{n-1})-x_{n-1}f(x_{n-2})}{f(x_{n-1})-f(x_{n-2})}} $$

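A sketch of the recurrence, carrying the two most recent iterates; the test function and starting bracket are illustrative, not from the text:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Apply the secant update until |f(x)| < tol."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # x_n = x_{n-1} - f(x_{n-1}) * (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Example (illustrative): the real root of x^3 - x - 2, starting from 1 and 2.
root3 = secant(lambda x: x ** 3 - x - 2, 1.0, 2.0)
```

Unlike Newton’s method it needs no derivative, at the cost of a slightly lower (superlinear, about 1.618) order of convergence.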
<a id="org9e2a72e"></a>

# Interpolation techniques

<a id="orgd63ca7f"></a>

## Hermite Interpolation

Hermite interpolation is a method of interpolating data points as a polynomial function that matches both the function values and the derivative values at the nodes. The resulting Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences.

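One standard construction doubles each node in a divided-difference table, filling the repeated entries with the supplied derivatives. This is a sketch of that scheme; the sample data (f(x) = x³ and its derivative at three nodes) is an illustrative choice:

```python
def hermite_coeffs(xs, ys, dys):
    """Divided-difference table with each node doubled; the repeated
    entries use the given derivative instead of a 0/0 quotient."""
    z = [x for x in xs for _ in (0, 1)]  # each node listed twice
    n = len(z)
    q = [[0.0] * n for _ in range(n)]
    for i, (y, dy) in enumerate(zip(ys, dys)):
        q[2 * i][0] = q[2 * i + 1][0] = y
        q[2 * i + 1][1] = dy             # f[z_i, z_i] = f'(x_i)
        if i > 0:
            q[2 * i][1] = (q[2 * i][0] - q[2 * i - 1][0]) / (z[2 * i] - z[2 * i - 1])
    for j in range(2, n):
        for i in range(j, n):
            q[i][j] = (q[i][j - 1] - q[i - 1][j - 1]) / (z[i] - z[i - j])
    return z, [q[i][i] for i in range(n)]

def hermite_eval(z, coeffs, x):
    """Evaluate the Newton-form polynomial by Horner's scheme."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - z[i]) + coeffs[i]
    return result

# Example (illustrative): f(x) = x^3, f'(x) = 3x^2 at nodes 0, 1, 2.
z, c = hermite_coeffs([0.0, 1.0, 2.0], [0.0, 1.0, 8.0], [0.0, 3.0, 12.0])
```

With $n$ nodes the interpolant has degree at most $2n-1$, so the cubic above is reproduced exactly.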
<a id="org1e8da43"></a>

## Lagrange Interpolation

Lagrange polynomials are used for polynomial interpolation: given distinct points $x_j$ with values $y_j$, the Lagrange form gives the unique polynomial of lowest degree passing through every point. See [Wikipedia](https://en.wikipedia.org/wiki/Lagrange_polynomial).

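A direct sketch of the Lagrange form, evaluating the basis polynomials at a single point; the sample data (three points on f(x) = x²) is an illustrative assumption:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate P(x) = sum_j y_j * prod_{m != j} (x - x_m) / (x_j - x_m)."""
    total = 0.0
    for j, yj in enumerate(ys):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xs[j] - xm)
        total += yj * basis
    return total

# Example (illustrative): three points on f(x) = x^2; the degree-2
# interpolant is x^2 itself, so P(2) should be 4.
value = lagrange_eval([0.0, 1.0, 3.0], [0.0, 1.0, 9.0], 2.0)
```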
<a id="orgd4f58aa"></a>

## Newton’s Interpolation

Newton’s divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions. Divided differences is a recursive division process, and the method can be used to calculate the coefficients of the interpolation polynomial in Newton form.

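The divided-difference table can be built in place, column by column; a sketch, with the same illustrative x² data as above:

```python
def divided_differences(xs, ys):
    """Return the Newton-form coefficients f[x_0], f[x_0,x_1], ...
    by overwriting the value list column by column."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form polynomial by Horner's scheme."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Example (illustrative): three points on f(x) = x^2.
cs = divided_differences([0.0, 1.0, 3.0], [0.0, 1.0, 9.0])
```

Unlike the Lagrange form, adding one more data point only appends one coefficient instead of recomputing everything.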
<a id="org8e7e5c8"></a>

# Integration methods

<a id="org351da1a"></a>

## Euler Method

The Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method.

$$ y_{n+1} = y_{n} + h f(t_{n} , y_{n}) $$

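A sketch of the step above applied repeatedly; the test problem y' = y, y(0) = 1 and the step size are illustrative choices:

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) by n forward-Euler steps y_{n+1} = y_n + h f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Example (illustrative): y' = y, y(0) = 1; the exact value at t = 1 is e.
y_euler = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

Being first order, halving h roughly halves the error, which is why the very small step above is needed for modest accuracy.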
<a id="org75020aa"></a>

## Newton–Cotes Method

Newton–Cotes formulae, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulae for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes.

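Simpson’s rule is the three-point closed Newton–Cotes formula; as one concrete sketch of the family, here is its composite form. The integrand and interval are illustrative assumptions:

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule (3-point closed Newton-Cotes)
    over n equal subintervals; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# Example (illustrative): integral of x^2 on [0, 1] is exactly 1/3,
# and Simpson's rule is exact for polynomials up to degree 3.
integral_sq = simpson(lambda x: x * x, 0.0, 1.0, 10)
```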
<a id="orgcf8f14e"></a>

## Predictor–Corrector Method

Predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations – to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps:

1. The initial “prediction” step starts from a function fitted to the function values and derivative values at a preceding set of points, and extrapolates (“anticipates”) this function’s value at a subsequent, new point.
2. The next “corrector” step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function’s value at the same subsequent point.

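The two steps above can be sketched with one common instance of the family, Heun’s method: a forward-Euler prediction followed by a trapezoidal correction. The test problem and step size are illustrative choices:

```python
def heun(f, t0, y0, h, n):
    """Predictor-corrector stepping: Euler predicts, the trapezoidal
    rule corrects (Heun's method, second order)."""
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                      # predictor step
        y = y + h / 2 * (f(t, y) + f(t + h, y_pred))  # corrector step
        t += h
    return y

# Example (illustrative): y' = y, y(0) = 1, integrated to t = 1.
y_heun = heun(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

On the same problem as the Euler example, the corrector step lifts the accuracy from first to second order for a comparable amount of work.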
<a id="orgf561a2c"></a>

## Trapezoidal method

The trapezoidal rule is a technique for approximating a definite integral. It works by approximating the region under the graph of the function $f(x)$ as a trapezoid and calculating its area; the composite rule sums these areas over subintervals:

$$ \int _{a}^{b}f(x)\,dx\approx \sum _{k=1}^{N}{\frac {f(x_{k-1})+f(x_{k})}{2}}\Delta x_{k} $$

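A sketch of the composite sum above with equal subinterval widths; the integrand and interval are illustrative assumptions:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals:
    sum over k of (f(x_{k-1}) + f(x_k)) / 2 * h."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

# Example (illustrative): integral of x on [0, 2] is exactly 2,
# since trapezoids are exact for linear functions.
area = trapezoid(lambda x: x, 0.0, 2.0, 4)
```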
