
Consider the integral $\int_a^b f(t)dt$.

There are many numerical integration methods, such as the trapezoidal rule, Simpson's rule, and Gaussian quadrature, but they all involve derivatives of $f$ in their error bounds.

For example, the error bound for the composite Simpson's rule is $\frac{(b-a)^5}{180N^4} \max_{u\in[a,b]}|f^{(4)}(u)|$.
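For instance, here is a rough sketch of that bound (the test function and $N$ are my own illustrative choices; the point is that the bound needs $f^{(4)}$ to exist, which is exactly the problem):

```python
import numpy as np

# Composite Simpson's rule for sin on [0, pi], compared with the
# derivative-based error bound quoted above, using max|f''''| = 1.
f = np.sin
a, b, N = 0.0, np.pi, 10            # N = number of subintervals (even)

x = np.linspace(a, b, N + 1)
h = (b - a) / N
simpson = h / 3 * (f(x[0]) + f(x[-1])
                   + 4 * f(x[1:-1:2]).sum()
                   + 2 * f(x[2:-1:2]).sum())

exact = 2.0                          # integral of sin over [0, pi]
bound = (b - a) ** 5 / (180 * N ** 4) * 1.0

print(abs(simpson - exact), "<=", bound)   # the bound requires f'''' to exist
```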

But what if $f(t)$ is nowhere differentiable? What numerical method should I use?

Are there numerical methods whose error bounds don't involve derivatives?

  • Sampling $f$ on a set of measure zero (in particular, on a finite or countable set) can never give you any information on $\int f$ unless you also have control on how fast $f$ varies. – Commented Jan 11, 2024 at 20:24
  • The machinery for integration in constructive analysis gives rise to algorithms that can be implemented for real (no pun...) and which can deliver you a result with guarantees on the error without any knowledge of derivatives. This is because they, in effect, do interval arithmetic. Not necessarily fast, though, if that matters to you. – Commented Jan 11, 2024 at 23:13
  • @DanPiponi, can you give more information? – Commented Jan 12, 2024 at 12:43
  • I'm thinking of something like this: pure.ed.ac.uk/ws/portalfiles/portal/12417914/download_2.pdf (Took forever to find that paper again because of the name of the author!) But the paper itself says the implementation performs "abysmally". There's also something like this, which is also slow: fredrikj.net/arb. I think this only works with analytic functions, so it may not be suitable for you. – Commented Jan 12, 2024 at 17:14
  • "Paul R is looking for an answer from a reputable source." Are you suggesting that Kuipers and Niederreiter is a disreputable source? – Commented Jan 17, 2024 at 11:33

2 Answers


If $f$ is of bounded variation $V(f)$, then Koksma's inequality (Theorem 5.1 in Kuipers & Niederreiter, Uniform Distribution of Sequences) says $$ \left|{1\over N}\sum_{n=1}^Nf(x_n)-\int_0^1f(t)\,dt\right|\le V(f)D_N^* $$ where $D_N^*$ is the star discrepancy of the point set $\{\,x_1,x_2,\dotsc,x_N\}$.
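Here is a minimal numerical sketch of this bound (the function, the sample size, and the point set are my own illustrative choices, not prescribed by the theorem): $f(x)=\sqrt x$ is monotone, so $V(f)=|f(1)-f(0)|=1$, even though $f'$ is unbounded near $0$, and the midpoint set $x_n=(2n-1)/(2N)$ has star discrepancy $1/(2N)$.

```python
import numpy as np

# Quasi-Monte Carlo estimate of the integral of sqrt on [0, 1], together
# with the Koksma bound V(f) * D_N* for the midpoint point set.
f = np.sqrt
N = 1000
x = (2 * np.arange(1, N + 1) - 1) / (2 * N)   # star discrepancy 1/(2N)

estimate = f(x).mean()
exact = 2.0 / 3.0                     # integral of sqrt(t) over [0, 1]
V = abs(f(1.0) - f(0.0))              # total variation of a monotone function
koksma_bound = V / (2 * N)            # V(f) * D_N*

print(abs(estimate - exact), "<=", koksma_bound)
```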

Theorem 5.4 in the same book says that if $f$ is continuous and has modulus of continuity $M$, then $$ \left|{1\over N}\sum_{n=1}^Nf(x_n)-\int_0^1f(t)\,dt\right|\le M(D^*_N) $$ There are other formulas in the book for estimating integrals.
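As an illustration of this second bound (my own choice of function and point set, not an example from the book): for $f(x)=\sqrt x$ on $[0,1]$, which is continuous but has unbounded derivative at $0$, the modulus of continuity is $M(\delta)=\sqrt\delta$. With the midpoint set $x_n=(2n-1)/(2N)$, whose star discrepancy is $D_N^*=1/(2N)$, Theorem 5.4 gives $$ \left|{1\over N}\sum_{n=1}^Nf(x_n)-\int_0^1f(t)\,dt\right|\le M\Big({1\over 2N}\Big)={1\over\sqrt{2N}}, $$ a bound that uses no derivative information at all.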

  • Are you talking about this book: web.maths.unsw.edu.au/~josefdick/preprints/KuipersNied_book.pdf ? – Commented Jan 16, 2024 at 10:29
  • Yes, that's the book. – Commented Jan 16, 2024 at 11:24
  • Looks like everything boils down to the average of the function values at $N$ points. – Commented Jan 16, 2024 at 12:28

$\newcommand\Z{\mathbb Z}\newcommand\R{\mathbb R}\newcommand\la{\lambda}$If $f$ is an arbitrary Lebesgue-integrable function, then, as is done in a definition of the Lebesgue integral, it makes sense to group together close values of the function $f$, rather than close values of its argument.

So, say, for any Lebesgue-integrable function $f\colon[0,1]\to\R$, any real $a$, and any real $h>0$, $$\Big|\int_0^1 f(x)\,dx-\sum_{k\in\Z}\big(a+(k+1/2)h\big)\,\la(E_{f,a,h,k})\Big|\le\frac h2,$$ where $E_{f,a,h,k}:=f^{-1}((a+kh,a+(k+1)h])$ and $\la$ is the Lebesgue measure, so that $a+(k+1/2)h$ is the midpoint of the value bin $(a+kh,a+(k+1)h]$.
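A minimal computational sketch of this bound, for a case where the level sets can be measured exactly (the choices $f(x)=\sqrt x$, $a=0$, and $h=0.01$ are mine; for a general $f$ one would need some other way to compute or bound $\la(E_{f,a,h,k})$):

```python
import math

# "Lebesgue sum" for f(x) = sqrt(x) on [0, 1]: f is increasing, so
# E_{f,0,h,k} = f^{-1}((kh, (k+1)h]) is the interval ((kh)^2, ((k+1)h)^2].
h = 0.01
K = math.ceil(1.0 / h)        # f takes values in [0, 1], so k = 0, ..., K-1

lebesgue_sum = 0.0
for k in range(K):
    lo = min((k * h) ** 2, 1.0)            # left endpoint of E_k
    hi = min(((k + 1) * h) ** 2, 1.0)      # right endpoint of E_k
    lebesgue_sum += (k + 0.5) * h * (hi - lo)   # midpoint value * measure

exact = 2.0 / 3.0              # integral of sqrt(x) over [0, 1]
print(abs(lebesgue_sum - exact), "<=", h / 2)
```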

Of course, if $f$ is monotonic, then the sets $E_{f,a,h,k}$ will be intervals. In this case, the bound $\frac h2$ will be just as effective as the Koksma bound $V(f)D_n^*$ in Gerry Myerson's answer (provided the set $\{x_1,\dots,x_n\}$ is of the minimal discrepancy, $\frac1{2n}$ -- see Theorem 1.4 on p. 91).

If $f$ is not monotonic (think e.g. of $f(x)=\sin mx$ for large $m$), then the bound $\frac h2$ can be much better than the Koksma bound $V(f)D_n^*$. As for the calculation of the variation $V(f)$ (when it is finite, even when $f$ is not monotonic), it seems to be of about the same complexity as the calculation of the $\la(E_{f,a,h,k})$'s.

Also, it seems that the bound $\frac h2$ will be more effective, or significantly more effective, than the bound $M(D_n^*)$ in Gerry Myerson's answer, even when the set $\{x_1,\dots,x_n\}$ is of the minimal discrepancy. For instance, when $f(x)=x^p$ for a real $p>0$ and all $x\in[0,1]$, $a=0$, and $h=1/n$, then the bound $\frac h2$ is $\frac1{2n}$, whereas the bound $M(D_n^*)$ is $\sim \frac p{2n}$ if $p\ge1$ and $=\frac1{(2n)^p}$ if $p\in(0,1]$ (again, provided the set $\{x_1,\dots,x_n\}$ is of the minimal discrepancy, $\frac1{2n}$).
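To put illustrative numbers (my own choice of $p$ and $n$) on this comparison: with $p=1/2$ and $n=100$, the bound $\frac h2$ is $\frac1{200}=0.005$, whereas $M(D_n^*)=(1/200)^{1/2}\approx0.071$, roughly $14$ times larger.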
