
Numerical Analysis: Roots, Quadrature, and ODEs

Numerical analysis is the study of algorithms that obtain approximate solutions to mathematical problems whose exact analytical solutions are impossible to find or impractical to compute. This lesson covers the rigorous foundations of root-finding, numerical integration (quadrature), and the numerical solution of ordinary differential equations.

1. Root-Finding Algorithms

Root-finding involves determining a value $x^*$ such that $f(x^*) = 0$.

The Bisection Method

Based on the Intermediate Value Theorem (IVT): if $f$ is continuous on $[a, b]$ and $f(a)\,f(b) < 0$, then there exists at least one $c \in (a, b)$ such that $f(c) = 0$.

The algorithm iteratively halves the interval $[a, b]$, keeping the half on which the sign change persists. The absolute error after $n$ iterations is bounded by

$$|x_n - x^*| \le \frac{b - a}{2^n}.$$

This method is robust but converges only linearly.
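
A minimal sketch of the method (the target $f(x) = x^2 - 2$, whose positive root is $\sqrt{2}$, and the tolerance are illustrative choices, not part of the lesson):

def bisect(f, a, b, tol=1e-10):
    # Requires a sign change on [a, b] -- the IVT hypothesis
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m  # the sign change (and hence a root) lies in [a, m]
        else:
            a = m  # the sign change lies in [m, b]
    return (a + b) / 2

print(bisect(lambda x: x**2 - 2, 1, 2))  # ~1.414213562 (sqrt(2))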

Newton-Raphson Method

Derived from the linear Taylor expansion of $f$ around the current iterate $x_n$:

$$f(x) \approx f(x_n) + f'(x_n)(x - x_n).$$

Setting $f(x) = 0$ and solving for $x$ yields the iteration:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

Convergence Analysis: If $f \in C^2$ and $x^*$ is a simple root ($f'(x^*) \neq 0$), Newton’s method exhibits quadratic convergence:

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^2} = \left| \frac{f''(x^*)}{2\,f'(x^*)} \right|.$$

However, it requires a “sufficiently close” initial guess $x_0$.
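
A minimal sketch, reusing the illustrative target $x^2 - 2 = 0$ with its derivative supplied explicitly:

def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)  # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x

print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5))  # ~1.414213562

Starting from $x_0 = 1.5$, the number of correct digits roughly doubles on each step, the hallmark of quadratic convergence.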

Fixed-Point Iteration (FPI)

An equation $f(x) = 0$ can often be rewritten as $x = g(x)$. The iteration $x_{n+1} = g(x_n)$ converges to a fixed point $x^*$ if $g$ satisfies the conditions of the Banach Fixed-Point Theorem.

Contraction Mapping Principle: If $g$ maps a closed interval $[a, b]$ into itself and is a contraction on it (i.e., $|g(x) - g(y)| \le k\,|x - y|$ for some constant $k < 1$ and all $x, y \in [a, b]$), then:

  1. The fixed point $x^*$ is unique in $[a, b]$.
  2. The sequence $x_{n+1} = g(x_n)$ converges to $x^*$ for any starting point $x_0 \in [a, b]$.
  3. Convergence is linear with rate $k$.
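
As an illustration (the choice $g(x) = \cos x$ is a standard textbook example, not part of the lesson), the iteration below converges linearly to the fixed point $x^* \approx 0.7390851$:

import math

def fixed_point(g, x0, tol=1e-10, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# |g'(x)| = |sin x| < 1 near the fixed point, so g is a local contraction
print(fixed_point(math.cos, 1.0))  # ~0.7390851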

2. Order of Convergence

We formally define the order of convergence $p$ and the asymptotic error constant $\lambda$ by

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = \lambda.$$

  • $p = 1$ (Linear): Bisection, FPI.
  • $1 < p < 2$ (Superlinear): Secant Method, with $p = \frac{1 + \sqrt{5}}{2} \approx 1.618$.
  • $p = 2$ (Quadratic): Newton’s Method.
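
The order $p$ can also be estimated empirically from three successive errors, since $p \approx \ln(|e_{n+1}| / |e_n|) \,/\, \ln(|e_n| / |e_{n-1}|)$. A minimal sketch using Newton iterates for the illustrative problem $x^2 - 2 = 0$:

import math

# Three Newton steps for f(x) = x^2 - 2, starting at x0 = 1.5
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))

errs = [abs(x - math.sqrt(2)) for x in xs]
# The observed order should approach p = 2 for Newton's method
for n in range(1, len(errs) - 1):
    p = math.log(errs[n + 1] / errs[n]) / math.log(errs[n] / errs[n - 1])
    print(f"estimated order after step {n}: {p:.2f}")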

3. Numerical Integration (Quadrature)

Numerical integration approximates the definite integral $\int_a^b f(x)\,dx$ via a weighted sum $\sum_{i=0}^{n} w_i\,f(x_i)$.

Newton-Cotes Formulas

These use equally spaced nodes.

  • Trapezoidal Rule: Approximates $f$ with a first-degree polynomial: $\int_a^b f(x)\,dx \approx \frac{b - a}{2}\,[f(a) + f(b)]$.
  • Simpson’s 1/3 Rule: Fits a parabola through three equally spaced points $x_0, x_1, x_2$ with spacing $h$: $\int_{x_0}^{x_2} f(x)\,dx \approx \frac{h}{3}\,[f(x_0) + 4 f(x_1) + f(x_2)]$. Composite versions of both rules are sketched below.
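
A minimal composite implementation of both rules (the test integral $\int_0^{\pi} \sin x \, dx = 2$ is an illustrative choice):

import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3) * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(trapezoid(np.sin, 0, np.pi, 10))  # ~1.98352  (O(h^2) error)
print(simpson(np.sin, 0, np.pi, 10))    # ~2.00001  (O(h^4) error)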

Gaussian Quadrature

Unlike Newton-Cotes, Gaussian quadrature chooses both the weights $w_i$ and the nodes $x_i$ to maximize the degree of polynomials integrated exactly. For an $n$-point rule, the nodes are the roots of the $n$-th degree Legendre polynomial on $[-1, 1]$. An $n$-point Gaussian quadrature rule is exact for all polynomials of degree $2n - 1$ or less.
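
NumPy exposes the Legendre nodes and weights via numpy.polynomial.legendre.leggauss, which makes the exactness claim easy to verify (the degree-5 test polynomial is an illustrative choice):

import numpy as np

# A 3-point Gauss-Legendre rule is exact up to degree 2*3 - 1 = 5
nodes, weights = np.polynomial.legendre.leggauss(3)

poly = lambda x: x**5 - 3 * x**3 + x + 4  # degree-5 test polynomial
approx = np.sum(weights * poly(nodes))

# On [-1, 1] the odd-power terms integrate to zero, leaving exactly 8
print(approx)  # 8.0 to machine precision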

4. Numerical Ordinary Differential Equations (ODEs)

Consider the Initial Value Problem (IVP):

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0.$$

Euler’s Method

The simplest approach, obtained by keeping only the first-order term of the Taylor expansion:

$$y_{n+1} = y_n + h\,f(t_n, y_n).$$

  • Local Truncation Error (LTE): $O(h^2)$ (error committed in a single step).
  • Global Truncation Error (GTE): $O(h)$ (error accumulated over the whole interval).

Runge-Kutta Methods (RK4)

The “Classic” RK4 method achieves $O(h^4)$ global accuracy by taking a weighted average of four incremental slopes:

$$k_1 = f(t_n, y_n), \qquad k_2 = f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_1\right), \qquad k_3 = f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_2\right), \qquad k_4 = f(t_n + h,\, y_n + h k_3),$$

$$y_{n+1} = y_n + \frac{h}{6}\,(k_1 + 2 k_2 + 2 k_3 + k_4).$$

5. Stability and Stiffness

Stiffness

A differential equation is stiff if its solution contains components that evolve on widely separated time scales, for example a rapidly decaying transient alongside a slowly varying trend. Explicit methods like Euler or RK4 must then use extremely small step sizes to remain stable, even after the fast components have died away.

Implicit Methods

To solve stiff equations, we use Implicit Euler (Backward Euler):

$$y_{n+1} = y_n + h\,f(t_{n+1}, y_{n+1}).$$

This requires solving an algebraic equation for $y_{n+1}$ at each step (often via Newton’s method) but allows much larger step sizes because the method is A-stable.
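
For the scalar test problem $y' = \lambda y$, the implicit update can be solved in closed form, $y_{n+1} = y_n / (1 - h\lambda)$, which makes the stability contrast easy to demonstrate. A minimal sketch (the decay rate $\lambda = -50$ and step size $h = 0.1$ are illustrative choices):

lam = -50.0  # stiff decay rate; the true solution exp(lam * t) dies out almost instantly
h = 0.1      # explicit Euler is unstable here, since |1 + h*lam| = 4 > 1
y_exp = y_imp = 1.0

for _ in range(10):  # integrate from t = 0 to t = 1
    y_exp *= 1 + h * lam  # explicit Euler:  y_{n+1} = (1 + h*lam) * y_n
    y_imp /= 1 - h * lam  # backward Euler: y_{n+1} = y_n / (1 - h*lam)

print(f"Explicit Euler at t=1: {y_exp:.3e}")  # ~1.0e+06: the iteration blows up
print(f"Backward Euler at t=1: {y_imp:.3e}")  # ~1.7e-08: decays, as the true solution does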

Python Implementation: RK4 vs. Euler

Below we compare the accuracy of Euler’s method against the 4th-order Runge-Kutta method for the ODE $y' = -2 t y^2$ with $y(0) = 1$. The exact solution is $y(t) = \frac{1}{1 + t^2}$.

import numpy as np

def f(t, y):
    # Right-hand side of the ODE: y' = -2 t y^2
    return -2 * t * y**2

def exact_sol(t):
    # Analytical solution y(t) = 1 / (1 + t^2), used for error measurement
    return 1 / (1 + t**2)

def euler_step(t, y, h):
    # One forward Euler step: y_{n+1} = y_n + h f(t_n, y_n)
    return y + h * f(t, y)

def rk4_step(t, y, h):
    # One classic RK4 step: weighted average of four slope samples
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

# Simulation parameters
t0, y0 = 0, 1
t_end = 2
h = 0.2
steps = int((t_end - t0) / h)

t_vals = np.linspace(t0, t_end, steps + 1)
y_euler = np.zeros(steps + 1)
y_rk4 = np.zeros(steps + 1)
y_euler[0] = y0
y_rk4[0] = y0

for i in range(steps):
    y_euler[i+1] = euler_step(t_vals[i], y_euler[i], h)
    y_rk4[i+1] = rk4_step(t_vals[i], y_rk4[i], h)

# Error analysis at t = 2
exact = exact_sol(t_end)
print(f"Exact solution: {exact:.6f}")
print(f"Euler result:   {y_euler[-1]:.6f} (Error: {abs(exact - y_euler[-1]):.6e})")
print(f"RK4 result:     {y_rk4[-1]:.6f} (Error: {abs(exact - y_rk4[-1]):.6e})")

Knowledge Check

Conceptual Check

Given a root-finding algorithm where the error follows $|e_{n+1}| = C\,|e_n|^{1.618}$, what is the classification of its convergence?

Conceptual Check

Why are implicit methods like Backward Euler preferred for 'stiff' differential equations despite being more computationally expensive per step?

Conceptual Check

In Gaussian Quadrature, how are the nodes $x_i$ chosen to achieve maximum polynomial precision?