Numerical ordinary differential equations

Numerical ordinary differential equations is the part of numerical analysis which studies the numerical solution of ordinary differential equations (ODEs). This field is also known under the name numerical integration, but some people reserve this term for the computation of integrals.
Many differential equations cannot be solved analytically, in which case we must settle for a numerical approximation to the solution. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, for instance in mechanics, chemistry, ecology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
The problem
We want to approximate the solution of the differential equation
 <math> y'(t) = f(t,y(t)), \qquad y(t_0)=y_0, \qquad\qquad (1) </math>
where f is a function that maps [t_{0},∞) × R^{d} to R^{d}, and the initial condition y_{0} ∈ R^{d} is a given vector.
The above formulation is called an initial value problem (IVP). The Picard–Lindelöf theorem states that there is a unique solution if f is Lipschitz continuous. In contrast, boundary value problems (BVPs) specify (components of) the solution y at more than one point. Different methods are needed to solve BVPs, for example the shooting method, multiple shooting, or global methods like finite differences or collocation.
Note that we restrict ourselves to first-order differential equations (meaning that only the first derivative of y appears in the equation, and no higher derivatives). However, a higher-order equation can easily be converted to a first-order equation by introducing extra variables. For example, the second-order equation y'' = −y can be rewritten as two first-order equations: y' = z and z' = −y.
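Written in the vector form of equation (1), with the combined state (y, z) playing the role of y, this example reads
 <math> \frac{d}{dt} \begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} z \\ -y \end{pmatrix}, </math>
so that d = 2 and f(t, (y, z)) = (z, −y) in the notation of (1).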
Methods
Two elementary methods are discussed to give the reader a feeling for the subject. After that, pointers are provided to other methods (which are generally more accurate and efficient). The methods mentioned here are analysed in the Analysis section below.
The Euler method
Starting with the differential equation (1), we replace the derivative y' by the finite difference approximation
 <math> y'(t) \approx \frac{y(t+h) - y(t)}{h}, \qquad\qquad (2) </math>
which yields the following formula
 <math> y(t+h) \approx y(t) + hf(t,y(t)). \qquad\qquad (3) </math>
This formula is usually applied in the following way. We choose a step size h, and we construct the sequence t_{0}, t_{1} = t_{0} + h, t_{2} = t_{0} + 2h, ... We denote by y_{n} a numerical estimate of the exact solution y(t_{n}). Motivated by (3), we compute these estimates by the following recursive scheme
 <math> y_{n+1} = y_n + hf(t_n,y_n). </math>
This is the Euler method, named after Leonhard Euler who described this method in 1768.
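As a concrete illustration, here is a minimal Python sketch of the Euler method applied to the test problem y' = −y, y(0) = 1 (the function name euler and the test problem are illustrative choices, not part of any particular library):

 def euler(f, t0, y0, h, n_steps):
     """Forward Euler: advance y' = f(t, y) from (t0, y0) in n_steps steps of size h."""
     t, y = t0, y0
     for _ in range(n_steps):
         y = y + h * f(t, y)   # y_{n+1} = y_n + h f(t_n, y_n)
         t = t + h
     return t, y

 # Test problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t):
 t, y = euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
 print(t, y)   # y_10 ≈ 0.3487, versus y(1) = exp(-1) ≈ 0.3679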
The backward Euler method
If, instead of (2), we use the approximation
 <math> y'(t) \approx \frac{y(t) - y(t-h)}{h}, </math>
we get the backward Euler method:
 <math> y_{n+1} = y_n + hf(t_{n+1},y_{n+1}). </math>
The backward Euler method is an implicit method, meaning that we have to solve an equation to find y_{n+1}. One often uses functional iteration or (some modification of) the Newton–Raphson method to achieve this. Of course, it costs time to solve this equation; this cost must be taken into consideration when one selects the method to use.
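The following Python sketch solves the implicit equation by functional iteration; the fixed iteration count is a simplifying assumption, and a real code would use a Newton-type method with a convergence test:

 def backward_euler(f, t0, y0, h, n_steps, fp_iters=5):
     """Backward Euler: solve y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) at each step
     by a fixed number of functional (fixed-point) iterations."""
     t, y = t0, y0
     for _ in range(n_steps):
         t_next = t + h
         y_next = y                      # initial guess: the previous value
         for _ in range(fp_iters):       # functional iteration
             y_next = y + h * f(t_next, y_next)
         t, y = t_next, y_next
     return t, y

 t, y = backward_euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
 print(t, y)   # ≈ 0.3855; the backward method overestimates exp(-1) ≈ 0.3679 here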
Generalisations
The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This caused mathematicians to look for higher-order methods.
One possibility is to use not only the previously computed value y_{n} to determine y_{n+1}, but to make the solution depend on more past values. This yields a so-called multistep method. Almost all practical multistep methods fall within the family of linear multistep methods, which have the form
 <math> \alpha_k y_{n+k} + \alpha_{k-1} y_{n+k-1} + \cdots + \alpha_0 y_n
 = h \left( \beta_k f(t_{n+k},y_{n+k}) + \beta_{k-1} f(t_{n+k-1},y_{n+k-1}) + \cdots + \beta_0 f(t_n,y_n) \right). </math>
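For example, taking k = 2 with α_2 = 1, α_1 = −1, α_0 = 0 and β_2 = 0, β_1 = 3/2, β_0 = −1/2 gives the two-step Adams–Bashforth method. A minimal Python sketch follows; using one Euler step to obtain the second starting value is an illustrative choice:

 def adams_bashforth2(f, t0, y0, h, n_steps):
     """Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h (3/2 f_{n+1} - 1/2 f_n)."""
     f_prev = f(t0, y0)
     t, y = t0 + h, y0 + h * f_prev     # one Euler step supplies the second starting value
     for _ in range(n_steps - 1):
         f_curr = f(t, y)
         y = y + h * (1.5 * f_curr - 0.5 * f_prev)
         t, f_prev = t + h, f_curr
     return t, y

 t, y = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
 print(t, y)   # noticeably closer to exp(-1) than forward Euler with the same h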
Another possibility is to use more points in the interval [t_{n},t_{n+1}]. This leads to the family of Runge–Kutta methods, named after Carle Runge and Martin Kutta. One of their fourth-order methods is especially popular.
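A minimal Python sketch of one step of this classical fourth-order Runge–Kutta method (the helper name rk4_step is illustrative):

 def rk4_step(f, t, y, h):
     """One step of the classical fourth-order Runge-Kutta method."""
     k1 = f(t, y)
     k2 = f(t + h / 2, y + h * k1 / 2)
     k3 = f(t + h / 2, y + h * k2 / 2)
     k4 = f(t + h, y + h * k3)
     return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

 t, y = 0.0, 1.0
 for _ in range(10):                    # y' = -y, y(0) = 1, step size 0.1
     y = rk4_step(lambda t, y: -y, t, y, 0.1)
     t += 0.1
 print(y)   # matches exp(-1) ≈ 0.36788 to about seven digits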
Both ideas can also be combined. The resulting methods are called general linear methods.
Advanced features
A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula.
It is often inefficient to use the same step size all the time, so variable step-size methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level. This means that the methods must also compute an error indicator, an estimate of the local error.
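A simple error indicator can be obtained by step doubling: compare one step of size h with two steps of size h/2. The following Python sketch adapts the step size of the Euler method in this way; the tolerance and the step-adjustment rule are illustrative assumptions:

 def adaptive_euler(f, t0, y0, t_end, h=0.1, tol=1e-4):
     """Euler with step doubling: the difference between one full step and two
     half steps serves as a local error indicator and controls the step size."""
     t, y = t0, y0
     while t < t_end:
         h = min(h, t_end - t)                  # do not step past t_end
         y_full = y + h * f(t, y)               # one step of size h
         y_half = y + (h / 2) * f(t, y)         # two steps of size h/2
         y_half = y_half + (h / 2) * f(t + h / 2, y_half)
         err = abs(y_full - y_half)             # local error indicator
         if err <= tol:                         # accept the step
             t, y = t + h, y_half
         # the local error of Euler is O(h^2), hence the square root in the update
         h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-15)) ** 0.5))
     return y

 print(adaptive_euler(lambda t, y: -y, 0.0, 1.0))   # ≈ exp(-1)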
An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Extrapolation methods are often used to construct various methods of different orders.
Other desirable features include:
 dense output: cheap numerical approximations for the whole integration interval, and not only at the points t_{0}, t_{1}, t_{2}, ...
 event location: finding the times where, say, a particular function vanishes.
 support for parallel computing.
Alternative methods
Many methods do not fall within the framework discussed here. Some classes of alternative methods are:
 multiderivative methods, which use not only the function f but also its derivatives. This class includes Hermite–Obreschkoff methods and Fehlberg methods.
 methods for second-order ODEs. We said that all higher-order ODEs can be transformed to first-order ODEs of the form (1). While this is certainly true, it may not be the best way to proceed. In particular, Nyström methods work directly with second-order equations.
 geometric integration methods are especially designed for special classes of ODEs (e.g., symplectic integrators for the solution of Hamiltonian equations). They ensure that the numerical solution respects the underlying structure or geometry of these classes.
Analysis
Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are convergence (whether the method approximates the solution), order (how well it approximates the solution), and stability (whether errors are damped out).
Convergence
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a Lipschitz function f and every t^{*} > 0,
 <math> \lim_{h\to 0^+} \max_{n=0,1,\dots,\lfloor t^*/h \rfloor} \| y_{n,h} - y(t_n) \| = 0. </math>
All the methods mentioned above are convergent. In fact, convergence is a sine qua non for any numerical scheme.
Order
Suppose the numerical method is
 <math> y_{n+k} = \Psi(t_{n+k}; y_n, y_{n+1}, \dots, y_{n+k-1}; h). </math>
The method is said to have order p if
 <math> \Psi \left( t_{n+k}; y(t_n), y(t_{n+1}), \dots, y(t_{n+k-1}); h \right) - y(t_{n+k}) = \mathcal{O}(h^{p+1}). \qquad\qquad (4) </math>
The quantity on the left-hand side is called the local error of the method. The (forward) Euler method and the backward Euler method introduced above both have order 1. Most methods used in practice attain higher order.
The local error is the error committed in a single step. A related concept is the global error, the error sustained in all the steps one needs to reach a fixed time t. Explicitly, the global error at time t is y_{N} − y(t) where N = (t − t_{0})/h. The global error of a pth-order one-step method (that is, a method of the form (4) with k = 1) is O(h^{p}); in particular, such a method is convergent. This statement is not necessarily true for multistep methods.
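The order can be observed numerically. In the following Python sketch the global error of the forward Euler method at t = 1 for the test problem y' = −y, y(0) = 1 is measured for a sequence of halved step sizes; for a first-order method the error should roughly halve as well:

 import math

 for n_steps in (10, 20, 40, 80):
     h = 1.0 / n_steps
     y = 1.0
     for _ in range(n_steps):
         y = y + h * (-y)                   # one Euler step for f(t, y) = -y
     print(h, abs(y - math.exp(-1.0)))      # the error roughly halves with h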
Stability and stiffness
 Main article: Stiff equation
For some differential equations, standard methods such as the Euler method, explicit Runge–Kutta methods, or multistep methods (for example, the Adams–Bashforth methods) exhibit instability in the computed solution, even though other methods behave well and the equation itself may be quite simple. This behaviour is described as stiffness, and it is often caused by the presence of widely different time scales in the underlying problem. Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather prediction, biology, and electronics.
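A minimal Python illustration (the equation and its parameter are an illustrative choice): for y' = −50 (y − cos t), the forward Euler method is unstable once h exceeds 2/50 = 0.04, while the backward Euler method remains stable with the same step size. Note that functional iteration, as in the earlier sketch, would also fail here (the iteration contracts only if 50 h < 1), which is why stiff solvers use Newton-type iterations; for this linear equation the implicit equation can be solved exactly:

 import math

 lam, h, n = 50.0, 0.05, 40       # h = 0.05 exceeds the forward Euler limit 2/lam = 0.04
 yf = yb = 0.0                    # y' = -lam * (y - cos t), y(0) = 0
 for i in range(n):
     t, t_next = i * h, (i + 1) * h
     yf = yf + h * (-lam) * (yf - math.cos(t))                # forward Euler: unstable
     yb = (yb + h * lam * math.cos(t_next)) / (1 + h * lam)   # backward Euler, solved exactly
 print(yf)   # oscillates with growing amplitude, roughly like (-1.5)**40
 print(yb)   # stays bounded, close to the smooth solution near cos(2) ≈ -0.42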
History
Below is a concise timeline of some important developments in this field.
 1768 – Leonhard Euler publishes his method.
 1824 – Augustin Louis Cauchy proves convergence of the Euler method. In this proof, Cauchy uses the implicit Euler method.
 1855 – First mention of the multistep methods of John Couch Adams in a letter written by F. Bashforth.
 1895 – Carle Runge publishes the first Runge–Kutta method.
 1905 – Martin Kutta describes the popular fourth-order Runge–Kutta method.
 1910 – Lewis Fry Richardson announces his extrapolation method.
 1952 – Charles F. Curtiss and Joseph Oakland Hirschfelder coin the term stiff equations.
References
 Ernst Hairer, Syvert Paul Nørsett and Gerhard Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, second edition, Springer-Verlag, Berlin, 1993. ISBN 3-540-56670-8.
 Ernst Hairer and Gerhard Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, second edition, Springer-Verlag, Berlin, 1996. ISBN 3-540-60452-9. (This two-volume monograph systematically covers all aspects of the field.)
 Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, 1996. ISBN 0-521-55376-8 (hardback), ISBN 0-521-55655-4 (paperback). (Textbook, targeting advanced undergraduate and postgraduate students in mathematics, which also discusses numerical partial differential equations.)
 John Denholm Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Chichester, 1991. ISBN 0-471-92990-5. (Textbook, slightly more demanding than the book by Iserles.)