There are many ways to solve ordinary differential equations (equations with a single independent variable; we will assume this variable is time, t). The techniques discussed in these pages approximate the solution of first order ordinary differential equations (with initial conditions) of the form
$${{dy(t)} \over {dt}} = y'(t) = f(y(t),t), \quad \quad {\rm{with\;}} y(t_0)=y_0$$In other words, problems where the derivative of our solution at time t, y(t), depends on that solution and on t (i.e., y'(t)=f(y(t),t)). The usefulness of approximate techniques is described below, but first we will jump right into a discussion of how such a technique can work.
Given an equation
$${{dy(t)} \over {dt}} = y'(t) = f(y(t),t), \quad \quad {\rm{with\;}} y(t_0)=y_0$$We know y(t₀); our task is to find an approximate value for y(t) at a time t=t₀+h (where h is a small time step) in the future. The graph below shows some function y(t); we are given t₀ and y₀=y(t₀) (but not the equation for y(t)).
The goal is to find an approximation for y(t₀+h). We will call this approximation y*(t₀+h). Using the equation given above, we can find the slope of the function, f(y(t₀),t₀), and add a line with that slope to the graph, passing through (t₀,y(t₀)).
We then use the slope, f(y(t₀),t₀), to find an estimate for the function at t=t₀+h by performing a linear extrapolation from our starting place, (t₀,y(t₀)), so $y^*( {{t_0} + h} ) = y( {{t_0}} ) + f\left( {y( {{t_0}} ),{t_0}} \right)h$. In other words, we go to the right an amount h, and we go up an amount equal to the slope multiplied by h (i.e., the increment is f(y(t₀),t₀)·h).
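For example (using made-up numbers purely for illustration), if f(y(t),t) = −2y(t), y(t₀) = 3, and h = 0.1, then
$$y^*\left( {{t_0} + h} \right) = 3 + \left( { - 2 \cdot 3} \right)\left( {0.1} \right) = 3 - 0.6 = 2.4$$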
To find the approximation at t=t₀+2h we repeat the procedure using our estimate, y*(t₀+h), to estimate the slope, and proceed as before.
$${y^*}\left( {{t_0} + 2h} \right) = {y^*}\left( {{t_0} + h} \right) + f\left( {{y^*}\left( {{t_0} + h} \right),{t_0} + h} \right)h$$In general, we find the next approximation (at t=t₀+(n+1)h) from the previous approximation (at t=t₀+n·h)
$${y^*}\left( {{t_0} + (n + 1)h} \right) = {y^*}\left( {{t_0} + nh} \right) + f\left( {{y^*}\left( {{t_0} + nh} \right),{t_0} + nh} \right)h$$This process is repeated as many times as needed to get our approximate solution. This method is called Euler's method and is covered in detail (with examples) on the next page.
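Since the examples on the next page are in MATLAB, here is a minimal MATLAB sketch of this recursion (a sketch only: the particular f, initial condition, step size, and number of steps are assumed for illustration, not part of the original problem):

    % Euler's method: step the approximation forward N times
    f  = @(y, t) -2*y;     % an assumed example derivative function f(y(t),t)
    t0 = 0;  y0 = 3;       % initial condition y(t0) = y0
    h  = 0.1;              % step size
    N  = 50;               % number of steps
    t = t0 + (0:N)*h;      % time points t0, t0+h, ..., t0+N*h
    y = zeros(1, N+1);     % y(n+1) will hold the approximation y*(t0 + n*h)
    y(1) = y0;
    for n = 1:N
        y(n+1) = y(n) + f(y(n), t(n))*h;   % y*(t+h) = y*(t) + f(y*(t), t)*h
    end
    plot(t, y)             % plot the approximate solution

Each pass through the loop applies the update equation once; making h smaller improves the accuracy at the cost of more steps.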
The example above used only derivatives to estimate our solution, so why do we call the process numerical integration? To answer this question, recall the fundamental theorem of calculus that relates integration and differentiation:
$$ y\left( {{t_0} + h} \right) = y\left( {{t_0}} \right) + \int_{{t_0}}^{{t_0} + h} {y'(\lambda )} d\lambda $$Since y'(t)=f(y(t),t), we get the new value, y(t₀+h), from the old value, y(t₀), by integrating the function f(y(t),t) from t₀ to t₀+h.
$$ y\left( {{t_0} + h} \right) = y\left( {{t_0}} \right) + \int_{{t_0}}^{{t_0} + h} {f\left( {y\left( \lambda \right),\lambda } \right)} d\lambda$$The figure on the left is a repeat of the one above; the figure on the right is a plot of y'(t). The simplest approximation to the integral of y'(t) between t₀ and t₀+h is to add a rectangular block of area f(y(t₀),t₀)·h to y(t₀), as shown in the figure on the right. This is called Euler's approximation. The area of the rectangle on the right is equal to the increment in the function on the left (y'(t₀)·h in both cases).
In other words
$$y\left( {{t_0} + h} \right) \approx y\left( {{t_0}} \right) + f\left( {y\left( {{t_0}} \right),{t_0}} \right)h$$We call this new value our approximation for y(t₀+h), written y*(t₀+h), and change the approximate equality to a strict equality
$$y^*\left( {{t_0} + h} \right) = y\left( {{t_0}} \right) + f\left( {y\left( {{t_0}} \right),{t_0}} \right)h$$If we make the value of h small enough, we get a good approximation to the integral, and we have our approximate solution, obtained by integrating the derivative function. To find the approximation another time step, h, into the future, we simply repeat the procedure, starting from y*(t₀+h).
$${y^*}\left( {{t_0} + 2h} \right) = {y^*}\left( {{t_0} + h} \right) + f\left( {{y^*}\left( {{t_0} + h} \right),{t_0} + h} \right)h$$and so on
$${y^*}\left( {{t_0} + (n + 1)h} \right) = {y^*}\left( {{t_0} + nh} \right) + f\left( {{y^*}\left( {{t_0} + nh} \right),{t_0} + nh} \right)h$$It turns out that this method is not very accurate; using a more sophisticated integration algorithm, such as the trapezoid rule, does better. These are the higher order methods, discussed later.
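As a preview of those methods, here is a sketch of a single step of one of them, Heun's method, which replaces the rectangle with a trapezoid by predicting the right-hand slope with an Euler step (the f, y0, and h values are again assumptions for illustration):

    % One step of Heun's method (trapezoid rule with an Euler predictor)
    f  = @(y, t) -2*y;          % an assumed example derivative function
    t0 = 0;  y0 = 3;  h = 0.1;  % assumed initial condition and step size
    k1 = f(y0, t0);             % slope at the left end of the interval
    yp = y0 + k1*h;             % Euler predictor for y(t0 + h)
    k2 = f(yp, t0 + h);         % estimated slope at the right end
    y1 = y0 + (k1 + k2)/2*h;    % average the two slopes (trapezoid rule)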
Many equations can be solved analytically using a variety of mathematical tools, but often we would like to get a computer generated approximation to the solution. The reasons for this are many. We may have a second order differential equation (with initial conditions at t₀), such as:
$${a_0}{{{d^2}y(t)} \over {d{t^2}}} + {a_1}{{dy(t)} \over {dt}} + {a_2}y(t) = f\left( t \right), \quad \quad \quad {\rm{with\;}} y(t_0)=y_0,\; y'(t_0)=y'_0$$that we wish to solve for y(t). While this is easily solved using the Laplace Transform (or other methods), we may wish to solve it for a wide variety of coefficients a₀, a₁, a₂, or for different initial conditions. Or you may need to solve it in real time for an input, f(t), that is not known in advance. Likewise you may have a non-linear problem like a Van der Pol oscillator
$${{{d^2}y(t)} \over {d{t^2}}} - \left( {1 - {y^2}(t)} \right){{dy(t)} \over {dt}} + y(t) = 0$$for which there is no analytic solution. In these cases, an approximate solution obtained via computer is attractive and, perhaps, necessary.
As noted above, the techniques discussed in these pages approximate the solution of first order ordinary differential equations of the form
$${{dy(t)} \over {dt}} = y'(t) = f(y(t),t)$$This may seem very restrictive, but it is often straightforward to convert a higher order equation, like the ones given previously, to this type of problem. For example, given the second order equation
$${a_0}{{{d^2}y(t)} \over {d{t^2}}} + {a_1}{{dy(t)} \over {dt}} + {a_2}y(t) = f\left( t \right)$$we can declare two new variables (called state variables), q₁(t) and q₂(t), such that
$$\eqalign{
{q_1}(t) &= y(t) \cr
{q_2}(t) &= {{dy(t)} \over {dt}} \cr} $$The original second order equation then becomes a pair of first order equations:
$$\eqalign{
{{d{q_1}(t)} \over {dt}} &= {q_2}(t) \cr
{{d{q_2}(t)} \over {dt}} &= {1 \over {{a_0}}}\left( {f(t) - {a_1}{q_2}(t) - {a_2}{q_1}(t)} \right) \cr} $$Now for each of the state variables we have a problem of the specified type: a single derivative on the left, and functions of time (and, perhaps, the state variables) on the right.
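In code, the pair of state equations becomes a single vector-valued derivative function that the Euler loop shown earlier can step just as it stepped the scalar case; the coefficients and forcing function below are assumptions for illustration, and the forcing f(t) is renamed u to avoid a clash with the name of the derivative function:

    % State equations with q1 = y and q2 = dy/dt:
    %   dq1/dt = q2
    %   dq2/dt = (f(t) - a1*q2 - a2*q1)/a0
    a0 = 1;  a1 = 3;  a2 = 2;   % assumed example coefficients
    u  = @(t) cos(t);           % assumed example forcing function f(t)
    dq = @(q, t) [q(2); (u(t) - a1*q(2) - a2*q(1))/a0];  % [dq1/dt; dq2/dt]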
We can also use the same technique to rewrite the Van der Pol oscillator equation as a pair of coupled first order equations
$$\eqalign{
{{d{q_1}(t)} \over {dt}} &= {q_2}(t) \cr
{{d{q_2}(t)} \over {dt}} &= \left( {1 - q_1^2(t)} \right){q_2}(t) - {q_1}(t) \cr} $$A code sketch of these coupled equations is given below; to learn more, and to see some numerical examples (with MATLAB solutions), go to the next page.
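As a rough sketch (not the next page's examples), the coupled equations can be stepped with the same vector Euler loop; the initial condition, step size, and number of steps are arbitrary choices:

    % Van der Pol oscillator as coupled first order equations, q1 = y, q2 = dy/dt
    dq = @(q, t) [q(2); (1 - q(1)^2)*q(2) - q(1)];  % [dq1/dt; dq2/dt]
    h = 0.01;  N = 2000;          % assumed step size and number of steps
    q = zeros(2, N+1);
    q(:,1) = [1; 0];              % an arbitrary initial condition
    for n = 1:N
        q(:,n+1) = q(:,n) + dq(q(:,n), (n-1)*h)*h;  % vector Euler step
    end
    plot((0:N)*h, q(1,:))         % plot the approximation to y(t) = q1(t)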
© Copyright 2005 to 2019 Erik Cheever. This page may be freely used for educational purposes.
Erik Cheever, Department of Engineering, Swarthmore College