Consider the situation in which the solution, y(t), to a differential equation
$${{dy(t)} \over {dt}} = y'\left( t \right) = f(y(t),t)$$is to be approximated by computer starting from some known initial condition, y(t0)=y0 (note that the tick mark denotes differentiation). The following text develops an intuitive technique for doing so, and then presents several examples. This technique is known as "Euler's Method" or "First Order Runge-Kutta".
Consider the following case: we wish to use a computer to approximate the solution of the differential equation
$${{dy(t)} \over {dt}} + 2y(t) = 0$$or
$${{dy(t)} \over {dt}} = - 2y(t)$$with the initial condition set as y(0)=3. For this case the exact solution can be determined to be $y(t) = 3{e^{ - 2t}},\ t \ge 0$, and is shown below. Since we know the exact solution in this case, we will be able to use it to check the accuracy of our approximate solution.
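As a quick check (not shown explicitly on the original page), we can verify that this expression satisfies both the differential equation and the initial condition:
$$y'(t) = {d \over {dt}}\left( {3{e^{ - 2t}}} \right) = - 6{e^{ - 2t}} = - 2\left( {3{e^{ - 2t}}} \right) = - 2y(t),\qquad y(0) = 3{e^0} = 3$$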
There are several ways to develop an approximate solution; we will do so using the Taylor series for y(t) expanded about t=0 (in general, we expand around t=t0).
$$y(t) = y(0) + y'(0)t + y''(0){{{t^2}} \over 2} + \cdots $$We now restrict our solution to a short time step, h, after t=0 and truncate the Taylor series after the first derivative
$$y(h) \approx y(0) + y'(0)h$$We call the value of the approximation y*(h), and we call the derivative y'(0)=k_1.
$$y^*(h) = y(0) + {k_1}h$$This is shown on the graph below for h=0.2
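As a hand calculation (not part of the original figure), the very first step with y(0)=3 and h=0.2 gives
$${k_1} = y'(0) = - 2y(0) = - 6,\qquad {y^*}(0.2) = 3 + ( - 6)(0.2) = 1.8$$
while the exact value is $y(0.2) = 3{e^{ - 0.4}} \approx 2.01$, so even a single step shows the truncation error of the method.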
There are several important details to observe:
To find the value of the approximation after the next time step, y*(2h), we simply repeat the process using our approximation, y*(h), to estimate the derivative at time h (we don't know y(h) exactly, so we can only estimate the derivative; we call this estimate k_1).
$$\eqalign{ {k_1} &= - 2{y^*}(h) \approx y'(h) \cr {y^*}(2h) &= {y^*}(h) + {k_1}h \cr} $$In general we move forward one step in time from t0 to t0+h
$$\eqalign{ {k_1} &= f\left( {{y^*}({t_0}),{t_0}} \right) \cr {y^*}({t_0} + h) &= {y^*}({t_0}) + {k_1}h \cr} $$We can use MATLAB to perform the calculation described above. A simple loop accomplishes this:
%% Example 1
% Solve y'(t)=-2y(t) with y0=3
y0 = 3;    % Initial Condition
h = 0.2;   % Time step
t = 0:h:2; % t goes from 0 to 2 seconds.
yexact = 3*exp(-2*t);  % Exact solution (in general we won't know this)
ystar = zeros(size(t)); % Preallocate array (good coding practice)
ystar(1) = y0; % Initial condition gives solution at t=0.
for i=1:(length(t)-1)
k1 = -2*ystar(i); % Previous approx for y gives approx for derivative
ystar(i+1) = ystar(i) + k1*h; % Approximate solution for next value of y
end
plot(t,yexact,t,ystar);
legend('Exact','Approximate');
The MATLAB commands match up easily with the steps of the algorithm. A slight variation of the code was used to show the effect of the size of h on the accuracy of the solution (see image below). Note that larger values of h result in poorer approximations, including spurious oscillations with h=0.8.
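One possible form of that variation (a sketch, not the author's actual script) simply wraps the Euler loop of Example 1 in an outer loop over several step sizes:
% Sketch: repeat the Euler loop of Example 1 for several step sizes
% and overlay the results against the exact solution.
figure; hold on
for h = [0.8 0.4 0.2 0.1]
    t = 0:h:2;
    ystar = zeros(size(t));            % Preallocate array
    ystar(1) = 3;                      % Initial condition
    for i = 1:(length(t)-1)
        k1 = -2*ystar(i);              % Approximate derivative
        ystar(i+1) = ystar(i) + k1*h;  % Euler step
    end
    plot(t, ystar);
end
tfine = 0:0.01:2;
plot(tfine, 3*exp(-2*tfine), 'k');     % Exact solution for reference
legend('h=0.8','h=0.4','h=0.2','h=0.1','Exact');
hold off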
For a first order ordinary differential equation defined by
$${{dy(t)} \over {dt}} = f(y(t),t)$$to progress from a point at t=t0, y*(t0), by one time step, h, follow these steps (repeating them for each successive step).
$$\eqalign{ {k_1} &= f\left( {{y^*}({t_0}),{t_0}} \right) \cr {y^*}({t_0} + h) &= {y^*}({t_0}) + {k_1}h \cr} $$Notes:
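To make these steps concrete, here is a minimal sketch of a general purpose routine (the function name euler1 and its interface are not part of the original page; the right-hand side f(y,t) is passed in as a function handle):
function ystar = euler1(f, t, y0)
% First order Runge-Kutta (Euler's method) for y'(t) = f(y(t),t).  Save as euler1.m.
%   f  - function handle f(y,t) returning the derivative
%   t  - vector of equally spaced time points, t(1) = t0
%   y0 - initial condition, y(t0)
ystar = zeros(size(t));            % Preallocate array
ystar(1) = y0;                     % Initial condition
for i = 1:(length(t)-1)
    h  = t(i+1) - t(i);            % Time step
    k1 = f(ystar(i), t(i));        % Estimate of the derivative
    ystar(i+1) = ystar(i) + k1*h;  % Step forward in time
end
end
With this routine, Example 1 reduces to a single call: ystar = euler1(@(y,t) -2*y, 0:0.2:2, 3);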
Adding an input poses no particular difficulty for our approximate solution (but it makes the exact solution significantly harder to find). Consider an input of cos(4t).
$$y'(t) + 2y(t) = \cos (4t)\quad {\rm{or}}\quad y'(t) = - 2y(t) + \cos (4t)$$so
$${k_1} = - 2{y^*}({t_0}) + \cos (4{t_0})$$Note: the exact solution in this case is $y(t) = 2.9{e^{ - 2{\kern 1pt} t}} + 0.1\cos (4t) + 0.2\sin (4t)$.
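As a hand check (not part of the original page), the first step with y(0)=3 and h=0.2 gives
$${k_1} = - 2(3) + \cos (0) = - 5,\qquad {y^*}(0.2) = 3 + ( - 5)(0.2) = 2$$
while the exact solution above gives y(0.2) ≈ 2.16.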
We can use MATLAB to perform the calculation described above. To perform this new approximation all that is necessary is to change the calculation of k1 (the value of the exact solution is also changed, for plotting).
%% Example 2
% Solve y'(t)=-2y(t)+cos(4t) with y0=3
y0 = 3;    % Initial Condition
h = 0.2;   % Time step
t = 0:h:2; % t goes from 0 to 2 seconds.
% Exact solution, hard to find in this case (in general we won't have it)
yexact = 0.1*cos(4*t)+0.2*sin(4*t)+2.9*exp(-2*t);
ystar = zeros(size(t));  % Preallocate array (good coding practice)
ystar(1) = y0;           % Initial condition gives solution at t=0.
for i=1:(length(t)-1)
    k1 = -2*ystar(i)+cos(4*t(i));     % Previous approx for y gives approx for deriv
    ystar(i+1) = ystar(i) + k1*h;     % Approximate solution for next value of y
end
plot(t,yexact,t,ystar);
legend('Exact','Approximate');
A modified version of the program (that uses several values of h) generated the plot below. As before, the solution is better with smaller values of h.
Non-linear differential equations can be very difficult to solve analytically, but they pose no particular problems for our approximate method. Consider the differential equation given by
$${{dy(t)} \over {dt}} - y(t)(1 - 2t) = 0,\quad \quad \quad \quad y(0) = 1$$the solution is (described here)
$$y(t) = {e^{t - {t^2}}}$$To obtain the approximate solution, we write an expression for y'(t) and use it to find k1.
$$\eqalign{ y'(t) &= y(t)(1 - 2t) \cr {k_1} &= {y^*}({t_0})\left( {1 - 2{t_0}} \right) \cr} $$As before, to perform this new approximation all that is necessary is to change the calculation of k1 and the initial condition (the value of the exact solution is also changed, for plotting).
%% Example 3
% Solve y'(t)=y(t)(1-2t) with y0=1
y0 = 1;    % Initial Condition
h = 0.2;   % Time step
t = 0:h:2; % t goes from 0 to 2 seconds.
% Exact solution, hard to find in this case (in general we won't have it)
yexact = exp(t-t.^2);
ystar = zeros(size(t));  % Preallocate array (good coding practice)
ystar(1) = y0;           % Initial condition gives solution at t=0.
for i=1:(length(t)-1)
    k1 = ystar(i)*(1-2*t(i));      % Approx for y gives approx for deriv
    ystar(i+1) = ystar(i) + k1*h;  % Approximate solution at next value of y
end
plot(t,yexact,t,ystar);
legend('Exact','Approximate');
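As a quick sanity check on the listing above (a hand calculation, not part of the original page), the first pass through the loop gives
$${k_1} = {y^*}(0)\left( {1 - 2 \cdot 0} \right) = 1,\qquad {y^*}(0.2) = 1 + (1)(0.2) = 1.2$$
while the exact value is $y(0.2) = {e^{0.16}} \approx 1.17$.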
A modified version of the program (that uses several values of h) generated the plot below. As before, the solution is better with smaller values of h.
Though the technique introduced here applies only to first order differential equations, it can be used on higher order differential equations if we reframe the problem as a first order matrix differential equation. Consider the 3rd order equation (with initial conditions)
$$\displaylines{ y'''(t) + 4y''(t) + 6y'(t) + 4y(t) = \gamma (t) \cr y''(0) = 0,\quad y'(0) = - 1,\quad y(0) = 0 \cr} $$If we introduce new variables, q1, q2, and q3, we can write three first order differential equations
$$\eqalign{ {q_1}(t) &= y(t) \cr {q_2}(t) &= y'(t) \cr {q_3}(t) &= y''(t) \cr \cr {q_1}'(t) &= {q_2}(t) \cr {q_2}'(t) &= {q_3}(t) \cr {q_3}'(t) &= - 4{q_1}(t) - 6{q_2}(t) - 4{q_3}(t) + \gamma (t) \cr} $$We can further simplify by writing this as a single first order matrix differential equation.
$$\eqalign{ {\bf{q}}'(t) &= \left[ {\matrix{ {{q_1}'(t)} \cr {{q_2}'(t)} \cr {{q_3}'(t)} \cr } } \right] = \left[ {\matrix{ 0 & 1 & 0 \cr 0 & 0 & 1 \cr { - 4} & { - 6} & { - 4} \cr } } \right]\left[ {\matrix{ {{q_1}(t)} \cr {{q_2}(t)} \cr {{q_3}(t)} \cr } } \right] + \left[ {\matrix{ 0 \cr 0 \cr 1 \cr } } \right]\gamma (t) \cr &= {\bf{A}}{\bf{q}}(t) + {\bf{B}}\gamma (t) \cr} $$A more in-depth discussion of the procedure for going from an nth order differential equation to a first order n×n matrix differential equation is here.
We now proceed as before, except that the variable being integrated and the quantity k1 are both vectors.
$$\eqalign{ {{\bf{k}}_1} &= {\bf{A}}{{\bf{q}}^*}({t_0}) + {\bf{B}}\gamma ({t_0}) \cr {{\bf{q}}^*}({t_0} + h) &= {{\bf{q}}^*}({t_0}) + {{\bf{k}}_1}h \cr} $$Note: the exact solution to this problem is $y(t) = {1 \over 4}+{{\rm{e}}^{ - t}}{\mkern 1mu} \left( {\cos (t) - {5 \over 2}\sin (t)} \right) - {5 \over 4}{{\rm{e}}^{ - 2{\kern 1pt} t}}$, for t≥0.
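As a hand check (not part of the original page), the first step with h=0.2 and q*(0)=[0; -1; 0] gives
$${{\bf{k}}_1} = {\bf{A}}\left[ {\matrix{ 0 \cr { - 1} \cr 0 \cr } } \right] + {\bf{B}} = \left[ {\matrix{ { - 1} \cr 0 \cr 7 \cr } } \right],\qquad {{\bf{q}}^*}(0.2) = \left[ {\matrix{ 0 \cr { - 1} \cr 0 \cr } } \right] + 0.2\left[ {\matrix{ { - 1} \cr 0 \cr 7 \cr } } \right] = \left[ {\matrix{ { - 0.2} \cr { - 1} \cr {1.4} \cr } } \right]$$
so y*(0.2) = -0.2, compared with the exact value y(0.2) ≈ -0.19.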
To perform this new approximation all that is necessary is to change the appropriate variables from scalars to vectors or matrices, and to define the A and B matrices. Note that in this case we multiply the B matrix by 1 since the input is a unit step (γ(t)=1 for t≥0). If the input were a sine wave, the sine would multiply B; if there is no input, it is not necessary to include the B matrix at all (it is multiplied by 0).
%% Example 4
% Solve y'''(t)+4y''(t)+6y'(t)+4y(t)=gamma(t), y''(0)=0, y'(0)=-1, y(0)=0
q0 = [0; -1; 0];  % Initial Condition (vector)
h = 0.2;   % Time step
t = 0:h:5; % t goes from 0 to 5 seconds.
A = [0 1 0; 0 0 1; -4 -6 -4];  % A Matrix
B = [0; 0; 1];                 % B Matrix
yexact = 1/4 + exp(-t).*(cos(t) - 5*sin(t)/2) - 5*exp(-2*t)/4;  % Exact soln
qstar = zeros(3,length(t));  % Preallocate array (good coding practice)
qstar(:,1) = q0;             % Initial condition gives solution at t=0.
for i=1:(length(t)-1)
    k1 = A*qstar(:,i)+B*1;             % Approx for q gives approx for deriv
    qstar(:,i+1) = qstar(:,i) + k1*h;  % Approximate solution at next value of q
end
plot(t,yexact,t,qstar(1,:));  % ystar = first row of qstar
legend('Exact','Approximate');
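As an independent check (not part of the original page), MATLAB's built-in ode45 solver can integrate the same state-space model; its result should lie essentially on top of the exact curve and can be overlaid on the plot above:
odefun = @(t,q) A*q + B*1;              % Right-hand side q'(t) = A*q(t) + B*gamma(t)
[t45, q45] = ode45(odefun, [0 5], q0);  % Adaptive-step integration from t=0 to t=5
hold on
plot(t45, q45(:,1), '--');              % First state of q is y(t)
legend('Exact','Approximate','ode45');
hold off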
A modified version of the program (that uses several values of h) generated the plot below. As expected, the solution is better with smaller values of h.
The math behind this method, the first order Runge-Kutta (or Euler's method), is fairly simple to understand and was developed informally above. If we write the differential equation as
$${{dy(t)} \over {dt}} = y'\left( t \right) = f(y(t),t)$$and write the approximation to the derivative as
$${k_1} = f({y^*}(t),t) \approx y'\left( t \right)$$We expand y(t) around t0 assuming a time step h
$$y({t_0} + h) = y({t_0}) + y'({t_0})h + y''({t_0}){{{h^2}} \over 2} + \cdots $$and drop all terms after the linear term. Because all of the dropped terms are multiplied by h² or higher powers of h, we say that the algorithm is accurate to order h² locally, or O(h²) (if h is small, the terms multiplied by h³, h⁴, ... are smaller still and can safely be dropped).
$$y({t_0} + h) \approx y({t_0}) + y'({t_0})h$$This gives us our approximate solution at the next time step.
$$y^*({t_0} + h) = y^*({t_0}) + {k_1}h$$
Since the number of steps over the whole interval is proportional to 1/h (or O(h⁻¹)), we might expect the overall accuracy to be O(h²)·O(h⁻¹) = O(h). A rigorous analysis proves that this is true; such an analysis can be found in references on numerical methods, for example Applied Numerical Methods by Carnahan, Luther, and Wilkes. Because the error is O(h), where h is raised to the power 1, this is called a first order algorithm.
The global error of the First Order Runge-Kutta (i.e., Euler's) algorithm is O(h).
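A simple way to see this numerically (a sketch, not part of the original page) is to rerun Example 1 with a sequence of halved step sizes and watch the maximum error halve as well:
% Sketch: empirical check of the O(h) global error for Example 1.
hvals = [0.2 0.1 0.05 0.025];       % Successively halved step sizes
err = zeros(size(hvals));
for j = 1:length(hvals)
    h = hvals(j);
    t = 0:h:2;
    ystar = zeros(size(t));
    ystar(1) = 3;                   % Initial condition
    for i = 1:(length(t)-1)
        ystar(i+1) = ystar(i) + (-2*ystar(i))*h;  % Euler step for y'(t) = -2y(t)
    end
    err(j) = max(abs(3*exp(-2*t) - ystar));       % Maximum error against exact solution
end
err(1:end-1)./err(2:end)            % Ratios approach 2, consistent with O(h) error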
With a little more work we can develop an algorithm that is accurate to higher order than O(h). The next page describes just such a method.
© Copyright 2005 to 2019 Erik Cheever. This page may be freely used for educational purposes.
Erik Cheever Department of Engineering Swarthmore College