# 19.2 First Order Differential Equations

We will first deal with a first order differential equation, by which we mean, specifically, an equation of the form $$y'(x) = f(x, y)$$ for some function $$f$$. Suppose, further, that we know the solution at some point $$z$$.

This tells us that in the interval in $$x$$ starting at $$z$$ and ending at $$z + d$$, for very small $$d$$, we have, approximately,

$y(z + d) - y(z) \approx f(z, y(z))\, d$

We can use this "linear approximation" to compute $$y(z + d)$$, and then continue to compute $$y(z + 2d)$$ from it, and so on.

This approach is like the left hand rule for doing integrals; the only difference is that $$y$$ itself appears in $$f$$.

To implement this, let $$x$$ increase from its starting value by $$d$$ from row to row, and let $$y$$ increase by $$f(x,y)\,d$$ from row to row.
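Before turning to the spreadsheet, this stepping scheme (often called Euler's method) can be sketched in Python; the function name `euler` is just for illustration:

```python
def euler(f, x0, y0, d, n):
    """Advance y' = f(x, y) by n steps of size d using the
    left hand rule update: y <- y + f(x, y)*d, then x <- x + d."""
    x, y = x0, y0
    for _ in range(n):
        y += f(x, y) * d
        x += d
    return y

# Illustration with f(x, y) = x*y, starting at x = 1, y = 1:
# 100 steps of size 0.01 carry x from 1 to 2.
approx = euler(lambda x, y: x * y, 1.0, 1.0, 0.01, 100)
```

With $$d = 0.01$$ this lands within a few percent of the exact value $$y(2) = e^{3/2} \approx 4.4817$$; shrinking $$d$$ shrinks the error roughly in proportion.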

Exercise 19.1 Set this up for $$f(x, y)$$ given by $$xy$$ and plot $$y$$ vs $$x$$. (This represents the differential equation $$y' = xy$$, which has solution $$\ln y = \frac{x^2}{2} + c$$. Start at $$x=1, y=1$$ and find $$y(2)$$ numerically and exactly from that solution, and compare.)

It is a little more difficult to produce the analogue of the right hand rule, or of the trapezoid or Simpson's rule, since those require evaluating $$f$$, and hence $$y$$, beyond $$z$$; we start only with $$y(z)$$, and $$y(z+d)$$ is precisely what we want to discover. If we put $$y(z+d)$$ into our formula for computing $$y(z+d)$$, the computer will accuse us, rightly, of using a circular reference.

There are ways to get around this, and a whole sequence of formulae is known for evaluating $$y(z+d) - y(z)$$ from our equation to any order in $$d$$. These are called Runge-Kutta rules, and they are very effective. You can see how they do in the accompanying applet.

We will describe only the simplest correction, namely approximating $$y(z+d)$$ according to

$y(z+d) = y(z) + \frac{d}{2}\Big(f(z,y(z)) + f\big(z+d,\; y(z)+f(z,y(z))\,d\big)\Big)$

This means we are using $$f(z,y(z))$$ as the derivative of $$y$$ throughout the interval between $$z$$ and $$z+d$$ when approximating the value of $$y(z+d)$$ in the last term above.

This is still pretty easy to do, and is more or less like the trapezoid rule, differing only in that we are estimating the derivative $$f$$ at argument $$z+d$$ rather than knowing it.
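As a sketch, one step of this corrected rule (often called Heun's method, or the improved Euler method) might look like this in Python; the names are illustrative:

```python
def heun_step(f, x, y, d):
    """One step of the corrected rule: average the slope at the left
    end with the slope at the right end, where the right-end y is
    itself estimated by a plain Euler step."""
    k1 = f(x, y)                 # slope at (x, y)
    k2 = f(x + d, y + k1 * d)    # slope at the Euler-predicted endpoint
    return y + d / 2 * (k1 + k2)

def heun(f, x0, y0, d, n):
    """Advance y' = f(x, y) by n corrected steps of size d."""
    x, y = x0, y0
    for _ in range(n):
        y = heun_step(f, x, y, d)
        x += d
    return y

# Sanity check on y' = y, y(0) = 1, whose exact value at x = 1 is e.
e_approx = heun(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

For the same step size, the error of this rule shrinks like $$d^2$$ rather than like $$d$$, which is why the correction is worth the second evaluation of $$f$$ per step.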

What do you do if you don't know $$y(z)$$ at all?

You can get a feel for what the solutions look like by making a plot in two dimensional space, one dimension being $$z$$ and the other $$y(z)$$. If you choose a grid of points in this plot, then at each point you know the derivative $$f(z,y(z))$$, and you can draw a small arrow there pointing in the direction given by $$\frac{dy}{dx} = f(z,y(z))$$. You can then connect the arrows (like connecting dots), forming paths, and these paths each represent a solution to the differential equation.
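A minimal sketch of tabulating such a grid of arrow directions in Python (a plotting routine, such as a quiver plot, would then draw the arrows; the names here are illustrative):

```python
def direction_field(f, xs, ys):
    """Tabulate the slope f(x, y) on a grid of points; each entry is
    the direction an arrow would point at that grid point."""
    return [[f(x, y) for x in xs] for y in ys]

# Slopes for y' = x*y on a small grid.  Row index runs over y values,
# column index over x values.
xs = [0.0, 0.5, 1.0]
ys = [-1.0, 0.0, 1.0]
slopes = direction_field(lambda x, y: x * y, xs, ys)
```

Along the line $$y = 0$$ every arrow is horizontal (slope $$0$$), which is why no path starting off that line can ever reach it: $$y=0$$ is itself a solution.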

These paths cannot cross.

Exercise 19.2: Figure out why paths cannot cross.

But they can have some interesting features. Fixed points are one such feature, and are what we saw in Chapter 18. A fixed point is one at which the equation implies you stay put. A stable fixed point is one such that if you start near it you rotate or spiral into it. There are also things called attractors, which are curves (in the past or in the future, when the independent variable is time) that many paths crowd into. A stable fixed point is a kind of attractor.

Are you telling me I can implement the integration you described on a spreadsheet?

Yes. Put the label First order ODE in A1, xstart in A2, ystart in A3, and d in A4. Put your data, which consists of the starting values of $$x$$ and $$y$$ and your choice for $$d$$, in B2, B3, and B4.

Then start columns at A6, B6, and C6, which will contain $$x$$, $$y$$ computed by the trapezoid-style rule, and $$y$$ computed by the left hand rule, respectively. In A6, put x; in B6, put y (trapezoid rule); in C6, put y (left hand rule).

So, put =B2 into A7, =B3 into B7, and =A7+$B$4 into A8, and copy the last of these down column A.

In B8 put =B7+$B$4/2*(f(A7,B7)+f(A7+$B$4,B7+f(A7,B7)*$B$4)) and copy that down column B, where f(x,y) stands for the formula for your particular $$f$$ written in terms of the cells holding $$x$$ and $$y$$ (for $$f(x,y)=xy$$, f(A7,B7) would be A7*B7). That is it.

You can compare the result with the left hand rule computation by setting up column C: start by putting =B7 into C7, then put =C7+$B$4*f(A7,C7) into C8 and copy it down. Then you can make an $$x,y$$ scatter chart of all three columns and see what happens. The difference between the two computations gives you an impression of how bad the simpler one is.

You can see that it takes a bit more work to change the function $$f$$, but it is quite easy to change initial conditions. Here is the result for $$y' = xy$$ with $$d = 0.01$$ and a starting point at $$x = 1$$, $$y = 1$$.

*(Interactive applet: controls for the number of steps and the number of digits displayed after the decimal point.)*
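The same comparison can be sketched in Python (the analogue of columns B and C above), assuming $$f(x,y) = xy$$, $$d = 0.01$$, and the starting point $$x = 1$$, $$y = 1$$; the exact solution gives $$y(2) = e^{3/2} \approx 4.4817$$:

```python
def f(x, y):
    return x * y

d, n = 0.01, 100          # 100 steps of 0.01 carry x from 1 to 2
x = 1.0
y_trap = y_left = 1.0     # both columns start from the same y
for _ in range(n):
    k1 = f(x, y_trap)
    k2 = f(x + d, y_trap + k1 * d)
    y_trap += d / 2 * (k1 + k2)   # trapezoid-style rule (column B)
    y_left += f(x, y_left) * d    # left hand rule (column C)
    x += d

exact = 2.718281828459045 ** 1.5  # y(2) = e^{3/2}, from ln y = x^2/2 - 1/2
```

Run this and you will find the left hand rule off by a few hundredths, while the corrected rule agrees with the exact value to several more decimal places, which is the gap the scatter chart makes visible.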

Exercise 19.3 Set this up for $$f(x, y) = x^2y$$, and for $$f(x,y) = x\sin(xy)$$, starting at $$x=0, y=1$$.

Does this always work?

No. For lots of interesting equations it works fine. However, sometimes the variable $$y$$ can go to infinity, and then the calculation becomes quite inaccurate.

This can happen because we are allowing any equation for $$y'$$, and hence any equation for $$\left(\frac{1}{y}\right)'$$, which means $$\frac{1}{y}$$ can sometimes be $$0$$. If $$\frac{1}{y}$$ should happen to go through $$0$$, then $$y$$ will go to infinity without any particular warning.

Most of the time you can avoid this difficulty by solving the differential equation for $$\frac{1}{y}$$ alongside the one for $$y$$. When $$y$$ goes to infinity, $$\frac{1}{y}$$ is quite tame and is near $$0$$. (Remember that $$\left(\frac{1}{y}\right)'$$ is $$\frac{-y'}{y^2}$$, so that if we let $$u = \frac{1}{y}$$, then $$u$$ obeys $$u' = -f\left(x,\frac{1}{u}\right)u^2$$.) If you do this, you can take as the next $$y$$ value the one computed from whichever of $$y$$ and $$\frac{1}{y}$$ is smaller in magnitude.
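A sketch of this switch-over, assuming the blow-up example $$y' = y^2$$ with $$y(0) = 1$$ (exact solution $$y = \frac{1}{1-x}$$, which blows up at $$x = 1$$). Here $$u = \frac{1}{y}$$ obeys $$u' = -1$$, so stepping $$u$$ stays perfectly tame near the blow-up. The text's rule of using whichever of $$y$$ and $$\frac{1}{y}$$ is smaller is simplified here to a threshold at $$|y| = 1$$:

```python
def step_with_reciprocal(f, x, y, d):
    """One Euler step for y' = f(x, y) that switches to the reciprocal
    variable u = 1/y when |y| > 1, since u obeys u' = -f(x, 1/u)*u**2
    and stays tame where y blows up."""
    if abs(y) <= 1.0:
        return y + f(x, y) * d
    u = 1.0 / y
    u += -f(x, 1.0 / u) * u * u * d
    return 1.0 / u        # may be huge right at the pole itself

# Illustration: y' = y**2, y(0) = 1, integrated up to x = 0.9,
# where the exact value is 1/(1 - 0.9) = 10.
x, y, d = 0.0, 1.0, 0.01
for _ in range(90):
    y = step_with_reciprocal(lambda x, y: y * y, x, y, d)
    x += d
```

Stepping $$y$$ directly here gets worse and worse as the pole approaches, while the reciprocal steps are exact for this particular $$f$$, so the result lands very close to $$10$$.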

Anyway, integrating differential equations this way is sufficiently easy that it is worth a try.