
We are interested in the derivative of the integral
with respect to the upper limit, t.
We can compute this derivative, roughly, by evaluating (g(t + d) - g(t))/d for very small d.
But g(t + d) - g(t) is just the area under the sine curve between x = t and x = t + d.
The region between x = t and x = t + d is just a sliver, in which sin(x) is very near sin(t). So the area in this sliver between y = sin(t) and y = 0 is just d*sin(t), where d is the width of the sliver and sin(t) its height.
This tells us that the derivative of g(t), the derivative of the integral of the sine function at argument t, is this area divided by d, which is sin(t).
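This claim is easy to check numerically. The sketch below (my own, not part of the text) approximates g(t), the integral of sin(x) from 0 to t, by a midpoint Riemann sum, then forms the difference quotient (g(t + d) - g(t))/d for a small d and compares it with sin(t):

```python
import math

def g(t, n=10000):
    """Approximate the integral of sin(x) from 0 to t by a midpoint Riemann sum."""
    d = t / n
    return sum(math.sin((i + 0.5) * d) for i in range(n)) * d

t, d = 1.0, 1e-4
derivative_estimate = (g(t + d) - g(t)) / d
print(derivative_estimate, math.sin(t))  # the two values agree closely
```

Shrinking d (while keeping the quadrature fine enough) makes the agreement better and better, just as the sliver argument predicts.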
Exactly the same result holds for any function whose values, for arguments sufficiently close to t, are as close as you like to its value at t, for all t between the limits of integration. (Such functions are called continuous.)
This result is called the fundamental theorem of calculus. It says: if you differentiate the integral of a function f that is continuous at argument t in the closed interval between the endpoints of integration (this is the condition that its values are as close as you like to f(t) at arguments sufficiently near t), you get back the value of the integrand, f, at argument t.
Another way to say this is: the integral with upper limit as variable, as we have just defined it, is an antiderivative of its integrand, when that integrand is continuous.
This means that integrating a function and then differentiating the result with respect to upper limit, gives back the function.
We can also make the same statement about applying these operations in the opposite order.
Suppose we start with a differentiable function, f, and form its derivative, f '(x), and integrate this derivative between somewhere, say a, and t.
In other words suppose we form
The fundamental theorem then tells us: g(t) = f(t) – f(a).
To see this, recall that if f is differentiable at argument x then for d sufficiently small, we have, to any desired accuracy: f(x + d) - f(x) = f '(x) d.
If we chop the interval between a and t up into slices of widths given by d's appropriate to each x value, we can sum up the contributions from either side of the equation f '(x) d = f(x + d) - f(x) over all the slices. The sum of the left-hand terms will give us the sum of the areas of the little slices, and the sum of the right-hand terms will "telescope": the f(x) term from one slice is the f(x + d) term from the previous slice with the opposite sign; the two cancel each other out, and we get contributions only from the first and last slices.
This is the standard form for the fundamental theorem.
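The telescoping argument can be watched in action numerically. In this sketch (my own; the particular f is an arbitrary choice), summing f '(x) d over many thin slices of [a, t] comes out very close to f(t) - f(a):

```python
def f(x):
    return x**3 - 2*x          # any differentiable function will do

def fprime(x):
    return 3*x**2 - 2          # its derivative

a, t, n = 1.0, 2.0, 100000
d = (t - a) / n
# Sum f'(x)*d over all the slices; by the telescoping argument
# this should approach f(t) - f(a) as d shrinks.
riemann_sum = sum(fprime(a + i * d) * d for i in range(n))
print(riemann_sum, f(t) - f(a))
```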
And what good is this "fundamental theorem"?
The uses of this theorem, and of its analogues in higher dimensions, have been so significant in history that their importance cannot be exaggerated. We will not pursue those uses here. For our purposes, the main use of this theorem is in allowing us to evaluate integrals, that is, areas under curves, for vast numbers of integrands.
What integrands?
For starters, we can integrate any integrand that we can recognize as a derivative.
For example, the sine is the derivative of minus the cosine. Applying the last equation above to this fact, we get: the integral of sin(x) from a to t is (-cos(t)) - (-cos(a)), or cos(a) - cos(t).
The original area we used as an example was the integral of the sine from 0 to 1. This is cos(0) - cos(1), or 1 - cos(1).
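As a sanity check (my own sketch, not from the text), the area can be recomputed directly as a Riemann sum and compared with the value 1 - cos(1) that the fundamental theorem gives:

```python
import math

# Midpoint Riemann sum for sin(x) on [0, 1]; compare with 1 - cos(1).
n = 100000
d = 1.0 / n
area = sum(math.sin((i + 0.5) * d) for i in range(n)) * d
print(area, 1 - math.cos(1))
```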
What else can we recognize?
1. Any power of x such as x^{a} (for a not equal to -1 it is the derivative of x^{a+1}/(a+1); for a = -1 it is the derivative of ln x), and therefore any polynomial or sum of powers.
2. The exponential function, exp(x), and therefore exp(kx) for any constant k.
3. The derivatives of the arctangent, of the tangent, of the arcsine, and lots more.
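Each recognition in the list can be verified by differentiating the proposed antiderivative numerically and comparing with the integrand. A sketch (the antiderivative-integrand pairings are standard; the code itself is mine):

```python
import math

# (antiderivative, integrand) pairs: differentiating the first should give the second.
pairs = [
    (lambda x: x**4 / 4,          lambda x: x**3),            # power rule, a != -1
    (lambda x: math.exp(3*x) / 3, lambda x: math.exp(3*x)),   # exp(kx) with k = 3
    (lambda x: math.atan(x),      lambda x: 1 / (1 + x**2)),  # arctangent
]

x, d = 0.7, 1e-6
results = []
for F, f in pairs:
    numerical = (F(x + d) - F(x - d)) / (2 * d)  # central-difference derivative
    results.append(abs(numerical - f(x)))
print(results)  # all very small
```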
Exercises:
Evaluate the following definite integrals:
18.1 Integrand sin(x)cos(x) from 0 to 2.
18.2 Integrand x^{2} + 3x - 7 from 1 to 4.
18.3 Integrand (1 + x^{2})^{-1} from 0 to infinity.
18.4 Integrand (2 + x)^{-1} from 0 to 1.
18.5 Write down some horrible function. Differentiate it. Now ask some friend (former friend?) to integrate your result. You will know the answer!
18.6 Remember the separate occurrence rule for this one. Differentiate (with respect to t):
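For the definite-integral exercises above, answers can be checked numerically with a small midpoint-rule helper (my own sketch, not part of the text; shown here on the integrand of exercise 18.4):

```python
def integrate(f, a, b, n=100000):
    """Midpoint Riemann-sum approximation of the integral of f from a to b."""
    d = (b - a) / n
    return sum(f(a + (i + 0.5) * d) for i in range(n)) * d

# e.g. exercise 18.4: integrand (2 + x)^(-1) from 0 to 1
print(integrate(lambda x: 1 / (2 + x), 0, 1))
```

Compare the printed value with your antiderivative-based answer; they should agree to many decimal places. (The improper integral in 18.3 needs a large finite upper limit rather than infinity.)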
