12.1 The Anti-derivative

The antiderivative is the name we sometimes (though rarely) give to the operation that goes backward from the derivative of a function to the function itself. Since the derivative does not determine the function completely (you can add any constant to your function and the derivative will be the same), you have to supply additional information to get back to an explicit function as the antiderivative.

Thus we sometimes say that the antiderivative of a function is a function plus an arbitrary constant; the antiderivative of \(\cos x\), for example, is \((\sin x) + c\).

The more common name for the antiderivative is the indefinite integral. This is the identical notion, merely a different name for it.

A wavy line, \(\int\), is used as the symbol for it. Thus the sentence "the antiderivative of \(\cos x\) is \((\sin x) + c\)" is usually stated as: the indefinite integral of \(\cos x\) is \((\sin x) + c\), and this is generally written as

\[\int \cos x \; dx = (\sin x) + c\]

Actually this is bad notation. The \(x\) that occurs on the right is a variable; it represents the argument of the sine function. The symbols on the left merely say that the function whose antiderivative we are looking for is the cosine function. You will avoid confusion if you use an entirely different symbol (say \(y\)) on the left to denote this. The proper way to write this is then

\[\int \cos y \; dy = (\sin x) + c\]

Why use this peculiar and ugly notation?

We do so out of respect for tradition. This is the notation people have used for centuries. We will see why they did so in the next section.

The first question we address is: if you give me a function, say \(g\), and ask me to find its indefinite integral, how do I do it?

The basic answer to this question is: there are no new gimmicks for doing this. You can work backwards from the rules for differentiation, and get some rules for integration, and that is essentially all you can do. But that allows you to integrate (find the antiderivative of) lots of useful functions.
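
If you have a computer handy, you can watch this backwards relationship in action. Here is a minimal sketch using Python's sympy library (the choice of library, and of \(\cos x\) as the example, is ours, purely for illustration):

```python
# Working backwards from a differentiation rule, checked with sympy.
import sympy as sp

x = sp.symbols('x')

# The derivative of sin(x) is cos(x)...
print(sp.diff(sp.sin(x), x))        # cos(x)

# ...so the antiderivative (indefinite integral) of cos(x) is sin(x).
# Note that sympy leaves out the arbitrary constant c.
print(sp.integrate(sp.cos(x), x))   # sin(x)
```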

The antiderivative of a sum of several terms is the sum of their antiderivatives. This follows from the fact that the derivative of a sum is the sum of the derivatives of the terms. And similarly, multiplying a function by a constant multiplies its antiderivative by the same constant.
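
Here is a small sketch, again with sympy, illustrating these two facts for one arbitrary pair of functions and constants (the particular choices are ours, not anything special):

```python
# Antidifferentiation is linear: sums and constant multiples pass through.
import sympy as sp

x = sp.symbols('x')
f, g = sp.cos(x), x**2

lhs = sp.integrate(3*f + 5*g, x)                   # antiderivative of the combination
rhs = 3*sp.integrate(f, x) + 5*sp.integrate(g, x)  # combination of the antiderivatives
print(sp.simplify(lhs - rhs))                      # 0: they agree
```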

Using these facts we can find the antiderivative of any polynomial.

How?

The fact that the derivative of \(x^k\) is \(kx^{k-1}\) is equivalent to the statement that the antiderivative of \(kx^{k-1}\) is \(x^k + c\). Replacing \(k\) by \(k+1\) and dividing by \(k+1\), this means that the antiderivative of \(x^k\) is \(\frac{x^{k+1}}{k+1} +c\) (provided \(k \neq -1\), so that we are not dividing by zero).

What’s with this \(+c\) stuff?

It is a reminder that the derivative of a constant is \(0\), so an antiderivative, as an inverse operation to the derivative, is not completely determined. You can add any constant to an antiderivative and get another one. Some believe that the \(+c\) was invented by pedants to torture students by penalizing them for occasionally ignoring this boring fact.
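
If you want to see the power rule and the disappearing constant in one place, here is a short sympy sketch (the exponent \(3\) and the constant \(7\) are arbitrary choices for illustration):

```python
# The power rule for antiderivatives, and the fate of an added constant.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**3, x))       # x**4/4  (sympy omits the +c)
print(sp.diff(x**4/4 + 7, x))      # x**3    (the constant 7 differentiates to 0)
```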

We can apply this to each term in a polynomial, and find its anti-derivative.

Thus, the anti-derivative of

\[3x^3 - 4x^2 - x + 7\]

is

\[\frac{3x^4}{4} - \frac{4x^3}{3} - \frac{x^2}{2} + 7x + c\]
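
You can check this by differentiating, which is exactly the safeguard recommended in the exercises below. A minimal sympy sketch of that check (taking \(c = 0\), since any value of \(c\) works):

```python
# Differentiating the proposed antiderivative should give back the original polynomial.
import sympy as sp

x = sp.symbols('x')
original = 3*x**3 - 4*x**2 - x + 7
candidate = sp.Rational(3, 4)*x**4 - sp.Rational(4, 3)*x**3 - x**2/2 + 7*x

print(sp.expand(sp.diff(candidate, x) - original))   # 0, so the antiderivative checks out
```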

Students typically find this so easy that when they are forced to find such an antiderivative on a test, their minds are often already focused on the next question, and they absent-mindedly differentiate instead of antidifferentiating one or perhaps all of the terms. Please avoid this error.

Exercises:

Find antiderivatives of each of the following functions:

12.1 \(x^3 - 3x^2 + 6\)

12.2 \(\cos (x)\)

12.3 \(\sin (2x)\)

12.4 \(\exp (2x)\)

12.5 \(x^{-\frac{1}{2}}\)

(Check your answers by differentiating them.)
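
One way to carry out that check, if you like, is with a short sympy helper such as the sketch below (the helper name `check` is ours; the candidate shown is for exercise 12.2, whose answer already appears in the text, so nothing is spoiled):

```python
# Check a proposed antiderivative by differentiating it.
import sympy as sp

x = sp.symbols('x')

def check(function, candidate):
    """Return True if candidate is an antiderivative of function."""
    return sp.simplify(sp.diff(candidate, x) - function) == 0

print(check(sp.cos(x), sp.sin(x) + 5))   # True -- any added constant works
```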