Antiderivatives (Explaining Calculus #15)

In the last post in the series on calculus, we talked about Riemann sums – which is just a fancy way of estimating the area of a complicated region by using lots of rectangles to obtain an approximately correct answer. The post I must follow this up with, which for now seems completely out of left field, is the idea of an antiderivative. In one sentence, we will be talking about pressing a “reverse button” on the process of taking derivatives. I must apologize to anyone reading through the series one post at a time for the apparently sudden and disconnected change of topic – but I promise that soon we will discover that Riemann sums and antiderivatives have much more in common than is obvious at face value.

But for now, on to antiderivatives!

Reversing Directions

Imagine you are walking to your friend’s house. You know the directions from your house to your friend’s house are:

  1. Walk north 1 block.
  2. Walk east 2 blocks.
  3. Walk north 3 blocks.

If you carefully follow these directions, then you will get to your friend’s house. Now, what if your friend wants to walk to your house? What would his directions be? If you pause and think about it, all your friend would need to do is to take your directions and “reverse” them. His first step should be to “undo” your last step. So, his Step 1 must be “Walk south 3 blocks” since your Step 3 was “Walk north 3 blocks”. His next step should undo your Step 2, and his last step should undo your Step 1. When he writes all of this down, his steps will be

  1. Walk south 3 blocks.
  2. Walk west 2 blocks.
  3. Walk south 1 block.

Notice the patterns between the two sets of instructions. The numbers of blocks flipped their order (123 in your instructions, 321 in his instructions). Also, every direction became its opposite. In other words, you could actually create an instruction manual for reversing any set of directions:

  1. Take your friend’s directions and reverse the order of every step.
  2. After reversing the order of the steps, turn the compass direction in each step to its complete opposite.

To visualize this, here is how this process changes your directions into your friend’s directions:

  Your Directions       (Flipping the Order)    Your Friend’s Directions
  Walk north 1 block    Walk north 3 blocks     Walk south 3 blocks
  Walk east 2 blocks    Walk east 2 blocks      Walk west 2 blocks
  Walk north 3 blocks   Walk north 1 block      Walk south 1 block

The method for reversing your directions

Notice that this method will always work, for any set of directions you might have. So, what we have done is devise a process – maybe we can call it the anti-direction process – that reverses any set of directions. If the original directions took me from A to B, then the anti-directions will take me from B to A.
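To make the anti-direction process concrete, here is a minimal sketch in Python (the list format and direction names are my own choices for illustration): reverse the order of the steps, then flip each compass direction to its opposite.

```python
# The "anti-direction" process: reverse the order of the steps,
# then flip each compass direction to its complete opposite.
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

def reverse_directions(steps):
    """Turn directions from A to B into directions from B to A."""
    return [(OPPOSITE[direction], blocks) for direction, blocks in reversed(steps)]

# Your directions to your friend's house ...
to_friend = [("north", 1), ("east", 2), ("north", 3)]
# ... become his directions back to your house.
print(reverse_directions(to_friend))  # [('south', 3), ('west', 2), ('south', 1)]
```

Reversing the reversed directions hands back the original list, which is exactly the “cancelling out” idea we will want from antiderivatives.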

What Does “Reversing a Derivative” Mean?

A set of directions is a method by which we can get from one place to another. We can think of this as an analogy for mathematics. When we do some mathematical process, we can think of that process as having a starting point and an ending point. With many mathematical processes, it can be really useful to know how to reverse the process. As we shall see later, reversing derivatives is one of these important processes.

The process of reversing a derivative will be called antidifferentiation, and the result we get after we are done is called an antiderivative. So, to give this a definition, the antiderivative of some function f(x) is another function F(x) that I can find by doing a “reverse derivative” on f(x)… whatever that means.

Luckily for us, we can figure out what that means by thinking a bit more about the analogy of directions. Go back to the table, and look at the columns with your directions and your friend’s directions. Notice that, were I to first follow my directions and immediately after that follow my friend’s directions, I would end up right back where I started. This is why the idea of anti-directions made some kind of sense – they literally cancel out the normal directions.

Mathematically, the idea of cancelling out pops up all over the place – subtraction cancels addition and division cancels multiplication. If we want our idea of an antiderivative to make sense, then derivatives should cancel out an antiderivative. In other words, if I take the antiderivative of f(x) and land at the as-of-yet-unknown function F(x), and if I then take the derivative to get F^\prime(x), then everything should cancel out and I should end up back where I started.

This gives us a much clearer definition of the idea of antiderivatives:

Definition: F(x) is an antiderivative of f(x) if the equation F^\prime(x) = f(x) is true.

Very often, mathematicians use the rather strange looking symbol F(x) = \int f(x) dx to represent antiderivatives. There is a reason why we write it like this, but we haven’t yet gone far enough into the world of antiderivatives to explain why this makes sense. So, for now, take me at my word that this is a sensible way to abbreviate antiderivatives.

A Potential Problem with Antiderivatives

Our goal is to reverse the process of the derivative. If we were lucky, there would be exactly one way to do this, no matter where we start. Unfortunately, this is not quite true. This is because some functions have the same derivative, even though they are not the same function.

Take for example F(x) = x^2 + 3 and G(x) = x^2 + 7. Using the rules for taking derivatives, we can determine that F^\prime(x) = G^\prime(x) = 2x, and yet F(x) \not = G(x). This is a problem because, if I ask you to find the antiderivative of 2x, which we also write as \int 2x dx, how could I know whether the answer is F(x), or G(x), or maybe something else altogether?
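To see this ambiguity concretely, here is a quick numerical sketch in Python (my own check, not part of the mathematics above): a centered difference quotient confirms that F and G share the derivative 2x, even though the functions themselves differ by a constant.

```python
# Numerically confirm that F(x) = x^2 + 3 and G(x) = x^2 + 7
# share the derivative 2x, even though F != G.

def derivative(f, x, h=1e-6):
    """Approximate f'(x) with a centered difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

F = lambda x: x**2 + 3
G = lambda x: x**2 + 7

for x in [-2.0, 0.5, 3.0]:
    assert abs(derivative(F, x) - 2 * x) < 1e-4
    assert abs(derivative(G, x) - 2 * x) < 1e-4
    assert F(x) - G(x) == -4  # the two antiderivatives differ by a constant
```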

This is a problem – but fortunately there is a solution. We know the solution because we know exactly how much ambiguity exists when trying to reverse the derivative. The following fact explains this:

Fact: If F(x), G(x) are both antiderivatives of a function f(x), then F(x) - G(x) is a constant.

Proof: Recall that F^\prime(x) = G^\prime(x) = f(x) by the definition of “antiderivative”. Using the equation F^\prime(x) = G^\prime(x), we can arrive at a new equation F^\prime(x) - G^\prime(x) = 0. Rewriting this slightly,

\left( F(x) - G(x) \right)^\prime = F^\prime(x) - G^\prime(x) = 0.

This means that the function F(x) - G(x) has a zero derivative. The only functions with zero derivatives – that is, functions that have a completely flat slope everywhere – are constant functions, whose graphs are horizontal lines. So, F(x) - G(x) must be constant.

Ok, so now we know something. If F(x) is an antiderivative of f(x), then so is F(x) + C for any constant number C that I might choose – but, thanks to the fact above, I at least know that this is really the full list of possibilities. So, we might say

\int f(x) dx = F(x) + C.

If we need a full formula for the antiderivative, then we need some extra information in order to discover the value of C. When we discuss application problems, we will see how this is done. In the meantime, the important detail is that, up to this possible +C, the antiderivative is an idea that makes sense.
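As a small made-up illustration of how extra information pins down C (the numbers here are my own): suppose we know \int 2x dx = x^2 + C, and we are also told that F(1) = 5. Plugging in,

F(1) = 1^2 + C = 5, \quad \text{so} \quad C = 4 \quad \text{and} \quad F(x) = x^2 + 4.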

How do we find a formula for this function F(x)? There are a variety of techniques mathematicians use to compute antiderivatives – but for the sake of understanding what these things are, it will suffice to see how it works in the simplest case we can work with. Later on, once we have a better grip on these things, we can learn how to handle more complicated situations.

Main Example: Power Functions

The simplest functions we know of are polynomials. And among polynomials, the simplest are the power functions x^n for n some whole number. For this situation, we will walk through the process of figuring out how to calculate F(x), where f(x) = x^n. Remember that the process of the derivative would result in the calculation f^\prime(x) = n x^{n-1}. If you don’t remember why, that’s ok. The way we want to visualize this is as some sort of process which starts with x^n and ends with n x^{n-1}.

This is helpful because we know that, whatever antiderivatives do, it should be true that \int n x^{n-1} dx = x^n (there should be a +C, of course, but I will generally leave that off unless it is important). So, our question boils down to trying to discover some kind of rule that, when we start with n x^{n-1}, we end with x^n.

Here, the idea of reversing the directions is helpful. Recall from the introduction that, if you are reversing directions on a map, you must do two things:

(1) Reverse the order of all the steps.

(2) Do the reverse of each step.

We might think of our effort to find an antiderivative like this as well. If you think about the process x^n \to n x^{n-1}, which represents the derivative, we really have “two steps” going on. First, you multiply the whole expression by the exponent, so x^n \to n x^n is the first step. The second step is to then subtract one from the exponent, so this would take n x^n \to n x^{n-1}. So, the whole process might be visualized by the two arrows:

x^n \to n x^n \to n x^{n-1}.

In order to figure out an antiderivative formula, we would like to run these arrows in the opposite direction. That is, we’d like

n x^{n-1} \to n x^n \to x^n.

Now, let’s think about what each of these arrows would mean. The arrow n x^{n-1} \to n x^n is meant to be the reverse of n x^n \to n x^{n-1}, which was to subtract 1 from the exponent. So, we should instead add 1 to the exponent. Similarly, the arrow n x^n \to x^n is the reverse of the arrow x^n \to n x^n, where the rule was to multiply by the exponent. So, to reverse this rule, we should instead divide by the exponent.

This now gives us a set of rules that, if we do them right, should result in an antiderivative recipe. Now, let’s try it out on x^n. The first step must be to add one to the exponent, and the second step to then divide by the exponent. If we do this process (for n x^{n-1} and x^n simultaneously, so the parallels can be seen more clearly) we arrive at

n x^{n-1} \to n x^n \to x^n,

x^n \to x^{n+1} \to \dfrac{1}{n+1} x^{n+1}.

So, hypothetically, the function F(x) = \dfrac{1}{n+1} x^{n+1} should be an antiderivative of f(x) = x^n. To double check that we are really correct, we can take the derivative directly:

F^\prime(x) = \dfrac{1}{n+1} \dfrac{d}{dx} \left[ x^{n+1} \right] = \dfrac{1}{n+1} \left[ (n+1) x^n \right] = x^n.

Alright – so we have our formula! You can do something similar for other types of functions we’ve run into – for example, \int e^x dx = e^x and \int \dfrac{1}{x} dx = \ln(x) (in this case, whenever the input is a positive number).
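The double check above can also be run numerically. Here is a small Python sketch (my own, under the assumption that a centered difference quotient is a good stand-in for the derivative) confirming the power-function formula for a few values of n:

```python
# Check the power-rule antiderivative: for F(x) = x**(n+1) / (n+1),
# a difference quotient of F should match f(x) = x**n.

def derivative(F, x, h=1e-6):
    """Approximate F'(x) with a centered difference quotient."""
    return (F(x + h) - F(x - h)) / (2 * h)

def antiderivative_of_power(n):
    """Return F(x) = x^(n+1)/(n+1), the antiderivative of x^n (with C = 0)."""
    return lambda x: x ** (n + 1) / (n + 1)

for n in [1, 2, 3, 4]:
    F = antiderivative_of_power(n)
    for x in [0.5, 1.0, 2.0]:
        assert abs(derivative(F, x) - x ** n) < 1e-4
```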


In this post, we’ve focused on mapping out the beginnings of the idea of an antiderivative. As of yet, we haven’t discussed why this is so important. But now that we have the concept laid out, we can move on and next time discuss why antiderivatives matter so much. In doing so, we will connect the two most recent posts in the series – on antiderivatives and Riemann sums – in a way that is quite surprising.
