Computing Derivatives: Part 2 (Explaining Calculus #8)

Most recently in this series on calculus, we covered some “precalculus” topics that we’d need for later calculus discussions. Having now done this, we move on to several more ‘complicated’ rules that derivatives follow. We will then investigate some more difficult specific functions and their derivatives. Finally, at the end, I will leave a list of “homework problems” for any of my readers who want practice.

The Product Rule

We have already tackled how to take derivatives of functions like f(x) + g(x). We handled addition, one of the main operations of arithmetic. By handling addition, we were able to handle subtraction as well. But what about multiplication? That would be the natural next step, just as in school multiplication is the natural next step after learning addition and subtraction. Our purpose now is to lay out the so-called product rule, which enables us to take derivatives of functions like f(x) g(x).

Fact: For any two functions f(x) and g(x) that have derivatives,

\dfrac{d}{dx}[f(x) g(x)] = f^\prime(x) g(x) + f(x) g^\prime(x).

Proof: This proof uses a slick trick of “adding zero” to an expression. First, by the definition of derivatives,

\dfrac{d}{dx}[f(x) g(x)] = \lim\limits_{h \to 0} \dfrac{f(x+h) g(x+h) - f(x) g(x)}{h}.

Now, we want to be clever. Our clever move rests on the fact that f(x) g(x+h) - f(x) g(x+h) = 0. While it may seem strange to point out something so obvious, this fact means that

\lim\limits_{h \to 0} \dfrac{f(x+h) g(x+h) - f(x) g(x)}{h} = \lim\limits_{h \to 0} \dfrac{f(x+h) g(x+h) - f(x) g(x+h) + f(x) g(x+h) - f(x) g(x)}{h}.

The first two pieces of this numerator have a common factor, as do the last two. Therefore, this messy expression can be simplified a little bit as

\lim\limits_{h \to 0} \dfrac{g(x+h)(f(x+h) - f(x))}{h} + \lim\limits_{h \to 0} \dfrac{f(x)(g(x+h) - g(x))}{h}.

Since \lim\limits_{h \to 0} g(x+h) = g(x), the first of these simplifies as

\lim\limits_{h \to 0} \dfrac{g(x+h)(f(x+h) - f(x))}{h} = g(x) \lim\limits_{h \to 0} \dfrac{f(x+h) - f(x)}{h} = g(x) f^\prime(x).

The second piece, using the exact same process, is equal to f(x) g^\prime(x). When everything is put back together, f(x) g(x) has derivative f^\prime(x) g(x) + f(x) g^\prime(x).
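Before moving on, here is a quick numerical sanity check of the product rule. This is a sketch of my own, using Python’s standard math module; the helper `num_deriv` and the example functions f(x) = x^2, g(x) = \sin{x} are arbitrary choices, not part of the proof.

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference: a numerical stand-in for the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda t: t**2          # f'(x) = 2x
g = lambda t: math.sin(t)   # g'(x) = cos(x)

x = 1.3
numeric = num_deriv(lambda t: f(t) * g(t), x)
formula = 2 * x * g(x) + f(x) * math.cos(x)  # f'(x) g(x) + f(x) g'(x)
print(abs(numeric - formula) < 1e-6)  # the two values agree
```

Trying other sample functions and points gives the same agreement, which is a good (if informal) confidence boost in the formula.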

The Chain Rule

You’d think that the next thing we would do is division. And in fact, there is a way I could do division next. But it will actually be easier to do something else first. I want to handle functions inside of other functions, like f(g(x)), which just means “copy-paste the expression g(x) wherever there would have been an x.” The rule for differentiating these is called the chain rule, because the functions sort of link together like a chain, inextricably connected.

Fact: For any two functions f(x), g(x) for which f(g(x)) makes sense, we have \dfrac{d}{dx}[f(g(x))] = f^\prime(g(x)) * g^\prime(x). In different notation, if y is a function of u, which itself is a function of x, then

\dfrac{dy}{du} * \dfrac{du}{dx} = \dfrac{dy}{dx}.

In an earlier post, I made a comment about a similarity between this derivative notation and genuine fractions. The chain rule is the central such similarity – when written in the dy-over-dx style, it looks as if the du’s are cancelling out, like they would if these were actual fractions.

Proof: This one is actually quite tricky compared to the others. For this reason, I won’t actually do a totally correct proof. Instead, I’m going to do a proof that “usually” works. For those who want a great challenge, try to figure out where this proof goes wrong. (If you end up wanting to know, the Wikipedia article on the chain rule will actually tell you. If you don’t care, then reading this proof will give you the right idea.)

The definition of the derivative tells us that

\dfrac{d}{dx}[f(g(x))] = \lim\limits_{h \to 0} \dfrac{f(g(x+h)) - f(g(x))}{h}.

We now make a clever step of multiplying top and bottom by g(x+h) - g(x).

\dfrac{d}{dx}[f(g(x))] = \lim\limits_{h \to 0} \dfrac{f(g(x+h)) - f(g(x))}{g(x+h) - g(x)} \cdot \dfrac{g(x+h) - g(x)}{h}.

We can even split this up into two limits multiplied together.

\dfrac{d}{dx}[f(g(x))] = \lim\limits_{h \to 0} \dfrac{f(g(x+h)) - f(g(x))}{g(x+h) - g(x)} \cdot \lim\limits_{h \to 0} \dfrac{g(x+h) - g(x)}{h} = g^\prime(x) \lim\limits_{h \to 0} \dfrac{f(g(x+h)) - f(g(x))}{g(x+h) - g(x)}.

Since g(x) will be a continuous function (it has a derivative so it must be continuous), g(x+h) \to g(x) as h \to 0. This enables us to treat g(x+h) as if it were something like g(x) + h. There is a more careful way to write that down, but since we are being informal I won’t do that (and no, this isn’t the real problem in the proof… that already happened earlier). This will mean that

\lim\limits_{h \to 0} \dfrac{f(g(x+h)) - f(g(x))}{g(x+h) - g(x)} = \lim\limits_{h \to 0} \dfrac{f(g(x) + h) - f(g(x))}{(g(x) + h) - g(x)} = \lim\limits_{h \to 0} \dfrac{f(y+h) - f(y)}{h},

where y = g(x) is used to make the formulas easier to follow. This leads directly to the value f^\prime(g(x)) for this limit. When we combine all of our work, we find that \dfrac{d}{dx}[f(g(x))] = f^\prime(g(x)) g^\prime(x).
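The chain rule can also be checked numerically. Below is a sketch of my own (the helper `num_deriv`, the choice of f = \sin and g(x) = x^2, and the sample point are all arbitrary):

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

# Example composition: f = sin, g = squaring, so f(g(x)) = sin(x^2).
x = 0.7
numeric = num_deriv(lambda t: math.sin(t**2), x)
formula = math.cos(x**2) * (2 * x)  # f'(g(x)) * g'(x)
print(abs(numeric - formula) < 1e-6)
```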

The Quotient Rule

Fact: If the functions f(x), g(x) have derivatives, then the derivative of \dfrac{f(x)}{g(x)} is \dfrac{g(x) f^\prime(x) - f(x) g^\prime(x)}{g(x)^2}.

Proof: The first thing we do is to view the division as a multiplication by

\dfrac{f(x)}{g(x)} = f(x) \cdot \dfrac{1}{g(x)}.

Secondly, we want to use the chain rule for \dfrac{1}{g(x)}. To do this, we will use the function h(x) = \dfrac{1}{x}. Then \dfrac{1}{g(x)} = h(g(x)). Therefore,

\dfrac{f(x)}{g(x)} = f(x) \cdot h(g(x)).

We can then use the product rule to begin this derivative:

\dfrac{d}{dx}\bigg[ \dfrac{f(x)}{g(x)} \bigg] = f(x) \cdot \dfrac{d}{dx}[ h(g(x)) ] + h(g(x)) f^\prime(x).

From the chain rule, we can simplify the derivative of h(g(x)) to arrive at

\dfrac{d}{dx}\bigg[ \dfrac{f(x)}{g(x)} \bigg] = f(x) \cdot [h^\prime(g(x)) \cdot g^\prime(x)] + h(g(x)) f^\prime(x).

Now, since h(x) = \dfrac{1}{x} = x^{-1}, the rule for taking derivatives of powers of x proven in an earlier post tells us that h^\prime(x) = - x^{-2} = \dfrac{-1}{x^2}. Therefore,

\dfrac{d}{dx}\bigg[ \dfrac{f(x)}{g(x)} \bigg] = f(x) \cdot \bigg(\dfrac{-1}{g(x)^2} \cdot g^\prime(x) \bigg) + \dfrac{1}{g(x)} \cdot f^\prime(x),

and we can simplify this as

\dfrac{d}{dx}\bigg[ \dfrac{f(x)}{g(x)} \bigg] = \dfrac{- f(x) g^\prime(x)}{g(x)^2} + \dfrac{f^\prime(x) g(x)}{g(x)^2} = \dfrac{g(x) f^\prime(x) - f(x) g^\prime(x)}{g(x)^2}.

This is the original formula we wanted to prove. So, our proof is now done.
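As with the other rules, a quick numerical check is reassuring. This sketch uses example functions of my own choosing (f(x) = x^2 + 1, g(x) = \cos{x}) and a point where g(x) is not zero:

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda t: t**2 + 1      # f'(x) = 2x
g = lambda t: math.cos(t)   # g'(x) = -sin(x)

x = 0.5  # cos(0.5) is nonzero, so the quotient is defined here
numeric = num_deriv(lambda t: f(t) / g(t), x)
formula = (g(x) * 2 * x - f(x) * (-math.sin(x))) / g(x)**2  # (g f' - f g') / g^2
print(abs(numeric - formula) < 1e-6)
```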

More Special Functions

We now move on from these general principles to some more specific examples. In the previous post on computing derivatives, we built up all the tools to compute the derivative of any polynomial expression, and in fact any expression involving x raised to constant powers. Now, we move on to trigonometry, exponentials, and logarithms. These derivatives are more difficult to discover, but the importance of these functions requires that any complete study of calculus should include their derivatives. Also, a thorough study of these functions and their derivatives will provide a very complete picture of how to calculate derivatives using the limit definition.

Fact: The derivatives of \sin{x} and \cos{x} are

\dfrac{d}{dx}[\sin{x}] = \cos{x} \text{ and } \dfrac{d}{dx}[\cos{x}] = -\sin{x}.

Proof: The definition of the derivative tells us that

\dfrac{d}{dx}[ \sin{x} ] = \lim\limits_{h \to 0} \dfrac{\sin{(x+h)} - \sin{x}}{h}.

In a previous post in the series, where I defined the function \sin{x}, I also gave the following rule for computing the value of \sin{(x+h)}:

\sin{(x+h)} = \sin{x} \cos{h} + \cos{x} \sin{h}.

Substituting this rule into the limit above, we find that

\dfrac{d}{dx}[ \sin{x} ] = \lim\limits_{h \to 0} \dfrac{\sin{(x+h)} - \sin{x}}{h} = \lim\limits_{h \to 0} \dfrac{(\sin{x} \cos{h} + \cos{x} \sin{h}) - \sin{x}}{h}.

We can now split this limit up into two pieces – one for \sin{x} and one for \cos{x}.

\lim\limits_{h \to 0} \dfrac{(\sin{x} \cos{h} + \cos{x} \sin{h}) - \sin{x}}{h} = \lim\limits_{h \to 0} \dfrac{\sin{x}(\cos{h} - 1)}{h} + \lim\limits_{h \to 0} \dfrac{\cos{x} \sin{h}}{h}.

There are two facts we need in order to continue: \lim\limits_{h \to 0} \dfrac{\sin{h}}{h} = 1 and \lim\limits_{h \to 0} \dfrac{\cos{h} - 1}{h} = 0. The proofs of these are a bit tricky, and I don’t want to go off on a tangent (math pun!) talking about them here. I will add an appendix to the end of this post in which I talk about how to find these limits.

Moving on, once we know the values of these limits, we know that

\lim\limits_{h \to 0} \dfrac{\sin{x}(\cos{h} - 1)}{h} = \sin{x} \cdot \lim\limits_{h \to 0} \dfrac{\cos{h} - 1}{h} = 0

and

\lim\limits_{h \to 0} \dfrac{\cos{x} \sin{h}}{h} = \cos{x} \cdot \lim\limits_{h \to 0} \dfrac{\sin{h}}{h} = \cos{x}.

Therefore, putting together all the steps we’ve laid out,

\dfrac{d}{dx}[\sin{x}] = \cos{x}.

This completes the first half of the proof. I will leave the proof about the derivative of \cos{x} as practice for any of my curious readers, only providing a few guiding hints. The proof begins the same way. Instead of using the special rule for \sin{(x+h)}, you need to use the rule for \cos{(x+h)} that I gave in the same post in which I gave the rule for \sin{(x+h)}. After this rule is used, you should be able to finish the proof by following the same ideas I use here.
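Both trigonometric derivatives are easy to spot-check numerically. This is a sketch of my own, looping over a few arbitrary sample points:

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

ok = True
for x in [0.0, 0.5, 1.0, 2.0]:
    # d/dx sin(x) should be cos(x); d/dx cos(x) should be -sin(x).
    ok = ok and abs(num_deriv(math.sin, x) - math.cos(x)) < 1e-6
    ok = ok and abs(num_deriv(math.cos, x) + math.sin(x)) < 1e-6
print(ok)
```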

This completes our discussion of the derivatives of the trigonometric functions. Next, we turn to exponentials.

Fact: The derivative of f(x) = b^x is f^\prime(x) = b^x * \log{b}.

Proof: The first thing we will do is to “compress” every value of b into the very special number e. Remember that, earlier in the series, we defined the “natural” exponential function e^x and “natural” logarithm function \log{x}. Also, remember that e^{\log{x}} = x. Using x = b, we conclude that b = e^{\log{b}} and therefore

b^x = (e^{\log{b}})^x = e^{x * \log{b}}.

Now, let’s call f(x) = e^x and g(x) = x * \log{b}. Then b^x = f(g(x)). Using the chain rule, we then know that

\dfrac{d}{dx}[b^x] = \dfrac{d}{dx}[ f(g(x)) ] = f^\prime(g(x)) * g^\prime(x) = \log{b} * f^\prime(x*\log{b}).

What we have done here is to express the derivative of b^x in terms of the derivative of e^x. This means that we now only need to know how to find the derivative of this most special function. Using the definition of the derivative, along with some basic rules of exponents, we have

\dfrac{d}{dx}[e^x] = \lim\limits_{h \to 0} \dfrac{e^{x+h} - e^x}{h} = \lim\limits_{h \to 0} \dfrac{e^x(e^h - 1)}{h} = e^x \lim\limits_{h \to 0} \dfrac{e^h-1}{h}.

The proof here is a bit detailed, but the value of \lim\limits_{h \to 0} \dfrac{e^h - 1}{h} is 1. I will delay this proof to the appendix. But, the key fact here is that e^x is so special because it is its own derivative. That is, if f(x) = e^x, then f^\prime(x) = f(x) = e^x! Using all of these facts, we conclude that

\dfrac{d}{dx}[b^x] = e^{x * \log{b}} * \log{b} = b^x * \log{b}.

This completes our discussion of the derivative of exponential functions.
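As a quick sanity check of this formula, here is a numerical sketch of my own (the base b = 3 and the sample point are arbitrary choices):

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

b = 3.0
x = 0.8
numeric = num_deriv(lambda t: b**t, x)
formula = b**x * math.log(b)  # b^x * log(b), log meaning the natural logarithm
print(abs(numeric - formula) < 1e-5)
```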

Fact: The derivative of the function f(x) = \log_b{x} is f^\prime(x) = \dfrac{1}{x * \log{b}}.

Proof: There is a way to do this similar to the previous proof about b^x. Instead of doing that, I want to be clever. Since the functions b^x and \log_b{x} are so closely related, shouldn’t their derivatives be really closely related too? We should think so. That just makes sense. Then maybe we can find a way to take advantage of our formula for the derivative of b^x to find the derivative of \log_b{x}. Can you find the way? Take a moment and think about it.

The way I’ll do this is to transform the equation f(x) = \log_b{x} into the equation b^{f(x)} = b^{\log_b{x}}. The second step is to notice that b^{\log_b{x}} = x because of the relationship between b^x and \log_b{x}. Therefore, b^{f(x)} = x. The other clever observation we have to make is that if two things are equal, their derivatives must also be equal. This means that \dfrac{d}{dx}[ b^{f(x)} ] = \dfrac{d}{dx}[x]. This is so clever because we can find these two derivatives by different methods. The right-hand side is much easier, the derivative of x is just 1. The left-hand side can be found by using the fact we already found about the derivative of exponentials along with the chain rule:

\dfrac{d}{dx}[b^{f(x)}] = (b^{f(x)}*\log{b}) * f^\prime(x) = (x*\log{b})*f^\prime(x)

since b^{f(x)} = x was already known to us. By combining these two derivative computations, we conclude that

(x*\log{b})*f^\prime(x) = 1,

and all we need to do is divide both sides of the equation by x*\log{b} to find our answer.

Notice that from this fact, we can also deduce that \dfrac{d}{dx}[\log{x}] = \dfrac{1}{x}. I will leave this to my reader. (Hint: why is \log{e} = 1?)
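Both the base-b formula and the special natural-log case can be verified numerically. A sketch of my own, using base 10 and an arbitrary sample point:

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

x = 2.0
# Base-10 logarithm: derivative should be 1 / (x * log(10)).
assert abs(num_deriv(math.log10, x) - 1 / (x * math.log(10))) < 1e-6
# Natural logarithm: derivative should be 1 / x.
assert abs(num_deriv(math.log, x) - 1 / x) < 1e-6
print("both checks pass")
```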

This basically completes the list of important derivatives. But for the sake of learning, I want to compute one more, using a clever trick much like the one in the previous proof, for a function that is “more complicated” than any we have previously discussed. The reason I want to show this is to demonstrate how we can take advantage of the rules we have already learned to find derivatives of more complex functions.

Fact: The derivative of f(x) = x^x is f^\prime(x) = x^x(\log{x} + 1).

Proof: The function x^x isn’t actually susceptible to any derivative rule we’ve used so far. This is where we make a clever move, taking a logarithm, to bring this function into the realm we know how to deal with. By the normal rules of logarithms, \log{f(x)} = \log{x^x} = x \log{x}. The derivative of the right-hand side can be found using the product rule:

\dfrac{d}{dx}[ x \log{x} ] = x \dfrac{d}{dx}[\log{x}] + \log{x} \dfrac{d}{dx}[x] = \dfrac{x}{x} + \log{x} = \log{x} + 1.

On the other hand, the left-hand side can be handled using the chain rule:

\dfrac{d}{dx}[\log{f(x)}] = \dfrac{1}{f(x)} * f^\prime(x) = \dfrac{f^\prime(x)}{x^x}.

Since the left and right hand sides are equal, we conclude that

\dfrac{f^\prime(x)}{x^x} = \log{x}+1 \implies f^\prime(x) = x^x(\log{x} + 1).
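Even this more exotic derivative submits to a numerical spot-check. A sketch of my own, at the arbitrary point x = 2:

```python
import math

def num_deriv(F, x, h=1e-6):
    # Central difference approximation to the limit definition.
    return (F(x + h) - F(x - h)) / (2 * h)

x = 2.0
numeric = num_deriv(lambda t: t**t, x)       # valid for t > 0
formula = x**x * (math.log(x) + 1)           # x^x (log x + 1)
print(abs(numeric - formula) < 1e-4)
```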


We have now done everything we need to do with the admittedly tedious work of computing special kinds of derivatives. If you’ve made it this far, well done. Things get less tedious and more conceptual from here. Think about how tedious it was to learn how to add and multiply small numbers. Despite this tediousness, the effort is clearly worthwhile because of how useful addition and multiplication are for solving real-world problems. What we’ve done is “learned our times tables” for derivatives. Now that we’ve finished our “tables,” we can move on to grander topics. We can stop looking at individual trees and see the forest. Beginning with the next post, this is what we shall do. In the meantime, if you want some more practice, I’ve left some practice problems and extra material in an appendix for my more curious readers.


Practice Problems

Problem 1: Find the derivatives of x^2 e^x and of \dfrac{x^2+2}{x^3 - x}.

Problem 2: Work out the details of the proof that the derivative of \cos{x} is - \sin{x}. Also, prove that the derivative of \tan{x} = \dfrac{\sin{x}}{\cos{x}} is \dfrac{1}{\cos^2{x}}. Go on to find the derivatives of \cot{x}, \sec{x}, and \csc{x}.

Problem 3: Find the derivatives of e^{e^x} and of \log{(\log{x})}. (Hint: Use the chain rule for both!)


Appendix: The Delayed Limits

In this appendix, we evaluate the various limits we delayed in the proofs given above.

Fact: \lim\limits_{h \to 0} \dfrac{\sin{h}}{h} = 1 and \lim\limits_{h \to 0} \dfrac{\cos{h} - 1}{h} = 0.

Proof: The proof for the second limit is similar to the first, so we won’t consider it here. We only do the first limit. To do so, we consider a diagram that I will describe in words (I’m not currently very good at uploading labelled diagrams like this, so sketch it on a piece of paper with labelled sides if you need the visual). Draw a circle of radius 1 centered at a point A, pick a point B on the circle, and let C be the point on the circle at angle x from B. Let E be the foot of the perpendicular dropped from C onto the segment AB, and let D be the point where the line through A and C meets the tangent line to the circle at B.

We need to establish some initial values of the geometry here. The angle at A is defined to be x. The side AB has length 1, which means immediately that AE has length \cos{x} and side CE has length \sin{x}.

We first notice the triangle \Delta ABC. The area of this triangle is one half its base times its height. Its height is \sin{x}, and its base is 1. Therefore,

\text{Area}(\Delta ABC) = \dfrac{1}{2} \sin{x}.

Before we carry on, why is the length of BD equal to \tan{x}? This is because of “similar triangles.” Remember from geometry that two triangles are similar if their angles have the same degrees. The two triangles \Delta ACE and \Delta ABD are similar. The key fact about similar triangles is that ratios of corresponding side lengths are always the same. This means that

\dfrac{\text{Length}(AE)}{\text{Length}(CE)} = \dfrac{\text{Length}(AB)}{\text{Length}(BD)}.

The length of AE is just \cos{x}. The length of AB is 1, and the length of CE is \sin{x}. Therefore,

\dfrac{\cos{x}}{\sin{x}} = \dfrac{1}{\text{Length}(BD)}.

Isolating, we find \text{Length}(BD) = \dfrac{\sin{x}}{\cos{x}} = \tan{x}. So, the length of BD truly is equal to \tan{x}. This quickly tells us (just as with the smaller triangle) that \text{Area}(\Delta ABD) = \dfrac{1}{2} \tan{x}. There is one more thing we want to observe. Instead of the triangle \Delta ABC, look at the “pizza slice” ABC. It definitely has more area than the triangle \Delta ABC and less area than the larger triangle \Delta ABD. Therefore, by putting together our earlier calculations,

\text{Area}(\Delta ABC) \leq \text{Area}(ABC) \leq \text{Area}(\Delta ABD)

\implies \dfrac{1}{2} \sin{x} \leq \text{Area}(ABC) \leq \dfrac{1}{2} \tan{x}.

The way angles are defined, the “pizza slice” ABC takes up a fraction \dfrac{x}{2\pi} of the full circle. The area of the full unit circle is \pi, so the area of the pizza slice is \dfrac{1}{2} x. Therefore,

\dfrac{1}{2} \sin{x} \leq \dfrac{1}{2} x \leq \dfrac{1}{2} \tan{x} \implies \sin{x} \leq x \leq \tan{x}.

If we divide all three parts by \sin{x}, then since \dfrac{\tan{x}}{\sin{x}} = \dfrac{1}{\cos{x}},

1 \leq \dfrac{x}{\sin{x}} \leq \dfrac{1}{\cos{x}}.

If we “flip” all these fractions and reverse the inequalities, this means that

1 \geq \dfrac{\sin{x}}{x} \geq \cos{x}.

By taking limits on all parts of this inequality,

\lim\limits_{x \to 0} 1 \geq \lim\limits_{x \to 0} \dfrac{\sin{x}}{x} \geq \lim\limits_{x \to 0} \cos{x}.

Notice now that the third of these limits is just \cos{0} = 1. Therefore,

1 \leq \lim\limits_{x \to 0} \dfrac{\sin{x}}{x} \leq 1.

This technique is often called the Sandwich Theorem because we stuck the expression we wanted in between two other expressions. Since, of course, the only number between 1 and 1 is 1, the limit we wanted to find must be 1.
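You can watch both limits from the Fact converge numerically. A small sketch of my own, printing the two expressions for shrinking h:

```python
import math

# Watch sin(h)/h approach 1 and (cos(h) - 1)/h approach 0 as h shrinks.
for h in [0.1, 0.01, 0.001]:
    print(h, math.sin(h) / h, (math.cos(h) - 1) / h)
```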

Fact: \lim\limits_{h \to 0} \dfrac{e^h - 1}{h} = 1.

Proof: This is done by very cleverly using the way the number e is defined. Remember that

e = \lim\limits_{x \to \infty}  \bigg( 1 + \dfrac{1}{x} \bigg)^x.

The clever move we can make here is to define h = \dfrac{1}{x} and rewrite this same limit in terms of h. Notice that if x \to \infty, then h \to 0. (Technically this only gives the limit “from the right”; we would first have to prove that the two-sided limit actually exists. This isn’t terribly difficult to do, but it would require a lot of additional writing, so I won’t do it here. If you look at a graph of the function \dfrac{e^x-1}{x}, you can convince yourself that this limit does exist.)

By interchanging variables in this way,

e = \lim\limits_{h \to 0} (1 + h)^{1/h}.

Using this as a substitution for e inside the limit we want to actually compute,

\lim\limits_{h \to 0} \dfrac{e^h-1}{h} = \lim\limits_{h \to 0} \dfrac{((1+h)^{1/h})^h - 1}{h} = \lim\limits_{h \to 0} \dfrac{(1+h) - 1}{h} = \lim\limits_{h \to 0} \dfrac{h}{h} = 1.
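Both ingredients of this argument can be observed numerically. A sketch of my own, printing (1+h)^{1/h} and \dfrac{e^h - 1}{h} for shrinking h:

```python
import math

# (1 + h)^(1/h) approaches e, while (e^h - 1)/h approaches 1.
for h in [0.1, 0.001, 1e-6]:
    print(h, (1 + h) ** (1 / h), (math.exp(h) - 1) / h)
```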
