# Computing Derivatives: Part 1 (Explaining Calculus #6)

In the previous post in this series, we set up a definition of the derivative of a function: a new function that tells us how the original function changes over time. Now that we have this idea in place, we are going to spend some time showing how to work with derivatives. After all, if we ever want to make use of this idea, we need to know how to actually calculate these things.

This will take up two posts, which are basically ordered by difficulty. In this post, I’ll focus on a subset of the foundational rules of derivatives, mainly the ones that are easier to grapple with. In the follow-up to this post, we will delve into some equally foundational rules that are a bit harder to discover.

## Some General Rules for Derivatives

We open this post with two rules about derivatives that apply to all functions (well, all functions that have derivatives… which is quite a lot of functions, and almost every function you’ll ever hear of).

### The “Distributive Law” for Derivatives

This first piece can be thought of as a distributive law. Another way you might think about it is that you can do derivatives “one piece at a time.” If you have a function like $x + x^2$, this rule tells you that you can find its derivative by using the derivatives of $x$ and $x^2$ in a straightforward way.

Fact: If $f(x), g(x)$ are functions with derivatives, then $\dfrac{d}{dx}[f(x) \pm g(x)] = f^\prime(x) \pm g^\prime(x)$.

Proof: The definition of derivatives tells us that

$\dfrac{d}{dx}[ f(x) + g(x) ] = \lim\limits_{h \to 0} \dfrac{(f(x+h) + g(x+h)) - (f(x) + g(x))}{h}.$

We can split up this limit into two limits:

$\lim\limits_{h \to 0} \dfrac{(f(x+h) + g(x+h)) - (f(x) + g(x))}{h} = \lim\limits_{h \to 0} \dfrac{f(x+h) - f(x)}{h} + \lim\limits_{h \to 0} \dfrac{g(x+h) - g(x)}{h}.$

The two pieces at the end of this equation are just $f^\prime(x)$ and $g^\prime(x)$. This means that the derivative of $f(x) + g(x)$ is $f^\prime(x) + g^\prime(x)$.

The proof works exactly the same way for $f(x) - g(x)$, so we are done with this proof.
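To see the rule in action numerically, here is a small Python sketch (not part of the proof; the two sample functions, the point $x = 3$, and the step size $h$ are arbitrary choices). The difference quotient of $f + g$ should match the sum of the difference quotients of $f$ and $g$.

```python
# Numerical sanity check of the distributive law (an illustration,
# not a proof): the difference quotient of f + g matches the sum of
# the difference quotients of f and g.
def diff_quotient(func, x, h=1e-6):
    """Approximate the derivative of func at x with a small step h."""
    return (func(x + h) - func(x)) / h

f = lambda t: t       # f'(x) = 1
g = lambda t: t ** 2  # g'(x) = 2x

x = 3.0
lhs = diff_quotient(lambda t: f(t) + g(t), x)    # derivative of the sum
rhs = diff_quotient(f, x) + diff_quotient(g, x)  # sum of the derivatives
print(lhs, rhs)  # both are close to 1 + 2*3 = 7
```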

### The “Coefficient Law” for Derivatives

This next rule is also quite straightforward. It tells us that coefficients don’t really play much of a role when calculating derivatives. You can essentially just pretend they aren’t there and put them back in when you are done. As an example, to calculate the derivative of $123 x^4$, you can just calculate the derivative of $x^4$ and, when you are done, multiply the answer by 123.

Fact: If $f(x)$ is a function with a derivative and $c$ is a constant, then

$\dfrac{d}{dx}[c f(x)] = c f^\prime(x).$

The proof of this fact is quite similar to the one for the distributive law. For anyone wanting some practice with these ideas, this would be a good example to work on.
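A similar numerical sketch checks the coefficient law on the $123 x^4$ example (again an illustration at an arbitrary test point, not a proof):

```python
# Check that the derivative of 123 * x^4 matches 123 times the
# derivative of x^4, using difference quotients at an arbitrary point.
c, n = 123.0, 4
x, h = 2.0, 1e-6

with_coeff = ((c * (x + h) ** n) - (c * x ** n)) / h  # derivative of 123 x^4
without = c * (((x + h) ** n - x ** n) / h)           # 123 times derivative of x^4
print(with_coeff, without)  # both close to 123 * 4 * x**3 = 3936
```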

## Some Specific Functions with Derivatives

We’ve just finished discussing some basic rules of how derivatives work. We’ll now spend a little bit of time talking about specific functions that are important and how their derivatives are calculated.

### Derivative of a Constant

The easiest function to differentiate is a constant function. Since these functions never change, their graphs are horizontal lines. Viewing the derivative as a slope, or as a measure of change over time, you’d expect the derivative of something that never changes to be zero. In fact, this is true.

Fact: If $f(x) = c$ is a constant function, then $f^\prime(x) = 0$.

The proof of this fact is not hard, and would be a good exercise for those who want to practice doing such calculations.
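The fact is also easy to see numerically: for a constant function, the numerator of the difference quotient is zero, so every quotient is exactly zero no matter the step size. A tiny Python sketch (the constant and the point are arbitrary choices):

```python
# For a constant function, f(x + h) - f(x) is zero for every h,
# so each difference quotient is exactly 0 (arbitrary point x = 2).
f = lambda x: 5.0  # a constant function
quotients = [(f(2.0 + h) - f(2.0)) / h for h in (0.1, 1e-3, 1e-9)]
print(quotients)  # [0.0, 0.0, 0.0]
```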

### Derivative of a Polynomial

One of the most important kinds of functions we talk about in mathematics is the polynomial: a sum of expressions like $ax^n$, for a whole number $n \geq 0$ and some constant value $a$. Notice that, because of the coefficient law and distributive law mentioned earlier, we can treat polynomials one term at a time. For instance, if we want to compute the derivative of $f(x) = 3x^2 + 7x - 4$, the distributive law tells us that we only need to know how to take the derivatives of $3x^2$, $7x$, and $4$. The coefficient law then tells us that we only need to know how to take the derivatives of $x^2$, $x$, and $4$. We already know from earlier that the derivatives of constants are zero, so we only need to find the derivatives of $x^2$ and $x$. In fact, what we are actually going to do is find the derivatives of all members of the list $x, x^2, x^3, x^4, \dots$ all at once. But, before I do this, the example of $x^2$ will be helpful.

Fact: If $f(x) = x^2$, then $f^\prime(x) = 2x$.

Proof: We can actually just do this directly using the definition of derivatives. We begin with the definition:

$f^\prime(x) = \lim\limits_{h \to 0} \dfrac{f(x+h) - f(x)}{h} = \lim\limits_{h \to 0} \dfrac{(x+h)^2 - x^2}{h}.$

We can simplify the inside of this limit using some algebra:

$\lim\limits_{h \to 0} \dfrac{(x+h)^2 - x^2}{h} = \lim\limits_{h \to 0} \dfrac{(x^2 + 2xh + h^2) - x^2}{h} = \lim\limits_{h \to 0} \dfrac{h(2x+h)}{h}.$

Usually, algebra doesn’t allow you to cancel the $h$ on top and bottom, because the cancellation is invalid when $h = 0$. But, since we are inside of a limit in which $h \to 0$, $h$ is never actually zero (it only approaches zero). This means we actually can cancel. So now,

$f^\prime(x) = \lim\limits_{h \to 0} (2x+h) = 2x + 0 = 2x,$

which we evaluate in this way because $2x+h$ is continuous at $h = 0$, so you are allowed to plug in zero for $h$. So, the derivative of $x^2$ is $2x$. This completes the proof.
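We can also watch this limit happen numerically. After cancellation, the quotient is exactly $2x + h$, so shrinking $h$ should push it toward $2x$. A Python sketch (the point $x = 3$ is an arbitrary choice):

```python
# The difference quotient of x^2 equals 2x + h after cancelling,
# so it should approach 2x = 6 at x = 3 as h shrinks.
x = 3.0
quotients = []
for h in (0.1, 0.01, 0.001):
    q = ((x + h) ** 2 - x ** 2) / h
    quotients.append(q)
    print(h, q)  # approximately 6.1, 6.01, 6.001
```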

The idea behind the derivative of $x^n$ is basically the same. The difference occurs in the algebraic simplification of the term $(x+h)^n$ that occurs in the limit. Here, we lay out exactly how to handle this difference.

Fact: If $f(x) = x^n$, then $f^\prime(x) = n x^{n-1}$.

Proof: The definition of $f^\prime(x)$ tells us that $f^\prime(x) = \lim\limits_{h \to 0} \dfrac{f(x+h) - f(x)}{h} = \lim\limits_{h \to 0} \dfrac{(x+h)^n - x^n}{h}$. The key to simplifying this expression is figuring out the term $(x+h)^n$. We can do this by repeatedly foiling out the $n$ copies of $(x+h)$. For now, let’s just assume we have done this and found that

$(x+h)^n = x^n + a_1 x^{n-1} h^1 + a_2 x^{n-2} h^2 + \dots + a_{n-2} x^2 h^{n-2} + a_{n-1} x h^{n-1} + h^n.$

Notice then that $(x+h)^n - x^n = a_1 x^{n-1}h + h^2 g(x, h)$, where $g(x, h)$ collects all the leftover terms that contain at least two powers of $h$. Knowing this, we can compute the limit from earlier.

$f^\prime(x) = \lim\limits_{h \to 0} \dfrac{a_1 x^{n-1} h + h^2 g(x, h)}{h} = \lim\limits_{h \to 0} [a_1 x^{n-1} + h g(x, h)] = a_1 x^{n-1},$

where the last step works because $g(x, h)$ is a polynomial in $h$, so $h \cdot g(x, h) \to 0$ as $h \to 0$.

This means that we actually only need to figure out $a_1$, since it is the only coefficient that influences the answer. But think now about what $a_1$ was: it is the number of ways that foiling out the expression $(x+h)^n$ can produce a term like $x^{n-1}h$. So think about where that single $h$ might have come from. There is one possible source for each factor of $(x+h)$ in the product, of which there are $n$. Ergo, there are exactly $n$ different ways we could have obtained $x^{n-1}h$. Therefore, $a_1 = n$, and combining everything we know, we conclude that $f^\prime(x) = n x^{n-1}$. This marks the end of our proof.
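The counting argument for $a_1$ can also be checked by brute force. The sketch below builds the coefficients of $(x+h)^n$ as a polynomial in $h$ by multiplying out one factor at a time, with $x$ set to $1$ so that the coefficient of $h^1$ is exactly $a_1$. This merely illustrates the proof, it is not part of it:

```python
# Expand (1 + h)^n as a polynomial in h by repeated "foiling" and
# check that the coefficient of h^1 (which is a_1 with x = 1) is n.
def expand_binomial_power(n):
    """Coefficients of (1 + h)^n in increasing powers of h."""
    coeffs = [1]  # (1 + h)^0 = 1
    for _ in range(n):
        # Multiply the current polynomial by (1 + h): each new
        # coefficient is the old one plus its left neighbour.
        coeffs = [
            (coeffs[i] if i < len(coeffs) else 0)
            + (coeffs[i - 1] if i >= 1 else 0)
            for i in range(len(coeffs) + 1)
        ]
    return coeffs

for n in range(1, 8):
    a_1 = expand_binomial_power(n)[1]
    print(n, a_1)  # a_1 equals n every time
```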

Notice that in this proof, we’ve assumed that the number $n$ was a positive whole number. The result $\dfrac{d}{dx}[x^\alpha] = \alpha x^{\alpha-1}$ is actually true for any real number $\alpha$ at all, but this requires a much more sophisticated understanding of what $x^\alpha$ means, and so I will not try to prove that here. But, the reader should know that this is true.
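Although the proof above only covers whole-number exponents, the general rule is easy to test numerically. For example, with $\alpha = 1/2$ and $x = 4$ (arbitrary choices), the difference quotient should land near $\alpha x^{\alpha - 1} = \tfrac{1}{2} \cdot 4^{-1/2} = 0.25$:

```python
# Numerically test the power rule for a fractional exponent
# (an illustration of the more general claim, not a proof).
alpha, x, h = 0.5, 4.0, 1e-6
q = ((x + h) ** alpha - x ** alpha) / h
print(q)  # close to alpha * x ** (alpha - 1) = 0.25
```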

Example: As an example, we compute the derivative of the function $f(x) = 3x^2 + 7x - 4$ mentioned earlier:

$f^\prime(x) = 3 \dfrac{d}{dx}[x^2] + 7 \dfrac{d}{dx}[x] - \dfrac{d}{dx}[4] = 3(2x) + 7(1) - 0 = 6x + 7.$
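As a sanity check on this answer, a short Python sketch compares difference quotients of $f$ against $6x + 7$ at a few arbitrary points:

```python
# Compare difference quotients of f(x) = 3x^2 + 7x - 4 against the
# derivative 6x + 7 computed above, at a few arbitrary points.
f = lambda x: 3 * x ** 2 + 7 * x - 4
fprime = lambda x: 6 * x + 7
h = 1e-6
gaps = [abs((f(x + h) - f(x)) / h - fprime(x)) for x in (-2.0, 0.0, 1.5)]
print(gaps)  # each gap is on the order of h
```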

## Conclusion

We’ve now seen how to evaluate some derivatives. Notice the key step we needed: canceling out terms on the top and bottom of a fraction. This is the central move in evaluating derivatives using the limit definition. Any time you compute a derivative in this way, you will be looking to cancel out terms in exactly the same manner I have here.

We have now covered some of the easier-to-see ways in which derivatives can be calculated. In my next post, we will look into some trickier and even more useful rules that derivatives follow. If you want an exercise to build your calculus intuition, convince yourself that all functions that have derivatives are also continuous. Also, think of examples of graphs that have special points without a tangent line.