In a previous post, I spent time talking about what limits are on a conceptual level, and some details about how they work on a practical level. Now, I want to run through a few examples to show how computing limits actually works, and in particular a very special kind of situation – which you might call *zero divided by zero* scenarios – in which limits give us an especially important insight.

**Limits by Substitution and Continuous Functions**

Most functions that we actually encounter have a property that mathematicians call *continuity*. In visual terms, you can think of a continuous function as one without any rips, jumps, or holes (and being continuous at some particular point means there are no rips or holes at that point). What exactly do we mean here? It is helpful to give two examples.

An easy example of a hole would be a function like $f(x) = \frac{x}{x}$. So long as $x$ is not equal to 0, the value of $\frac{x}{x}$ is perfectly well-defined and in fact equal to 1. But if $x = 0$, then $\frac{x}{x}$ no longer makes any sense. You’ve divided by zero, after all. So, this division by zero “pokes a hole” in your graph. If you’d like, you could ‘smooth out’ your graph to fill in that hole. But that represents a genuine change to the function.

As for the example of a “rip” or “jump”, think about the idea of rounding. To make this easy to write, let’s round down instead of rounding ‘to the nearest integer’. In case anyone has forgotten, you ’round down’ a positive number by slicing off its decimal part (we need not deal with negative numbers here), and we define $\lfloor x \rfloor$ as “$x$ rounded down.” As you gradually alter $x$ from 1.5 to 2.5, there are only two possible values for $\lfloor x \rfloor$ – 1 and 2. Once you get all the way to 2, the value instantly jumps up from 1 to 2 without ‘connecting’ through all the numbers in between. When you graph this, it looks like you’ve leaped up from 1 to 2, like a staircase. This is the sort of thing that happens when you have a “jump” in your function.
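If it helps to see the staircase numerically, here is a quick illustrative sketch (my own, using Python’s standard-library `math.floor`, which rounds down exactly as described above):

```python
import math

# math.floor rounds a positive number down by slicing off its decimal part.
# Watch the output as x creeps up to 2: the value leaps from 1 to 2
# without passing through anything in between.
xs = [1.5, 1.9, 1.99, 1.999, 2.0]
floors = [math.floor(x) for x in xs]
print(floors)  # [1, 1, 1, 1, 2]
```

No matter how finely you sample just below 2, the output stays at 1 until the instant you reach 2 – that instantaneous leap is the “jump.”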

We’ve briefly defined what we mean by “jumps” and “holes.” But these ideas are not so important in and of themselves. It is more important to understand what it means for a graph to *not* be like that. In non-mathematical terms, functions without these defects are sometimes described as *“something you can draw without lifting up your pencil”* (try to draw a “jump” or a “hole” without lifting your pencil to see what we mean). These functions are called *continuous* – they *continue* from one number to another without any gaps, so to speak. This property is actually extremely important (it leads to what is called the intermediate value theorem, which will be discussed later), and it is worth understanding as the defining intuition behind continuity.

But in terms of the mathematics, what is continuity? In terms of limits, we can describe a continuous function as one for which the limit of $f(x)$ as $x$ approaches a value $a$ is the same as the value $f(a)$. In other words:

**Definition**: A function $f$ is called continuous at the point $a$ whenever

$$\lim_{x \to a} f(x) = f(a).$$

This definition is important to how calculus operates. Although you don’t usually have to refer to the definition itself, it helps to keep the intuition – a function with no defects like jumps or holes – in the back of your mind. It is almost always sitting somewhere in the background, and only occasionally does it come up as the crucial concept. But when it does (as in the intermediate value theorem, coming in my next post in this series), it is essential.
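To see the definition in action, here is a small numerical sketch (my own illustration; the function $f(x) = x^2 + 1$ and the point $a = 2$ are arbitrary choices): for a continuous function, computing the limit is just substitution, and sampling points near $a$ bears this out.

```python
# For a continuous function, lim_{x -> a} f(x) = f(a), so the limit
# can be computed by substitution. Sanity check with f(x) = x**2 + 1 at a = 2:
def f(x):
    return x**2 + 1

a = 2.0
# Sample f at points closing in on a from both sides (never at a itself).
samples = [f(a + h) for h in (0.01, 0.001, 0.0001, -0.01, -0.001, -0.0001)]

# Every sample is already close to f(a) = 5, and they only get closer.
print(max(abs(y - f(a)) for y in samples))  # a small number, about 0.04
```

The samples crowd in on $f(2) = 5$, which is exactly what the limit-equals-value definition of continuity predicts.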

**One-Sided Limits**

There is a minor variation on the idea of limits that can also be useful. Instead of just asking how close you are to a number, you might also ask which ‘side’ of the number you are on – the greater-than side or the less-than side. In normal limits, both of these sides must be taken into account – and in fact both must agree in order for the actual limit to exist (if the two sides disagree, this can be thought of as a “rip” or “jump” in your function). Apart from this ‘directional’ aspect, there isn’t really any difference at all between the computation of one-sided limits and regular limits, and so there won’t really be any need to mention one-sided limits after this brief discussion. The one thing worth noting is that if you are dealing with a function that is defined in different “pieces,” using one-sided limits might be necessary in order to evaluate regular limits.
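The rounding-down function from earlier gives a concrete case where the two sides disagree. A quick numerical probe (again just an illustrative Python sketch):

```python
import math

# One-sided limits of floor(x) at x = 2, probed numerically.
# From the less-than side the values sit at 1; from the greater-than
# side they sit at 2. The two sides disagree, so the ordinary
# (two-sided) limit at 2 does not exist -- that's the "jump."
left  = [math.floor(2 - h) for h in (0.1, 0.01, 0.001)]
right = [math.floor(2 + h) for h in (0.1, 0.01, 0.001)]
print(left, right)  # [1, 1, 1] [2, 2, 2]
```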

**The 0/0 Scenario**

One of the cardinal sins of school mathematics is dividing by zero. This is strictly forbidden, and there is a good reason why: it is impossible to assign an actual value to, say, $\frac{1}{0}$, without crushing the rest of the rules of addition and multiplication. Since $0 \cdot x = 1$ is never true, $\frac{1}{0} = x$ can also never be true. And yet, there is something perplexing about zero divided by zero, because the logic I used against $\frac{1}{0}$ doesn’t work any more. It is actually true that $0 \cdot x = 0$, and so… in some sense… we should want to say that $\frac{0}{0} = x$ is always true. That is still ridiculous, but there is a tiny little hint of truth in there that limits help us discover in a more careful and correct way.

Instead of thinking of dividing zero by itself, let’s think about dividing a variable by itself. What is the value of $\frac{x}{x}$? Well, if $x \neq 0$, then the $x$’s cancel out and $\frac{x}{x} = 1$. But, if $x = 0$, then we are in the conundrum from earlier.

Here is where limits come to save us. We have this expression, $\frac{x}{x}$, that is equal to 1… almost. There is a tiny little blip at 0 where we can’t make any sense of it. But we *can* make sense of its limit approaching 0. In fact, when we say that $x$ approaches zero, we assume $x$ isn’t actually zero, but just approaching ever-closer to it. Therefore, the limit enables us to actually cancel out the two $x$’s without the whole zero-divided-by-zero problem, and we conclude that $\lim_{x \to 0} \frac{x}{x} = 1$. Notice that we can use the exact same reasoning for $\frac{2x}{x}$ to conclude that $\lim_{x \to 0} \frac{2x}{x} = 2$, or even with a situation like $\frac{x^2}{x}$, where we would conclude that $\lim_{x \to 0} \frac{x^2}{x} = 0$.
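We can watch these limits emerge numerically. The sketch below (my own Python illustration) evaluates each quotient at inputs approaching 0 – never at 0 itself, which is exactly the point:

```python
# Probe a 0/0 expression at inputs approaching 0 (but never 0 itself).
def probe(f, xs=(0.1, 0.01, 0.001, 0.0001)):
    return [f(x) for x in xs]

print(probe(lambda x: x / x))      # [1.0, 1.0, 1.0, 1.0]  -> limit 1
print(probe(lambda x: 2 * x / x))  # [2.0, 2.0, 2.0, 2.0]  -> limit 2
print(probe(lambda x: x**2 / x))   # values shrinking toward 0 -> limit 0
```

Three different expressions, each of the form 0/0 at the troublesome point, and three different answers: 1, 2, and 0.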

This makes precise the vague idea from before that “0/0 can be any number”: what comes out of a zero-divided-by-zero scenario depends on *how* the top and bottom approach zero, and limits are the tool that keeps track of it.

**Why the 0/0 Scenario Matters**

This is a neat observation, but why should we care? Why would we ever want to make sense of expressions that lead us into strange zero-divided-by-zero situations anyway? Nothing in our experience seems to call for it; in fact, our experience teaches us to avoid zeros on the bottom of fractions like it’s the plague. Why should we care?

Well, it is true that what this tells us isn’t as obvious as with some other, simpler pieces of mathematics. But, as we will later see, this zero-divided-by-zero insight is the single most foundational observation that we need in order to understand the core of *how things change over time*.