## Derivatives, Tangent Lines, and Change (Explaining Calculus #5)

We’ve spent a few posts now developing the ideas of limits and continuity, two of the foundational ideas of calculus. We are now going to introduce the third foundational idea, the derivative. The derivative can be thought of as a way to capture the way that things change over time into one single formula.

In order to explain what exactly the derivative is, it will be helpful to first take a detour into a bit of geometry. Since graphs of functions are expressed in the two-dimensional plane, geometry will actually help us out a lot here.

What is a Tangent Line?

The geometric idea we need is called a tangent line. The notion of a line should be familiar – a line is something that is straight, and in math we usually assume that lines are both straight and extend forever. Another way you might view a line is as the shortest path between two points (this is more of a side comment – this view isn’t really necessary for understanding calculus). What, then, is a tangent line? Well, a tangent line has to be tangent to some other thing. Secondary schools sometimes define tangent lines for circles, so that’s where I will start. On a circle, a tangent line is a line that “just barely touches” the circle.

What do I mean here? Let’s pause. Visualize a circle – or if you struggle with visualization, draw a circle on some paper. If you draw a random line, then very likely that line will either touch the circle twice or not at all. These lines are not very interesting most of the time. A line that doesn’t touch the circle at all doesn’t even really have a name, and the line that touches the circle twice is called a secant line (I only mention this because secant lines are useful later). The tangent lines are the super-special lines that only touch the circle once. The easiest example to describe would be a horizontal line that just barely touches the top of the circle. That is a tangent line. Take some time to visualize this, or to draw it if you can’t visualize it.

My goal now is to transition this idea of tangent lines for circles to tangent lines for anything at all. For this discussion we don’t care about lines that don’t touch the circle. We thus only care about tangent lines (those that touch the circle exactly once) and secant lines (those that touch it exactly twice). For a given line, choose a point P where that line touches the circle. Now, imagine zooming in, and in, and in on that point. Notice that as we zoom in, if the line is a secant line, we can easily tell the difference between the secant line and the circle, because they don’t really go in the same direction near P. In other words, zooming in really close to a point on the secant line will give an image looking like a crossing of two roads. But tangent lines are not like this. For a tangent line, if you zoom in near P, it actually becomes increasingly difficult to tell which one is the circle and which one is the line. For the circle, imagine the horizontal example I mentioned earlier. What I mean here is that if I pick a point on the circle really close to P, then the circle is really, really close to being horizontal at that point.

If this is difficult for you, think more about it. This concept matters a great deal. As it turns out, this distinction actually defines the difference between tangent lines and secant lines. A given line is a tangent line at P if, when we zoom in towards P, the circle and line only ever become harder to tell apart and never easier. A secant line, however, might look a lot like the circle by zooming in just a little bit, but if you zoom in a lot, there will be a clear difference between the two.

The reason this new way of framing things is so helpful is because we don’t have to use circles anymore. We can use any curved shape we like – because the idea of zooming in towards a point has nothing to do with whether or not that shape is a circle. So, going forward, the idea of ‘just barely touching’ the curve is a good definition, but the definition we came up with using ‘zooming in’ is an even better definition for tangent lines.

What is a Derivative?

The idea of the derivative is tightly connected with tangent lines. In fact, I can now define what a derivative is. This is key, so pay attention.

As we’ve discussed before, we can graph functions $f(x)$ on the $xy$-coordinate plane. As it will turn out, we need $f(x)$ to be continuous in order for any of this to make sense, so assume that $f(x)$ is continuous. What we want to do is to define a totally new function, which we call $f^\prime(x)$ (read this out loud as “f prime of x”). We call this new function the derivative of $f(x)$. Now, suppose that for a given value of $x$, the line $L$ is the tangent line to the graph of $f(x)$ at the point $(x, f(x))$. Then the definition of $f^\prime(x)$ is that the slope of $L$ is exactly $f^\prime(x)$.

This definition is clear enough – that is, it is unambiguous. But it doesn’t help us very much with actually finding any numerical values of $f^\prime(x)$. We want to know now how we might find actual numerical values for $f^\prime(x)$. This is the task to which we now turn.

This task we are now beginning is the reason I initially defined a secant line. Recall that our goal is to find the slope of the tangent line at the point $P$ on the graph of $f(x)$. For ease, let’s just say the $x$-coordinate of $P$ is $x$. What, then, are we to do? This is where the continuity of the function $f(x)$ comes into play. If I know that $f(x)$ is continuous, I know that if the $x$-coordinate of a point $Q$ on the graph of $f(x)$ is really close to the $x$-coordinate of $P$, then $Q$ is actually really close to $P$ as well. Phrased differently, for continuous functions, small changes in $x$-values mean small changes in $y$-values, and since the points $P, Q$ are determined by these two values, $P$ and $Q$ must be close by. In fact, let’s say that the $x$-coordinate of $Q$ is $x+h$, for some very small (maybe positive, maybe negative) value $h$.

Now, let’s think about “zooming in” to the point $P$. Let’s also compare two lines – the tangent line $L$ and the line $L^*$ that travels through both $P$ and $Q$. Now, if you try some examples yourself (which I highly encourage) then you’ll discover that the closer together $P$ and $Q$ are, the closer together $L$ and $L^*$ are, which means that we have to zoom in further to really tell the difference between $L$ and $L^*$.

Now, this next step is the reason we spent so much time on limits. What if we let $h \to 0$ as a limit? Then, in the limit, $P = Q$. This would mean that $L$ and $L^*$ could no longer be told apart, because they would be the same line. Our conclusion now follows:

$f^\prime(x) = \text{Slope of } L = \lim\limits_{h \to 0} (\text{Slope of } L^*).$

But, then, what is the slope of $L^*$? Simple – this is the rise-over-run formula. Since the two points we defined are $P = (x, f(x))$ and $Q = (x+h, f(x+h))$, the slope of $L^*$ is $\dfrac{\text{rise}}{\text{run}} = \dfrac{f(x+h) - f(x)}{(x+h) - x} = \dfrac{f(x+h) - f(x)}{h}$. Therefore,

$f^\prime(x) = \lim\limits_{h \to 0} \dfrac{f(x+h) - f(x)}{h}.$
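This limit definition can be checked numerically. Here is a minimal Python sketch – the function $f(x) = x^2$ and the point $x = 3$ are my own illustrative choices, not from the text. The slopes of the secant lines $L^*$ should approach the tangent slope $f^\prime(3) = 6$ as $h$ shrinks toward $0$.

```python
def f(x):
    # An illustrative example function: f(x) = x^2, so f'(x) = 2x.
    return x * x

def difference_quotient(f, x, h):
    """Slope of the secant line L* through (x, f(x)) and (x+h, f(x+h))."""
    return (f(x + h) - f(x)) / h

# As h shrinks toward 0, the secant slopes approach the tangent slope f'(3) = 6.
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, difference_quotient(f, 3.0, h))
```

For this particular $f$, the quotient works out to exactly $6 + h$, so the printed slopes march toward $6$ as $h \to 0$.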

We have now derived a legitimate numerical formula for the derivative in terms of limits. To move away from numbers and back to concepts, what we have done is to say that tangent lines can be approximated by secant lines through super-close-together points. If you are not yet convinced of this, do the following. Draw a very large circle, and mark a point on that circle. Draw a tangent line at that point, using a ruler to make sure it is straight. Then, mark a second point as close to the first as you can, and use a ruler to draw a line connecting the two points. You’ll discover that, near the circle itself, these lines are very difficult to tell apart.

A Second Way of Writing Derivatives

There are two common ways of writing down derivatives in calculus. The first uses the “prime” notation, which is the $f^\prime(x)$ I’ve been using. But there is also a second way that, although it means exactly the same thing, can in many situations be more convenient. This notation is often used when we have in mind a graph $y = f(x)$ of some function. What, then, is the derivative of $y$? The notation we’ve used so far is $y^\prime$, and that is perfectly acceptable. But another way is sometimes useful. Sometimes, we will write down $\dfrac{dy}{dx}$ to talk about the derivative of $y$. This has the advantage of being very clear about what the $x$-variable is, and is also convenient when the problem in question is more about graphs than about more abstract functions. There are other situations in which this notation is helpful – those situations will become clear as they arise.

One thing to note. Just because we use the “fraction” notation $\dfrac{dy}{dx}$ does not mean that this is literally a fraction. Although, as we shall see later, the derivative when written in this form does share much in common with ordinary fractions. But it isn’t really a fraction. Care must be taken here. The motivation for this notation is that derivatives are like slopes, and slopes are “changes in y divided by changes in x.” The lowercase d essentially is shorthand for “infinitely small change,” hence the connection of derivatives with limits and this new way of writing.

Also, sometimes I may write $\dfrac{d}{dx}[ something ]$. This is normally done when writing down some kind of function or graph for something would just be an unnecessary annoyance. This, too, will be used sometimes. For the remainder of the calculus series, the reader should basically think of $y', \dfrac{dy}{dx},$ and $\dfrac{d}{dx}[y]$ as synonyms and that we can use whichever is most convenient at any moment.

What Do Derivatives Mean?

Change is the essence of what a derivative is. I find it most helpful here to rely on examples. If we are talking about position – about where we are – then the derivative of that tells us how our position is changing over time. But that is just what we mean by speed or velocity – change in location over time. So, the relationship between speed and position embodies the idea of derivatives. If you want to know about speeding up or slowing down – that is, accelerating or decelerating – that is another derivative! Just as you can think of speed as the derivative of position, you can think of acceleration as the derivative of speed.
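This chain of derivatives can be sketched in a few lines of Python. The position function $s(t) = 5t^2$ below is a made-up example (constant acceleration), not something from the text; differentiating position gives velocity, and differentiating velocity gives acceleration.

```python
def s(t):
    # Hypothetical position function: s(t) = 5t^2.
    # Exact derivatives: velocity 10t, acceleration 10.
    return 5.0 * t * t

def derivative(g, t, h=1e-6):
    # Difference-quotient approximation of g'(t) from the limit definition.
    return (g(t + h) - g(t)) / h

def velocity(t):
    return derivative(s, t)          # approximately 10t

def acceleration(t):
    return derivative(velocity, t)   # approximately 10

print(velocity(2.0))      # close to 20
print(acceleration(2.0))  # close to 10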

While numerous other examples can be given, and will be given as I continue in this series, I think this one is the best and clearest. It is also immediately apparent why derivatives are so important. Determining how fast things are going is a very common problem in engineering and physics, and the derivative naturally gives them the tool to study that aspect of our world. There are so many others as well, more than can even be listed, because of how important the notion of things changing over time is in our world. So, if you think change, think derivative.

As we move forward in the series, we will spend some time talking about developing shortcuts for calculating derivatives more quickly. After that, we can enter into the massive world of using derivatives to solve problems.

## Critical Thinking Toolkit: Clarifying Definitions

This is possibly the most important post I’ve written in my “Critical Thinking Toolkit” series so far. Ensuring we are clear on our definitions is so, so important. Every conversation we can ever have relies on definitions of certain important words, and so this tool always matters. Furthermore, when I look around in the world of political and social discourse, I see this rule being violated in every single area of discussion. Whether in political, religious, ethical, or philosophical conversations, I find that people tend to assume that everybody is using words in the same way they are without ever stopping to realize that, sometimes, the picture is more complicated. In this article, I will lay out why it is so important to clarify definitions in our conversations and provide some examples of where I think this practice would aid us in our public discourse today.

The Problem

This problem can be framed in two different sorts of ways. One, as I have already done, is to frame the problem as differences in definitions. For instance, if Person A and Person B define the term “equal rights” in different ways, they can totally agree that everyone should have equal rights while disagreeing on whether or not a certain situation qualifies as a violation of someone’s rights. Say, for example, that Person A thinks a particular thing is a violation of someone’s rights but Person B does not think so. Both people might be inclined to think that the other is being biased, bigoted, etc. because they seem to be affirming equal rights but are then disagreeing about how to implement equal rights. In reality, though, that isn’t what is happening – both people may well be applying their understanding of equal rights in an unbiased, rational manner. But, as long as these two people are unaware of this difference in definition, the discussion between them will be fruitless.

There is another way we might think of this problem – in terms of “starting places” instead of in terms of definitions. To see what I mean here, take your favorite topic of debate, let’s call it X for shorthand. You might believe something like “the debate about X really boils down to Y.” Now, imagine that you are in a debate with someone who, unknown to you, believes that “the debate about X really boils down to Z,” where Y and Z are totally unrelated. By way of analogy, perhaps the debate topic is the best basketball player in history. Maybe you think it boils down to championships, or maybe you think it boils down to overall statistics. Maybe you think offense is more important than defense, or maybe you think the two are equally important. If you begin your analysis of who the best basketball player is with these various starting points, you are likely to land at quite different conclusions. The point I want to make here is that even though it looks like the disagreement is about the topic itself – which I called X – it actually is not about that at all. The disagreement is really about the underlying starting points – which I called Y and Z.

This is key. Now… what do we do about these problems?

The Solution

If you are in a debate, you ought to do the best you can to boil down the disagreement to the source of the disagreement instead of focusing on mere consequences of that source. Arguing about consequences won’t normally get anywhere. But when we focus on the real core of our disagreements, and understand why someone with a different starting point from us would believe different things from what we believe, we can have much more fruitful conversations.

Some Examples

I’ve picked out some controversial topics to lay out thoughts on. I’m trying as best I can to be unbiased. I am not trying to pick a side on any of these issues. My point here is to attempt to lay out what I see as fundamentally different starting points between the two perspectives that are not brought up often enough. Since I’ve argued that debate should always focus on the source of the disagreement and not consequences of that source, my goal is to locate the real source in each controversy.

The Abortion Debate: This is a big one. On the issue of abortion, the liberal tends to think that the issue is all about bodily autonomy and the conservative tends to think it is all about the sanctity of human life. Furthermore, the liberal tends to think of a fetus as not having the same rights as its mother, while the conservative tends to think of the fetus as a genuine baby even before birth, having the same rights as its mother. Now, notice the different perspectives. It isn’t true that the conservative is rejecting the right of bodily autonomy, and it also isn’t true that liberals are intentionally justifying murder. Both of those conclusions can only be arrived at by applying the starting point of one group to the conclusion of the other group. If in fact the mother’s bodily autonomy is the central issue of abortion, then the liberal position makes perfect sense. If on the other hand the right to life of the unborn child outweighs the right of bodily autonomy, the conservative position makes perfect sense. Therefore, a debate about abortion should begin not where it normally begins (on the level of feminism, etc.) because that is just a consequence of the real disagreement, which is on the level of what kinds of rights an unborn child has and how those rights compare to those of its mother.

The Genesis/Science Conversation: There is an ongoing debate between young earth creationist Christians (whom I will call YECs) and those Christians who disagree, whom I shall simply refer to as NYECs (not young earth creationists). This is the debate over the age of the universe. The YEC might argue, say, that the universe is about 10,000 years old. The NYEC will likely say that the universe is something around 13.8 billion years old. I’m sure my readers have heard these debates before, and I don’t want to lay out the sides. What I want to do right now is to show how the criticism each side levels against the other is often flawed. The NYEC might, for instance, accuse the YEC of being anti-science. But this need not be true. For you could have a YEC who deeply values and trusts science, but deeply values and trusts the Bible even more because it has, in their view, a divine origin, which gives it greater authority than science, which, although extremely reliable, has human origins. If that person thinks that the Bible and science disagree on a topic, can’t we at least see why they go with the option they believe to be most reliable? Certainly, an atheist would choose science over the Bible in the very same situation, because that atheist very likely considers the Bible less reliable than science. Similarly, the YEC may accuse the NYEC of denying the authority of Scripture. But this is not necessarily so. In the Genesis passage that describes creation, there are six ‘days.’ But in the very same passage, the Hebrew word for ‘day’ is used to refer to a variety of different time periods. There are many other indications in the Bible itself that, a great many scholars (both Christian and non-Christian) believe, show that Genesis is not even trying to use the word ‘day’ as a literal 24-hour period in these passages. And if the Bible is not trying to describe a young-earth model, then of course we can affirm the authority of the Bible and reject that model – in fact, that is precisely what we ought to do. So, the mainstream NYEC and YEC criticisms of each other are often off base. The dialogue needs to shift in order to be more productive.

The God/Science Conversation: The previous point was largely about science and religion, but I mean something different here. I mean the common position of the atheist that “science has replaced religion.” There are so many problems with this general attitude, but I want to point out one. The person who says this implicitly assumes that religion and science were trying to answer the same questions, and that science answers those questions better than religion does. This is often called the god-of-the-gaps model – where the only role of God in the universe is to explain things we don’t know how to explain with science yet. But this is just not what religious beliefs were ever meant to explain. That is not at all why Christians believe that God exists. Although there might be some individual Christians who think this way, this is plainly not mainstream, historical Christian teaching. I could spend a long time talking about this (and will later write much more about it) but the summary version of the problem is that the atheist here thinks that religion is trying to answer how questions, but in reality religion is primarily focused on why questions – questions of meaning, purpose, and value. Science does not say anything at all about meaning, purpose, or value. It just doesn’t. Thus, the debate on these questions is almost entirely based upon a false presupposition about what the role of God is in religions.

The Gay Marriage Conversation: I actually don’t take much of a side on the legal debate about civil marriage between two people of the same sex. So I want to be clear that I’m not biased on the law here. However, the debate is largely misguided. The liberal side – the side that thinks same-sex marriage should obviously be legal – sees the issue as one of equal rights. But the disagreement between the conservative and liberal positions here has nothing to do with equal rights, because conservatives also believe that there should be equal marriage rights. The disagreement at its heart is over what the word marriage ought to mean. This is the reason the term ‘civil union’ keeps coming up. To the conservative, the words ‘marriage’ and ‘civil union’ mean different things. The word ‘marriage’ has as part of its definition that the members must be of opposite biological sexes. From this perspective, a same-sex marriage is like a square circle or a married bachelor – it just makes no sense. But a same-sex civil union makes perfect sense. So, the issue on the level of legality has more to do with what marriage actually is than with any debate about equal rights.

Conclusion/Why It Matters

These things matter. When two people debate a topic they actually agree on, they never make any progress. Productive debate absolutely requires both people involved to come to an understanding of where they actually differ. Making the effort to dig down to the true nature of the disagreement – whether it is the definition of a word or how to apply those definitions – is absolutely central to having productive debate. And of course, productive debate is always better than unproductive debate. Unproductive debate is nothing more than a pointless squabble or power struggle – productive debate leads to a better world.

Side-note: As I was in the process of writing this very post, I watched a video from one of my favorite YouTube pages – Capturing Christianity – that just so happened to be on almost exactly this topic. The video is an interview with a prominent academic philosopher talking about how to more effectively get at truth in our conversations, and it brings up a lot of the same ideas I thought of here, although much more fleshed out than what I write here.

Thought I’d leave the link for anyone interested: https://youtu.be/udFMuRWub7U

## COVID Testing and Bayes’ Theorem

Yesterday, an interesting conundrum came to me. Sometimes, people take two COVID tests on the same day. Imagine that one came back negative and the other came back positive… which can and does happen. Here is the tricky question… which one do you trust? The positive or the negative? You might think there is no way to compare them… but you’d be wrong. Mathematicians have an entire theory for dealing with problems exactly like this one. It is called Bayesian probability theory.

What is Bayesian Probability?

There isn’t any need to go into the incredible depth of what Bayesian probability is capable of, and so we won’t. Actually, all we need is a brief account of what Bayesian probability is meant to do and a brief explanation of how to do it. I can then walk you through how I, with my mathematical training, could go about evaluating which of the test results is probably right and which is probably wrong.

In short, Bayesian statistics or Bayesian probability is a theory of how evidence works. More specifically, Bayesian probability theory explains how evidence and probability interact and how much influence new evidence should have on your current beliefs. This theory allows us to analyze which pieces of evidence are more important than others and how to correctly add new information into the overall picture.

This is the natural way we process evidence. We all understand that seeing a raincloud icon on the weather app gives us some reason to think it will rain tomorrow, and we all understand that an accompanying forecast of a 30% chance of rain is an even better piece of evidence that, although it might rain tomorrow, it probably will not. Notice how, after every new piece of evidence, the way we think about the situation changes. If you were to look at a different weather service and see a 70% chance of rain, then your opinion should shift to account for that new information. When a mathematician, scientist, or philosopher talks about Bayesian probability or Bayesian statistics, all we mean is that we are using a mathematical theory that helps us keep our beliefs up to date with the broad scope of evidence available to us.

Why Use This Stuff Anyways?

You’d be very surprised how many people use Bayesian thinking. I don’t think many of us would be surprised to see a scientist or mathematician using this method of thinking… but how about historians, philosophers, and detectives? Well, they do! Whenever you hear somebody speaking about the best way to explain something or the most likely explanation of something, very often they are using a process something like Bayesian probability. The strength of this way of doing mathematics is that we can directly compare pros and cons. In the weather example, we can figure out whether rain or no rain tomorrow is a better explanation of why we see the forecasts that we do. In history, if you have sources that give you slightly different information about a historical event, you want to know which historical events would be the most reasonable way to explain how those differences emerged. As a philosopher, you want to understand which ideas fit better with the observations we can all agree on about the world. As a detective, you want to know which suspect better fits the evidence.

Notice in all of this that I have never once required that the match be perfect. Bayesian probability cares nothing for perfection. This method is designed specifically to deal with imperfect, messy situations. And that is what we are always dealing with, day in and day out.

How Does the Math Work?

The cornerstone of Bayesian probability theory is called Bayes’ Theorem. Before I can state what this is, I need to introduce the notation we use to write it down. When I use the capital letter $P$, this is always referring to the probability of something or other. Capital letters other than $P$ are used as shorthand for some event we care about – like whether or not it will rain. For example, I might use $A$ as shorthand for “it will rain tomorrow,” and then $P(A)$ would be the likelihood/probability that it will rain tomorrow. If I want to talk about the opposite of something, we can just put the word ‘not’ on the front – so $P(\text{not }A)$ is the likelihood that it will not rain tomorrow. (It is sometimes convenient to use a minus sign or some other symbol instead of the word not, but I won’t do that here.)

The last bit we need is called conditional probability. The notation I will use is $P(A|B)$, which you should read as “the probability of $A$ given $B$.” The key word there is given. When we ask about $P(A|B)$, we aren’t just asking about the likelihood of $A$ – we are asking about the likelihood of $A$ given that we already know $B$. This is the aspect of Bayesian probability theory that makes it possible to take new evidence into account. In the weather example, $B$ would be the weather reports we are looking at.

With all of this written down, I can now express Bayes’ Theorem. Before I write it in symbols, I’ll describe it in words. The goal is to calculate $P(A|B)$. The idea is that we can use the fact that we already know $B$ to get a head start. Since we already know $B$ happened, we can eliminate from consideration all possible situations where $B$ doesn’t happen. What we do now is flip the order of the letters for a bit. We want to ask whether $A$ would have made $B$ likely to happen – in other words, we want to know $P(B|A)$. If we also know $P(A)$, the likelihood of $A$ happening at all, then $P(B|A)\,P(A)$ basically tells us the proportion of situations where both $A$ and $B$ happen. Likewise, $P(B|\text{not } A)\,P(\text{not } A)$ tells us the proportion of situations where $B$ happens but $A$ doesn’t. Since $A$ either happens or doesn’t, those are the only two possibilities. To compute an overall probability, we pick the case we want to know about – the case where $A$ actually happens – and divide by all the possible situations – that is just $P(B)$, since we already know $B$ happened. What we’ve done is leverage the fact that we know $B$ has happened to figure out $P(A|B)$. In terms of a formula, the preceding discussion tells us what we wanted to know.

Bayes’ Theorem: For any two events $A$ and $B$,

$P(A|B) = \dfrac{P(B|A) * P(A)}{P(B)}.$

To see how this works, let’s go back to our weather example. In that example, $A$ represents “it will rain tomorrow” and we will say $B$ represents “the weather forecast says it will rain tomorrow.” The values of $P(A)$ and $P(B)$ are found using background information – so $P(A)$ would be the likelihood it will rain on a random day, and $P(B)$ would be the percent of days that weather services predict rain in your town. You could count up a few months of old weather predictions to find $P(B)$, and you could count up a year’s worth of actual rainy days to find $P(A)$. Now, $P(B|A)$ would be the likelihood that the weather service would predict rain if in fact it will rain tomorrow. That should be a reasonably high number. Let’s say, just to put numbers to it, that $P(B|A)$ is 90% – so on rainy days, weather services had predicted rain the day beforehand in their forecasts 9 out of 10 times. Let’s say that in your town it rains on 15% of days overall and that your weather service predicts rain on about 18% of days overall.

Then $P(A|B)$, the probability that it actually will rain tomorrow given the forecast, is

$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)} = \dfrac{0.9*0.15}{0.18} = 0.75.$

This means that, three times out of four, it actually will rain tomorrow. Notice the cool thing about this – I never actually used the weather service’s posted probability to do any of this. I came up with the likelihood entirely on my own. This is the power of Bayesian probability.
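The weather calculation above is short enough to script. Here is a minimal Python sketch (the function name `bayes` is my own choice) using the same numbers as in the text:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Numbers from the weather example in the text:
# P(B|A) = 0.90 (rain was forecast on 90% of rainy days),
# P(A)   = 0.15 (it rains on 15% of days),
# P(B)   = 0.18 (rain is forecast on 18% of days).
p_rain_given_forecast = bayes(0.90, 0.15, 0.18)
print(p_rain_given_forecast)  # approximately 0.75
```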

Back to the original situation now. You’ve taken two back-to-back tests. One is positive, one is negative. What to do? Bayesian probability, that’s what!

Remember that the first step to any Bayesian problem is to set up your background information. The first piece of background information would be where you live. You’d want to estimate the probability that a random person near you currently has COVID-19. You could use your city, the area of your city you live in, or perhaps just your college campus if you live on a campus and never leave. I will call the probability of a random nearby person being sick $p$. The second piece of background information is the quality of the test you use. The relevant term is the specificity of the test – which tells you how likely the test is to give you a negative result if you are truly not sick with COVID. I will call this factor $\sigma$. Notice that the specificity is phrased in Bayesian terms already – it is the likelihood that your test will come back negative given that you are not sick. We will use this later. Lastly, we need to know the rate of positive tests in the area you live in – I’ll just call this $q$. To make everything easier to write down, I will use $+$ as shorthand for testing positive and $-$ as testing negative.

We now carry forward. What we want to know is the likelihood of obtaining a certain test result given your actual health condition. There are four probabilities that need to be calculated: $P(+|sick), P(+|well), P(-|sick)$, and $P(-|well)$. Now, we call on $\sigma$, the specificity of the test. This is the probability that you will test negative given that you are well. We’ve been using the notation $P(-|well)$ for this. This means that $P(-|well) = \sigma$. Now, three to go.

We can now use a principle of probability that we mentioned earlier. Either an event happens, or it doesn’t. Pretty simple. In probability language, this means opposite events add up to 1. Since $+$ is opposite of $-$, $P(-|well)$ is opposite to $P(+|well)$. This means that $1 = P(-|well) + P(+|well)$. We now know that $P(+|well) = 1 - \sigma$. Now we know two out of the four.

This is as far as we can go without using Bayes’ theorem. I’ll now use Bayes’ theorem to calculate $P(well|-)$. The theorem tells us that

$P(well|-) = \dfrac{P(-|well)*P(well)}{P(-)}.$

Now, we need to fill in the blanks. We already know that $P(-|well) = \sigma$. Our background information gave us the value of $P(well)$, which is $1-P(sick) = 1-p$. The background information similarly tells us that $P(-) = 1 - q$. Therefore,

$P(well|-) = \dfrac{\sigma (1-p)}{1-q}.$

We now use the same “adding up to one” trick we used before to see that $P(sick|-) = 1 - \dfrac{\sigma (1-p)}{1-q}$. This trick has brought the “sick” category back into the formula, which is what we needed. We now need Bayes’ theorem again to calculate $P(-|sick)$. Using the background information much the same way as before,

$P(-|sick) = \dfrac{P(sick|-)*P(-)}{P(sick)} = \dfrac{\bigg( 1 - \dfrac{\sigma (1-p)}{1-q} \bigg) * (1-q)}{p} = \dfrac{(1-q) - \sigma(1-p)}{p}.$

Notice that if we use the original meanings of $\sigma, p, q$, then this means that

$P(-|sick) = \dfrac{P(-) - P(-|well)P(well)}{P(sick)}.$

This other way of forming the expression is easier to read. To find $P(-|sick)$, we count all the negative tests, subtract away those people who are actually well, and divide by the number of sick people. This is actually pretty intuitive when you slow down and think about it – we can describe the whole process in terms of counting people – and yet look at how far it has gotten us! As before, the fourth number, $P(+|sick)$, can be found using the “adding up to 1” trick.

We now have all the numbers we need:

$P(-|well) = \sigma,$

$P(+|well) = 1 - \sigma,$

$P(-|sick) = \dfrac{(1-q) - \sigma(1-p)}{p},$

$P(+|sick) = 1 - \dfrac{(1-q) - \sigma(1-p)}{p}.$
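
To make these formulas concrete, here is a minimal Python sketch that computes all four conditional probabilities from the background information. The values of $p$, $q$, and $\sigma$ are made-up placeholders chosen to be mutually consistent, not real COVID data.

```python
# A minimal sketch of the four conditional probabilities derived above.
# The inputs p, q, and sigma are illustrative placeholders, not real data.

def conditional_probabilities(p, q, sigma):
    """p = P(sick), q = P(+), sigma = P(-|well)."""
    p_neg_well = sigma
    p_pos_well = 1 - sigma
    p_neg_sick = ((1 - q) - sigma * (1 - p)) / p
    p_pos_sick = 1 - p_neg_sick
    return {
        "P(-|well)": p_neg_well,
        "P(+|well)": p_pos_well,
        "P(-|sick)": p_neg_sick,
        "P(+|sick)": p_pos_sick,
    }

probs = conditional_probabilities(p=0.05, q=0.0875, sigma=0.95)
```

With these placeholder numbers, $P(-|sick)$ comes out to $0.2$, and each well/sick pair of probabilities adds up to 1, as the “adding up to one” trick requires.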

Now what? Well, the likelihood of your test results if you are actually well is the product of the probabilities of each result given that you are well. In Bayesian terms, this is

$P(-|well)*P(+|well) = \sigma(1-\sigma).$

Likewise, the likelihood of your test outcomes if you are sick is

$P(-|sick)*P(+|sick) = \dfrac{(1-q) - \sigma(1-p)}{p} \bigg(1 - \dfrac{(1-q) - \sigma(1-p)}{p}\bigg).$

We are essentially done now. All that remains is to compare the two values. Whichever one is larger is the more likely of the two.

This is about as far as we can go with these simple probability methods. But, there is more to be said. First, we might notice that if we treat $\dfrac{(1-q) - \sigma(1-p)}{p}$ as a single unit, perhaps call it $s$, then the likelihood of your outcome if you are actually sick is $s(1-s)$. We can view this whole problem now in terms of values of a function, $f(x) = x(1-x)$. Using a different part of mathematics called calculus, we can discover that the biggest possible value of this expression is precisely $1/4$, which happens when $x = 1/2$. We also learn from calculus that the closer $x$ is to $1/2$, the bigger $f(x)$ is. This means that all you really need to do is to find $s$ and figure out whether it is closer to $1/2$ than $\sigma$ is.
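
The comparison can be sketched in a few lines of Python. This is illustrative only (the numeric inputs are placeholders), and it shows that the direct likelihood comparison and the distance-to-$1/2$ shortcut agree, since $f(x) = x(1-x)$ grows as $x$ moves toward $1/2$.

```python
# Sketch: deciding which hypothesis (sick or well) better explains one
# positive and one negative test. Inputs are illustrative placeholders.

def likelihood(x):
    """f(x) = x(1 - x): likelihood of one + and one - result."""
    return x * (1 - x)

def more_likely_sick(p, q, sigma):
    s = ((1 - q) - sigma * (1 - p)) / p   # s = P(-|sick)
    return likelihood(s) > likelihood(sigma)

# Equivalent shortcut from the text: whichever of s and sigma is
# closer to 1/2 gives the larger likelihood.
def more_likely_sick_shortcut(p, q, sigma):
    s = ((1 - q) - sigma * (1 - p)) / p
    return abs(s - 0.5) < abs(sigma - 0.5)
```

For instance, with $p = 0.05$, $q = 0.0875$, $\sigma = 0.95$, we get $s = 0.2$, which is closer to $1/2$ than $0.95$ is, so “sick” is the more likely explanation of the mixed results.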

And of course, in real life there is a speculative nature to all of this. You can only be sure of your final answer to the degree that you know your background information is reliable – and that is something that, although we do have it for many diagnostic tests, we do not yet have for COVID tests. So, while we can use the power of mathematics in these situations and many, many more, we must always use caution and humility in doing so.

## Database: What Do History’s Top Scientists Think About God?

This is now the third entry in a collection of inter-related database posts. The object of these posts is to collect data about the religious convictions of leading figures in the history of various intellectual disciplines. So far, I have undertaken this project for mathematicians and philosophers. This list is aimed at scientists.

As a note, I realize that you can’t really prove the truth or falsity of a religious position via sociological statistics, and that is not my intention here. My motivation for this project was to understand how culture has impacted the development of math, science, and philosophy. For example, if only 5% of humans hold to religious belief X but 20 out of the “Top 100” scientists in history believed in X, then that gives us good reason to believe that this religion either actively or passively (via other positions it affirms) encourages the pursuit of science. For instance, Christianity teaches that the universe was created by God and that God designed the human brain to be capable of reason and intellectual pursuits. People who believe Christianity, then, would have some very good reasons to believe that science can be done – since God’s creating the universe gives them good reason to think the universe would have patterns in it, and knowing God created them in His image gives them good reason to think they are capable of discovering those patterns. On the other hand, if a religion taught as a central doctrine that the world is completely chaotic, people who believe that religion aren’t going to be doing much science, if any.

So, we can gain from these observations a sort of empirical understanding of whether certain religious backgrounds do or do not encourage scientific study. Of course, there are other factors at play here. But looking at this kind of data ought to help us understand these questions better.

As a Christian, I am particularly motivated to understand the role of Christians in science. Especially in light of the firestorm of modern accusations that Christianity is, at its core, anti-science. I’ll readily admit that there are many Christians today who oppose various scientific ideas. But, over the course of history, the data seems to show exactly the opposite. Christians appear all throughout the sciences – in fact, the data suggests that Christians are by far the dominant group in the history of science. There is a similar phenomenon in the data I collected for mathematics and philosophy.

I think it is then quite reasonable to conclude, absent other evidence, that Christianity is in no way anti-intellectual at its core, even if certain subcultures within Christianity might be anti-intellectual. If you don’t believe me, take a look at the data for yourself and see what conclusions you come to. I will be periodically updating this post with any new information I come across, as I admit readily that pinning down the religious beliefs of historical figures is often tricky to do.

Link for the Top 100 I used: https://www.sapaviva.com/

Top 100 Scientists

1. Isaac Newton – Christian
2. Leonhard Euler – Christian (Calvinist)
3. Gottfried Leibniz – Christian
4. Carl Friedrich Gauss – Christian
5. Michael Faraday – Christian (Protestant)
6. Euclid of Alexandria – Likely polytheist
7. Galileo Galilei – Christian (Catholic)
8. Nikola Tesla – Likely Deist
9. Marie Curie – Agnostic
10. Albert Einstein – Agnostic (maybe pantheist)
11. Alhazen ibn al-Haytham – Muslim
12. Louis Pasteur – Christian (Catholic)
13. Johannes Kepler – Christian
14. Liu Hui – Likely Taoist or Buddhist
15. Max Planck – Theist (Christian most of his life)
16. Augustin Louis-Cauchy – Christian (Catholic)
17. James Clerk Maxwell – Christian (Evangelical or Presbyterian)
18. Avicenna of Persia – Muslim
19. Amedeo Avogadro – Likely Christian
20. Dmitri Mendeleev – Deist
21. Robert Koch – Agnostic
22. Ernest Rutherford – Likely Theist (Likely Christian)
23. Nicolaus Copernicus – Christian (Catholic)
24. Bernhard Riemann – Christian
25. Zhang Heng – Daoist/Confucian
26. Blaise Pascal – Christian (Catholic)
27. Muhammad ibn Musa Al-Khwarizmi – Muslim
28. Henri Poincaré – Atheist
29. Abu Rayhan Al-Biruni – Muslim
30. Isambard Brunel – Unclear
31. Claudius Galen of Pergamon – Unclear
32. Joseph-Louis Lagrange – Agnostic
33. Qin Jiushao – Unclear
34. Paul Ehrlich – Likely Jewish
35. Archimedes of Syracuse – Likely polytheist
36. Nasir Al-Din Al-Tusi – Muslim
37. Robert Boyle – Christian
38. Pierre-Simon Laplace – Agnostic
39. Zhu Shijie – Likely Shamanism
40. Wernher von Braun – Christian (Evangelical)
41. Henri Becquerel – Unclear
42. David Hilbert – Agnostic
43. Niels Bohr – Atheist
44. Srinivasa Ramanujan – Hindu
45. Gregor Mendel – Christian (Catholic, Augustinian friar)
46. Emmy Noether – Likely Jewish
47. Antoine Lavoisier – Christian (Catholic)
48. Brahmagupta – Hindu
49. Edward Jenner – Christian
50. Pierre de Fermat – Christian (Likely Catholic)
51. Zu Chongzhi – Unclear
52. James Watt – Deist
53. René Descartes – Christian (Catholic)
54. John von Neumann – Likely Agnostic
55. Omar al-Khayyam – Agnostic/Atheist
56. Hermann von Helmholtz – Unclear
57. Robert Hooke – Likely Christian
58. George Washington Carver – Christian
59. Pythagoras of Samos – Pythagoreanism
60. Joseph Louis Gay-Lussac – Atheist
61. Aryabhata – Likely Hindu
62. Alessandro Volta – Christian (Catholic)
63. Christiaan Huygens – Christian (Protestant)
64. Carl Linnaeus – Likely Christian (Lutheran)
65. Walter Hermann Nernst – Unclear
66. Hippocrates of Cos – Unclear
67. Charles-Augustin de Coulomb – Likely Christian (Catholic)
68. Girolamo Cardano – Likely Catholic
69. Andrey Kolmogorov – Unclear
70. Hans Christian Oersted – Theist (Maybe Christian)
71. Andreas Vesalius – Christian (Likely Catholic)
72. Daniel Bernoulli – Christian (Protestant)
73. Heinrich Hertz – Christian (Lutheran)
74. Jean le Rond d’Alembert – Likely Christian
75. Shen Kuo – Daoist or Buddhist
76. Bhaskaracharya of India – Hindu
77. John Dalton – Christian (Quaker)
78. André-Marie Ampère – Christian
79. Enrico Fermi – Agnostic
80. Claude Bernard – Likely Atheist (Maybe Catholic… this is disputed)
81. Johann Heinrich Lambert – Christian (Likely Protestant)
82. James Prescott Joule – Christian
83. Seki Kowa Takakazu – Unclear
84. Hendrik Antoon Lorentz – Likely Atheist (“Freethinker”)
85. Otto Hahn – Christian (Lutheran)
86. Luigi Galvani – Likely Christian (Catholic)
87. Jean-Baptiste Joseph Fourier – Christian (Likely Catholic)
88. Abu-Kamil ibn Aslam Shuja – Muslim
89. Georg Simon Ohm – Christian (Protestant)
90. William Thomson Kelvin – Christian
91. John Bardeen – Likely Deist
92. Li Shizhen – Likely Neo-Confucian
93. James Joseph Sylvester – Jewish
94. Wilhelm Conrad Roentgen – Likely Christian
95. Sergei Pavlovich Korolev – Likely atheist
96. Antonie van Leeuwenhoek – Christian (Reformed/Calvinist)
97. Jesse Ernest Wilkins, Jr. – Unclear
98. Humphry Davy – Deist
99. Lise Meitner – Christian (Lutheran convert, ethnically Jewish)
100. Alexander Fleming – Unclear (Presbyterian or Agnostic?)

Statistics on Beliefs

Total Counted: 88 (Uncounted: 12)

Theist / Atheist + Agnostic / Other: 61 / 15 / 12

Christian: 44

Atheist/Agnostic: 15

Uncommitted Theist/Deist: 8

Muslim: 6

Jewish (religiously): 3

Hindu: 4

Other: 8

## Critical Thinking Toolkit: The Fallacy of Arguing from Ignorance

When debating important topics, whether with your friends, online with strangers, or in a public debate with spectators, there is always a high emphasis placed on proving your case. People ask why you believe what you do, ask you to show your evidence, ask you to give them your arguments. There is nothing wrong with an inclination to ask people for reasons for what they believe – this is healthy. But there is a very real danger of letting this instinct run amok. This danger goes under the name of arguing from ignorance. Here, we’d like to have a brief discussion of what this logical fallacy is and why it is fallacious, as well as pointing out some examples of how this fallacy can play out in conversation.

What is an Argument from Ignorance?

The key word to understand is perhaps the word proof. This word has at least two commonly-used meanings. In mathematics, logic, and (sometimes) philosophy, the word proof denotes something irrefutable. Proofs in this context are required to be so persuasive that rejecting the proof amounts to rejecting one of the foundational rules of logic. Outside of these very narrow fields (and even occasionally within some of these fields), this super-charged kind of proof doesn’t exist. Most of the time, when we use the word proof we really mean evidence-based reasoning. This is much, much broader and is the normal mode of reasoning in the physical sciences, politics, historical studies, and pretty much anything else you can think of.

With an understanding of how we use the word proof, we can talk about arguments from ignorance. An argument from ignorance looks something like: because you can’t prove X, X must be false or because you can’t disprove X, it must be true.

Why is this Fallacious?

There are a great many things wrong with arguments from ignorance. In order to treat the issue with the right level of finesse, I need to distinguish between the two kinds of provability. The most obvious difficulty comes up when you try to apply the strong type of provability to a realm like experimental science, history, or politics, fields in which that kind of provability is completely outside the picture. This issue is hardly worth commenting on because of how obviously bad it is – it would be on par with taking issue with a political party platform because it does not prove any mathematical theorems. Such an objection would be completely off base. But we can do even better. If we try to use the correct kind of proof in the correct domains of inquiry, the fallacy still shows up.

Mathematics has Debunked Arguments from Ignorance

The previous discussion should convince the thinking person that arguments from ignorance are fallacious. But what is even more surprising, in the realm of mathematics arguments from ignorance have been proven to be fallacious – in the strong sense of the word prove, the kind of proof that is irrefutable. This is one of the most unexpected developments in the entire history of mathematics, and only came about in very recent times.

Now, what is this development? It is the pair of “incompleteness theorems” proved by the mathematician and logician Kurt Gödel. What they say is summarized well by the Stanford Encyclopedia of Philosophy [1]:

“Gödel’s two incompleteness theorems are among the most important results in modern logic, and have deep implications for various issues. They concern the limits of provability in formal axiomatic theories. The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F. According to the second incompleteness theorem, such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent).”

The second incompleteness theorem is fascinating in its own right, but doesn’t really concern us here. We are concerned with the first incompleteness theorem. Let’s slow down and explain the philosophical jargon. A formal system is just the body of true things that can be proven mechanically using only logic and a finite number of axioms which are taken to be self-evident truths within the formal system. The system being consistent means that there are no statements that are both true and false within that system – since that is inherently absurd. When it talks about a certain amount of arithmetic, this is essentially referring to the ability of the logical system to accurately describe the basic properties of addition, subtraction, and counting (the exact level of detail doesn’t matter much for our purposes here). The conclusion of the theorem is that, if all the previous assumptions hold, there is a statement G within that formal system that is neither provable nor disprovable. We can extrapolate this to mean that there is a statement about whole numbers that, even though it is true, cannot ever be proven to be true. This is a disproof of the idea of arguing from ignorance. Arguments from ignorance say that because you can’t prove something, it must be false. But Gödel’s first incompleteness theorem uses the laws of logic to deduce that there are statements about whole numbers that are true but cannot be proven. This is a direct refutation of the entire concept of arguing from ignorance.

What About that Other Kind of “Proof”?

But what about if by “prove” we mean “provide reasonable evidence for”? Surely, a claim about which we can’t find any evidence must be false. Again, no. I could produce endless examples, but just one should do.

Suppose you go back in time a few thousand years, back to a time when nobody knew yet that the earth is a sphere and not flat. Imagine then that you get into a debate with an ancient human about the shape of the earth. Being so far back in time, civilization is not yet developed enough to produce the evidence that we need to show that the earth is round. If the method of argument from ignorance is valid, then the ancient man can conclude that since you cannot show him evidence that the earth is round, therefore the earth must be flat. But of course, the earth is not flat. Ergo, the method of argument that concluded that the earth is flat is flawed, ergo the method of argument from ignorance is flawed.

You could do the same thing with any scientific theory, or a great number of other truths we take for granted.

Conclusion

I think this ought to be a clear enough presentation that you can’t dismiss something as definitely false on the basis that nobody has proved it to be true. Of course, we should be skeptical of claims about which no evidence has been produced. But we equally must be skeptical that we have adequately looked for evidence in an unbiased way. For me, it would take years of serious searching to come anywhere near concluding that there is no evidence at all for some position, and in fact I think there is evidence for many positions that I disagree with – including the position that God does not exist. Now, I don’t think the evidence there is very convincing, not nearly as convincing as the evidence in favor of the truth that God does exist, but I’m quite willing to grant that the atheist does have a nonzero amount of evidence. I think that is the proper position to take, and I think humanity would benefit greatly from such an approach in the hotly debated issues of our time.

Citations

[1] “Gödel’s Incompleteness Theorems,” Stanford Encyclopedia of Philosophy.

## Understanding Continuity (Explaining Calculus #4)

In both life and academic science, we often come across things that change incrementally over time. If we are buying a fast car, we want to know how quickly that car accelerates from 0 to 60 miles per hour – and the answer is most certainly not zero seconds, because the car must first hit every speed between 0 and 60. Just as in many other situations we encounter in day-to-day life and in more academic areas, we don’t “leap” from one point to another suddenly without first crossing over everything in between the two points.

Now, what do we make of this mathematically? How might we express these ideas of continual change over time in mathematical terms? The answer, as it turns out, lies in the ideas of limits we have developed so far.

Understanding “Continuity”

As in much of mathematics, we will phrase input-output relationships as functions. In the car example from earlier, we might call our input the amount of time we’ve been slamming the gas pedal and the output our current speed. In mathematics, we have a name for the idea of continual development of a function over time – we call such functions continuous.

How then do we define continuity? I think the most helpful way to view this is by thinking about what it would mean to be not continuous – usually called discontinuous. Imagine that the function $f(x)$ is not continuous at 7. This would mean that, at 7, things jump in some way. Another way we could think of the same thing is that one part of the graph is ripped apart from another. Before we move on, let’s use two numerically helpful examples.

Example 1: Define the function $f(x)$ to be $x$ rounded down to the nearest whole number. Let’s think about what happens to $f(x)$ near 2. If you pick any number $a$ that satisfies $1 < a < 2$, then $f(a) = 1$. If you instead choose a number $b$ that satisfies $2 < b < 3$, then $f(b) = 2$. Now, suppose I tell you that $c$ is very close to 2 and ask you what value $f(c)$ has. You can’t really tell me anything, because there is a huge difference between the cases $c > 2$ and $c < 2$.

In terms of limits, we can say here that $\lim\limits_{x \to 2} f(x)$ does not exist, since the “left-hand side” and the “right-hand side” of $f(x)$ look extremely different.
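
This jump can be seen numerically. Here is a quick sketch in Python, using `math.floor` for the rounding-down function: values of $f$ just below 2 and just above 2 disagree, which is exactly why the two-sided limit fails to exist.

```python
import math

# floor(x) rounds x down to the nearest whole number, as in Example 1.
# Just to the left of 2 the function outputs 1; just to the right, 2.
left  = math.floor(2 - 1e-9)   # a number slightly below 2
right = math.floor(2 + 1e-9)   # a number slightly above 2
# left == 1 and right == 2: the one-sided behaviors disagree at 2,
# so floor is discontinuous there.
```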

Example 2: This example is different from the first one in a key way. We now define $f(x)$ differently. Define $f(x) = 0$ whenever $x \not = 0$ and $f(0) = 1$. Unlike in Example 1, the left and right sides of 0 are actually similar – in the language of limits, we have $\lim\limits_{x \to 0} f(x) = 0$. But, the value of the limit is not the same as the actual value of $f$ at 0. It is as if we poked a hole at 0 and shifted that point up without changing anything else.

In this case, we still have discontinuity. There is a “microscopic jump” that happens at 0.

A Precise Definition of Continuity

How do we use these examples of jumps to make a more careful, mathematical definition of continuity? In other words, how do we define what it means to not have a discontinuous jump? Here is the standard mathematical definition, which we have now motivated by discussing examples where the idea breaks down.

Definition: A function $f(x)$ is continuous at the point $a$ whenever we have

$\lim\limits_{x \to a} f(x) = f(a).$

Example 1 was an example of a situation where the expression $\lim\limits_{x \to a} f(x)$ does not make sense. Example 2 was a situation where both $f(a)$ and $\lim\limits_{x \to a} f(x)$ make sense. You could, if you’d like, use an example like $f(x) = \dfrac{1}{x}$ as an example where $f(a)$ doesn’t make sense (in this case, $f(0)$ doesn’t make sense).

Most functions that we deal with are continuous, or at least continuous everywhere that we care about. So remembering this definition verbatim is not necessarily important unless you have to deal with an especially difficult function. In conceptual terms, a continuous function should be thought of as a function whose graph has no rips, jumps, or holes.

Real-World Meaningfulness of Continuity

The idea of “no rips, jumps, or holes” can be reframed in a way that has obvious real-world implications. If a function $f(x)$ is continuous, it means that if we change $x$ by a small amount, then the output value $f(x)$ also changes by only a small amount. What this ends up implying is that continuous functions are especially easy to approximate, which is useful in computer science and engineering – and is one of the facts that allows us to use calculators to find decimal expansions for complicated numbers and functions.

Here is an example of how that might work. Let’s say you want to approximate the value of $\sqrt{3}$. We can do this by using the continuous function $f(x) = x^2$. Since this function is continuous, “$y$ is close to $\sqrt{3}$” and “$y^2$ is close to 3” are basically the same statement. You could then find a fraction whose square root you know how to compute, and if that fraction is close to 3, then its square root is very close to $\sqrt{3}$. Here is an example. The number $2.89 = \dfrac{289}{100}$ is pretty close to 3, and its square root is exactly $1.7 = \dfrac{17}{10}$. This means, since $f(x) = x^2$ is continuous, that $1.7 \approx \sqrt{3}$. This is just one example of a way that knowing about continuity can be helpful.
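
The same approximation can be written out as a tiny Python sketch:

```python
# Because f(x) = x^2 is continuous, a number whose square is close to 3
# is itself close to sqrt(3).
approx = 1.7                           # 1.7^2 = 2.89, which is close to 3
error_in_square = abs(approx**2 - 3)   # about 0.11
error_in_root = abs(approx - 3**0.5)   # about 0.032 -- even smaller
```

Notice that a square within $0.11$ of 3 gave us a root within about $0.03$ of $\sqrt{3}$: small changes in input produce small changes in output, which is the continuity idea at work.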

An Important Result of Continuity

Continuous functions are very helpful in the real world, but they are also massively important conceptually. There are many reasons why, most of which I won’t get into, but I will lay out two of them. The first, called the Intermediate Value Theorem, can be used to guarantee that lots of equations definitely have solutions, even if you don’t know how to write down what that solution is. The second, called the Extreme Value Theorem, guarantees that under very general conditions, questions about maximum and minimum values always make sense.

The Intermediate Value Theorem (IVT) basically says “If $f(x)$ is a continuous function and hits the numbers $a$ and $b$ on its graph, then it hits every number between $a$ and $b$ too.” The Extreme Value Theorem (EVT) basically says “If $f(x)$ is a continuous function, then it always has a maximum and minimum value in any ‘finite window’.”

Here are more formal ways of expressing these two ideas.

Intermediate Value Theorem: Suppose that $f(x)$ is a continuous function and that $f(a), f(b)$ are two unequal numbers. Then if $M$ is a number between $f(a)$ and $f(b)$, then $f(x) = M$ has a solution with $x$ between $a$ and $b$.

Extreme Value Theorem: Suppose that $f(x)$ is a continuous function. Then if $f(x)$ makes sense for all $x$ satisfying $a \leq x \leq b$, then there are both a largest value and smallest value of $f(x)$ for $x$ satisfying $a \leq x \leq b$.

It isn’t important that we go too deep into exactly why these work (but perhaps the visual ways I’ve explained continuity might suggest to you why both of these must be true). But because of the IVT and EVT, continuity lets us know that many real-world functions that matter either definitely have maximum or minimum values (which matters a great deal if that function counts how much money your company makes) or that certain equations definitely have solutions (which matters a great deal if that equation tells you something about the structural integrity of a building).
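
As one concrete illustration, the IVT is the justification behind the bisection method for solving equations numerically. Here is a minimal sketch (the particular function and interval are just illustrative choices):

```python
# Bisection search, justified by the IVT: if f is continuous and f(a), f(b)
# have opposite signs, the IVT guarantees a solution of f(x) = 0 between
# a and b, and repeatedly halving the interval homes in on it.

def bisect(f, a, b, tolerance=1e-10):
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    while b - a > tolerance:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid   # sign change is in the left half
        else:
            a = mid   # sign change is in the right half
    return (a + b) / 2

# Example: solve x^2 - 3 = 0 on [1, 2], i.e. find sqrt(3).
root = bisect(lambda x: x**2 - 3, 1, 2)
```

This ties the two previous threads together: continuity guarantees the solution exists, and the halving procedure finds it to any accuracy you like.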

Continuity is an extremely important concept in calculus, and is one that is worth reviewing over and over again until you really deeply understand what it means. Going forward, pretty much every situation we find ourselves in will make use of continuous functions, so this is good to keep in mind.

## What is Living with ADHD Like?

I write about a wide variety of interests and issues on my blog. Most recently, I’ve been writing about calculus and general facts about academic argumentation and logic that are helpful for both day-to-day thinking and big picture questions. I’m also in the beginning of a long reading project that will eventually lead to a lot of posts about physics, math, and theology.

Another interest of mine is psychology and mental health. I write much less about this than other topics, mainly because I know less about it, so I normally don’t feel qualified. But although I don’t have much academic knowledge about psychology, I was diagnosed with attention deficit hyperactivity disorder – known as ADHD – as a child and have lived with this my whole life. Since it is now ADHD Awareness Month (which in all-too-typical ADHD fashion I didn’t know until halfway through the month), I figured I’d make a post discussing a slice of my experience living with ADHD towards the end of October.

Before reading on, it is extremely important to understand that I am not a mental health professional. I speak from personal experience, but ADHD is among the most versatile of mental health conditions and can actually have diametrically opposed symptoms between different people. So, this is a very complicated topic, and nobody should take my word over a professional’s word. But, perhaps what I have to say can help you begin to empathize more with people in your life who have ADHD.

On a quick Google search, I found a brief yet helpful definition of ADHD. To summarize it, ADHD can cause impulsivity, hyperactivity, and problems directing focus. I’d like to provide additional clarification on what “problems with directing focus” means. Most people who have heard of ADHD think that it means things like distractibility and zoning out a lot (ironically, I zoned out while writing this sentence). But, while that is part of the experience a person with ADHD has, it is often not the complete story. Allow me to give a more concrete analogy to help explain why this is incomplete.

Imagine that your level of focus is expressed on a scale of 1 to 100, where 1 would mean being asleep or braindead and 100 means something so engaging you couldn’t take your eyes away even if someone shot you. Of course, nobody is really ever at 1 or 100. A person without ADHD has pretty good control over where on the scale they are at any given moment, and probably they are always somewhere between 35 and 65. A person with ADHD differs from this model in two ways. Firstly, we have a lot of trouble maintaining control over where on the scale we are at any given moment. Secondly, our day to day experience has a broader range of ratings on the scale, let’s say 20 to 80. The lower numbers represent zoning out and being distracted. The higher numbers represent what is called hyperfocus – such an intense concentration that the rest of the world might as well not be there.

Let me give you some examples of hyperfocus and my inability to control what I focus on, since these are the two aspects mentioned above that are least understood.

Hyperfocus: When I was in college, I greatly enjoyed homework in several of my classes. There were times that I sat down in the library, intending to do 1 hour or so of homework, and what felt like 20 minutes later, 8 hours had passed and I had done an entire week’s worth of homework. Even setting a notification on my phone to get me to do other things would occasionally not work.

Lack of control: When I was a child, I had trouble responding to people talking to me if I was watching TV. It wasn’t that they were far away and I couldn’t hear them – I could. But the flashing lights of a TV screen kept my brain in a loop, and even though my brain was able to in some sense recognize that someone was trying to talk to me, I was for whatever reason not able to make the second step of responding to that person. It sometimes took my mom 10 or 15 attempts to get me to respond to her when my issue was at its worst.

Since I’ve lived with ADHD my whole life, it isn’t like I have any external reference frame for my experiences. However, I do notice that other people seem to think about certain things differently from me.

ADHD and Socialization: Socializing is one of those things that is extremely difficult to do without control over your level of focus. It takes extended and consistent concentration to pick up many social cues. Perhaps, for example, someone’s body language is telling me that I should stop talking about a certain topic. I might notice, or I might be so engrossed in the topic itself that I don’t notice. I also have a problem with interrupting people. There are a couple reasons this happens. Sometimes it is because the person takes a very brief pause and I don’t notice that they were about to say more. Sometimes, I have an idea or question and before I think long enough to realize that the other person is still talking, I’m talking. I don’t interrupt people on purpose, but it happens a lot on accident. It is something I have always had to work on, and will probably have to work on for the rest of my life.

ADHD, Loneliness, and Guilt: Although I miss a lot of social cues, I have always had a sense that I was missing things and that I didn’t fit in quite as naturally as others. Knowing, on a subconscious level, that I was not experiencing conversations in the same way other people do, along with other similar differences, led to a strong sense of social insecurity and isolation. I had a hard time making friends. For instance, the first close friend I ever made in a school context was in eighth grade – and I think the only reason we were able to connect so well was that he also has ADHD. I never really felt like people were actively rejecting me or intentionally trying to be mean to me (although, as many people have, I did experience some bullying). The problem was more that I could tell nobody understood how I thought about or experienced the world. Being extremely into mathematics from a young age didn’t help this feeling, but I think the general sense I had of not understanding social cues very well (or at any rate, not quickly enough to be of any help) was in many ways the deepest root of my insecurities growing up.

Let me give a specific example of how ADHD can lead to low self-esteem: zoning out. As a child, I got distracted easily, and naturally, people occasionally became frustrated at what must have felt like being ignored. I wasn’t trying to ignore anybody, and I know the people close to me didn’t blame me. But realizing that people often became frustrated about something I felt I had no control over can do a lot of damage to a person’s self-esteem. I think this happened to me – and to this day I still feel guilty when I zone out in the middle of a conversation or don’t notice that someone is uncomfortable.

A lot of people with ADHD take prescription medication that helps with some of the issues that ADHD brings into life. Many do not take medication, but many do – and I am one of them. So, what is this stuff anyways?

What Medication Does: There are actually two very, very different kinds of medication that can help with ADHD. I don’t know much about one of them, but I can talk a little bit about my understanding of the family of medications called stimulants.

At first glance, you wouldn’t think that a stimulant would help with ADHD – after all, aren’t these people already hyper? But these actually can help. Here is the way I’ve come to think about it (I’m not a doctor, so I may well be wrong here). Imagine that your brain is constantly extremely under-stimulated (by stimulation, I mean roughly the amount of chemical responses in your brain). In such a case, your brain will try to reach healthy levels of stimulation by grabbing onto whatever happens to give you a boost in that moment. So sometimes that could be an intense focus on one thing, other times, jumping randomly between many things. Your brain wants to, well, feel normal, and the only way for it to feel normal is to constantly seek out intense stimuli to make up for a deficiency. To me, this explains how an under-stimulated brain could be an ADHD brain.

A stimulant would help by raising the level of chemical stimulation to a level nearer to normal. That way, your brain will not be so “needy” and will have more freedom in deciding what to focus on. This enables us to exercise more control over our attention span, which levels out both hyperfocus and inattentiveness.

There is a class of medications called depressants, which as I understand it are biochemically the polar opposite of stimulants, that can also help with ADHD (to give some extreme examples, meth is technically a stimulant and alcohol is a depressant, although neither would be used medically for ADHD). I don’t understand as well how depressants work, so I don’t want to comment on that.

Pros: The pros of medication are obvious – for many people, they work. This includes me. I have taken a stimulant since not long after being diagnosed with ADHD as a child, and it has always been helpful. I can tell when I forget to take it: my motivation drops to almost zero, I feel as if I have less control over my actions, and I am very spacy and unproductive. When I do take my medicine, I am much more alert and productive and am almost never plagued by the incessant, almost torturous feeling of boredom that takes over when I forget. It isn’t as if medication takes away all the symptoms of ADHD – I can still tell that I have it – but it moderates the symptoms.

Cons: Medication works for some people, but not for everyone. As with any drug, there can be side effects, or the drug could just simply fail to work for a given ADHD person. The side effects can be both physical (like loss of appetite) and psychological (anxiety, depression, etc.). For some people, taking a medication for ADHD completely alters their personality. I have a friend who is in that situation, and because of how radically it changes him, he chooses not to take anything.

There are, of course, other treatment options as well, and different combinations of the various alternatives work better for different people. For example, a lot of people benefit most from learning how to notice when they are struggling and learning techniques for regaining control over their focus. This would be parallel to someone with depression learning how to notice when they are having toxic negative thoughts and how to effectively remind themselves that these thoughts are false, or someone with anxiety learning how to notice when they are becoming anxious and ways they can calm themselves down. As in all of life, nothing is perfect here, but lots of things can help.

I hope this has been a helpful read for anyone who takes the time to read it all the way through. Gaining a better understanding of what your friends with ADHD might be struggling with is one of the best things you can do to help them with the feelings of isolation and loneliness that often plague us. At the end of the day, knowing that people care about me and want to understand how ADHD affects me is one of the things that helps the most.

## Critical Thinking Toolkit: False Dilemma

Choices are important. Every day, we make lots of decisions about what to do and what to say. Decisions become especially weighty when the two paths we might take are totally opposed to one another or grind against each other. There is a certain tension in decisions that carry a great deal of importance and whose options are completely different. In story-telling, love triangles are an example of such tension – if I like two people and can only choose one, who do I choose? Sophie’s choice – a heartbreaking decision of which of your children dies and which lives – would be another powerful example of how weighty decisions can be.

This weight carries over into the intellectual sphere as well. Sometimes, believing one thing versus another carries with it huge implications. Is there really any such thing as morality, or is it all a matter of biologically-wired, emotionally-charged opinions? If there is such a thing as morality, what system of morality is most accurate? Does God exist or not, and what are the implications of each? These are hotly contested for a reason – you can’t choose both, and either option you choose in these debates has widespread implications.

Situations like these are often called dilemmas. The word comes from the prefix di-, meaning two, and -lemma, meaning premise or proposition. Thus, a dilemma puts forward two ideas, two premises, two propositions, that conflict with each other in a foundational way. When such conflict is encountered, there are three options. You can accept the first premise and reject the second, you can accept the second premise and reject the first, or you can conclude that something is fundamentally wrong with the ideas involved. Dilemmas are very often used in logic and argumentation, and so it is important to understand the relevant terminology.

In philosophy, dilemmas are used both to clarify positions and to try to prove a point of your own. Any dilemma involves two statements, which we will call A and B; these are called the horns of the dilemma. You might also think of them as ‘forks in a road.’ In philosophy, it is much more common to use dilemmas as a way to prove a point of some kind, usually that something is impossible. The dilemmatic way of thinking proceeds as follows: “Given such-and-such a way of thinking, one of A and B must be true, and they cannot both be true. So either A is true, B is true, or such-and-such way of thinking is absurd and must be rejected.”

A good example of this kind of argument would be moral dilemmas. One famous example is that of hiding a Jewish family from Nazis. Now, we know that generally speaking we are morally obligated not to lie, and we are also morally obligated to save lives. But if we were in Nazi Germany hiding a Jewish family, and a Nazi officer comes to our door and asks if we are hiding any Jews, what do we do? In this circumstance, no matter what we do, we will violate one of the moral principles we’ve just mentioned. If we say yes, we avoid lying but allow harm to come to others; if we say no, we save others by lying. So, what do we say here? There are three options you could hypothetically take. You could argue that protecting the family is more important, so you should say no even though lying is normally bad; you could argue that telling the truth is more important, so you should say yes; or you could say that this proves that morality is a construct that cannot fit into yes/no categories at all.

I view this example as a legitimate dilemma. But there are attempts at similar dilemmas that are, in a way, fundamentally broken. These broken examples are the topic of this post.

What is a False Dilemma?: A false dilemma, sometimes called a false dichotomy, happens when you present an either/or situation when there are actually other options. In the language I’ve been using, this would be telling someone that they have to choose between A or B when really there is another option C that is also viable. There are many well-rehearsed examples of false dilemmas in popular culture that many people do not realize are false dilemmas, and we will overview some of these later. For now, it is important to discuss how to overcome a false dilemma if someone puts you into one.

How to Overcome a False Dilemma: If someone gives me a false dilemma – in other words, if they present an argument claiming that I absolutely have to choose between two options A and B, neither of which I actually accept – how do I overcome this problem? It is actually quite simple. All you have to do is show them that there is another option, say C. You don’t strictly have to believe C yourself, but ideally you ought to, since otherwise your interlocutor could just construct a trilemma (like a dilemma, but with three options instead of two) and you’d still be stuck.

A Few Examples

Here are a few examples of false dilemmas. To rehearse your understanding of what false dilemmas are, find additional alternatives that defeat these false dilemmas:

• “Steve just disagreed with that atheist’s argument, so he must be a Christian.”
• “I recently heard Sally say that she doesn’t like the Democratic platform. She must be a Republican.”
• “That guy just said he approved of President Trump’s recent executive order. He must agree with everything that guy says.”
• “I don’t think I’ve ever seen Anna eat meat. She must be a vegan!”

Some or all of these are probably obviously flawed. That’s ok – that’s the point. These particularly egregious examples of false dilemmas serve as “muscle memory” for your brain. Simpler examples of fallacies help you spot harder ones.

Now, let’s take a look at a harder one.

A Tricky Example – Euthyphro’s Dilemma

Euthyphro’s dilemma is one of the older dilemmas in philosophy, and it continues to raise its head in discussions of ethics today. Many people think it is a legitimate dilemma, and a great many disagree. Before we come to any conclusions, let’s take a look at the dilemma itself. It will serve as an example of a dilemma that is tricky enough that it still occasionally receives attention in academia today.

In Plato’s dialogue Euthyphro, Socrates asks Euthyphro, “Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?” This dilemma is meant as a kind of refutation of those who believe in God and in objectively-binding moral values and duties. In more modern language, the dilemma is often presented in the following way: Is something good because God wills it, or does God will it because it is good? This is a dilemma – it presents two alternatives. If the first is true – if things are good because God wills them – then if God willed that we murder people, murder would become good. The difference between good and evil would become arbitrary, which is an unacceptable conclusion. If, on the other hand, God wills things because they are good, then goodness is outside of God, which contradicts the classical position of those who believe in God. So, the dilemma concludes, something is terribly wrong with the idea of God.

But, is this a real dilemma? I don’t think so. To see more clearly why, let’s use some shorthand. Let’s represent “God wills it” by A and “being good” as B. To say that “something is good because God wills it” is to say “because God wills something, therefore it is good.” In other words, the first horn of the dilemma states that “A implies B.” In just the same way, the other horn of the dilemma can be stated in the form of “B implies A.”

Now, take a minute to think about this. Is it always true that, given two statements A and B, one must imply the other? Well, of course not. Certain statements are logically independent, so that neither implies nor contradicts the other. So at least a priori, this is a false dilemma. The one who supports the Euthyphro dilemma must provide an explanation of why these are the only alternatives. But the person who says God exists has options. For example, the Christian can say that it isn’t true that one of these explains the other, but that God’s will and the ultimate source of goodness are actually the same thing. So, it isn’t true that one comes before the other (as the dilemma suggests). To say that “A and B are different ways of expressing the same thing” is not at face value the same as either horn of the dilemma, and so the advocate of the Euthyphro dilemma must either show how this new position actually is one of the horns or show why it is impossible. Therefore, I think this is another example of a false dilemma.
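The claim that two statements need not imply each other can even be checked mechanically. The following small Python sketch (my own illustration, not part of the original argument) runs through every truth assignment for two statements and confirms that neither “A implies B” nor “B implies A” holds universally:

```python
from itertools import product

# Material implication: "p implies q" is false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

# All four truth assignments for the pair (A, B).
assignments = list(product([True, False], repeat=2))
a_implies_b = all(implies(a, b) for a, b in assignments)
b_implies_a = all(implies(b, a) for a, b in assignments)
print(a_implies_b, b_implies_a)  # False False - neither implication is forced
```

Since neither implication holds across all assignments, nothing about two arbitrary statements forces one to explain the other, which is exactly the gap the dilemma's advocate would need to close.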

Even if you disagree with me here, this should serve as an example of how to identify and approach trickier instances of false dilemmas.

## Working With Limits (Explaining Calculus #3)

In a previous post, I spent time talking about what limits are on a conceptual level, and some details about how they work on a practical level. Now, I want to run through a few examples to show how computing limits actually works, and in particular a very special kind of situation – which you might call zero divided by zero scenarios – in which limits give us an especially important insight.

Limits by Substitution and Continuous Functions

Most functions that we actually encounter have a property that mathematicians call continuity – such functions are called continuous functions. In visual terms, you can think of a continuous function as one without any rips/jumps or holes (and being continuous at some particular point means there are no rips or holes at that point). What exactly do we mean here? It is helpful to give two examples.

An easy example of a hole would be a function like $f(x) = \dfrac{x}{x}$. So long as $x$ is not equal to 0, the value of $f(x)$ is perfectly well-defined and in fact equal to 1. But if $x = 0$, then $f(x)$ no longer makes any sense. You’ve divided by zero, after all. So, this division by zero “pokes a hole” in your graph. If you’d like, you could ‘smooth out’ your graph to fill in that hole. But that represents a genuine change.
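To make the hole concrete, here is a small Python sketch (my own illustration, not from the original post) of what happens when you evaluate $f(x) = \dfrac{x}{x}$ near zero and at zero itself:

```python
# A small illustration of the "hole" in f(x) = x/x.

def f(x):
    return x / x  # perfectly fine away from zero, undefined at zero

# Away from zero, f(x) is exactly 1.
for x in [0.1, 0.01, 0.001, -0.5]:
    assert f(x) == 1.0

# At zero itself, the expression makes no sense: Python refuses to
# divide by zero, just as the mathematics does.
try:
    f(0)
except ZeroDivisionError:
    print("f(0) is undefined - a hole in the graph")
```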

As for the example of a “rip” or “jump”, think about the idea of rounding. To make this easy to write, let’s round down instead of rounding ‘to the nearest integer’. In case anyone has forgotten, you ’round down’ a positive number by slicing off its decimal part (we need not deal with negative numbers here), and we define $\lfloor x \rfloor$ as “$x$ rounded down.” As you gradually alter $x$ from 1.5 to 2.5, there are only two options for $\lfloor x \rfloor$ – 1 and 2. Once you get all the way to 2, the value instantly jumps from 1 to 2 without ‘connecting’ through all the numbers in between. When you graph this, it looks like you’ve leaped up from 1 to 2, like a staircase. This is the sort of thing that happens when you have a “jump” in your function.
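A quick sketch (again my own, not from the post) shows the staircase behavior, with Python's `math.floor` playing the role of our rounding-down function:

```python
import math

# Walk x up toward and past 2 and watch floor(x) leap.
xs = [1.9, 1.99, 1.999, 2.0, 2.001]
values = [math.floor(x) for x in xs]
print(values)  # [1, 1, 1, 2, 2] - the value leaps from 1 to 2 with nothing in between
```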

We’ve briefly defined what we mean by “jumps” and “holes.” But these ideas are not so important in and of themselves. It is more important to understand what it means for a graph to not be like that. In non-mathematical terms, functions that do not have these defects are sometimes described as “something you can draw without lifting up your pencil” (try to draw a “jump” or “hole” without lifting your pencil to see what we mean here). These functions are called continuous – they continue from one number to another without any holes, so to speak. This property is actually extremely important (it leads to what is called the intermediate value theorem, which will be discussed later) and should be clearly understood as the defining property of continuous functions.

But in terms of the mathematics, what is continuity? In terms of limits, we can describe a continuous function as one for which the limit towards a value $x$ is the same as the value $f(x)$. In other words:

Definition: A function $f(x)$ is called continuous at the point $a$ whenever

$\lim\limits_{x \to a} f(x) = f(a)$.

This definition is important to how calculus operates. Although you don’t usually have to refer to the definition itself, it is helpful to keep in the back of your mind the picture of a function without any defects like jumps or holes. That picture is almost always sitting somewhere in the background, and only occasionally does it come up as the crucial concept. But when it does (as in the intermediate value theorem, coming in my next post in this series) it is essential.
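The definition can be turned into a rough numerical experiment. In the sketch below (my own illustration; `limit_estimate` is a made-up helper, not a standard function), we sample a function just to either side of a point and compare the result to the actual value there:

```python
import math

def limit_estimate(f, a, h=1e-8):
    """Crudely estimate lim_{x -> a} f(x) by averaging values just left and right of a."""
    return (f(a - h) + f(a + h)) / 2

# A continuous function: the estimated limit matches the actual value f(3) = 10.
g = lambda x: x**2 + 1
assert abs(limit_estimate(g, 3) - g(3)) < 1e-6

# The floor function at x = 2: from the left we see 1, from the right we see 2,
# so the two-sided limit cannot equal floor(2) = 2 - a jump, not continuity.
print(math.floor(2 - 1e-8), math.floor(2 + 1e-8))  # 1 2
```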

One-Sided Limits

There is a minor variation on the idea of limits that can also be useful. Instead of just asking how close you are to a number, you might also ask which ‘side’ of the number you are on – the greater-than side or the less-than side. In normal limits, both of these sides must be taken into account – and in fact both must agree in order for the actual limit to exist (if the two sides disagree, this can be thought of as a “rip” or “jump” in your function). Apart from this ‘directional’ aspect, there isn’t really any difference at all between the computation of one-sided limits and regular limits, and so there won’t really be any need to mention one-sided limits after this brief discussion. The one thing worth noting is that if you are dealing with a function that is defined in different “pieces,” using one-sided limits might be necessary in order to evaluate regular limits.
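For a concrete picture of one-sided limits, here is a short sketch (a hypothetical example of mine, not from the post) using a function defined in two pieces, whose one-sided limits at 0 disagree:

```python
# f jumps at 0: it equals x below 0 and x + 1 from 0 upward.
def f(x):
    return x if x < 0 else x + 1

# Approach 0 from the left and from the right.
left = [f(-h) for h in (0.1, 0.01, 0.001)]   # -0.1, -0.01, -0.001: heading toward 0
right = [f(h) for h in (0.1, 0.01, 0.001)]   # 1.1, 1.01, 1.001: heading toward 1
print(left, right)
# The one-sided limits disagree (0 from the left, 1 from the right),
# so the ordinary two-sided limit at 0 does not exist.
```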

The 0/0 Scenario

One of the cardinal sins of school mathematics is dividing by zero. This is strictly forbidden. There is a good reason why, too. The reason it is forbidden is that it is impossible to assign an actual value to, say, 1/0, without crushing the rest of the rules of addition and multiplication. Since $x * 0 = 1$ is never true, $x = \dfrac{1}{0}$ can also never be true. And yet, there is something perplexing about zero divided by zero, because the logic I used against 1/0 doesn’t work any more. It is actually true that $x*0 = 0$, and so… in some sense… we should want to say that $x = \dfrac{0}{0}$ is always true. That is still ridiculous, but there is a tiny little hint of truth in there that limits help us discover in a more careful and correct way.

Instead of thinking of dividing zero by itself, let’s think about dividing a variable by itself. What is the value of $\dfrac{x}{x}$? Well, if $x \not = 0$, then the $x$‘s cancel out and $\dfrac{x}{x} = 1$. But, if $x = 0$, then we are in the $\dfrac{0}{0}$ conundrum from earlier.

Here is where limits come to save us. We have this expression, $\dfrac{x}{x}$, that is equal to 1… almost. There is a tiny little blip at 0 where we can’t make any sense of it. But we can make sense of its limit approaching 0. In fact, when we say that $x$ approaches zero, we assume $x$ isn’t actually zero, but just approaching ever-closer to it. Therefore, the limit enables us to actually cancel out the two $x$‘s without the whole zero-divided-by-zero problem, and we conclude that $\lim\limits_{x \to 0} \dfrac{x}{x} = 1$. Notice that we can use the exact same reasoning for $\dfrac{2x}{x}$ to conclude that $\lim\limits_{x \to 0} \dfrac{2x}{x} = 2$, or even with a situation like $\dfrac{x^2}{x}$, where we would conclude that $\lim\limits_{x \to 0} \dfrac{x^2}{x} = \lim\limits_{x \to 0} x = 0$.
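To see these three limits numerically, here is a small probe (my own sketch, not from the post) that evaluates each expression at values of $x$ shrinking toward zero:

```python
# Each expression is 0/0 at x = 0 itself, but near zero each settles
# toward its own limit: 1, 2, and 0 respectively.
exprs = {
    "x/x":   lambda x: x / x,
    "2x/x":  lambda x: 2 * x / x,
    "x^2/x": lambda x: x**2 / x,
}
results = {name: [f(10**-k) for k in (2, 4, 6)] for name, f in exprs.items()}
for name, samples in results.items():
    print(name, samples)
```

Each expression is meaningless at zero itself, yet each settles toward a different number as $x$ shrinks, which is the sense in which "0/0 can be any number."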

This explains the vague idea derived before that “0/0 can be any number.” Limits help us with this situation.

Why the 0/0 Scenario Matters

This is a neat observation, but why should we care? Why would we ever want to make sense of expressions that lead us to strange zero-divided-by-zero situations anyways? Nothing in our experience appears to suggest anything like that; in fact, our experience teaches us to avoid zeros on the bottom of fractions like the plague. Why should we care?

Well, it is true that it isn’t as obvious what this tells us as it is with some other, simpler pieces of mathematics. But, as we will later see, this zero-divided-by-zero insight is the single most foundational observation we need in order to understand the core of how things change over time.

## Database: What Do History’s Top Philosophers Think About God?

Philosophy has been going on for thousands of years. Many of the greatest and most influential names in the history of human intellect come out of the philosophical tradition. It is therefore interesting to look at what these great minds think about any issue – whether that be politics, religion, ethics, or anything else. Whether or not we agree with these great thinkers, thinking through their ideas is helpful in formulating our own.

The purpose of this article in my database series is to give some statistics on what some top philosophers across history think about the existence of God, and of religion in particular. Although such statistics cannot prove much, I’ve found they do help in dispelling popular misconceptions. For example, a lot of people seem to believe that the Christian church has always been anti-intellectual. If this were actually true, you’d expect to see almost no Christians among the most influential academics of history, and you’d expect any Christians you do find to be quite “bad Christians,” so to speak. Thus, looking at the religious beliefs of historically influential philosophers, mathematicians, scientists, historians, etc. could serve as evidence for or against a claim like “Christians are anti-intellectual.” And of course Christianity is just an example here – you could ask similar questions about atheism, Buddhism, certain political affiliations, or any other set of beliefs.

If there is significant underrepresentation of people with certain beliefs in a variety of academic fields, then that might possibly serve as evidence that those beliefs tend to draw those people away from those fields. Similarly, if there is an overabundance, it might be evidence that those people are particularly drawn to those fields. These conclusions would be especially true of groups with greater degrees of freedom to choose their careers – in other words, the more freedom you have, the more we can learn about you based on the decisions you actually make.

For these reasons, it is helpful to be aware of the beliefs of important intellectuals in various fields. It cannot actually prove anything about the truth of those beliefs, but what it does do is give us one piece of insight into how certain belief systems lead people to behave. As with my previous database on mathematicians, this database of philosophers is meant to eliminate as much bias as possible by letting history speak for itself. I have made this list of “Top 100 Philosophers” based on an article that doesn’t appear to be coming from either a religious or non-religious perspective.

1. Aristotle – Theist
2. Plato – Unclear (Platonic idealism)
3. Socrates – Unclear
4. Confucius – Confucianism (Founder)
5. Pythagoras – Pythagoreanism (Founder)
6. Gautama Buddha – Buddhism (Founder)
7. Augustine of Hippo – Christian (Catholic Bishop and Saint)
8. Thales – Pantheism (or perhaps polytheism)
9. Immanuel Kant – Deism (maybe Christianity)
10. Epicurus – Polytheist
11. René Descartes – Christian (Catholic)
12. Thomas Aquinas – Christian (Catholic Priest and Saint)
13. Niccolò Machiavelli – Likely Atheist (maybe agnostic)
14. Jean-Jacques Rousseau – Christian (Calvinist)
15. Laozi – Taoism (Founder)
16. Heraclitus – Theist
17. Avicenna – Muslim
18. Seneca the Younger – Theism (Stoic philosopher)
19. Sun Tzu – Unclear
20. John Locke – Christian (Protestant)
21. Democritus – Likely Atheist (but pretty unclear)
22. Plutarch – Greek Polytheism (Priest at Temple of Apollo)
23. Friedrich Nietzsche – Atheist
24. Diogenes of Sinope – Unclear
25. Georg Wilhelm Friedrich Hegel – Christian (Lutheran)
26. Thomas Hobbes – Likely Christian (Heretical)
27. Thomas More – Christian (Catholic Saint)
28. Anaximander – Unclear
29. Desiderius Erasmus – Christian (Catholic Priest)
30. Parmenides – Unclear
31. Baruch Spinoza – Likely pantheist/panentheist
32. Francis of Assisi – Christian (Catholic Friar)
33. Protagoras – Agnostic or Atheist
34. Zeno of Elea – Unclear
35. Empedocles – Theism (Classical theism?)
36. Anaxagoras – Unclear
37. Plotinus – Theism
38. Averroes – Muslim
39. David Hume – Agnostic or Atheist
40. Pliny the Elder – Likely Atheist (Naturalist)
41. Arthur Schopenhauer – Atheist
42. Friedrich Engels – Atheist
43. Francis Bacon – Christian (Anglican)
44. Ludwig Wittgenstein – Atheist
45. Lucretius – Atheist (or Agnostic)
46. Charles de Secondat – Likely Deist
47. Anaximenes of Miletus – Likely panentheist (in his case, ‘air is divine’)
48. Peter Abelard – Christian (Catholic)
49. Gorgias – Likely Atheist/Agnostic
50. Auguste Comte – Atheist (positivist)
51. Michel Foucault – Atheist
52. Origen – Christian (Catholic, Church Father)
53. Xenophanes – Deist
54. Mencius – Confucianism
55. Karl Popper – Agnostic
56. Tertullian – Christian (Catholic, Church Father)
57. Michel de Montaigne – Likely Atheist
58. Epictetus – Unclear
59. Isidore of Seville – Christian (Catholic Archbishop, Church Father)
60. Zeno of Citium – Pantheism (Founder of Stoicism)
61. Theophrastus – Unclear
62. Arius – Christian (Heretical Catholic)
63. Henri Bergson – Likely Jewish (seems complicated)
64. Leucippus – Unclear
65. Rudolf Steiner – Pantheist (Founder of Anthroposophy)
66. Roger Bacon – Christian (Catholic Friar)
67. Zhuangzi – Taoist
68. Al-Farabi – Muslim
69. Eusebius of Caesarea – Christian (Catholic Bishop)
70. William of Ockham – Christian (Catholic Friar)
71. Comenius – Christian (Protestant)
72. Antisthenes – Theist or Henotheist
73. Anicius Manlius Severinus Boethius – Christian (Catholic Martyr)
74. Nagarjuna – Buddhist
75. Al-Ghazali – Muslim
76. Philo – Jewish
77. Diogenes of Apollonia – Pantheism (A bit fuzzy)
78. Nasreddin – Muslim (Sufi)
79. Pyrrho – Unclear
80. Mikhail Bakunin – Atheist
81. Clement of Alexandria – Christian (Catholic, Church Father)
82. Edmund Husserl – Christian (Lutheran)
83. Søren Kierkegaard – Christian
84. Jürgen Habermas – Likely Agnostic (Maybe Atheist)
85. George Berkeley – Christian (Bishop in Church of Ireland)
86. Johann Gottlieb Fichte – Likely Atheist
88. Aristippus – Unclear
89. Ludwig Andreas Feuerbach – Atheism
90. Walter Benjamin – Jewish (maybe agnostic/atheist)
91. Herbert Spencer – Agnostic
92. Zhu Xi – Confucianism
93. Mozi – Likely pantheist (Founder of Mohism)
94. Posidonius – Unclear
95. John of Damascus – Christian (Catholic monk)
96. Antonio Gramsci – Agnostic or Atheist (Marxist)
97. Athanasius of Alexandria – Christian (Catholic Bishop, Church Father)
98. Karl Jaspers – Theist (Likely Deist)
99. Wilhelm von Humboldt – Unclear
100. Carl von Clausewitz – Unclear

Statistics on Beliefs

Total Counted: 84 (Uncounted: 16)

Theist / Atheist + Agnostic / Other: 44 / 21 / 19

Christian: 27

Atheist/Agnostic: 21

Uncommitted Theist/Deist: 9

Muslim: 5

Jewish (religiously): 3

Buddhist: 3

Other: 16

Some Interesting Observations

• Authority Figures in Catholic Church: 15
  • Includes Friars, Church Fathers, Martyrs, Saints, Priests, Bishops, Archbishops.
  • It is especially interesting to me that of the 27 Christians in the list (at the time of writing), 15 have some kind of religious role in the Catholic church. There is a huge preeminence of Catholics in the list.
• Founders of Religions/Religious Schools: 5
  • Includes Confucianism, Pythagoreanism, Buddhism, Taoism, and Mohism.