I'm not sure most people realize how often the fundamental theorem of calculus comes up. I'm even more surprised how many people are hard pressed to describe it, even if they work in a technical field where calculus is used.

The fundamental theorem of calculus basically says that taking the derivative (the rate of change) and integrating (finding the area) are like yin/yang, hot/cold, addition/subtraction. They're complementary and undo each other.

Anyway, I'm writing this because I think I've stumbled across a kinda unintuitive consequence of it that shows how widespread this duality between area and rate is. I was looking at a program written in Mathematica and came across the following snippet:

Mean@Differences@list

This code takes the mean of the differences of a list of numbers called list. The differences are simply the differences between adjacent numbers: second minus first, third minus second, and so on. The mean is just the average of those. The code here is inefficient. A small amount of algebra shows that there is a quicker way to compute this value than taking all of the differences and then taking their mean.
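To make this concrete, here's a Python equivalent (just for illustration; the function names are mine) of Mean@Differences, alongside the shortcut we're about to derive: since the differences telescope, the mean is just (last - first) divided by the number of gaps.

```python
# Python equivalent of Mathematica's Mean@Differences@list (illustrative).
def mean_of_differences(xs):
    # Successive differences: second minus first, third minus second, ...
    diffs = [b - a for a, b in zip(xs, xs[1:])]
    return sum(diffs) / len(diffs)

def shortcut(xs):
    # The telescoping shortcut: only the endpoints survive.
    return (xs[-1] - xs[0]) / (len(xs) - 1)

xs = [3, 1, 4, 1, 5, 9]
print(mean_of_differences(xs))  # 1.2
print(shortcut(xs))             # 1.2
```

The shortcut looks at only two numbers no matter how long the list is, which is the whole point of the algebra below.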

Let's say that our list of numbers is (a,b,c,d,e,f). Then our list of differences is (b-a,c-b,d-c,e-d,f-e). To average them, we add them all up and divide by the length of the differences list, which is 5:

(b-a+c-b+d-c+e-d+f-e)/5

which telescopes down to (f-a)/5

It's not too hard to see. This result, however, is essentially just the fundamental theorem of calculus. That fact isn't as clear - after all, no derivatives or integrals are actually used. But discrete versions of them are hidden in the problem.

First, the Differences function is a kind of discrete derivative. It turns our list of numbers into a list of the rates of change between the numbers, and a rate of change is essentially just a derivative.

The second thing is that taking the mean of a set of numbers is kinda like integrating. In fact, most people will remember that you can take the average value of a continuous function by integrating it and dividing by the length of the range over which you are averaging.
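We can sanity-check that continuous averaging rule numerically. The sketch below (a simple midpoint-rule sum, details mine) averages f(x) = x^2 over [0, 3]: the exact integral is 9, so the average value should be 9/3 = 3.

```python
# Average value of f on [a, b]: (1/(b-a)) * integral of f,
# with the integral approximated by a midpoint-rule sum.
def average_value(f, a, b, n=100000):
    h = (b - a) / n
    integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    return integral / (b - a)

print(average_value(lambda x: x * x, 0.0, 3.0))  # very close to 3.0
```

Averaging a list of samples is the same idea: summing plays the role of integrating, and dividing by the count plays the role of dividing by the length of the interval.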

Putting these two together shows that they undo each other. The integral from a to z of a derivative is just the original function evaluated at z minus the function evaluated at a. We divide by the length z-a to get the average.

Actually, the correspondence between the difference operator and differentiation is kinda fun in general:

http://en.wikipedia.org/wiki/Difference_operator