Mar 15, 2010

Lambda Calculus for Tensor Analysis

I know I can't shut up about it, but I really like lambda calculus. Even better is llama calculus, the anything-goes, fast-and-loose version I use for everything. In fact, the only place I don't like using lambda calculus is theoretical computer science. I think it offers an abstraction that's really needed in clean math, and this is extremely apparent now that I'm taking this tensor analysis course. Take, for example, pullbacks, where we're essentially dealing with higher-order functions. Or the fact that we can make a map from a vector space V to its dual with a bilinear form. In math notation, Wikipedia says this is done by the map:

$ v \mapsto \langle v, \cdot \rangle $

This just seems ... wrong to me. I prefer the lambda notation because it can handle more complex ideas:

$ \lambda v. \lambda x. \langle v, x \rangle $
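
If it helps to see it run, here's a minimal Haskell sketch of that curried map from V to its dual. The names and types here (Vector as a list of Doubles, the dot product standing in for the bilinear form, toDual) are placeholder choices of mine, not anything standard:

type Vector = [Double]
type Covector = Vector -> Double

-- the bilinear form <v, x>, here just the standard dot product
inner :: Vector -> Vector -> Double
inner v x = sum (zipWith (*) v x)

-- the map V -> V*, literally \v -> \x -> <v, x>
toDual :: Vector -> Covector
toDual v = \x -> inner v x

So toDual [1,2,3] is the covector "dot with [1,2,3]", and toDual [1,2,3] [4,5,6] evaluates to 32.0.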


More commonplace is to see this definition of the tensor product:

$ (\tau \otimes \theta)(v, w) = (\tau v) (\theta w) $

whereas I feel more comfortable writing in my notes:

$ \otimes = \lambda \tau, \theta. (\lambda v, w. (\tau v) (\theta w)) $
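
Continuing the Haskell sketch from above (same made-up Vector and Covector types), the lambda-style tensor product of two covectors comes out almost verbatim:

-- tensor tau theta (v, w) = (tau v) * (theta w)
tensor :: Covector -> Covector -> (Vector, Vector) -> Double
tensor tau theta = \(v, w) -> tau v * theta w

For instance, tensor (toDual [1,0]) (toDual [0,1]) ([2,3], [4,5]) works out to 2 * 5 = 10.0.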

Truth be told, I would write both, but I trust the lambda notation more. The ideas are defined much more concretely. For pullbacks:

$ f^* = \lambda p. p \circ f $

Then of course we have an easy way to talk about the function that makes a pullback:

$ \lambda f. \lambda p. p \circ f $
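
In Haskell (again, pullbackAlong is just a name I'm making up for this post), both of these collapse into post-composition:

-- \f -> \p -> p . f : pull a map p back along f
pullbackAlong :: (a -> b) -> (b -> c) -> (a -> c)
pullbackAlong f = \p -> p . f

-- equivalently: pullbackAlong = flip (.)

So the pullback f^* from above is just pullbackAlong f.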

Iono, maybe I'm crazy for mixing different notations like this, but it makes things clear to me.

This all ties back to earlier content on this blog -- good notation concerns me a lot. The symbols we use for math reflect the content of the framework we are working in and can be seen as agents of it. Not only do concretely defined, intuitive notations make learning and recording content easier, they may, as I have argued, free us from unintended psycholinguistic effects.