There are some overarching ideas from math that seem to make their way into my everyday life. Orthogonality is one of the newer ones. I've come to appreciate it more recently, having started to see it in so many different areas and applications.

Most engineers and scientists use the term to refer solely to vectors, but it extends to many other kinds of mathematical objects, and the underlying idea is broader still. Parts of code, for example, can be orthogonal: if changing one part doesn't affect the others, that is intuitively like orthogonal vectors. Part of good software design is then a kind of orthogonalization. (My spell checker keeps warning me that "orthogonalization" is technical jargon, but I can't seem to find a suitable expression for it in plain English.)
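As a small sketch of what this looks like in practice (the receipt example and all the names here are my own, purely illustrative), compare a function that entangles computation with formatting against a design where the two concerns are separated and can change independently:

```python
# Coupled design: computing the total and formatting the output are
# entangled, so changing the output format risks breaking the math.
def report_total_coupled(prices):
    total = 0.0
    out = "Receipt:\n"
    for p in prices:
        total += p
        out += f"  item: ${p:.2f}\n"
    out += f"  total: ${total:.2f}"
    return out

# "Orthogonalized" design: each function has one concern, and editing
# one does not affect the other.
def total(prices):
    return sum(prices)

def format_receipt(prices, total_amount):
    lines = ["Receipt:"]
    lines += [f"  item: ${p:.2f}" for p in prices]
    lines.append(f"  total: ${total_amount:.2f}")
    return "\n".join(lines)
```

Both produce the same receipt, but in the second version a new output format or a new way of totaling (discounts, taxes) touches only one function.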

Analysis is best done by breaking a complicated thing into orthogonal components. This is often taken for granted. It is intuitively clear in linear algebra, but less so with, say, Legendre polynomials, where it can look to a student as though we are adding complexity just to obtain orthogonality; it is not obvious that this makes computation any easier. Perhaps the best example of the perils of a poorly orthogonalized system comes from statistical modeling, in the form of multicollinearity. A multicollinear model is nothing more than a statistical model whose variables are not well orthogonalized. Multicollinearity is a cause of poor statistical model design in the same way that tightly coupled objects are a cause of poor program design.
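Both points can be checked numerically. A quick NumPy sketch (the function `legendre_inner` and the synthetic data are my own, for illustration): Legendre polynomials really are orthogonal on [-1, 1], and a design matrix with nearly collinear columns has a huge condition number, while an orthogonalized basis for the same column space is perfectly conditioned:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_inner(m, n, points=200):
    # Inner product of P_m and P_n on [-1, 1] via Gauss-Legendre
    # quadrature, which is exact for polynomials of this degree.
    x, w = np.polynomial.legendre.leggauss(points)
    Pm = L.legval(x, [0] * m + [1])  # coefficient vector selecting P_m
    Pn = L.legval(x, [0] * n + [1])
    return np.sum(w * Pm * Pn)

# Distinct Legendre polynomials integrate to zero against each other;
# a polynomial against itself gives 2 / (2n + 1).
print(legendre_inner(2, 3))  # ~0: orthogonal
print(legendre_inner(3, 3))  # 2/7

# Multicollinearity: two predictors that are nearly copies of each other.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 1e-8 * rng.normal(size=100)   # almost identical to x1
X_bad = np.column_stack([x1, x2])

# Orthogonalize via QR: Q's columns span the same space as X_bad's.
Q, _ = np.linalg.qr(X_bad)
print(np.linalg.cond(X_bad))  # enormous: fitted coefficients are unstable
print(np.linalg.cond(Q))      # ~1: well conditioned
```

The ill-conditioning is exactly what makes multicollinear regression coefficients swing wildly with tiny changes in the data, while the orthogonalized basis behaves.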