


If you're asking about practical advantage, then you need to forget about everything you learned in analysis class. A practical calculation doesn't care whether something is pointwise or uniformly convergent. Broadly speaking, what makes a series useful is how numerically accurate a result you can get with it, while using only as many terms as is practical. "Rigorous" notions of convergence are not useful here because they talk about the limit of infinitely many terms, which obviously is never attained in practice. In fact, it is often very useful to use asymptotic series, which aren't even pointwise convergent.

(For example, it is in principle true that $\cos(x)$ is described everywhere by its Taylor series, but try calculating $\cos(10^8)$ using that series and see how many terms you need to get a reasonable answer. The Taylor series is completely useless for this task.)
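To make that concrete, here is a minimal sketch in Python (the 200-term cutoff and the sample arguments are arbitrary choices; at $x = 10^8$ itself the individual terms overflow double precision long before the series turns over, so smaller arguments are used where the same breakdown is already visible):

```python
import math

def cos_taylor(x, n_terms):
    """Partial sum of the Maclaurin series of cos(x), accumulated in float64."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= -x * x / ((2 * k + 1) * (2 * k + 2))  # next term of the series
    return total

# In exact arithmetic this converges for every x. In floating point, the huge
# intermediate terms (of order 1e16 already at x = 40) cancel catastrophically.
for x in (1.0, 10.0, 30.0, 40.0):
    print(f"x = {x:4.0f}   taylor: {cos_taylor(x, 200): .6e}   math.cos: {math.cos(x): .6e}")
```

Every one of these partial sums would be fine with infinite-precision terms; in float64 the cancellation eats the significant digits, and by $x = 40$ the result typically shares no digits at all with `math.cos`.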
Fourier series are useful in this sense because many phenomena in nature exhibit spatial or temporal translational invariance. In the simplest cases, this renders problems diagonal in Fourier space, allowing you to write down the exact solution in one step. In more complicated cases, you can render the problem almost-diagonal in Fourier space and treat it perturbatively.

Many of the other answers are addressing the practicalities of expanding in Fourier series versus Taylor series. But there is at least one physical reason for choosing one over the other: the expansion coefficients of a vector written in an orthonormal basis reveal particular types of physical information about the system being described by the function, and the type of physical information that is revealed depends on the choice of basis.

The physical relevance of expansions in an orthonormal basis

This is directly related to some of the answers here that cast Fourier series in the language of linear algebra, where the sines and cosines compose an orthonormal basis for the space of functions you are looking at. In this context, an orthonormal basis has four nice properties.
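As a small illustration of that linear-algebra picture, here is a sketch (NumPy; the grid resolution, the handful of modes, and the test function are all arbitrary choices): suitably normalized sines and cosines are orthonormal under the $L^2$ inner product on $[0, 2\pi)$, and the expansion coefficient along each basis function is just an inner product with it.

```python
import numpy as np

# Discretized L^2 inner product on [0, 2*pi).
N = 2048
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = 2.0 * np.pi / N

def inner(f, g):
    return float(np.sum(f * g) * dx)

# Real Fourier modes, normalized so they are orthonormal:
# 1/sqrt(2*pi), cos(n x)/sqrt(pi), sin(n x)/sqrt(pi)
basis = {"const": np.full(N, 1.0 / np.sqrt(2.0 * np.pi))}
for n in range(1, 4):
    basis[f"cos{n}x"] = np.cos(n * x) / np.sqrt(np.pi)
    basis[f"sin{n}x"] = np.sin(n * x) / np.sqrt(np.pi)

# Orthonormality check: the Gram matrix <b_i, b_j> should be the identity.
names = list(basis)
gram = np.array([[inner(basis[a], basis[b]) for b in names] for a in names])
print("max |Gram - I| =", np.abs(gram - np.eye(len(names))).max())

# Expansion coefficients are plain inner products with the basis functions.
f = 2.0 * np.sin(x) + 0.5 * np.cos(3.0 * x)
for name in names:
    c = inner(f, basis[name])
    if abs(c) > 1e-8:
        # 2*sin(x) gives 2*sqrt(pi) and 0.5*cos(3x) gives 0.5*sqrt(pi),
        # because each basis function carries a 1/sqrt(pi) normalization.
        print(f"<f, {name}> = {c:.6f}")
```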

One reason that complex exponential expansions (which end up turning into sines and cosines for real-valued problems) are more natural than Taylor series expansions is that they don't require picking a special point to expand around. In many situations the differential equation is translationally invariant, and there's no natural point to Taylor expand around, so you need to pick an arbitrary point. A general pattern in physics is that if your problem setup has some symmetry, you definitely want to take advantage of that symmetry in solving it.

Another issue is that, as The Photon mentioned, polynomials inevitably get unboundedly large at large $x$, which doesn't match up with the periodic nature of the solutions. For any finite-order Taylor expansion, you need to manually truncate the solution outside of a single fundamental period, which is a little awkward.

But probably the most important reason is that you are dealing with differential equations, and sines and cosines have the very special property of remaining unchanged (up to a scaling factor) after two derivatives (and the complex exponential version is unchanged up to a scaling factor after even a single derivative). As you gain experience with Fourier transforms, you'll see that this fact allows you to convert many linear differential equations into algebraic ones that are much easier to deal with. By contrast, differentiating a polynomial takes you down the ladder to a lower-order polynomial, so you never get back to where you started, no matter how many derivatives you take. This fact prevents you from taking advantage of that technique.
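To see that diagonalization in action, here is a minimal sketch (NumPy; the equation $u'' - u = f$ on a periodic domain is an arbitrary example, with the $-u$ term included so that every Fourier mode stays invertible): two derivatives act on each mode as multiplication by $-k^2$, so the differential equation collapses into one algebraic division per mode.

```python
import numpy as np

# Solve u'' - u = f on [0, 2*pi) with periodic boundary conditions.
# Two derivatives of exp(i*k*x) just multiply it by -k**2, so in Fourier
# space the ODE decouples into one algebraic equation per mode:
#     (-k**2 - 1) * u_hat[k] = f_hat[k]
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers 0, 1, ..., -1

# Manufacture f from a known solution so the answer can be checked.
u_exact = np.exp(np.sin(x))
f = (np.cos(x) ** 2 - np.sin(x) - 1.0) * u_exact  # u'' - u for u = exp(sin x)

u = np.fft.ifft(np.fft.fft(f) / (-k**2 - 1.0)).real  # the one-step solve

print("max error:", np.abs(u - u_exact).max())  # near machine precision
```

That single division by a polynomial in $k$ is exactly the "diagonal in Fourier space" situation described above: the whole differential equation is solved in one step, with no special expansion point ever chosen.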
