which I am calling unfamiliar not because it's a particularly deep result, but because it's just not something one usually writes out. It could be proven by induction from the more "standard" product rule (uv)' = u'v + uv'.
But why don't we teach this to freshmen? Sure, the notation might be a bit of a barrier; I get the sense that a lot of my students learn the Σ notation for the first time when we teach them about infinite sequences and series, at the end of the second-semester calculus course, whereas they learn about derivatives, including higher derivatives, sometime in the first semester. (If they really are seeing the Σ notation for the first time then, that doesn't seem quite fair: we're asking them to assimilate this weird notation and some real understanding of the concept of infinity at the same time. Indeed, at Penn we often tell them not to worry so much about the notation.) But ignoring the notational difficulties, fix a value of $k$ -- say, $4$ -- and get

$(uv)'''' = u''''v + 4u'''v' + 6u''v'' + 4u'v''' + uv''''$
so basically we notice two things: there are four primes in each term, and the coefficients are the binomial coefficients, which are familiar to most students.
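To see where those coefficients come from, here is a small Python sketch (the name `differentiate` is my own, just for illustration) that applies the ordinary product rule four times and checks that the collected coefficients really are the binomial coefficients:

```python
from math import comb

def differentiate(terms):
    """Apply the product rule (uv)' = u'v + uv' to each term.
    A term maps (i, j) -> coefficient, standing for u^(i) v^(j)."""
    out = {}
    for (i, j), c in terms.items():
        out[(i + 1, j)] = out.get((i + 1, j), 0) + c  # differentiate the u factor
        out[(i, j + 1)] = out.get((i, j + 1), 0) + c  # differentiate the v factor
    return out

# Start from the 0th derivative, u*v, and differentiate four times.
terms = {(0, 0): 1}
for _ in range(4):
    terms = differentiate(terms)

# The coefficient of u^(i) v^(4-i) should be C(4, i): 1, 4, 6, 4, 1.
assert terms == {(i, 4 - i): comb(4, i) for i in range(5)}
```

The same loop with `range(k)` gives the coefficients for any $k$, which is exactly the induction mentioned above.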
One doesn't take the fourth derivative of a product that often; but even knowing that might be preferable to
Also, one can extend a rule like this to products of more than two factors; for three factors we have

$(uvw)^{(k)} = \sum_{i+j+l=k} \frac{k!}{i!\,j!\,l!}\, u^{(i)} v^{(j)} w^{(l)}$
Again, this doesn't come up that often, and I don't want to try to write it out for derivatives of products of an arbitrary number of factors. Still, the idea is fairly natural, but how many freshmen would even know that

$(uvw)' = u'vw + uv'w + uvw'$?
I really don't know the answer to this -- but products of three factors are not incredibly rare, and the rule here is quite simple -- just take the derivative of each factor in turn, and sum up the results. There's even a nice "physical" interpretation of it -- how much does the volume of a three-dimensional box change as we change its various dimensions?
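As a sanity check on that box interpretation, here is a quick numerical sketch (the particular dimension functions are made-up examples) comparing the three-factor rule to a finite-difference derivative of the volume:

```python
# A "box" whose dimensions change over time:
# x(t) = 2 + t, y(t) = 3 + 2t, z(t) = 1 + t^2.
def volume(t):
    return (2 + t) * (3 + 2 * t) * (1 + t * t)

def exact_rate(t):
    """(xyz)' = x'yz + xy'z + xyz' -- differentiate each factor in turn."""
    x, y, z = 2 + t, 3 + 2 * t, 1 + t * t
    dx, dy, dz = 1.0, 2.0, 2 * t
    return dx * y * z + x * dy * z + x * y * dz

# Compare against a central finite difference at t = 0.5.
t, h = 0.5, 1e-5
numeric = (volume(t + h) - volume(t - h)) / (2 * h)
assert abs(numeric - exact_rate(t)) < 1e-6
```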
The coefficients seem kind of arbitrary, though; the point of Hardy's paper is that if things are recast in terms of partial derivatives, the coefficients go away, both here and in Faà di Bruno's formula for the derivative of a composition of functions. One way to think of this is to imagine that, in the product rule, we have a "prime 1", a "prime 2", and so on up to a "prime $k$" if we're taking $k$th derivatives; we attach these primes to the variables in all possible ways, sum up the results, and then "collapse" them by reading all the primes as ordinary differentiation.
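That "attach the primes in all possible ways" picture can be simulated directly; the following sketch (my own illustration, not code from Hardy's paper) distributes four distinct primes over the two factors and counts how many assignments collapse to each term:

```python
from itertools import product
from math import comb

k = 4
counts = {}
# Attach each of the k distinct "primes" to either u or v, in all 2**k ways.
for assignment in product("uv", repeat=k):
    i = assignment.count("u")   # primes landing on u
    key = (i, k - i)            # collapse to the term u^(i) v^(k-i)
    counts[key] = counts.get(key, 0) + 1

# Collapsing recovers exactly the binomial coefficients.
assert counts == {(i, k - i): comb(k, i) for i in range(k + 1)}
```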
Reference
Hardy, Michael, "Combinatorics of Partial Derivatives", Electronic Journal of Combinatorics 13 (2006), #R1. http://www.combinatorics.org/Volume_13/PDF/v13i1r1.pdf
3 comments:
I had never known that extension of the product rule, but I'm glad I know it now.
I can't help but notice the similarity between the expansion of $(uv)^{(n)}$ and the expansion of $(u+v)^n$. Is there an underlying connection between them?
(and how do I do math notation in comments, anyway?)
This formula also extends to partial derivatives, with multi-indices instead of indices; Lars Hörmander calls it the Leibniz rule (see any of his books on PDEs). As for the presence of the binomial coefficients, it's rather natural, and can be proven by induction on the order of differentiation, the same way the binomial formula is.
In response to blaisepascal, although I don't know if you will later see this:
$+$ is a kind of "or", where for natural number arithmetic we're counting ways of doing something. You should think of $(u+v)$ as "you can do $u$ or $v$". $\times$ corresponds to "do one thing and then the next" (think of these as functions or matrices; "or" is naturally commutative, but "and then" is not). In any case, $(u+v)^n$ is "do $u$ or $v$, and then do $u$ or $v$, and then..." But $uv = vu$, so the binomial coefficient counts exactly the number of ways of doing that many $u$s and $v$s.
Now if "do $u$" is "differentiate $u$", then we get the coefficients on $(uv)^{(n)}$.
Let me give a different proof. If $U = e^{ux}$ and $V = e^{vx}$ for constants $u$ and $v$, then $UV = e^{(u+v)x}$, and differentiation corresponds to multiplication: $(UV)^{(n)} = (u+v)^n UV$. If you believe in Fourier, then you think that every function can be written as a sum of exponentials, and since both sides of the (general) Leibniz formula are linear in $U$ and $V$, the result follows.
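For what it's worth, the exponential identity in this argument is easy to check numerically for one choice of constants (the values below are arbitrary):

```python
from math import comb, exp

# With U = e^{ux} and V = e^{vx}, U^{(j)} = u^j U, so the Leibniz
# formula sum_j C(n,j) U^{(j)} V^{(n-j)} should equal (u+v)^n U V.
u, v, x, n = 0.7, -0.3, 1.5, 5

lhs = (u + v) ** n * exp((u + v) * x)
rhs = sum(comb(n, j) * u**j * v**(n - j) for j in range(n + 1)) * exp((u + v) * x)
assert abs(lhs - rhs) < 1e-12
```

Of course, once the exponentials are factored out this is just the binomial theorem, which is the commenter's point.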