As someone who tends to prefer problem-solving to theory-building, I find the following particularly interesting (my translation):

A student who takes more than five minutes to calculate the average of sin^{100}x with a precision of 10% has no mastery of mathematics, even if he has studied nonstandard analysis, universal algebra, supervarieties, or plongement [I don't know this word] theorems.

Go ahead, try that problem! (Curiously, it's not one on the list.) More generally, it's an interesting bunch of questions in various branches of mathematics, biased towards but by no means exclusively in calculus and differential geometry; this is no surprise as it's meant for physicists.
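Before reading on, the problem is easy to sanity-check by brute force; here is a sketch (mine, not Arnold's intended mental calculation), assuming the average is taken uniformly over one period:

```python
import math

# Riemann sum for the mean of sin(x)^100 over one period [0, 2*pi].
# The integrand is smooth and periodic, so even this crude equally
# spaced grid is accurate to many digits.
N = 100_000
mean = sum(math.sin(2 * math.pi * k / N) ** 100 for k in range(N)) / N
print(mean)  # about 0.0796
```

Anything within 10% of that value counts as a solution by Arnold's criterion; the interesting part is getting there in five minutes by hand.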

(Why the French version, you ask? The article was originally in Russian; there's an English translation but it's not free. Links to free versions in Russian or English, if they exist, would be appreciated.)

**Edit, 12:07 am Sunday**: An anonymous commentator provides the English version, and Dmitri Pavlov the Russian.

## 39 comments:

>or plongement [I don't know this word] theorems.

Embedding theorems. (I suppose Arnold means Sobolev embedding theorems.)

>Links to free versions in Russian or English, if they exist, would be appreciated.)

Russian version:

http://www.ega-math.narod.ru/Arnold.htm

plongement means "embedding", and so he's referring to Embedding Theorems there. I think.

>calculate the average of sin^100(x)

Wouldn't the average rather depend on the distribution of x in the first place? I can't help but feel there's something unstated there...

For example, if x is uniform on (0, 2pi), you get a different answer than if x has a degenerate distribution at 0.

Does he mean given any distribution of x, write the average of sin^100(x) in terms of that original distribution?

Efrique,

the original has "moyenne" there; "mean" would probably be a better translation.

But you're right that there's something unstated; I think uniform on (0, 2pi) -- that is, a single period -- is the intended interpretation.

That's a 90-second problem.

[x-(x^3/3!)+(x^5/5!)]^100 [Third term for giggles; and in the end, you are really only concerned about the first four or five terms of the final formula, so round as convenient]

David, of course you're right, but then you miss the appearance of sqrt(pi)! (That's an exclamation there, not a factorial like that which appears in the exact answer.)

Incidentally, the original Russian says "значительно больше пяти минут", which means "much/significantly more than five minutes", which differs from the French.

David, it shouldn't even be 90 seconds.

I take back my "David, of course you're right"!

David and unapologetic, I do not see how your method can succeed. Did you actually compute the answer?

Boris,

I agree that it doesn't seem to work. For one thing, what are the limits of integration?

Erm, I can find the mean of sin^100 (x) (from 0 to 2 pi), where sin^100 is taken to mean sin nested 100 times (that is, sin(sin(...sin(x)...))). In fact, I thought the notation f^n (x) generally meant exactly that--I never understood why trig functions got special notation treatment.

The more interesting solution to the mean of sin(x)^(2n) (with the intended interpretation) should be binomial(2n,n)/2^(2n) (try a few well-timed trig identities)--which could then be approximated to within 10% with, say, Stirling's formula. This took me more than five minutes.
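Nathan's formula is easy to check numerically; a sketch (my code, not the commenter's), comparing the exact binomial value with its Stirling estimate:

```python
from math import comb, sqrt, pi

n = 50  # so that 2n = 100

# Exact mean of sin(x)^(2n) over a period: binomial(2n, n) / 2^(2n).
exact = comb(2 * n, n) / 4**n

# Stirling's formula n! ~ sqrt(2*pi*n) * (n/e)^n gives
# binomial(2n, n) / 4^n ~ 1 / sqrt(pi * n), comfortably within 10%.
stirling = 1 / sqrt(pi * n)

print(exact, stirling)  # both about 0.08
```

The Stirling estimate 1/sqrt(50*pi) is off by well under 1%, far better than the 10% the problem asks for.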

Boris, who said I used the same method as David did? There's a much faster way (using the assumption Isabel was willing to make about the initial distribution).

Oh, and I'm reading the question the same as Nathan did.

It is clear that Arnol'd meant the average of the hundredth power of sin x over a single period (or any integral number of periods, or the limit of the average over [0,x], or the limit of the average over [-x,x]).

I have never seen sin^2(x) = sin(sin(x)) and would love to see any such source. The problem is trivial if the sines are nested (odd periodic functions have zero mean).

Boris--I also have never seen that notation for trig functions. I was merely pointing out that there could be some confusion. After all, this is a translated text which could follow different conventions (though you seem to be reading it in the original). But I find it disturbing (especially for students encountering trig for the first time) that there is a notational discontinuity for trig functions at -1 (sin^n (x) = (sin x)^n for n!=-1, while sin^(-1) (x) != (sin x)^(-1)).

Nathan, I see your point.

But, I mean, I solved the problem after only looking at Isabel's doubly-translated version; I thought it was clear then. Afterwards, I looked at the original Russian, as well as the published translations into French and English. Fun fact: the original Russian actually says "среднего от сотой степени синуса", which is "the average of the hundredth power of sine" written out, leaving little to no ambiguity. The French and English translators decided to use mathematical notation here themselves.

I agree with your complaint regarding notation here. Here and in other places in math, I use a preëmptive approach: I write arcsin to mean the inverse, f(x)^2 but sin^2(x) as usual, and always specify that I mean iteration whenever writing f^2(x). Unfortunately, this doesn't help me at all with reading other people's notation ...

Another common notational problem is that some (most?) people use \subset to mean \subseteq, while others think it should mean \subsetneq by analogy with < and \le. To avoid this, I always write the extremes, \subsetneq or \subseteq, which are unambiguous.

(PS. I've only looked at a couple of the full list of 100 problems, but I think problem 2 is cute -- I even know someone who solved it by using l'Hôpital's rule seven times!)

I thought "plongement" might refer to embedding theorems like those for abelian categories. Arnol'd is famous for putting down highly "abstract" mathematics, and I can imagine this type of embedding theorem would be a *ne plus ultra* of the kind of thing that puts him off.

Similar to Nathan, I guess, one could try sine as an average of complex exponentials, and raise to the power 100 using the binomial theorem, where only the trivial character on R mod 2pi gives any contribution to the mean, and use Stirling's formula as he suggests. But this is from my head, and I didn't try to do the computation to within 10 per cent. I'm pretty sure I could do it, albeit preferably without Arnol'd standing there with stopwatch in hand.

Maybe someone with a real mastery of mathematics would just happen to know that the integral of (sin(x))^100 over a period is very close to 0.5. I didn't know that until I used a numerical integrator, but I suppose some people might. Then you've got 5 whole minutes to divide by 2pi. :)

I don't think I understand the bit about the "trivial character on R mod 2pi." Anyone care to elaborate?

I was just using high-falutin' language, as if to get Arnold's goat :-) . A character of a topological, let us say compact, abelian group like the circle given by the real numbers R modulo 2pi, is by definition a continuous homomorphism to the circle. For R mod 2pi, the characters are each of the form x |--> e^{inx} for some integer n. If n is nonzero (a nontrivial character), the integral of the character over the whole group, that is, over a whole period in this case, is zero, so it gives no contribution.

That's all that was meant.

Boris, "clear" is about the worst word a mathematician could ever use when trying to convince someone with whom he disagrees. Using superscripts to indicate powers in the *compositional* sense is well-established.

Yes, often people go ahead and write a superscript on "sin" rather than after the "(x)", as they should, but even here superscripts are used in terms of compositions. How do you read "sin^{-1}(x)"? That's one that students screw up year after year because of your "clear" notational shortcut.

I have to admit that my gut reaction was 1/2 as this is the average of sin^2(x).

But to get it right, make the substitution sin(x) = u, and convert to an integral over u.

You will have an integral of u^100 for u between -1 and +1, times dx, which is du/sqrt(1-u^2).

The u^100 factor will be very small for values of u that are not near +1 and -1. In those regions, you can write an excellent approximation for 1/sqrt(1-u^2) by taking the first nonzero term of the Taylor expansion around 1. This puts the integral into easy form, and should be pretty dang close.

Uh, well over 5 minutes.
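One way to push the substitution through (my sketch; not necessarily what the commenter had in mind): under the first-term approximation near u = 1, the integral becomes a Beta function.

```python
from math import gamma, sqrt, pi

# After substituting u = sin(x), the mean over a period equals
#   (2/pi) * integral_0^1 u^100 / sqrt(1 - u^2) du.
# Near u = 1, where almost all the mass sits, the first term of the
# expansion gives 1/sqrt(1 - u^2) ~ 1/sqrt(2*(1 - u)), and then
#   integral_0^1 u^100 * (1 - u)^(-1/2) du = B(101, 1/2),
# a Beta function expressible via Gamma functions.
beta = gamma(101) * gamma(0.5) / gamma(101.5)
approx_mean = (2 / pi) * beta / sqrt(2)
print(approx_mean)  # about 0.0795, vs the exact 0.0796
```

So the idea does land within 10% once the limits of integration are sorted out.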

unapologetic, perhaps I'll agree with you that simply stating that something is "clear" is unconvincing. However, neither is misquoting.

You write "That's one that students screw up year after year because of *your* "clear" notational shortcut." [emphasis mine]

It's not *my* notation. I don't even like it. I stated above that I write arcsin instead of sin^{-1}. (But I did not explain explicitly that the reason I do write sin^2(x) is because I feel more mathematicians will more easily parse it that way, simply because that's how it's usually written, not because I like the notation.)

Also, I never claimed the *notation* was clear! I said the problem was clear. Here's why: [I can independently justify the interpretation of "average" as well.]

1. As I already said, I have never seen a text where sin^2(x) means sin(sin(x)). I would still love to see such a source. (And come on, this point is reason enough.)

2. The problem is trivial (yes, I know that's another great word) otherwise. This is *Arnol'd* we're talking about, and in the context "if you can't solve this in five minutes, you don't have a mastery of mathematics". Trivial problems don't demonstrate mastery, nor do they take five minutes. This is a red flag of misinterpretation.

3. "A precision of 10%" doesn't make much sense if the answer is 0. Moreover, "a precision of 10%" suggests that one cannot easily answer exactly.

4. Mathematicians often like to take a problem for general n and turn it into a problem for a single, specific n. Since the number 100 seems arbitrary here, we suppose this is what is happening. Now consider the two interpretations of the problem. If we use the iterated function approach, this is still a thing that people commonly do, but mostly to test if you're "scared" by the number 100. If we use the correct interpretation, the number 100 just means "solve for general n, and plug in n=100". Seeing as how Arnol'd claimed to be testing mastery, not fearlessness, this is slight evidence towards the correct interpretation.

Convinced?

I couldn't solve the problem in five minutes by hand, but I can come up with the answer using Maxima (via Sage).

Surprisingly (to me), the answer is an exact rational:

12611418068195524166851562157/158456325028528675187087900672
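For what it's worth, that rational agrees with the binomial-coefficient answer given elsewhere in this thread; a quick check (mine, not cwitty's) also shows the denominator is 2^97 in lowest terms rather than a truncated 2^100:

```python
from fractions import Fraction
from math import comb

# The exact mean is C(100, 50) / 2^100; Fraction reduces it to lowest
# terms automatically, and three factors of 2 cancel from C(100, 50).
exact = Fraction(comb(100, 50), 2**100)
assert exact.numerator == 12611418068195524166851562157
assert exact.denominator == 2**97  # = 158456325028528675187087900672
print(float(exact))  # about 0.0796
```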

I'm convinced by (what you say is) the original Russian, written out.

Yes, the problem in the compositional sense is trivial, but only if you understand it. Most students with just a toolbox understanding of the calculus would start rushing in to calculate and get mired down in a swamp of numerical estimates, while those with real understanding would step back and see the long view.

The number 100 is there to scare, yes. But it's because the answer in the compositional sense doesn't depend on which exponent you pick. Students with real understanding would pick up on that.

The 10% figure is a red herring. It's more window dressing designed to make engineers' numerical modes ping, while mathematicians take up that part only if and when it's needed.

Yes, as Nathan and I observed earlier, the exact answer is rational because it's exactly 2^{-100} \binom{100}{50}, and a quick back-of-envelope calculation using Stirling's approximation gives an answer consonant with what someone whose handle begins as "651" said, and I'm guessing what you (cwitty) said, except that the denominator of your answer seems to have been truncated.


What's desired is the zeroth coefficient of the Fourier series for sin^n(x). So it's natural to think about this in terms of Fourier transforms. And sin^n(x), as n goes to infinity, approaches a constant (that depends on n) times delta functions at pi/2 and -pi/2.

Of course to get the right answer you have to use the right scaling factor. That brings in a factor of 1/2 pi.

Jinkies, I should have checked back sooner.

The Taylor expansion of the sine around 0 is x-x^3/3!+ ...

Of course, I whipped that out, and now that I look at it again, I realize that I had a duh moment. You rightly point out that I missed the limits of integration: [0, 2π], not [-1, 1] (or [0, 1], which was what was running through my head when I wrote that). So points off on the exam for that.

You need to get to terms on the order of n, where x^n/n! <= (0.1)^(0.01), which by quick calculation is n=15. And that's, uhh, ugly.

There, I would have run out of time. You can do some trickery otherwise. The trig identity wiki [I had to look it up] gives you a reduction of powers to frequencies. sin^100(x) = ((((sin^2(x))^2)^5)^5). Powers of cosine never turn back into powers of sine, so you have to figure out how many of the constant factors you need to figure out.

The reason I would think of this is that basic quantum mechanical wave functions are sines and cosines. When you look at the particle in the box [the canonical first-quantization problem], you see sin^2(x) = (1 - cos(2x))/2; the reduction of powers to frequencies should be a mental jump that you make if you are trained well [I think that's the author's point].

This is long, I hope it's not stupid.

Trivially, the average value of sin(x)^100 is equal to the average value of cos(x)^100 on (-pi/2, pi/2). Now, cos (x) is close to exp(-x^2/2) on (-pi/2,pi/2), thus the answer is very close to

$$

\int e^{-100 x^2/2}\,dx / \pi \approx \int e^{-x^2/2}\,dx / (10\pi) = \sqrt{2\pi}/(10\pi) = \sqrt{2/\pi}/10 \approx 0.08

$$

It's easy to make this solution rigorous. And you can make this computation in your head in 5 minutes.
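Yury's estimate is easy to check against a direct numerical average; a sketch (mine):

```python
from math import cos, pi, sqrt

# Yury's approximation: cos(x) ~ exp(-x^2/2) on (-pi/2, pi/2), so the
# mean of cos(x)^100 is approximately (1/pi) times the integral of
# exp(-50 x^2) over all of R, which equals sqrt(2/pi)/10.
gaussian_estimate = sqrt(2 / pi) / 10

# Direct midpoint Riemann sum for the mean of cos(x)^100 on (-pi/2, pi/2).
N = 200_000
h = pi / N
direct = sum(cos(-pi / 2 + (k + 0.5) * h) ** 100 for k in range(N)) * h / pi

print(gaussian_estimate, direct)  # both about 0.08
```

The Gaussian estimate is within a fraction of a percent of the true mean, far inside the 10% tolerance.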

Yury,

that's the solution I had in mind. I'm not saying it's the best one -- the solutions that people have given in terms of binomial coefficients are nice too -- but I do like the trick of approximating the integral by a Gaussian integral. (Somewhere it reminds the probabilist in me of the Central Limit Theorem, I suppose.)

You also get \sqrt(2/pi)/10 if you take the characters/Fourier/trig identities to binomial coefficient to Stirling approach. Perhaps they are "equivalent" somehow?

I like yury's approach. My own first thought was just to approximate cos(x) by 1-x^2/2 since taking 100th powers is going to kill off what happens outside a fairly small neighbourhood of zero. So we look for (\pi)^{-1} times I, where I is the integral of (1-x^2/2)^{100} over the interval on which the integrand is positive.

Being lazy, let's replace this with the integral of 1-50x^2 over the interval (-\sqrt{0.02},\sqrt{0.02}), then basic calculus tells us this is 2x_0-x_0^3/3 where x_0 is \sqrt{0.02}. Continuing to be crude, we may as well say this is 2x_0, and since 1/50 is roughly (1/7)^2 we'll say 2/7 as our estimate for I. Now we divide by 22/7, to continue the tradition of lazy estimating, which gives me

final estimate: 1/11

i.e. a bit more than 0.09. Which given the lack of book-keeping, doesn't strike me as *too* bad when compared with the 0.08 that was obtained with more finesse.

The calculation itself took about a minute, but I expect I'd have wasted well over five minutes trying to spot a trick or a trap, if I hadn't seen some of the solutions on this thread.

Sorry: that should have been

2(x_0-50x_0^3/3)

which then makes the estimate for I about 4x_0/3 ~ 4/21, so that the estimate of the desired quantity is about 2/33, which in turn is about 2/3 times 0.09, which is 0.06 and so looking a bit small.

That said, I haven't made any attempt to correct for reinforcing biases in the errors for these estimates.

I second anonymous rex's comment: the binomial distribution for large n can be estimated also by using a Gaussian approximation (and IIRC from a discussion over at Tim Gowers's blog, this observation gives one approach to proving Stirling's approximation).

The solution is the integral of sin(x)^100 between 0 and \pi/2, divided by \pi/2.

Taking the 100th power converts the sine to a function that is almost everywhere 0, except close to \pi/2, where we could approximate it by 1.

The value of sin(x)^100 is 1/2 at x = 1.453, so we could assume that this is the point where the function flips from 0 to 1.

The mean value would then be (pi/2 - 1.453)/(pi/2) = 0.0749.

This is an ugly solution, but it is computed faster than I can type it.
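A sketch of the same step-function estimate in code (mine, not sponzen's):

```python
from math import asin, pi

# Step-function model: treat sin(x)^100 as 0 below the point where it
# equals 1/2 and as 1 above it.  The crossover solves
# sin(x) = (1/2)^(1/100).
x_half = asin(0.5 ** (1 / 100))
estimate = (pi / 2 - x_half) / (pi / 2)
print(x_half, estimate)  # about 1.453 and 0.075
```

The estimate lands about 6% below the true mean of roughly 0.0796, so it does squeak in under the 10% tolerance.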

Hmmm—this is taking far longer than five minutes.

I conclude the readership has no mastery of mathematics.

(sorry about earlier posts, can't get my things right).

Sponzen, yours is the neatest approach for me so far.

However...

1. "sin(x)^100 is 1/2 at x = 1.453"

Now, how do you get that without a computer? That amounts to showing that if sin(x) = (1/2)^(1/100), then x = 1.453..., and I don't know how that's calculated in a simple way.
