03 April 2008

The uniform distribution as a sum?

Yesterday, I was asked the following question: the sum of two uniformly distributed random variables with the same support has a triangular distribution. Is there a random variable X such that X + Y has a distribution which is uniform, where X and Y are independent and identically distributed?
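The opening claim can be checked numerically (a sketch; the grid size and the choice of Uniform[-1/2, 1/2] are mine): convolving the uniform density with itself gives the triangular density 1 - |z| on [-1, 1].

```python
import numpy as np

# Density of X + Y for X, Y iid Uniform[-1/2, 1/2] is the convolution of
# the density 1 on [-1/2, 1/2] with itself: the triangle 1 - |z| on [-1, 1].
N = 4001
x = np.linspace(-0.5, 0.5, N)
dx = x[1] - x[0]
f = np.ones(N)                      # uniform density on [-1/2, 1/2]

g = np.convolve(f, f) * dx          # density of X + Y, supported on [-1, 1]
z = np.linspace(-1.0, 1.0, 2 * N - 1)

assert np.max(np.abs(g - (1 - np.abs(z)))) < 1e-3   # matches the triangle
```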

I don't know the answer, but I started thinking as follows. First, it's enough to show that there aren't independent identically distributed X, Y such that X+Y has a distribution uniform on [-1, 1]; linear scaling gives the general result. Now, the characteristic function of a uniform distribution on [-1, 1] is φ(t) = (sin t)/t. The characteristic function of X+Y is the product of the characteristic functions of X and Y. (If you're more familiar with analysis than probability, note that characteristic functions are basically Fourier transforms, and the probability density function of X+Y is the convolution of the probability density functions of X and Y.) Thus, if X exists it has characteristic function ψ(t) = [(sin t)/t]^(1/2) -- this is already a bit problematic, because we want ψ to be continuous, but even with that restriction we still have to specify which square root is being taken on each of the intervals ... [-3π, -2π], [-2π, -π], [-π, π], [π, 2π], [2π, 3π] ... (Informally, we have to make a new choice every time (sin t)/t goes through 0.)
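Both facts used here are easy to verify numerically (a sketch; the grid and test points are mine): the characteristic function of Uniform[-1, 1] really is (sin t)/t, and it changes sign past π, which is where the square-root ambiguity comes from.

```python
import numpy as np

# Characteristic function of Uniform[-1, 1]: E[exp(itX)] = sin(t)/t.
# Check against direct numerical integration of the density 1/2 on [-1, 1]
# (composite midpoint rule), then note the sign change on (pi, 2*pi).
n = 200000
x = -1 + (np.arange(n) + 0.5) * (2.0 / n)   # midpoint grid on [-1, 1]
for t in (0.7, 2.0, 4.5):
    numeric = np.sum(np.exp(1j * t * x) * 0.5) * (2.0 / n)
    assert abs(numeric - np.sin(t) / t) < 1e-8

assert np.sin(4.5) / 4.5 < 0                # (sin t)/t < 0 on (pi, 2*pi)
```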

At this point I think one wants to use Bochner's theorem, which says that the functions which are characteristic functions of measures on the real line are exactly the positive definite functions -- but how does one show that this function is positive definite?
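Bochner's criterion can at least be probed numerically: φ is positive definite exactly when every matrix [φ(t_i - t_j)] is positive semidefinite. As a sanity check of that machinery (the grid of test points is mine), it holds for the known characteristic function (sin t)/t; the hard open step would be running the same test on a chosen square root ψ.

```python
import numpy as np

# Bochner's theorem: phi is a characteristic function iff phi(0) = 1, phi is
# continuous, and [phi(t_i - t_j)] is positive semidefinite for every finite
# set of points t_i.  Check this for phi(t) = sin(t)/t on a grid.
t = np.linspace(-10.0, 10.0, 60)
T = t[:, None] - t[None, :]
M = np.sinc(T / np.pi)              # np.sinc(x) = sin(pi x)/(pi x), so M = sin(T)/T
eig = np.linalg.eigvalsh(M)         # M is symmetric, so eigvalsh applies
assert eig.min() > -1e-8            # positive semidefinite up to round-off
```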

The other thing to do is to look at the discrete analogue: consider the probability generating function of a random variable which is uniformly distributed on the set {0, 1, ..., n-1}. This is χ(x) = (1 + x + x^2 + ... + x^(n-1))/n. Now, if this random variable were the sum of two independent identically distributed random variables, its p.g.f. would be the square of a polynomial with nonnegative real coefficients. It's not: the roots of 1 + x + ... + x^(n-1) are the nth roots of unity other than 1, all simple, whereas every root of a squared polynomial has even multiplicity.
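The failure can also be seen by coefficient matching (a sketch; the helper functions and the choice n = 5 are mine): attempt a formal square root of the p.g.f. and watch it fail to square back.

```python
import math

def formal_sqrt(q):
    """Attempt real p with p(x)^2 = q(x) by matching coefficients in order.
    Returns the first half of p's coefficients; caller checks p*p against q."""
    m = (len(q) - 1) // 2
    p = [math.sqrt(q[0])]
    for k in range(1, m + 1):
        s = sum(p[i] * p[k - i] for i in range(1, k))
        p.append((q[k] - s) / (2 * p[0]))
    return p

def square(p):
    out = [0.0] * (2 * len(p) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(p):
            out[i + j] += a * b
    return out

n = 5
q = [1.0 / n] * n                  # p.g.f. coefficients of Uniform{0,...,n-1}
p = formal_sqrt(q)
# The forced choice of p does NOT reproduce q, so no square root exists.
assert any(abs(a - b) > 1e-9 for a, b in zip(square(p), q))
```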

But what about the continuous case?

Ori Gurel-Gurevich said...

Seems like you forgot to add the word "independent" throughout the post.

Michael Lugo said...

Ori,

thanks for pointing that out. Without independence, of course, the problem is trivial.

Anonymous said...

I have no comment on your post itself, but thanks for writing it. I've never really grokked the difference between the density and distribution functions. After looking up the characteristic function, it is a little clearer.

Continuous probabilities really defy my quantitative intuition. [Even though they are much easier for me to work with].

Anonymous said...

The answer (as I'm sure you already suspect) is no, you can't write the continuous uniform distribution as the sum of two iid random variables. I don't remember why (except, as you suggest, it has something to do with characteristic functions).

CarlBrannen said...

Forgive me for spending way too much time in the presence of physicists (which may have contributed to early onset adult dementia), but it seems to me it follows from the fact that the Fourier transform of the uniform distribution, i.e. sin(w)/w, takes negative values.

The Fourier transform of the convolution of A and B (which I think is how you get the density of the sum of two variables from their densities) is the product of the Fourier transform of A and the Fourier transform of B. If A=B, then this is nonnegative.

Again, sorry if I'm all wet. In QFT, when one wishes to sum over virtual particles in an intermediate state one works in Fourier space because the convolution is turned into the product for this reason.

The whole idea of Feynman diagrams can best be abbreviated for mathematicians as the convolution of Green's functions to make another Green's function (in the "coordinate domain," which becomes the product of the Fourier transforms in the "momentum domain" ).
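The convolution-product duality this comment leans on is easy to demonstrate numerically (a sketch using the discrete Fourier transform; note the DFT pairs with circular, not linear, convolution).

```python
import numpy as np

# Convolution theorem, discrete form: the DFT of a circular convolution
# equals the pointwise product of the DFTs.
rng = np.random.default_rng(0)
a = rng.random(64)
b = rng.random(64)

via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
direct = np.array([sum(a[j] * b[(k - j) % 64] for j in range(64))
                   for k in range(64)])
assert np.allclose(via_fft, direct)
```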

CarlBrannen said...

Oh, I think you have to add further argument to what I've written as the product could be negative if the Fourier transform of a density had imaginary values. Maybe make an argument on Hermiticity.

Or use the cosine transform instead of the Fourier transform and keep everything real.

Anonymous said...

OK, I studied stats as an undergraduate, but I never went any further with it and don't do stats professionally (instead I do computer programming / systems administration). So I ask this with some trepidation: what do you think of the following sketch of an answer?

We have X, Y, iid random variables. Can X+Y be uniform?

First of all, if X+Y is uniform, it's uniform with support [a, b]. Let's use your stipulated [-1, 1] with your scaling argument (sounds fine to me). Then X (and therefore Y) cannot take values less than -1/2 with positive probability, or else X+Y could be less than -1. By the same argument, neither can take values greater than 1/2 with positive probability. So all the nonzero probability density must lie between -1/2 and 1/2.

However, we want the distribution of X+Y to be uniform. The only source of probability that X+Y = -1 is at the endpoint -1/2. The only source of probability that X+Y = 1 is at the endpoint 1/2. And both of these contribute to the probability that X+Y = 0.

That is, {X=-1/2, Y=-1/2}, and {X=1/2, Y=1/2} are each half as probable as the union of {X=-1/2, Y=1/2} and {X=1/2, Y=-1/2}.

Therefore the probability density at the midpoint X+Y = 0 is at least twice the density at either endpoint of the interval. Therefore the sum cannot be uniformly distributed.

I hope there is not some painfully obvious flaw in this reasoning.
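The endpoint observation in this comment can be probed numerically (a sketch; the candidate density, a Beta(2,2) shape shifted to [-1/2, 1/2], is chosen purely for illustration): the self-convolution of a bounded density supported on [-1/2, 1/2] vanishes at ±1, while the uniform target density is 1/2 there.

```python
import numpy as np

N = 4001
x = np.linspace(-0.5, 0.5, N)
dx = x[1] - x[0]
f = 6.0 * (x + 0.5) * (0.5 - x)     # shifted Beta(2,2) density on [-1/2, 1/2]

g = np.convolve(f, f) * dx          # density of X + Y, supported on [-1, 1]

assert g[0] < 1e-3                       # vanishes at the endpoint -1 ...
assert abs(g[len(g) // 2] - 0.5) > 0.1   # ... and is far from 1/2 at 0
```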

Anonymous said...

I didn't know whether to put this here, or in the dimensionally correct post, but this is the more relevant, I think.

For another problem I'm thinking about [it's all self-indulgent, not professional], I am reading an article about potential games. I majored in physics, so my ears prick up when I see that. This paper is background info for a different one that I'm trying to grok. But two things stood out very quickly.

The definition of P has wonky dimensions, which are not meaningful, and defies my normal idea of a 'scalar field' [in the sense that it maps R^n -> R].

But what really caught my eye was equation 1.2, and it inspired my thinking. I think it got me closer to good thinking about the problem in this post.

Say I have an n-dimensional vector space, with real coefficients q_1, ..., q_n, constrained so that q_1 + ... + q_n = Q. That says you are looking at the set of points whose L_1 norm is Q (taking the q_i nonnegative). What does that set of points look like in the L_2 norm?

q_1^2 + ... + q_n^2 = Q^2 - [q_1(Q - q_1) + ... + q_n(Q - q_n)]

To find the L_2 norm, we need to specify the individual q_i.

There is more precision that I need to get at, but this is what I'm thinking about.

By the way, you wouldn't know how to form a differential (measure?) in terms of the L_1 norm would you?

G. Jay Kerns said...

This question was - in part - the subject of my PhD dissertation. (I found this blog while searching for another topic).

The answer is "no", there does not exist a legitimate probability density (in the sense of Kolmogorov) for which two iid r.v.'s from the density will convolve to a continuous uniform r.v.

The answer is "yes", if one extends the classical set of densities to include those which are integrable, but not necessarily nonnegative (i.e. L1 functions), and whose integral is one.

Yes, these 'negative probabilities' are illegal in classical probability theory.

For details, please see the link to a copy of my dissertation at www.cc.ysu.edu/~gjkerns/personal.php

Happy characteristic functioning. :-)
Jay

Amit said...

What happens if X and Y are independent but not identically distributed? Is it possible to have X, Y such that X+Y is uniform?

Amit said...

I guess the question above needs more clarification. X and Y are both discrete and independent random variables on {0, 1, ..., n} with the same support. We want distributions for X and Y such that X+Y is uniform over {0, 1, ..., 2n}.
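For the smallest case n = 1 this already has a negative answer, even without identical distributions (a root-counting sketch, not a general proof): a product of two real linear factors has only real roots, but the p.g.f. of Uniform{0, 1, 2} has none.

```python
import numpy as np

# p.g.f. of Uniform{0, 1, 2} is (1 + x + x^2)/3.  If X + Y were uniform
# with X on {0, 1} and Y on {0, 1}, this would factor as
# (p0 + p1*x)(q0 + q1*x) with real coefficients, forcing real roots --
# but both roots of 1 + x + x^2 are nonreal.
roots = np.roots([1.0, 1.0, 1.0])
assert np.all(np.abs(roots.imag) > 1e-9)   # no real roots, so no factorization
```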

Anonymous said...

I was stuck on this problem when I was in high school. Even if it seems to be impossible (according to its Fourier or Laplace transform), I think there is only one way to obtain it.

Kind Regards,
