Degenerate scaling of random variables

There are well-known results about the scaling and centering laws of a sequence of \mathbb{R}-valued random variables. That is the problem of finding the right coefficients a_n and b_n such that the sequence \frac{X_n-b_n}{a_n} converges in law. See, for example, Section 14.8 of [1]. Normally, one tries to avoid the degenerate type, which is when the limit distribution is a Dirac mass at 0. I recently ran into a problem where the degenerate type is desirable. More specifically:

Let (X_n) be an i.i.d. sequence of \mathbb{R}-valued random variables. Let (a_n) be a positive sequence such that \lim a_n=\infty. Under what conditions on (a_n) can one conclude that \lim\frac{X_n}{a_n}=0 almost surely?

This problem seems to be quite classic. If X_1 has a bounded range, i.e. \mathbb{P}(X_1\in[a,b])=1 for some numbers a and b, then no additional condition on (a_n) is needed: |X_n|\le\max(|a|,|b|) almost surely for every n, so \frac{|X_n|}{a_n}\le\frac{\max(|a|,|b|)}{a_n}\to 0. What about the case where X_1 has an unbounded range? This is essentially Problem 51 on page 263 of [1]. The textbook does not ask for a full characterization of the sequence (a_n). I find the problem quite interesting.
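To make the bounded case concrete, here is a minimal simulation sketch. The choices of Uniform[0,1] variables and a_n=\ln(n+1) are purely illustrative assumptions of mine; the point is only that the ratio is dominated by the deterministic bound 1/a_n.

```python
import numpy as np

# Bounded case: X_n i.i.d. Uniform[0,1], so X_n/a_n <= 1/a_n deterministically.
# Any positive a_n -> infinity works, even one growing as slowly as log(n+1).
rng = np.random.default_rng(3)
N = 10**6
n = np.arange(1, N + 1)
a = np.log(n + 1)                      # illustrative scaling sequence a_n
X = rng.uniform(size=N)

# sup_{n >= m} X_n/a_n is at most 1/a_m, which tends to 0 as m grows
for m in [10, 10**3, 10**5]:
    tail_max = np.max(X[m - 1:] / a[m - 1:])
    print(f"max of X_n/a_n over n >= {m}: {tail_max:.4f}   (bound 1/a_m = {1 / a[m - 1]:.4f})")
```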

The first observation is that the event A=\left[\lim\frac{X_n}{a_n}=0\right] is a tail event. By Kolmogorov's 0-1 Law, either \mathbb{P}(A)=0 or \mathbb{P}(A)=1; we just need to decide between the two. One may be tempted to think that \frac{X_n}{a_n} is essentially the same as \frac{X_1}{a_n}, so it should converge to 0 almost surely as n\to\infty. This argument is not correct, because \frac{X_n}{a_n} merely has the same distribution as \frac{X_1}{a_n}; equality in distribution says nothing about the almost sure behavior along the sequence. If the range of values of X_1 is unbounded, then so is the range of values of \frac{X_n}{a_n} for every n, and large values can keep occurring along the sequence. Let us consider the following two cases, one of which gives \mathbb{P}(A)=1 and the other \mathbb{P}(A)=0. Denote the cumulative distribution function of |X_1| by F(x).
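Here is a small Monte Carlo sketch of exactly this pitfall; the choice of Exp(1) variables and a_n=\ln(n+1) is an illustrative assumption of mine. For a fixed sample, X_1/a_n\to 0 trivially because X_1 is a single finite number, while X_n/a_n keeps producing values of order 1 because a fresh unbounded variable is drawn at every index.

```python
import numpy as np

# X_n i.i.d. Exp(1) and a_n = log(n+1) (illustrative choices).
# X_1/a_N -> 0 as N grows, since X_1 is one fixed number, but X_n/a_n does not:
# a fresh unbounded variable is drawn at every index n, so large ratios recur.
rng = np.random.default_rng(0)
N = 10**6
n = np.arange(1, N + 1)
a = np.log(n + 1)
X = rng.exponential(size=N)

ratios = X / a
print("X_1 / a_N              =", X[0] / a[-1])             # small
print("max_{n <= N} X_n / a_n =", ratios.max())             # of order 1
print("last n with X_n/a_n > 1/2:", n[ratios > 0.5].max())  # keeps happening late
```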

Case 1: there exists a positive sequence (c_n) such that

\displaystyle \lim\frac{c_n}{a_n}=0 and \displaystyle\sum(1-F(c_n))<\infty.

Let E_n=[|X_n|\le c_n] and E=\cap_{n=1}^\infty E_n. Then \mathbb{P}(E_n)=F(c_n) and

\displaystyle \mathbb{P}(E)=\prod_{n=1}^\infty F(c_n)=\prod_{n=1}^\infty \left(1-(1-F(c_n))\right)

which is positive (and at most 1) because \sum(1-F(c_n))<\infty. (If finitely many factors F(c_n) vanish, discard those indices; since F(c_n)\to 1, only finitely many can vanish, and removing finitely many events from the intersection does not affect the argument.) On E we have |X_n|\le c_n for every n, hence \frac{|X_n|}{a_n}\le\frac{c_n}{a_n}\to 0, so E\subset A. Since A is a tail event and \mathbb{P}(A)\ge\mathbb{P}(E)>0, Kolmogorov's 0-1 Law gives \mathbb{P}(A)=1.
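To see Case 1 in action on a concrete distribution, here is a small numerical sketch. The choices a_n=n and c_n=\sqrt{n} for Exp(1) variables (so F(x)=1-e^{-x}) are illustrative assumptions of mine: c_n/a_n\to 0, \sum e^{-\sqrt{n}}<\infty, and the partial products \prod_{n\le N}F(c_n) stabilize at a positive value, the lower bound on \mathbb{P}(E) used above.

```python
import numpy as np

# Case 1 for Exp(1), with the illustrative choices a_n = n and c_n = sqrt(n):
# then c_n/a_n -> 0 and sum(1 - F(c_n)) = sum exp(-sqrt(n)) < infinity,
# where F(x) = 1 - exp(-x) is the CDF of |X_1|.
N = 10**6
n = np.arange(1, N + 1)
c = np.sqrt(n)                          # c_n = sqrt(n)
tail = np.exp(-c)                       # 1 - F(c_n)

print("sum_{n<=N} (1 - F(c_n)) ≈", tail.sum())                      # converges
print("prod_{n<=N} F(c_n)      ≈", np.exp(np.log1p(-tail).sum()))   # positive limit, lower bound for P(E)
```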

Case 2: there exists a positive sequence (c_n) such that

\displaystyle \lim\frac{c_n}{a_n}>0 and \displaystyle\sum(1-F(c_n))=\infty.

Let E_n=[|X_n|> c_n] and E=\limsup E_n. The events E_n are independent and

\displaystyle \sum \mathbb{P}(E_n)=\sum(1-F(c_n))=\infty.

By the second Borel-Cantelli Lemma, which applies because the events E_n are independent, \mathbb{P}(E)=1. On E we have |X_n|>c_n for infinitely many n, and since \lim\frac{c_n}{a_n}>0, the ratio \frac{|X_n|}{a_n} stays bounded away from 0 along a subsequence; hence E\subset A^c. Thus, \mathbb{P}(A)=1-\mathbb{P}(A^c)=0.
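Here is a matching numerical sketch for Case 2, again with illustrative assumptions of mine: Exp(1) variables with a_n=\ln(n+1) and c_n=a_n, so \lim\frac{c_n}{a_n}=1>0 and \sum(1-F(c_n))=\sum\frac{1}{n+1}=\infty. The number of exceedances |X_n|>c_n keeps growing with n (roughly like \ln n), consistent with the exceedances happening infinitely often almost surely.

```python
import numpy as np

# Case 2 with illustrative choices: Exp(1) variables, a_n = log(n+1), c_n = a_n.
# Then 1 - F(c_n) = 1/(n+1), whose sum diverges, so by the second Borel-Cantelli
# lemma the events [X_n > c_n] occur infinitely often almost surely.
rng = np.random.default_rng(1)
N = 10**6
n = np.arange(1, N + 1)
c = np.log(n + 1)                       # c_n = a_n = log(n+1)
X = rng.exponential(size=N)             # i.i.d. Exp(1)
exceed = (X > c).cumsum()               # count of indices m <= n with X_m > c_m

for k in [10**3, 10**4, 10**5, 10**6]:
    print(f"exceedances up to n = {k:>7}: {exceed[k - 1]:>3}   (log k ≈ {np.log(k):.1f})")
```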

Final thoughts: Cases 1 and 2 do not cover all possibilities. One can see this through the simple case F(x)=1-e^{-x} (the case when the random variables X_n are i.i.d. exponentials of mean one) and a_n=2\ln n. It would be interesting to see a full characterization of the sequence (a_n) for which one has \mathbb{P}(A)=1. I think this should be known somewhere in the literature.
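Out of curiosity, here is a quick finite-range simulation of this exponential example. It is only a sketch and cannot settle an almost-sure statement; it just reports the size of \max X_n/(2\ln n) over successive blocks of indices.

```python
import numpy as np

# The exponential example: X_n i.i.d. Exp(1), a_n = 2*log(n).
# A finite-range simulation cannot prove an almost-sure statement; it only
# shows how large X_n/a_n gets over successive blocks of indices.
rng = np.random.default_rng(2)
start = 2
for _ in range(6):
    end = 10 * start
    n = np.arange(start, end)
    X = rng.exponential(size=len(n))
    block_max = np.max(X / (2 * np.log(n)))
    print(f"max of X_n/(2 ln n) for n in [{start}, {end}): {block_max:.3f}")
    start = end
```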

Update 01/01/2024: I have resolved the problem.

References:

  1. "A Modern Approach to Probability Theory" (1997) by Fristedt and Gray.
