Potential flow near physical boundary

In a 2D or 3D channel flow, vorticity is created at the physical boundary and diffuses into the body of the flow. Far away from the boundary, the flow can be assumed to be irrotational, i.e. to have zero vorticity [1], [2, Chapter 14]. Let u=u(x,t) be the velocity field. The vorticity is \omega=\nabla\times u. Here, \nabla is the differential operator with respect to the spatial variables only. The assumption that \omega=0 is equivalent (on a simply connected region, or at least locally) to the assumption that the flow is potential, i.e. u=\nabla h for some scalar function h=h(x,t). Again, this assumption/approximation is only good in the body of the flow, far away from the boundary. For u to satisfy the Navier-Stokes equations

u_t-\Delta u+(u\cdot\nabla) u+\nabla p=0,\ \ \nabla\cdot u=0,

h must be a harmonic function and p=-h_t-\frac{|\nabla h|^2}{2}. J. Serrin [3] pointed out that a weak solution of this kind need not have any regularity in time. Solutions of the form u(x,t)=a(t)\nabla h(x) are usually referred to as Serrin's example.
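As a sanity check, this can be verified symbolically. The sketch below is a minimal sympy computation; the specific potential h(x,y)=x^2-y^2 is just an illustrative harmonic function, not tied to any particular flow.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
a = sp.Function('a')(t)

# illustrative harmonic potential: h(x, y) = x^2 - y^2 (its Laplacian is 0)
h = x**2 - y**2

# Serrin-type velocity field u = a(t) * grad h
u = [a * sp.diff(h, x), a * sp.diff(h, y)]

# pressure p = -h_t - |grad h|^2 / 2, with h replaced by the full
# time-dependent potential a(t) h(x, y)
p = -sp.diff(a * h, t) - (u[0]**2 + u[1]**2) / 2

# momentum equation u_t - Delta u + (u . grad) u + grad p = 0, componentwise
for ui, xi in zip(u, [x, y]):
    lhs = (sp.diff(ui, t)
           - (sp.diff(ui, x, 2) + sp.diff(ui, y, 2))        # -Delta u_i
           + u[0] * sp.diff(ui, x) + u[1] * sp.diff(ui, y)  # (u . grad) u_i
           + sp.diff(p, xi))                                # pressure gradient
    assert sp.simplify(lhs) == 0

# incompressibility: div u = a(t) * Laplacian(h) = 0
assert sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)) == 0
print("u = a(t) grad h solves Navier-Stokes with the Bernoulli-type pressure")
```

Note that a(t) is left as an arbitrary symbolic function: no time regularity of a is used anywhere in the check, which is the point of Serrin's example.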

Near the physical boundary, potential flow is no longer a good approximation of the true flow due to large vorticity. Interestingly, this fact is also confirmed at the mathematical level: one cannot impose a no-slip boundary condition u=0 on a solution of the form u=\nabla h without forcing it to be identically zero (static flow).

Proposition: Let \Omega be an open connected subset of \mathbb{R}_+^n such that \Gamma= \partial \Omega\cap \{x_n=0\} is the closure of an open subset of \{x_n=0\}. Let h:\bar{\Omega}\to\mathbb{R} be a function that is continuous on \bar{\Omega} and harmonic on \Omega. If \nabla h=0 on \Gamma then h is a constant function.

Proof: Because \nabla h=0 on \Gamma and \Gamma is path-connected, h must be constant on \Gamma. Subtracting a constant from h if necessary, we can assume that h=0 on \Gamma. By translation, one can assume that 0\in\mathbb{R}^n is an interior point of \Gamma (relative to \{x_n=0\}). One can extend h by the Schwarz reflection principle to the domain D=\Omega\cup\Gamma^\circ\cup(-\Omega), where \Gamma^\circ is the relative interior of \Gamma and -\Omega=\{(x',-x_n):\ x=(x',x_n)\in\Omega\}, as follows:

\tilde{h}(x',x_n)=\pm h(x',\pm x_n).

With a slight abuse of notation, let us write h for \tilde{h}. Note that points in the relative interior of \Gamma are interior points of D. Because h=0 on \Gamma\subset\{x_n=0\}, one can prove by induction on \alpha_1 + \ldots + \alpha_{n-1}\ge 0 that the \alpha-partial derivative

\frac{\partial^{|\alpha|}h}{\partial x_1^{\alpha_1}\,\partial x_2^{\alpha_2}\cdots\partial x_n^{\alpha_n}}(0)=0\quad\quad (1)

for any multi-index \alpha=(\alpha_1,\ldots,\alpha_{n-1},\alpha_n)\in\mathbb{Z}_{\ge 0}^{n-1}\times\{0\}. From the fact that \frac{\partial h}{\partial x_n}=0 on \Gamma, one can show that (1) also holds for all multi-indices with \alpha_n=1. Then from the fact that

\frac{\partial^2h}{\partial x_n^2}=-\frac{\partial^2h}{\partial x_1^2}-\ldots-\frac{\partial^2h}{\partial x_{n-1}^2}=0\ \ \text{on}\ \ \Gamma

one can show, by induction on \alpha_n, that (1) holds for all multi-indices with \alpha_n\ge 2. Therefore, all partial derivatives of h vanish at 0. Together with the fact that h, being harmonic, is analytic on D, which is an open connected subset of \mathbb{R}^n, one concludes that h is identically zero.

Remark: for n=2, there is a simple proof using Complex Analysis. Namely, h is the real part of a holomorphic function on D. Let k be the imaginary part. The Cauchy-Riemann equations read \partial h/\partial x=\partial k/\partial y and \partial h/\partial y=-\partial k/\partial x. The condition \nabla h=0 on \Gamma implies that both h and k must be constant on \Gamma. Subtracting a complex constant from h+ik if necessary, we can assume that h=k=0 on \Gamma. Because the holomorphic function h+ik vanishes on \Gamma, it must be identically zero by the identity theorem.

References:

  1. A Story of Potential Flow
  2. Applications of Classical Physics (2013) by Blandford and Thorne.
  3. On the interior regularity of weak solutions of the Navier-Stokes equations by J. Serrin. Arch. Rational Mech. Anal. 9, 187–195 (1962).

Characterization of degenerate scaling

This post is a follow-up to my post on 12/29/2023, where I posed the following question:

Let (X_n) be an i.i.d. sequence of \mathbb{R}-valued random variables. Let (a_n) be a positive sequence such that \lim a_n=\infty. Under what conditions on (a_n) can one conclude that \lim\frac{X_n}{a_n}=0 almost surely?

I have found a full characterization of the sequence (a_n). It turned out to be an interesting exercise.

Proposition: Let (X_n) be an i.i.d. sequence of \mathbb{R}-valued random variables. Let (a_n) be a positive sequence such that \lim a_n=\infty. If \sum P\left(|X_1|>\frac{a_n}{c}\right)<\infty for every constant c>0, then \lim\frac{X_n}{a_n}=0 almost surely. Otherwise, almost surely the sequence \frac{X_n}{a_n} does not converge.

As an application, if (X_n) is an i.i.d. sequence of mean-one exponentially distributed random variables, then almost surely the sequence \frac{X_n}{\ln n} does not converge. Indeed, P\left(X_1>\frac{\ln n}{c}\right)=n^{-1/c}, so the series diverges for every c\ge 1.
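This can be illustrated numerically. Below is a rough Monte Carlo sketch in Python (the sample size and seed are arbitrary choices): along the sequence, X_n/\ln n keeps dipping near 0 while also spiking to order 1, so it cannot converge.

```python
import math
import random

random.seed(0)

N = 200_000
# X_n i.i.d. mean-one exponentials; look at the ratios X_n / ln(n)
ratios = [random.expovariate(1.0) / math.log(n) for n in range(2, N)]

# restrict to large n: the ratio still dips near 0 and spikes to order 1
# (in fact, a.s. the limsup is 1 and the liminf is 0)
tail = ratios[len(ratios) // 2:]
print(round(min(tail), 4), round(max(tail), 4))
```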

Proof: First, suppose that there exists c>0 such that \sum P\left(|X_1|>\frac{a_n}{c}\right)=\infty. Then the independent events E_n=\left[|X_n|>\frac{a_n}{c}\right] satisfy

\displaystyle \sum \mathbb{P}(E_n)=\infty.

By the Borel-Cantelli lemma, almost surely the events E_n occur for infinitely many n. Therefore, with probability 1, the sequence (X_n/a_n) does not converge to 0. Now fix b\neq 0. We show that almost surely X_n/a_n\not\to b. Consider the independent events E'_n=\left[|X_n|<\frac{|b|a_n}{2}\right]. Since \lim a_n=\infty, one has \mathbb{P}(E'_n)\to 1, hence

\displaystyle \sum\mathbb{P}(E'_n)=\infty.

By the Borel-Cantelli lemma, almost surely the events E'_n occur for infinitely many n, and for such n one has |X_n|/a_n<|b|/2. Therefore, with probability 1, the sequence (X_n/a_n) does not converge to b. Finally, the event that (X_n/a_n) converges is a tail event; if it had positive probability, it would have probability 1 by Kolmogorov’s 0-1 Law, and the limit, being tail-measurable, would be an almost sure constant b. This contradicts the claims above, so almost surely (X_n/a_n) does not converge.

Next, suppose that for every c>0, one has \sum P\left(|X_1|>\frac{a_n}{c}\right)<\infty. For a fixed constant c>0, the event A_c=\left[\limsup\frac{|X_n|}{a_n}\le \frac{1}{c}\right] is a tail event. Thus, \mathbb{P}(A_c)=0 or \mathbb{P}(A_c)=1 according to Kolmogorov’s 0-1 Law. Note that \cap_{n=1}^\infty \left[|X_n|\le \frac{a_n}{c}\right]\subset A_c and

\displaystyle \mathbb{P}\left(\bigcap\limits_{n=1}^\infty \left[|X_n|\le \frac{a_n}{c}\right]\right)=\prod_{n=1}^\infty \left(1-P\left(|X_1|>\frac{a_n}{c}\right)\right)

which is positive because \sum P\left(|X_1|>\frac{a_n}{c}\right)<\infty (if finitely many factors vanish, discard those indices; this does not affect the tail event A_c). Thus, \mathbb{P}(A_c)>0 and therefore \mathbb{P}(A_c)=1. Now note that

\displaystyle \mathbb{P}\left(\lim\frac{X_n}{a_n}=0\right)= \mathbb{P}\left(\limsup\frac{|X_n|}{a_n}=0\right)=\lim_{k\to\infty} \mathbb{P}(A_{k})=1.

This completes the proof.
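As a numerical sanity check of the convergent case (an illustrative Python sketch; the choice a_n=(\ln n)^2, the sample size, and the seed are arbitrary): for mean-one exponentials, \sum P\left(X_1>\frac{(\ln n)^2}{c}\right)=\sum n^{-(\ln n)/c}<\infty for every c>0, so the proposition gives \frac{X_n}{(\ln n)^2}\to 0 almost surely.

```python
import math
import random

random.seed(0)

N = 200_000
# a_n = (ln n)^2: sum P(X_1 > a_n/c) = sum n^{-(ln n)/c} < infinity for all c > 0
ratios = [random.expovariate(1.0) / math.log(n) ** 2 for n in range(2, N)]

# the tail of the sequence is uniformly small, consistent with a.s. convergence to 0
print(round(max(ratios[len(ratios) // 2:]), 4))
```

Contrast this with a_n=\ln n above, where the ratios keep spiking to order 1.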

Degenerate scaling of random variables

There are well-known results about scaling and centering laws for a sequence of \mathbb{R}-valued random variables, i.e. the problem of finding the right coefficients a_n and b_n such that the sequence \frac{X_n-b_n}{a_n} converges in law. See, for example, Section 14.8 of [1]. Normally, one tries to avoid the degenerate case, in which the limit distribution is a Dirac mass at 0. I recently ran into a problem where the degenerate limit is desirable. More specifically:

Let (X_n) be an i.i.d. sequence of \mathbb{R}-valued random variables. Let (a_n) be a positive sequence such that \lim a_n=\infty. Under what conditions on (a_n) can one conclude that \lim\frac{X_n}{a_n}=0 almost surely?

This problem seems to be quite classic. If X_1 has a bounded range, i.e. \mathbb{P}(X_1\in[a,b])=1 for some numbers a and b, then no additional condition on (a_n) is needed. What about the case where X_1 has an unbounded range? This is essentially Problem 51 on page 263 of [1]. The textbook does not ask for a full characterization of the sequence (a_n). I find the problem quite interesting.

The first observation is that the event A=\left[\lim\frac{X_n}{a_n}=0\right] is a tail event. By Kolmogorov’s 0-1 Law, either \mathbb{P}(A)=0 or \mathbb{P}(A)=1. We just need to decide between the two. One may be tempted to think that \frac{X_n}{a_n} is essentially the same as \frac{X_1}{a_n}, so it should almost surely converge to 0 as n\to\infty. This argument is not correct because \frac{X_n}{a_n} only has the same distribution as \frac{X_1}{a_n}. If the range of values of X_1 is unbounded, then so is the range of values of \frac{X_1}{a_n}. Let us consider the following two cases, one of which gives \mathbb{P}(A)=1 and the other \mathbb{P}(A)=0. Denote the cumulative distribution function of |X_1| by F(x).

Case 1: there exists a positive sequence (c_n) such that

\displaystyle \lim\frac{c_n}{a_n}=0 and \displaystyle\sum(1-F(c_n))<\infty.

Let E_n=[|X_n|\le c_n] and E=\cap_{n=1}^\infty E_n. Then \mathbb{P}(E_n)=F(c_n) and

\displaystyle \mathbb{P}(E)=\prod_{n=1}^\infty F(c_n)=\prod_{n=1}^\infty \left(1-(1-F(c_n))\right)

which is positive because \sum(1-F(c_n))<\infty (if finitely many factors vanish, drop those indices; the conclusion below is unaffected). It is easy to see that E\subset A, so \mathbb{P}(A)\ge\mathbb{P}(E)>0 and thus \mathbb{P}(A)=1 by the 0-1 Law.

Case 2: there exists a positive sequence (c_n) such that

\displaystyle \liminf\frac{c_n}{a_n}>0 and \displaystyle\sum(1-F(c_n))=\infty.

Let E_n=[|X_n|> c_n] and E=\limsup E_n. The events E_n are independent and

\displaystyle \sum \mathbb{P}(E_n)=\sum(1-F(c_n))=\infty.

By the Borel-Cantelli lemma, \mathbb{P}(E)=1. On E, one has \frac{|X_n|}{a_n}>\frac{c_n}{a_n} for infinitely many n, and the right-hand side is bounded away from 0, so E\subset A^c. Thus, \mathbb{P}(A)=1-\mathbb{P}(A^c)=0.
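To illustrate Case 2 numerically (a rough Python sketch; the distribution, the sequences, and the seed are just sample choices), take F(x)=1-e^{-x}, a_n=\ln n and c_n=\frac{1}{2}\ln n. Then c_n/a_n=\frac{1}{2} and \sum(1-F(c_n))=\sum n^{-1/2}=\infty, so Case 2 applies, and the exceedances |X_n|>c_n should keep occurring, roughly \sum_{n\le N}n^{-1/2}\approx 2\sqrt{N} of them up to time N.

```python
import math
import random

random.seed(0)

N = 200_000
# count the exceedance events E_n = [X_n > c_n] with c_n = (ln n)/2,
# for X_n i.i.d. mean-one exponentials
hits = sum(1 for n in range(2, N) if random.expovariate(1.0) > 0.5 * math.log(n))

print(hits)  # roughly 2 * sqrt(N)
```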

Final thoughts: it is not immediately clear whether Cases 1 and 2 cover all possibilities. One can test this on the simple case F(x)=1-e^{-x} (the case when the random variables X_n are i.i.d. exponentials of mean one) with a_n=2\ln n: finding a sequence (c_n) that fits one of the two cases takes some thought. It would be interesting to see a full characterization of the sequence (a_n) for which one has \mathbb{P}(A)=1. I think this should be known somewhere in the literature.

Update 01/01/2024: I have resolved the problem.

References:

  1. A Modern Approach to Probability Theory (1997) by Fristedt and Gray.