Formalizing Wigner’s Semicircle Law

5 Convergence of Matrix Moments

5.1 Convergence in Expectation

Lemma 5.1.1 Matrix Powers Entries

Let \(Y\) be an \(n\times n\) matrix and \(k \in \mathbb {N}\). Then, for each \(i, j\), the \((i, j)\) entry of \(Y^{k}\) is given by:

\[ (Y^{k})_{ij} = \sum _{1 \leq i_2, \ldots ,i_{k} \leq n} Y_{ii_{2}} Y_{i_{2}i_{3}}\ldots Y_{i_{k}j} \]
Proof

We proceed by induction on \(k\).

For the base case \(k=1\) we have:

\[ Y^1 = Y \quad \Rightarrow \quad [Y^1]_{ij} = Y_{ij}, \]

which matches the formula: for \(k = 1\) there are no summation indices, so the sum reduces to the single term \(Y_{ij}\).

For the inductive step, assume that the formula holds for some \(k \ge 1\), i.e.,

\[ \left[Y^k\right]_{ij} = \sum _{1 \le i_2, \dots , i_k \le n} Y_{i i_2} Y_{i_2 i_3} \cdots Y_{i_k j}. \]

We must show that it holds for \(k + 1\). Note that:

\begin{align*} \left[Y^{k+1}\right]_{ij} & = \sum _{\ell = 1}^n \left[Y^k\right]_{i\ell } Y_{\ell j} \\ & = \sum _{\ell = 1}^n \left( \sum _{1 \le i_2, \dots , i_k \le n} Y_{i i_2} Y_{i_2 i_3} \cdots Y_{i_k \ell } \right) Y_{\ell j} \\ & = \sum _{1 \le i_2, \dots , i_k, i_{k+1} \le n} Y_{i i_2} Y_{i_2 i_3} \cdots Y_{i_k i_{k+1}} Y_{i_{k+1} j}. \end{align*}

Thus, the formula holds for \(k + 1\).

By induction, the result holds for all \(k \ge 1\).
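As a quick numerical sanity check of this formula (a sketch outside the formal development; the helper name power_entry_via_sum is ours), the explicit sum can be compared against repeated matrix multiplication in Python/NumPy:

```python
import itertools
import numpy as np

def power_entry_via_sum(Y: np.ndarray, k: int, i: int, j: int) -> float:
    """(Y^k)_{ij} computed via the explicit sum over the indices i_2, ..., i_k."""
    n = Y.shape[0]
    if k == 1:
        return float(Y[i, j])
    total = 0.0
    for idx in itertools.product(range(n), repeat=k - 1):  # (i_2, ..., i_k)
        path = (i,) + idx + (j,)
        term = 1.0
        for a, b in zip(path, path[1:]):  # Y_{i i_2} Y_{i_2 i_3} ... Y_{i_k j}
            term *= Y[a, b]
        total += term
    return total

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 4))
assert np.isclose(power_entry_via_sum(Y, 3, 1, 2),
                  np.linalg.matrix_power(Y, 3)[1, 2])
```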

Definition 5.1.2 Matrix Multi Index

Let \(\mathbf{i} \in [n]^k\) be a \(k\)-index, \(\mathbf{i}=\left(i_1, i_2, \ldots , i_k\right)\). Let \(Y\) be a symmetric matrix. The matrix multi index \(Y_{\mathbf{i}}\) is defined as:

\[ Y_{\mathbf{i}} = Y_{i_{1}i_{2}} Y_{i_{2}i_{3}} \ldots Y_{i_{k}i_{1}} \]
Lemma 5.1.3 Matrix Powers Trace

Let \(Y\) be an \(n\times n\) matrix and \(k \in \mathbb {N}\). Then, the trace of \(Y^{k}\) is given by:

\[ \operatorname{Tr}[Y^{k}] = \sum _{\mathbf{i} \in [n]^k} Y_{\mathbf{i}}. \]
Proof

We use Lemma 5.1.1 to compute the trace of \(Y^k\):

\begin{align*} \operatorname{Tr}[Y^{k}] = \sum _{i_1=1}^{n} (Y^{k})_{i_1 i_1} & = \sum _{i_1=1}^{n} \left(\sum _{1\leq i_{2}, \ldots , i_{k} \leq n} Y_{i_1 i_{2}} Y_{i_{2}i_{3}} \ldots Y_{i_{k} i_1}\right) \\ & = \sum _{1\leq i_{1}, i_{2}, \ldots , i_{k} \leq n} Y_{i_{1} i_{2}} Y_{i_{2} i_{3}} \ldots Y_{i_{k} i_{1}} = \sum _{\mathbf{i} \in [n]^k} Y_{\mathbf{i}}, \end{align*}

where the last step is Definition 5.1.2.
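The trace identity can be checked numerically in the same hedged spirit (trace_via_multi_index is our name; any small matrix works):

```python
import itertools
import numpy as np

def trace_via_multi_index(Y: np.ndarray, k: int) -> float:
    """Tr(Y^k) as the sum of Y_i over all closed k-indices i in [n]^k."""
    n = Y.shape[0]
    total = 0.0
    for idx in itertools.product(range(n), repeat=k):
        term = 1.0
        for a, b in zip(idx, idx[1:] + (idx[0],)):  # close the walk: i_k -> i_1
            term *= Y[a, b]
        total += term
    return total

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 4))
assert np.isclose(trace_via_multi_index(Y, 3),
                  np.trace(np.linalg.matrix_power(Y, 3)))
```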
Lemma 5.1.4 Trace of Expectation of Matrix
\[ \mathbb {E}(\operatorname{Tr}(\mathbf{Y}_{n}^{k})) = \sum _{\mathbf{i} \in [n]^{k}} \mathbb {E}(Y_{\mathbf{i}}) \]
Proof

By Lemma 5.1.3 and linearity of expectation.

Lemma 5.1.5 Matrix Multi Index and Graph Equivalence

Let \(\mathbf{i} \in [n]^k\) be a \(k\)-index, \(\mathbf{i}=\left(i_1, i_2, \ldots , i_k\right)\), and let \(Y\) be a symmetric matrix. Then

\[ Y_{\mathbf{i}} = Y_{i_{1}i_{2}} Y_{i_{2}i_{3}} \ldots Y_{i_{k}i_{1}} = \prod _{w \in w_{\mathbf{i}}} Y_{w} \]
Proof

We see from definition \(\ref{def:graph_walk_multi_index}\) that the path is defined as:

\[ w_{\mathbf{i}} = ((i_1, i_2),(i_2, i_3), \ldots ,(i_{k-1}, i_k),(i_k, i_1)). \]

Using each edge \(w \in w_{\mathbf{i}}\) in turn and the symmetry of \(Y\), we can write the product:

\[ \prod _{w\in w_{\mathbf{i}}} Y_{w} = Y_{\{ i_{1}, i_{2}\} } Y_{\{ i_{2}, i_{3}\} } \ldots Y_{\{ i_{k-1}, i_{k}\} } Y_{\{ i_{k}, i_{1}\} } = Y_{i_{1}i_{2}} Y_{i_{2}i_{3}} \ldots Y_{i_{k}i_{1}} = Y_{\mathbf{i}} \]

as required.

Lemma 5.1.6 Graph Walk and Graph Count Equivalence

Let \(\mathbf{i} \in [n]^k\) be a \(k\)-index, \(\mathbf{i}=\left(i_1, i_2, \ldots , i_k\right)\). Let \(Y\) be a symmetric matrix. Then, we have the following:

\[ Y_{\mathbf{i}} = \prod _{1 \leq i \leq j \leq n} Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}. \]
Proof

Using Lemma 5.1.5, we already have \(Y_{\mathbf{i}} = \prod _{w \in w_{\mathbf{i}}} Y_{w}\). We consider the following cases for each unordered pair \(\{ i, j\} \) with \(1 \le i \le j \le n\):

\(\{ i, j\} \notin w_{\mathbf{i}}\): In this case, \(w_{\mathbf{i}}(\{ i, j\} ) = 0\), so the factor \(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )} = Y_{ij}^{0} = 1\) does not affect the product.

\(\{ i, j\} \in w_{\mathbf{i}}\): In this case, the exponent \(w_{\mathbf{i}}(\{ i, j\} )\) counts exactly how many times the unordered edge \(\{ i, j\} \) appears in the walk \(w_{\mathbf{i}}\), so the factor \(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}\) collects all occurrences of \(Y_{ij}\) in the product \(\prod _{w \in w_{\mathbf{i}}} Y_{w}\).

These two cases cover every unordered pair, and the product below lists each edge exactly once, so

\[ Y_{\mathbf i} \; =\; \prod _{1\le i\le j\le n} Y_{ij}^{\, w_{\mathbf i}(\{ i,j\} )}. \]
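A small numerical check of this edge-count factorization, assuming only that \(Y\) is symmetric (both helper names are ours):

```python
import itertools
from collections import Counter
import numpy as np

def multi_index_direct(Y, idx):
    """Y_i = Y_{i1 i2} Y_{i2 i3} ... Y_{ik i1}, computed edge by edge."""
    return float(np.prod([Y[a, b] for a, b in zip(idx, idx[1:] + (idx[0],))]))

def multi_index_via_edge_counts(Y, idx):
    """Y_i as a product of Y_{ab}^m over unordered edges {a,b} with walk count m."""
    counts = Counter((min(a, b), max(a, b)) for a, b in zip(idx, idx[1:] + (idx[0],)))
    return float(np.prod([Y[a, b] ** m for (a, b), m in counts.items()]))

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
Y = A + A.T  # symmetry is essential: Y_{ab} = Y_{ba}
for idx in itertools.product(range(5), repeat=4):
    assert np.isclose(multi_index_direct(Y, idx), multi_index_via_edge_counts(Y, idx))
```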

Lemma 5.1.7 Expectation of Matrix Multi Index

Let \(\mathbf{i} \in [n]^k\) be a \(k\)-index, \(\mathbf{i}=\left(i_1, i_2, \ldots , i_k\right)\). Let \(Y\) be a symmetric matrix and let \(\{ Y_{ij}\} _{1\le i\le j}\) be independent random variables, with \(\{ Y_{ii}\} _{i\ge 1}\) identically distributed and \(\{ Y_{ij}\} _{1\le i{\lt}j}\) identically distributed. Then, we have the following:

\[ \mathbb {E}(Y_{\mathbf{i}}) = \prod _{e_s \in E_{\mathbf{i}}^s} \mathbb {E}(Y_{11}^{w_{\mathbf{i}}(e_s)}) \cdot \prod _{e_c \in E_{\mathbf{i}}^c} \mathbb {E}(Y_{12}^{w_{\mathbf{i}}(e_c)}). \]
Proof

Because the entries \(\{ Y_{ij}\} _{1 \le i \le j}\) are independent, we can write:

\[ \mathbb {E}(Y_{\mathbf{i}}) = \mathbb {E}\left(\prod _{1 \leq i \leq j \leq n} Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}\right) = \prod _{1 \leq i \leq j \leq n} \mathbb {E}(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}) \]

using Lemma 5.1.6. Consider the following cases for each \(\{ i, j\} \in w_{\mathbf{i}}\) (if \(\{ i, j\} \notin w_{\mathbf{i}}\), then \(w_{\mathbf{i}}(\{ i, j\} ) = 0\) and the corresponding factor equals \(1\), contributing nothing to the product):

\(\mathbf{i = j}\): in this case, we have \(Y_{ij} = Y_{ii}\), and therefore the edge \(\{ i, i\} \in E_{\mathbf{i}}^{s}\). Since the diagonal entries \(Y_{ii}\) are identically distributed for all \(i\), we have, for every power \(m\):

\[ \mathbb {E}(Y_{ii}^{m}) = \mathbb {E}(Y_{11}^{m}) \]

so the factor in the product above becomes \(\mathbb {E}(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}) = \mathbb {E}(Y_{11}^{w_{\mathbf{i}}(\{ i, i\} )})\).

\(\mathbf{i {\lt} j}\): in this case, the edge \(\{ i, j\} \in E_{\mathbf{i}}^{c}\). Since the off-diagonal entries \(Y_{ij}\) are identically distributed for all \(i \neq j\) (and by symmetry \(Y_{ij} = Y_{ji}\)), we have, for every power \(m\):

\[ \mathbb {E}(Y_{ij}^{m}) = \mathbb {E}(Y_{12}^{m}) \]

so the factor in the product above becomes \(\mathbb {E}(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}) = \mathbb {E}(Y_{12}^{w_{\mathbf{i}}(\{ i, j\} )})\).

Splitting the product over \(1 \le i \le j \le n\) into the self-edges \(E_{\mathbf{i}}^{s}\) and the connecting edges \(E_{\mathbf{i}}^{c}\) and plugging in the two cases above, we have:

\[ \prod _{1 \leq i \leq j \leq n} \mathbb {E}(Y_{ij}^{w_{\mathbf{i}}(\{ i, j\} )}) = \prod _{e_s \in E_{\mathbf{i}}^s} \mathbb {E}(Y_{11}^{w_{\mathbf{i}}(e_s)}) \cdot \prod _{e_c \in E_{\mathbf{i}}^c} \mathbb {E}(Y_{12}^{w_{\mathbf{i}}(e_c)}) \]

as required.
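A hedged Monte Carlo illustration of this factorization, assuming standard Gaussian entries so that the moments \(\mathbb {E}(Z^m)\) are explicit (the index and helper names are ours; for the chosen index the single connecting edge \(\{ 0,1\} \) is traversed four times, so both sides should be near \(\mathbb {E}(Y_{12}^{4}) = 3\)):

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 200_000
idx = (0, 1, 0, 1)  # closed 4-index whose walk uses the edge {0,1} four times

def gaussian_moment(m: int) -> float:
    """E(Z^m) for Z ~ N(0,1): zero for odd m, double factorial (m-1)!! for even m."""
    if m % 2 == 1:
        return 0.0
    out = 1.0
    for j in range(1, m, 2):
        out *= j
    return out

# Monte Carlo estimate of E(Y_i) for symmetric matrices with iid N(0,1) entries.
acc = 0.0
for _ in range(N):
    A = rng.standard_normal((n, n))
    Y = np.triu(A) + np.triu(A, 1).T  # symmetric, independent upper-triangular entries
    acc += np.prod([Y[a, b] for a, b in zip(idx, idx[1:] + (idx[0],))])
mc = acc / N

# Right-hand side: product of moments over distinct unordered edges.
counts = Counter((min(a, b), max(a, b)) for a, b in zip(idx, idx[1:] + (idx[0],)))
rhs = np.prod([gaussian_moment(m) for m in counts.values()])

print(mc, rhs)  # both should be close to E(Z^4) = 3
```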

Definition 5.1.8 Product of Expectation of Matrix Multi Index

Let \(\mathbf{i} \in [n]^{k}\) be a \(k\)-index, \(\mathbf{i} = (i_1, i_2, \ldots , i_{k})\). We define \(\Pi (w_{\mathbf{i}})\) as follows:

\[ \Pi (w_{\mathbf{i}}) = \prod _{e_s \in E_{\mathbf{i}}^s} \mathbb {E}(Y_{11}^{w_{\mathbf{i}}(e_s)}) \cdot \prod _{e_c \in E_{\mathbf{i}}^c} \mathbb {E}(Y_{12}^{w_{\mathbf{i}}(e_c)}) = \mathbb {E}(Y_{\mathbf{i}}). \]
Lemma 5.1.9 \(\mathbf{i} \sim \mathbf{j} \Rightarrow \mathbb {E}(Y_\mathbf {i}) = \mathbb {E}(Y_\mathbf {j})\) : R-1-7 : lem:eq_equiv_eq_expect

Given two \(k\)-indexes \(\mathbf{i} = (i_1,...,i_k)\) and \(\mathbf{j} = (j_1,...,j_k)\), suppose \(w_{\mathbf{i}} \sim w_{\mathbf{j}}\) under 4.0.11. Then \(\mathbb {E}(Y_\mathbf {i}) = \mathbb {E}(Y_{\mathbf{j}})\).

Proof

Let \(\varphi \) be the permutation that maps \(w_\mathbf {i}\) to \(w_{\mathbf{j}}\) in the equivalence relation. Given \(Y_\mathbf {i} = Y_{i_1 i_2}Y_{i_2 i_3} \cdots Y_{i_{k-1} i_k}Y_{i_k i_1}\), we have

\[ Y_{\mathbf{j}} = Y_{j_1 j_2}Y_{j_2 j_3} \cdots Y_{j_{k-1} j_k}Y_{j_k j_1} = Y_{\varphi (i_1) \varphi (i_2)}Y_{\varphi (i_2) \varphi (i_3)} \cdots Y_{\varphi (i_{k-1}) \varphi (i_k)}Y_{\varphi (i_k) \varphi (i_1)}. \]

Observe that \(\{ i_{\lambda _l},i_{\lambda _{l+1}} \} \) is a singleton if and only if \(\{ \varphi (i_{\lambda _l}),\varphi (i_{\lambda _{l+1}}) \} \) is a singleton. The fact that \(\{ i_{\lambda _l},i_{\lambda _{l+1}} \} = \{ i_{\lambda _\mu },i_{\lambda _{\mu +1}} \} \) if and only if \(\{ \varphi (i_{\lambda _l}),\varphi (i_{\lambda _{l+1}}) \} = \{ \varphi (i_{\lambda _\mu }),\varphi (i_{\lambda _{\mu +1}}) \} \) completes the proof.

Lemma 5.1.10 Partitioning into double summation : R-1-10 : lem:equation_4.5_1
\[ \mathbb {E}\operatorname{Tr}(\mathbf{Y}_n^k) = \sum _{w \in \mathcal{G}_k} \sum _{\substack {\mathbf{i} \in [n]^k \\ w_\mathbf {i} = w}} \mathbb {E}(Y_\mathbf {i}). \]
Proof

This follows from ‘partitioning’ the summation appearing in Lemma 5.1.4 using the equivalence relation of Definition 4.0.11.

Definition 5.1.11 \(\Pi (G,w)\): R-1-11 : def:Pi.G.w

Given \(w \in \mathcal{G}_k\), let \(w_{\mathbf{i}}\) be an element of its equivalence class. We define

\[ \Pi (w) = \mathbb {E}(Y_\mathbf {i}). \]

This is independent of the choice of representative by Lemma 5.1.9, so \(\Pi (w)\) is well-defined.
Lemma 5.1.12 Re-indexing the sum with counting argument : R-1-12 : lem:equation_4.5_2
\[ \mathbb {E}\operatorname{Tr}(\mathbf{Y}_n^k) = \sum _{w \in \mathcal{G}_k} \Pi (w) \cdot \# \{ \mathbf{i} \in [n]^k : w_\mathbf {i} = w \} . \]
Proof

This follows from re-indexing the sum of Lemma 5.1.10 by using Lemma 5.1.9 and Lemma 4.0.21.

Lemma 5.1.13 Re-introducing the renormalization factor : R-1-13 : lem:equation_4.5_3
\[ \frac{1}{n} \mathbb {E}\operatorname{Tr}(\mathbf{X}_n^k) = \sum _{w \in \mathcal{G}_k} \Pi (w) \cdot \frac{n (n-1) \cdots (n - |V_w| + 1)}{n^{k/2+1}}. \]
Proof

Since \(\mathbf{X}_n = n^{-1/2}\mathbf{Y}_n\), combining with the renormalization factor \(n^{-1}\) gives

\[ \frac{1}{n} \mathbb {E}\operatorname{Tr}(\mathbf{X}_n^k) = \frac{1}{n^{k/2+1}} \mathbb {E}\operatorname{Tr}(\mathbf{Y}_n^k). \]

Substituting the expression of Lemma 5.1.12 for \(\mathbb {E}\operatorname{Tr}(\mathbf{Y}_n^k)\) and using the count \(\# \{ \mathbf{i} \in [n]^k : w_{\mathbf{i}} = w \} = n(n-1)\cdots (n-|V_w|+1)\) gives

\[ \frac{1}{n} \mathbb {E}\operatorname{Tr}(\mathbf{X}_n^k) = \sum _{w \in \mathcal{G}_k} \Pi (w) \cdot \frac{n (n-1) \cdots (n - |V_w| + 1)}{n^{k/2+1}}. \]
Lemma 5.1.14 \(\Pi (G,w) = 0\) : R-1-15 : lem:Pi.prod_eq_zero_if_w_le_two

Given \(w \in \mathcal{G}_k\), suppose there exists an edge \(e \in E_w\) that is traversed exactly once by the walk \(w\), i.e. \(1 \in EC_w\). Then

\[ \Pi (w) = 0. \]
Proof

Let \((G,w) \in \mathcal{G}_k\) and let \(\mathbf{j}\) be a \(k\)-index generated by \((G,w)\). Suppose there exists an edge \(e \in E_\mathbf {j}\) such that \(w_\mathbf {j}(e) = 1\). Then in the factorization of Lemma 5.1.7 a singleton term \(\mathbb {E}(Y_{ij}^{w_\mathbf {j}(e)}) = \mathbb {E}(Y_{ij})\) appears. The claim now follows from the standing assumption that \(\mathbb {E}(Y_{ij}) = 0\) for every \(i\) and \(j\), which makes the whole product vanish.
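A minimal Monte Carlo illustration (assuming standard Gaussian entries, so \(\mathbb {E}(Y_{ij}) = 0\)); the triangle index below traverses each edge exactly once, so the empirical mean of \(Y_{\mathbf{i}}\) should be near \(0\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 4, 200_000
idx = (0, 1, 2)  # triangle walk: each edge {0,1}, {1,2}, {2,0} traversed once

acc = 0.0
for _ in range(N):
    A = rng.standard_normal((n, n))
    Y = np.triu(A) + np.triu(A, 1).T  # symmetric, mean-zero independent entries
    acc += np.prod([Y[a, b] for a, b in zip(idx, idx[1:] + (idx[0],))])

print(acc / N)  # close to 0: a singleton edge forces the expectation to vanish
```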

Lemma 5.1.15 Simplifying the summation with the fact \(\Pi (G,w) = 0\) in certain cases: R-1-16 : lem:equation_4.8
\[ \frac{1}{n} \mathbb {E}\operatorname{Tr}(\mathbf{X}_n^k) = \sum _{w \in \mathcal{G}_{k, w \geq 2}} \Pi (w) \cdot \frac{n (n-1) \cdots (n - |V_w| + 1)}{n^{k/2+1}}. \]
Proof

This follows from applying the result of Lemma 5.1.14 to Lemma 5.1.13.

Lemma 5.1.16

\(n(n-1)\cdots (n-|V|+1) \le n^{|V|}\).

Proof

Each of the \(|V|\) factors on the left-hand side is at most \(n\). (In the formalization, one may use Nat.ascFactorial_eq_div or prove it directly.)

Lemma 5.1.17

The sequence \(n\mapsto \frac1n\mathbb {E}\operatorname{Tr}(X_n^k)\) is bounded.

Proof

In the sum of Lemma 5.1.15, the only \(n\)-dependent part is the fraction. Since every edge is traversed at least twice, \(|V_w| \le k/2 + 1\), so by Lemma 5.1.16 the fraction is at most \(n^{|V_w|}/n^{k/2+1} \le 1\). The sum has finitely many terms, each bounded independently of \(n\), so the sequence is bounded.

Lemma 5.1.18

Suppose \(k\) is odd and \(w \in \mathcal{G}_{k, w \geq 2}\). Then, \(\frac{n(n-1)\cdots (n-|V_w|+1)}{n^{k/2+1}} \le \frac{1}{\sqrt{n}}\).

Proof

Since every edge is traversed at least twice, \(|V_w| \le k/2+1\); as \(k\) is odd and \(|V_w|\) is an integer, \(|V_w| \le (k+1)/2\). Hence, by Lemma 5.1.16,

\[ \frac{n(n-1)\cdots (n-|V_w|+1)}{n^{k/2+1}} \le \frac{n^{|V_w|}}{n^{k/2+1}} \le \frac{n^{(k+1)/2}}{n^{(k+2)/2}} = \frac{1}{\sqrt{n}}. \]
Proposition 5.1.19

Suppose \(k\) odd. Then, \(\lim _{n\to \infty } \frac{1}{n}\mathbb {E}\operatorname{Tr}(X_n^k) = 0\).

Proof

Since the graph underlying \(w\) is connected, \(|V_w|\le \# E_w+1 \le k/2+1\); as \(k\) is odd and \(|V_w|\) is an integer, it follows that \(|V_w|\le (k-1)/2+1 = (k+1)/2\). Hence, by Lemma 5.1.18, all the terms in the finite, \(n\)-independent sum of Lemma 5.1.15 are \(O(n^{-1/2})\), and therefore the sum tends to \(0\).
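A hedged numerical illustration of Proposition 5.1.19 (Gaussian entries assumed; for these the odd moments vanish exactly, so the printed values are Monte Carlo noise around \(0\), while for general mean-zero entries the decay is \(O(n^{-1/2})\)):

```python
import numpy as np

def mean_trace_moment(n: int, k: int, N: int, rng) -> float:
    """Monte Carlo estimate of (1/n) E Tr(X_n^k) for Gaussian Wigner matrices."""
    acc = 0.0
    for _ in range(N):
        A = rng.standard_normal((n, n))
        Y = np.triu(A) + np.triu(A, 1).T
        X = Y / np.sqrt(n)
        acc += np.trace(np.linalg.matrix_power(X, k)) / n
    return acc / N

rng = np.random.default_rng(5)
for n in (10, 40, 160):
    print(n, mean_trace_moment(n, 3, 2000, rng))  # k = 3 odd: values near 0
```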

Lemma 5.1.20

\[ \left|\sum _{w \in \mathcal{G}_{k, w \ge 2}} \Pi (w) \cdot \frac{n (n-1) \cdots (n - |V_w| + 1)}{n^{k/2+1}} - \sum _{w \in \mathcal{G}_k^{k/2+1}}\Pi (w) \cdot \frac{n (n-1) \cdots (n - |V_w| + 1)}{n^{k/2+1}}\right| \le \frac{|\mathcal{G}_k|}{n}. \]

Proof

Lemma 5.1.21

\[ \frac1n\mathbb {E}\operatorname{Tr}(X_n^k) = \sum _{w\in \mathcal{G}^{k/2+1}_k} \Pi (w) \cdot \frac{n(n-1)\cdots (n-|V_w|+1)}{n^{k/2+1}} + O_k(n^{-1}). \]

Proof

If \(|V_w| {\lt} k/2 + 1\), then the numerator \(n(n-1)\cdots (n-|V_w|+1)\) contains at least one fewer factor of \(n\) than the denominator \(n^{k/2+1}\), so each such term is \(O(n^{-1})\). Hence the difference between the sum of Lemma 5.1.15 and the sum over \(\mathcal{G}_k^{k/2+1}\) is \(O_k(n^{-1})\) (Lemma 5.1.20), which gives the claim.

Proposition 5.1.22

\(\lim _{n\to \infty }\frac{n^{k/2}}{n(n-1)\cdots (n-k/2+1)} = 1\).

Proof

Write the product as \(n(n-1)\cdots (n-k/2+1) = n^{k/2}\prod _{j=0}^{k/2-1}\left(1-\frac{j}{n}\right)\). Each of the \(k/2\) factors tends to \(1\) as \(n \to \infty \), so the ratio tends to \(1\).
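A quick numerical look at this ratio (falling_factorial is our helper name):

```python
def falling_factorial(n: int, m: int) -> float:
    """n (n-1) ... (n-m+1), the number of injective m-tuples from [n]."""
    out = 1.0
    for j in range(m):
        out *= n - j
    return out

k = 6
for n in (10, 100, 1000, 10000):
    print(n, n ** (k // 2) / falling_factorial(n, k // 2))  # tends to 1
```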

Lemma 5.1.23

\[ \lim _{n\to \infty }\frac{1}{n}\mathbb {E}\operatorname{Tr}(X_n^k) = \sum _{w\in \mathcal{G}^{k/2+1}_k} \Pi (w). \]

Proof

By Lemma 5.1.21, it suffices to show that each fraction tends to \(1\) and that the error term vanishes. For \(w \in \mathcal{G}^{k/2+1}_k\) we have \(|V_w| = k/2+1\), and \(n(n-1)\cdots (n-k/2) \sim n^{k/2+1}\) as in Proposition 5.1.22; the \(O_k(n^{-1})\) error term tends to \(0\) as \(n \to \infty \).

Lemma 5.1.24

For \(w\in \mathcal{G}^{k/2+1}_k\),

\[ \Pi (w) = \prod _{e_c\in E^c} \mathbb {E}(Y_{12}^{w(e_c)}). \]

Proof

For \(w \in \mathcal{G}^{k/2+1}_k\) the underlying graph is a tree (see Lemma 5.1.28), so it has no self-edges; hence the product over \(E^s\) in Definition 5.1.8 is empty and only the product over \(E^c\) remains.

Lemma 5.1.25

\(\prod _{e_c\in E^c} \mathbb {E}(Y_{12}^{w(e_c)}) = \prod _{e_c\in E^c} \mathbb {E}(Y_{12}^2)\)

Proof

The walk \(w\) has length \(k\) and traverses each of the \(k/2\) edges of the tree at least twice, hence exactly twice; thus \(w(e_c) = 2\) for every \(e_c \in E^c\).

Lemma 5.1.26

\(\mathbb {E}(Y_{12}^2) = t\)

Proof

This is the definition of \(t = \mathbb {E}(Y_{12}^2)\).

Lemma 5.1.27

\(t^{|E_w|} = t^{k/2}\)

Proof

Since the underlying graph is a tree on \(k/2+1\) vertices, it has \(|E_w| = k/2\) edges.

Lemma 5.1.28

For \(w\in \mathcal{G}^{k/2+1}_k\), \(\Pi (w) = t^{k/2}\).

Proof

Let \(w\in \mathcal{G}^{k/2+1}_k\). Since \(w\) traverses each edge exactly twice, the number of edges in \(G\) is \(k/2\). Since the number of vertices is \(k/2+1\), Exercise (Prop.) 4.3.1 shows that \(G\) is a tree. In particular there are no self-edges (as we saw already in Proposition 4.4). Chaining the previous results,

\[ \Pi (w) = \prod _{e_c\in E^c} \mathbb {E}(Y_{12}^{w(e_c)}) = \prod _{e_c\in E^c} \mathbb {E}(Y_{12}^{2}) = t^{|E_w|} = t^{k/2}, \]

where the first equality is Lemma 5.1.24, the second is Lemma 5.1.25, the third is Lemma 5.1.26, and the fourth is Lemma 5.1.27.

Proposition 5.1.29

\[ \lim _{n\to \infty } \frac{1}{n}\mathbb {E}\operatorname{Tr}(X_n^k) = t^{k/2}\cdot |\mathcal{G}_k^{k/2+1}| \]

Proof

Substituting \(\Pi (w) = t^{k/2}\) (Lemma 5.1.28) into Lemma 5.1.23 gives the result.

Theorem 5.1.30

Let \(\{ Y_{ij}\} _{1\le i\le j}\) be independent random variables, with \(\{ Y_{ii}\} _{i\ge 1}\) identically distributed and \(\{ Y_{ij}\} _{1\le i{\lt}j}\) identically distributed. Suppose that \(r_k = \max \{ \mathbb {E}(|Y_{11}|^k),\mathbb {E}(|Y_{12}|^k)\} {\lt}\infty \) for each \(k\in \mathbb {N}\). Suppose further that \(\mathbb {E}(Y_{ij})=0\) for all \(i,j\) and set \(t=\mathbb {E}(Y_{12}^2)\). If \(i{\gt}j\), define \(Y_{ij} \equiv Y_{ji}\), and let \(\mathbf{Y}_n\) be the \(n\times n\) matrix with \([\mathbf{Y}_n]_{ij} = Y_{ij}\) for \(1\le i,j\le n\). Let \(\mathbf{X}_n = n^{-1/2}\mathbf{Y}_n\) be the corresponding Wigner matrix. Then

\[ \lim _{n\to \infty } \frac{1}{n}\mathbb {E}\operatorname{Tr}(\mathbf{X}_n^k) = \begin{cases} t^{k/2}C_{k/2}, & k\text{ even} \\ 0, & k\text{ odd} \end{cases}. \]
Proof

For odd \(k\), this is Proposition 5.1.19. For even \(k\), Proposition 5.1.29 gives the limit \(t^{k/2}\cdot |\mathcal{G}_k^{k/2+1}|\), together with the count \(|\mathcal{G}_k^{k/2+1}| = C_{k/2}\), the Catalan number.
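A hedged Monte Carlo check of the even case, assuming standard Gaussian off-diagonal entries so that \(t = 1\) (for \(k = 4\) the limit is \(C_2 = 2\); the helper catalan is ours):

```python
import math
import numpy as np

def catalan(m: int) -> int:
    return math.comb(2 * m, m) // (m + 1)

rng = np.random.default_rng(6)
n, N, k = 200, 200, 4
acc = 0.0
for _ in range(N):
    A = rng.standard_normal((n, n))
    Y = np.triu(A) + np.triu(A, 1).T      # t = E(Y_12^2) = 1 for these entries
    X = Y / np.sqrt(n)
    acc += np.trace(np.linalg.matrix_power(X, k)) / n
print(acc / N, catalan(k // 2))           # MC estimate vs. the limit t^{k/2} C_{k/2}
```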

5.2 Convergence in Probability

Lemma 5.2.1 Variance Trace Expansion

Let \(\mathbf{Y}_{n}\) be an \(n \times n\) symmetric matrix with independent entries, where \(\{ Y_{ij}\} _{1\le i\le j}\) are independent random variables, with \(\{ Y_{ii}\} _{i\ge 1}\) identically distributed and \(\{ Y_{ij}\} _{1\le i{\lt}j}\) identically distributed, and let \(\mathbf{X}_{n} = n^{-1 / 2}\mathbf{Y}_{n}\) be the corresponding Wigner matrix. Then:

\[ \operatorname {Var}\left(\frac{1}{n}\operatorname{Tr}(\mathbf{X}_{n}^{k})\right) = \frac{1}{n^{k+2}} \sum _{\mathbf{i}, \mathbf{j} \in [n]^{k}} [\mathbb {E}(Y_{\mathbf{i}} Y_{\mathbf{j}}) - \mathbb {E}(Y_{\mathbf{i}})\mathbb {E}(Y_{\mathbf{j}})] \]
Proof

Expanding the variance by definition and using \(\mathbf{X}_n = n^{-1/2}\mathbf{Y}_n\), we get:

\[ \operatorname {Var}\left(\frac{1}{n}\operatorname{Tr}(\mathbf{X}_{n}^{k})\right) = \mathbb {E}\left[\left(\frac{1}{n^{k/2+1}} \operatorname{Tr}(\mathbf{Y}_{n}^{k})\right)^{2}\right] - \left(\mathbb {E}\left[\frac{1}{n^{k/2+1}} \operatorname{Tr}(\mathbf{Y}_{n}^{k})\right]\right)^{2}. \]

Using linearity of expectation, we factor \(\frac{1}{n^{k+2}}\) out of both terms:

\[ \operatorname {Var}\left(\frac{1}{n}\operatorname{Tr}(\mathbf{X}_{n}^{k})\right) = \frac{1}{n^{k+2}} \left\{ \mathbb {E}\left[\operatorname {Tr}(\mathbf{Y}_{n}^{k})^{2}\right] - \left(\mathbb {E}\operatorname {Tr}(\mathbf{Y}_{n}^{k})\right)^{2}\right\} . \]

By Lemma 5.1.3, \(\operatorname{Tr}(\mathbf{Y}_{n}^{k})^{2} = \sum _{\mathbf{i}, \mathbf{j} \in [n]^{k}} Y_{\mathbf{i}} Y_{\mathbf{j}}\), so by linearity \(\mathbb {E}[\operatorname{Tr}(\mathbf{Y}_{n}^{k})^{2}] = \sum _{\mathbf{i}, \mathbf{j} \in [n]^{k}} \mathbb {E}(Y_{\mathbf{i}} Y_{\mathbf{j}})\), while Lemma 5.1.4 gives \((\mathbb {E}\operatorname{Tr}(\mathbf{Y}_{n}^{k}))^{2} = \sum _{\mathbf{i}, \mathbf{j} \in [n]^{k}} \mathbb {E}(Y_{\mathbf{i}})\mathbb {E}(Y_{\mathbf{j}})\). Subtracting term by term yields the claimed formula.
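The identity can be sanity-checked numerically for tiny \(n\) and \(k\): both sides below are computed from the same samples, so they should agree up to numerical error (all helper names are ours; Gaussian entries are an assumption):

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, k, N = 2, 2, 100_000

def closed_product(Y, idx):
    """Y_i for a closed k-index idx."""
    return float(np.prod([Y[a, b] for a, b in zip(idx, idx[1:] + (idx[0],))]))

indices = list(itertools.product(range(n), repeat=k))
traces = np.empty(N)
prods = np.empty((N, len(indices)))
for s in range(N):
    A = rng.standard_normal((n, n))
    Y = np.triu(A) + np.triu(A, 1).T               # symmetric Wigner-type matrix
    traces[s] = np.trace(np.linalg.matrix_power(Y / np.sqrt(n), k)) / n
    prods[s] = [closed_product(Y, idx) for idx in indices]

lhs = traces.var()                                  # Var((1/n) Tr(X_n^k)) from samples
m1 = prods.mean(axis=0)                             # sample E(Y_i)
m2 = (prods[:, :, None] * prods[:, None, :]).mean(axis=0)  # sample E(Y_i Y_j)
rhs = (m2 - np.outer(m1, m1)).sum() / n ** (k + 2)
print(lhs, rhs)                                     # the two sides should agree
```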

Definition 5.2.2 R-2-1 : def:common_val_prod_of

Given an ordered triple \((G_{\mathbf{i}\# \mathbf{j}},w_\mathbf {i},w_\mathbf {j})\) generated by two \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\), we define

\[ \pi (G_{\mathbf{i}\# \mathbf{j}},w_\mathbf {i},w_\mathbf {j}) = \mathbb {E}(Y_\mathbf {i} Y_\mathbf {j}) - \mathbb {E}(Y_\mathbf {i}) \mathbb {E}(Y_\mathbf {j}). \]
Lemma 5.2.3 R-2-3-4 : lem:common_val_eq_of_index_pair_rel

Let \(\mathbf{i}_\lambda \) and \(\mathbf{j}_\lambda \) be \(k\)-indexes for each \(\lambda =1,2\) such that \((\mathbf{i}_1,\mathbf{j}_1) \sim (\mathbf{i}_2,\mathbf{j}_2)\) under the equivalence relation of Definition 4.0.51. Then \(\mathbb {E}(Y_{\mathbf{i}_1}Y_{\mathbf{j}_1}) = \mathbb {E}(Y_{\mathbf{i}_2}Y_{\mathbf{j}_2})\).

Proof

This follows a similar reasoning as in Lemma 5.1.9.

Lemma 5.2.4 R-2-4 : lem:sum_eq_sum_over_classes
\[ \operatorname{Var} \biggl(\frac{1}{n} \operatorname{Tr}(\mathbf{X}_n^k) \biggr) = \frac{1}{n^{k+2}} \sum _{(G,w,w') \in \mathcal{G}_{k,k}} \sum _{\substack {\mathbf{i},\mathbf{j} \in [n]^k \\ (G_{\mathbf{i}\# \mathbf{j}},w_\mathbf {i},w_\mathbf {j}) = (G,w,w')}} [\mathbb {E}(Y_\mathbf {i} Y_\mathbf {j}) - \mathbb {E}(Y_\mathbf {i}) \mathbb {E}(Y_\mathbf {j})]. \]
Proof

This follows from ‘partitioning’ the summation appearing in Lemma 5.2.1 using the equivalence relation defined in Definition 4.0.51.

Definition 5.2.5 R-2-5-1 : def:common_val_prod

Given \((G,w,w')\), let \((\mathbf{i},\mathbf{j})\) be an ordered pair of \(k\)-indexes generated by \(w\) and \(w'\). We define

\[ \pi (G,w,w') = \mathbb {E}(Y_\mathbf {i} Y_\mathbf {j}) - \mathbb {E}(Y_\mathbf {i}) \mathbb {E}(Y_\mathbf {j}). \]

This is independent of the choice of generating pair by Lemma 5.2.3, so \(\pi (G,w,w')\) is well-defined.
Lemma 5.2.6 R-2-5-2 : lem:inner_sum_eq_common_val_prod_mul_card
\[ \operatorname{Var} \biggl(\frac{1}{n} \operatorname{Tr}(\mathbf{X}_n^k) \biggr) = \frac{1}{n^{k+2}} \sum _{(G,w,w') \in \mathcal{G}_{k,k}} \pi (G,w,w') \cdot \# \bigl\{ (\mathbf{i},\mathbf{j}) \in [n]^{2k} : (G_{\mathbf{i} \# \mathbf{j}},w_\mathbf {i},w_\mathbf {j}) = (G,w,w') \bigr\} . \]
Proof

This follows from re-indexing the sum of Lemma 5.2.4 using Lemma 4.0.52.

Lemma 5.2.7 R-2-8 : lem:expect_mul_eq_prod_expect_edgewise_of_indep
\[ \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) = \prod _{e_s \in E^s_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e_s)}) \cdot \prod _{e_c \in E^c_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{12}^{w_{\mathbf{i} \# \mathbf{j}}(e_c)}). \]
Proof

This follows by the same argument as Lemma 5.1.7, using the independence of the random variables.

Lemma 5.2.8 R-2-12.1 : lem:expect_pow_edge_count_pair_le

For any \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\), there exists \(\lambda \in \mathbb {N}\) such that

\[ \mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq r_\lambda \]

for every \(e \in E_{\mathbf{i} \# \mathbf{j}}\).

Proof

Let \((G_{\mathbf{i}\# \mathbf{j}},w_\mathbf {i},w_\mathbf {j})\) be the ordered triple generated by the two \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\). By construction, there are finitely many edges \(e \in E_{\mathbf{i} \# \mathbf{j}}\), and hence finitely many integer values the counting function \(w_{\mathbf{i} \# \mathbf{j}}(\cdot )\) can take over \(E_{\mathbf{i} \# \mathbf{j}}\). We may therefore choose a maximum element of the set

\[ \mathscr {E} := \{ \mathbb {E} (|Y_{11}|^\lambda ) : \lambda = w_{\mathbf{i} \# \mathbf{j}}(e) \text{ for some } e \in E_{\mathbf{i} \# \mathbf{j}} \} , \]

i.e. a \(\lambda \in \mathbb {N}\) such that \(\mathbb {E}(|Y_{11}|^\lambda ) = \max \mathscr {E}\). Using the fact that \(\mathbb {E}(Z) \leq \mathbb {E}(|Z|)\) for any random variable \(Z\), for every \(e \in E_{\mathbf{i} \# \mathbf{j}}\):

\[ \mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq \mathbb {E} (|Y_{11}|^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq \mathbb {E}(|Y_{11}|^{\lambda }) \leq \max \{ \mathbb {E}(|Y_{11}|^\lambda ), \mathbb {E}(|Y_{12}|^\lambda )\} = r_\lambda . \]
Lemma 5.2.9 R-2-12.2 : lem:expect_pow_edge_count_pair_le_off

For any \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\), there exists \(\lambda \in \mathbb {N}\) such that

\[ \mathbb {E} (Y_{12}^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq r_\lambda \]

for every \(e \in E_{\mathbf{i} \# \mathbf{j}}\).

Proof

This follows an identical reasoning as in Lemma 5.2.8.

Lemma 5.2.10 R-2-13 : lem:expect_mul_le_const

For any \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\), there exists \(M_1 \in \mathbb {R}\) such that

\[ \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) \leq M_1. \]
Proof

By Lemma 5.2.7, we have

\[ \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) = \prod _{e_s \in E^s_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e_s)}) \cdot \prod _{e_c \in E^c_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{12}^{w_{\mathbf{i} \# \mathbf{j}}(e_c)}). \]

By Lemma 5.2.8, there exists \(\lambda _1 \in \mathbb {N}\) so that \(\mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq r_{\lambda _1}\) for every \(e \in E_{\mathbf{i} \# \mathbf{j}}\). On the other hand, by Lemma 5.2.9, there exists \(\lambda _2 \in \mathbb {N}\) so that \(\mathbb {E} (Y_{12}^{w_{\mathbf{i} \# \mathbf{j}}(e)}) \leq r_{\lambda _2}\) for every \(e \in E_{\mathbf{i} \# \mathbf{j}}\). The joined walk has length \(2k\), so there are at most \(2k\) edges and hence at most \(2k\) factors in the product. Then

\[ \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) = \prod _{e_s \in E^s_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{11}^{w_{\mathbf{i} \# \mathbf{j}}(e_s)}) \cdot \prod _{e_c \in E^c_{\mathbf{i} \# \mathbf{j}}} \mathbb {E} (Y_{12}^{w_{\mathbf{i} \# \mathbf{j}}(e_c)}) \leq \max \{ r_{\lambda _1},r_{\lambda _2},1\} ^{2k}. \]

Setting \(M_1 := \max \{ r_{\lambda _1},r_{\lambda _2},1\} ^{2k}\) proves the claim.

Lemma 5.2.11 R-2-14 : lem:expect_pow_edge_count_le

For any \(k\)-index \(\mathbf{i}\), there exists \(\lambda \in \mathbb {N}\) such that

\[ \mathbb {E} (Y_{11}^{w_{\mathbf{i}}(e)}) \leq r_\lambda \]

for every \(e \in E_{\mathbf{i}}\).

Proof

This follows a similar reasoning as in Lemma 5.2.8 with the counting function \(w_\mathbf {i}(\cdot )\).

Lemma 5.2.12 R-2-15 : lem:expect_pow_edge_count_le_off

For any \(k\)-index \(\mathbf{i}\), there exists \(\lambda \in \mathbb {N}\) such that

\[ \mathbb {E} (Y_{12}^{w_{\mathbf{i}}(e)}) \leq r_\lambda \]

for every \(e \in E_\mathbf {i}\).

Proof

This follows a similar reasoning as in Lemma 5.2.8 with the counting function \(w_\mathbf {i}(\cdot )\).

Lemma 5.2.13 R-2-16 : lem:expect_le_const

For any \(k\)-index \(\mathbf{i}\), there exists \(M_2 \in \mathbb {R}\) such that

\[ \mathbb {E} (Y_\mathbf {i}) \leq M_2. \]
Proof

This follows the same reasoning as Lemma 5.2.10, using Lemma 5.2.11 and Lemma 5.2.12.

Lemma 5.2.14 R-2-17 : lem:abs_expect_mul_sub_mul_expect_le

For any \(k\)-indexes \(\mathbf{i}\) and \(\mathbf{j}\), there exists \(M_{2k} \in \mathbb {R}_{\geq 0}\) such that

\[ | \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) - \mathbb {E}(Y_\mathbf {i}) \mathbb {E}(Y_\mathbf {j}) | \leq 2 M_{2k}. \]
Proof

Combining the result of Lemma 5.2.10 and Lemma 5.2.13 gives

\[ | \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) - \mathbb {E}(Y_\mathbf {i}) \mathbb {E}(Y_\mathbf {j}) | \leq | \mathbb {E} (Y_\mathbf {i}Y_\mathbf {j}) | + | \mathbb {E}(Y_\mathbf {i})| | \mathbb {E}(Y_\mathbf {j}) | \leq M_1 + M_2^{(\mathbf{i})} \cdot M_2^{(\mathbf{j})}, \]

where \(M_2^{(\mathbf{i})}\) and \(M_2^{(\mathbf{j})}\) are the upper bounds obtained from Lemma 5.2.13 with respect to \(\mathbf{i}\) and \(\mathbf{j}\), respectively. Setting \(M_{2k} := \max \{ M_1, M_2^{(\mathbf{i})} \cdot M_2^{(\mathbf{j})} \} \) proves the claim.

Proposition 5.2.15
\[ \operatorname{Var}\left( \frac1n\operatorname{Tr}(X_n^k)\right) = \sum _{(G,w,w')\in \mathcal{G}_{k,k}\atop w+w'\ge 2} \pi (G,w,w')\cdot \frac{\# \left\{ (i,{j})\in [n]^{2k}\colon (G_{i\# j},w_{i},w_{j}) = (G,w,w')\right\} }{n^{k+2}}. \]
Proof

By construction, every edge in the joined graph \(G_{{i}\# {j}}\) is traversed at least once by the union of the two walks \(w_{i}\) and \(w_{j}\). Suppose that \(e\) is an edge that is traversed only once. Then \(w_{{i}\# {j}}(e) = 1\), so the two values \(w_{i}(e), w_{j}(e)\) are \(0\) and \(1\) in some order. Hence, the above expansion and the fact that \(\mathbb {E}(Y_{11})=\mathbb {E}(Y_{12})=0\) show that \(\pi (G_{{i}\# {j}},w_{i},w_{j}) = 0\) in this case.

Lemma 5.2.16

\[ \operatorname{Var}\left( \frac1n\operatorname{Tr}({X}_n^k)\right) = \sum _{(G,w,w')\in \mathcal{G}_{k,k}\atop w+w'\ge 2} \pi (G,w,w')\cdot \frac{n(n-1)\cdots (n-|G|+1)}{n^{k+2}}. \]
Proof

This follows from Proposition 5.2.15 together with the counting argument used in Lemma 5.1.13.

Lemma 5.2.17

\[ \operatorname{Var}\left( \frac1n\operatorname{Tr}({X}_n^k)\right) \le \sum _{(G,w,w')\in \mathcal{G}_{k,k}\atop w+w'\ge 2} \pi (G,w,w')\cdot \frac{n^{k+1}}{n^{k+2}} \le \frac1n\cdot 2M_{2k}\cdot \# \mathcal{G}_{k,k}. \]
Proof

Appealing again to 4.0.5, it follows that \(|G|\le k+1\). Hence, \(n(n-1)\cdots (n-|G|+1) \le n^{|G|} \le n^{k+1}\). The second inequality then follows from the bound \(|\pi (G,w,w')| \le 2M_{2k}\) of Lemma 5.2.14 and the finiteness of \(\mathcal{G}_{k,k}\).

Theorem 5.2.18

\(\operatorname{Var}\left(\frac1n\operatorname{Tr}(\mathbf{X}_n^k)\right) = O_k(1/n)\).

Proof

The (potentially enormous) number \(B_k=2M_{2k}\cdot \# \mathcal{G}_{k,k}\) is independent of \(n\), so Lemma 5.2.17 shows that the variance is \(O_k(1/n)\).

Lemma 5.2.19
\[ \operatorname{Var}\left( \frac1n\operatorname{Tr}(\mathbf{X}_n^k)\right) \le \sum _{(G,w,w')\in \mathcal{G}_{k,k}\atop w+w'\ge 2, |G| \leq k} \pi (G,w,w')\cdot \frac{n^{k}}{n^{k+2}} \le \frac{1}{n^2}\cdot 2M_{2k}\cdot \# \mathcal{G}_{k,k}. \]
Proposition 5.2.20 Proposition 4.5 in [1]

Let \(\{ Y_{ij}\} _{1 \leq i \leq j}\) be independent random variables, with \(\{ Y_{ii}\} _{i\geq 1}\) identically distributed and \(\{ Y_{ij}\} _{1 \leq i {\lt} j}\) identically distributed. Suppose that \(r_k = \max \{ \mathbb {E}(|Y_{11}|^k),\mathbb {E}(|Y_{12}|^k)\} {\lt} \infty \) for each \(k\in \mathbb {N}\). Suppose further that \(\mathbb {E}(Y_{ij})=0\) for all \(i,j\). If \(i{\gt}j\), define \(Y_{ij} \equiv Y_{ji}\), and let \(\mathbf{Y}_n\) be the \(n\times n\) matrix with \([\mathbf{Y}_n]_{ij} = Y_{ij}\) for \(1\le i,j\le n\). Let \(\mathbf{X}_n = n^{-1/2}\mathbf{Y}_n\) be the corresponding Wigner matrix. Then

\[ \operatorname {Var}\left(\frac{1}{n}\operatorname{Tr}(\mathbf{X}_{n}^{k})\right) = O_{k}\left(\frac{1}{n^{2}}\right) \]
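A hedged Monte Carlo illustration of the \(O_k(1/n^2)\) rate (Gaussian entries assumed; the quantity \(n^2 \cdot \operatorname{Var}\) should stay roughly constant as \(n\) grows):

```python
import numpy as np

def trace_moment_samples(n: int, k: int, N: int, rng) -> np.ndarray:
    """Samples of (1/n) Tr(X_n^k) for Gaussian Wigner matrices."""
    out = np.empty(N)
    for s in range(N):
        A = rng.standard_normal((n, n))
        Y = np.triu(A) + np.triu(A, 1).T
        X = Y / np.sqrt(n)
        out[s] = np.trace(np.linalg.matrix_power(X, k)) / n
    return out

rng = np.random.default_rng(8)
for n in (25, 50, 100):
    v = trace_moment_samples(n, 4, 2000, rng).var()
    print(n, v, v * n ** 2)  # n^2 * Var should stay roughly constant
```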