Boundary Conditions and Violations of Bulk-Edge Correspondence in a Hydrodynamic Model

Further Facts on Boundary Conditions

This appendix recalls the results of [19] on self-adjointness of \( 2 \times 6 \) boundary conditions A, cf. Definition 2.2, previously displayed in Sect. 4.4. Next, proofs are provided for Lemma 4.4 and Propositions 4.5, 4.6, 4.7. We then turn our attention to the failure of self-adjointness at a single fiber \( k_x \), signaled by \( A(k_x) \) failing to have maximal rank. Whether a boundary condition A has maximal rank a.e. or everywhere in \( k_x \) is the question tackled in Proposition A.7 below.

As anticipated, we start by reviewing the formalism and results of [19]. Recall Eq. (2.15): A scattering state \( \psi_s = \tilde{\psi}_s(y)\, \mathrm{e}^{\mathrm{i}(k_x x - \omega t)} \), cf. (4.16) in Definition 4.1, satisfies the boundary condition \( A(k_x) \) if

$$A(k_x)\, \Psi = 0 \,,$$

(A.1)

where \( \Psi = (\tilde{\psi}_s(0), \tilde{\psi}_s'(0))^T \) and

$$A(k_x) = A_c + k_x A_k \,,$$

(A.2)

as per Eq. (4.35). The matrix A has precisely two rows because two equations suffice to uniquely identify a solution \( \varphi \in L^2(\mathbb{R}_+)^3 \) of \( H(k_x)\varphi = \omega\varphi \), cf. Eq. (2.14). On the other hand, one equation is not enough, so that

$$\operatorname{rank} A(k_x) \overset{!}{=} 2$$

(A.3)

a.e. in \( k_x \). Differently put, all self-adjoint boundary conditions are represented by matrices A of maximal rank.

It is proven in [19] that a given BC \( A: k_x \mapsto A(k_x) \), cf. again Definition 2.2, encodes a self-adjoint realization \( H^A \) of the formal differential operator \( H(k_x) \) if and only if

$$A(k_x)\, N = 0 \,, \qquad A(k_x)\, \tilde{N}\, A^*(k_x) = 0 \,,$$

(A.4a)

$$N = \begin{pmatrix} \nu & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{pmatrix} \,, \qquad \tilde{N} = \begin{pmatrix} 0 & 0 & -\lambda & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\nu^2 \\ -\lambda & 0 & 0 & 0 & \lambda\nu & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \lambda\nu & 0 & 0 & 0 \\ 0 & -\nu^2 & 0 & 0 & 0 & 0 \end{pmatrix} \,,$$

(A.4b)

almost everywhere in \( k_x \), where \( \lambda \) is the model constant specified in [19]. Equivalently, cf. (A.2),

$$A_c N = 0 \,, \qquad A_k N = 0 \,,$$

(A.5a)

$$A_c \tilde{N} A_c^* = 0 \,, \qquad A_c \tilde{N} A_k^* - A_k \tilde{N} A_c^* = 0 \,, \qquad A_k \tilde{N} A_k^* = 0 \,.$$

(A.5b)

Equation (A.5a) implies that two out of the six columns of A are redundant, so that one may generically write

$$A_c = (a_1, \ a_1', \ a_2', \ 0, \ -\nu a_1, \ \nu a_2) \,, \qquad A_k = (0, \ b_1, \ b_2, \ 0, \ 0, \ 0) \,,$$

(A.6)

where \( a_1, a_2, a_1', a_2', b_1, b_2 \in \mathbb{C}^2 \) and \( (\cdot, \cdot) \) denotes the horizontal juxtaposition of two (or more) matrices. That is, if X is an \( l \times m \) matrix and Y is an \( l \times n \) matrix, then \( Z = (X, Y) \) is the \( l \times (m+n) \) matrix built up by “gluing” Y to the right of X. Equation (A.6) is equivalent to (4.39), whence our claim (4.41) on the general form of \( A_0, A_x, A_y \) follows.

The structure (A.6) of A, with only four independent columns, prompts the simpler rewriting as a \( 2 \times 4 \) matrix \( \underline{A} \)

$$\underline{A}(k_x) = \underline{A}_c + k_x \underline{A}_k \,, \qquad \underline{A}_c = (a_1, \ a_1', \ a_2', \ a_2) \,, \quad \underline{A}_k = (0, \ b_1, \ b_2, \ 0) \,,$$

(A.7)

and the matrix M of (4.33) is read off by direct comparison of \( A_c, A_k \) with \( \underline{A}_c, \underline{A}_k \), cf. (A.6, A.7).

Any BC \( A = \underline{A} M \), with \( \underline{A} \) as in Eq. (A.7), satisfies (A.5a) by construction. On the other hand, (A.5b) is yet to be imposed on \( \underline{A} \). Again by \( A = \underline{A} M \), it translates to

$$A(k_x)\, \tilde{N}\, A^*(k_x) = \underline{A}(k_x)\, M \tilde{N} M^*\, \underline{A}^*(k_x) \equiv -\underline{A}(k_x)\, \Omega\, \underline{A}^*(k_x) \overset{!}{=} 0$$

(A.8)

almost everywhere in \( k_x \), where

$$\Omega := -M \tilde{N} M^* = \begin{pmatrix} 0 & \mathbb{1}_2 \\ \mathbb{1}_2 & 0 \end{pmatrix} \,,$$

(A.9)

the second identity following from Eqs. (4.33, A.4b) and matrix multiplication. By (A.8), we have thus recovered (4.43).

We can now move on to proving Lemma 4.4 and Propositions 4.5, 4.6, starting with the former.

Proof of Lemma 4.4

Recall Eq. (4.41), namely

$$A_y = (0, \ -\nu a_1, \ \nu a_2) \,.$$

(A.10)

Then,

DD:

\( \operatorname{rank}(A_y) = 0 \) iff \( \operatorname{rank}(-\nu a_1, \ \nu a_2) = 0 \), i.e., \( a_1 = a_2 = 0 \);

NN:

\( \operatorname{rank}(A_y) = 2 \) iff \( \operatorname{rank}(-\nu a_1, \ \nu a_2) = 2 \), i.e., \( a_1, a_2 \) linearly independent.

If \( \operatorname{rank}(A_y) = 1 \), there exists an invertible matrix

$$G = \begin{pmatrix} g_1^T \\ g_2^T \end{pmatrix} \,,$$

(A.11)

with \( g_1, g_2 \in \mathbb{C}^2 \) linearly independent, such that

$$G\, (-\nu a_1, \ \nu a_2) = \begin{pmatrix} -\nu\, g_1 \cdot a_1 & \nu\, g_1 \cdot a_2 \\ -\nu\, g_2 \cdot a_1 & \nu\, g_2 \cdot a_2 \end{pmatrix}$$

(A.12)

has one row equal to zero (w.l.o.g., the second one), where \( \cdot \) denotes the inner product in \( \mathbb{C}^2 \). If the second row is zero, then \( a_1, a_2 \perp g_2 \) in \( \mathbb{C}^2 \), and thus \( a_1 \parallel a_2 \), i.e., they are linearly dependent.

The further distinction between ND and DN arises as follows. Since \( g_1 \) is linearly independent from \( g_2 \) and non-zero, the inner products \( g_1 \cdot a_1, g_1 \cdot a_2 \) appearing in the first row vanish only for \( a_1 = 0 \) or \( a_2 = 0 \), respectively. If a term in \( \partial_y u \) persists in the BC after elimination of the second row, then \( [G A_y]_{12} \ne 0 \), whence \( g_1 \cdot a_1 \ne 0 \) and thus \( a_1 \ne 0 \). This characterizes family ND. Family DN is obtained in the same way, requiring \( \partial_y v \) to persist. \(\square \)
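The rank classification of Lemma 4.4 can be checked numerically. Below is a minimal sketch, assuming NumPy; the function name `family` and the sample value of \( \nu \) are ours, for illustration only.

```python
import numpy as np

nu = 0.7  # hypothetical value of the parameter nu > 0

def family(a1, a2):
    """Classify a BC by the rank of (-nu*a1, nu*a2), the nontrivial part of
    A_y = (0, -nu*a1, nu*a2), as in Lemma 4.4 (illustrative only)."""
    r = np.linalg.matrix_rank(np.column_stack([-nu * a1, nu * a2]))
    return {0: "DD", 2: "NN"}.get(r, "ND/DN")

# DD: a_1 = a_2 = 0
assert family(np.zeros(2), np.zeros(2)) == "DD"
# NN: a_1, a_2 linearly independent
assert family(np.array([1.0, 0.0]), np.array([0.0, 1.0])) == "NN"
# ND or DN: a_1 and a_2 parallel, not both zero
assert family(np.array([1.0, 2.0]), 3.0 * np.array([1.0, 2.0])) == "ND/DN"
```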

Proposition 4.5 reduces to two simpler propositions by means of the following reasoning. The condition to be met is

$$A_1(k_x) A_2^*(k_x) + A_2(k_x) A_1^*(k_x) = 0$$

(A.13)

a.e. in \( k_x \), cf. (4.44). Recall the decomposition \( A_i(k_x) = A_i^0 + k_x B_i \), cf. (4.37). If one lets \( |1\rangle = (1,0)^T, \ |2\rangle = (0,1)^T \), then

$$A_1^0 = |a_1\rangle\langle 1| + |a_1'\rangle\langle 2| \,, \quad A_2^0 = |a_2\rangle\langle 2| + |a_2'\rangle\langle 1| \,, \quad B_1 = |b_1\rangle\langle 2| \,, \quad B_2 = |b_2\rangle\langle 1| \,.$$

(A.14)

The l.h.s. of (A.13) can be rewritten as

$$A_1(k_x) A_2^*(k_x) + A_2(k_x) A_1^*(k_x) \equiv C_0 + C_1 k_x + C_2 k_x^2 \,,$$

(A.15)

and it equals zero a.e. in \( k_x \) only if the (matrix) coefficients

$$C_0 = A_1^0 A_2^{0\,*} + \text{h.c.} \,, \qquad C_1 = B_1 A_2^{0\,*} + A_1^0 B_2^* + \text{h.c.} \,, \qquad C_2 = B_1 B_2^* + \text{h.c.}$$

(A.16)

are identically zero (h.c. standing for “hermitian conjugate”). That \( C_2 \equiv 0 \) for all \( b_1, b_2 \) is immediate from (A.14). Whether or not \( C_0 = 0 \) (\( C_1 = 0 \)) is the content of Proposition A.2 (Proposition A.3). Their proofs rest on the following preliminary claim, to be proven later.
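The decomposition (A.15, A.16), and in particular \( C_2 \equiv 0 \), can be verified numerically for random data. A minimal sketch, assuming NumPy; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
cvec = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)
outer = lambda u, v: np.outer(u, v.conj())   # |u><v|

e1, e2 = np.eye(2)                           # |1>, |2>
a1, a1p, a2, a2p, b1, b2 = (cvec() for _ in range(6))

A10 = outer(a1, e1) + outer(a1p, e2)         # A_1^0, cf. (A.14)
A20 = outer(a2, e2) + outer(a2p, e1)         # A_2^0
B1, B2 = outer(b1, e2), outer(b2, e1)

def lhs(kx):
    A1, A2 = A10 + kx * B1, A20 + kx * B2
    return A1 @ A2.conj().T + A2 @ A1.conj().T

# extract the coefficients C_0, C_1, C_2 of (A.15) by exact finite differences
C0 = lhs(0.0)
C2 = (lhs(1.0) + lhs(-1.0) - 2 * C0) / 2
C1 = lhs(1.0) - C0 - C2

assert np.allclose(C2, 0)                    # C_2 vanishes for all b_1, b_2
assert np.allclose(C0, A10 @ A20.conj().T + A20 @ A10.conj().T)
assert np.allclose(C1, B1 @ A20.conj().T + A10 @ B2.conj().T
                       + B2 @ A10.conj().T + A20 @ B1.conj().T)
```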

Claim A.1 (i)

If \( \varphi_i \in \mathbb{C}^2 \ (i = 1,2) \) are linearly independent, then so are the four operators \( |\varphi_i\rangle\langle\varphi_j| \ (i,j = 1,2) \).

(ii)

Let \( \mathbb{C}^2 \ni \psi \ne 0 \). Then,

$$|\varphi\rangle\langle\psi| + |\psi\rangle\langle\varphi| = 0$$

(A.17)

iff

$$|\varphi\rangle = \mathrm{i}\lambda\, |\psi\rangle \,,$$

(A.18)

for some \( \lambda \in \mathbb{R} \).

Proposition A.2

Depending on the cases DD through NN, cf. Lemma 4.4, the equation

$$\begin C_0 = 0 \end$$

(A.19)

amounts to

DD:

\( a_1', a_2' \in \mathbb{C}^2 \) (arbitrary);

ND:

For some \( \lambda' \in \mathbb{R} \),

$$\bar{\alpha}\, a_1' + a_2' = \mathrm{i}\lambda'\, a_1 \,;$$

(A.20)

DN:

Same as ND up to \( a_1 \leftrightarrow a_2 \);

NN:

For some \( \mu' \in \mathbb{C} \) and \( \lambda_1', \lambda_2' \in \mathbb{R} \),

$$a_1' = \mu'\, a_1 + \mathrm{i}\lambda_1'\, a_2 \,, \qquad a_2' = -\bar{\mu}'\, a_2 + \mathrm{i}\lambda_2'\, a_1 \,.$$

(A.21)

Proof

Using the previously introduced Dirac notation, \( C_0 = 0 \) reads

$$|a_1'\rangle\langle a_2| + |a_2\rangle\langle a_1'| + |a_2'\rangle\langle a_1| + |a_1\rangle\langle a_2'| = 0 \,.$$

(A.22)

Consider now cases DD–NN.

DD:

Is evident;

ND:

Since in case ND one has \( a_2 = \alpha a_1 \) for some \( \alpha \in \mathbb{C} \), Eq. (A.22) becomes

$$\big( \bar{\alpha}\, |a_1'\rangle + |a_2'\rangle \big) \langle a_1| + |a_1\rangle \big( \alpha\, \langle a_1'| + \langle a_2'| \big) = 0 \,.$$

(A.23)

By (ii) of Claim A.1:

$$\bar{\alpha}\, |a_1'\rangle + |a_2'\rangle = \mathrm{i}\lambda'\, |a_1\rangle \,,$$

(A.24)

as wished for;

DN:

Same as ND up to \( a_1 \leftrightarrow a_2 \);

NN:

Here, \( a_1, a_2 \) are a basis of \( \mathbb{C}^2 \), whence

$$|a_1'\rangle = \mu_1\, |a_1\rangle + \gamma_1\, |a_2\rangle \,, \qquad |a_2'\rangle = \mu_2\, |a_2\rangle + \gamma_2\, |a_1\rangle \,,$$

(A.25)

\( \mu_i, \gamma_i \in \mathbb{C} \).

Then,

$$|a_1'\rangle\langle a_2| + |a_2\rangle\langle a_1'| = \mu_1\, |a_1\rangle\langle a_2| + \bar{\mu}_1\, |a_2\rangle\langle a_1| + (\gamma_1 + \bar{\gamma}_1)\, |a_2\rangle\langle a_2| \,,$$

(A.26)

and likewise for the other two terms of Eq. (A.22).

In view of linear independence of \( a_1, a_2 \), the equations

$$\mu_1 = -\bar{\mu}_2 \,, \qquad \gamma_1 + \bar{\gamma}_1 = 0 \,, \qquad \gamma_2 + \bar{\gamma}_2 = 0$$

(A.27)

follow by (i) of Claim A.1. Therefore, setting

$$\mu_1 = -\bar{\mu}_2 \equiv \mu' \,, \qquad \gamma_1 = \mathrm{i}\lambda_1' \,, \qquad \gamma_2 = \mathrm{i}\lambda_2' \,,$$

(A.28)

the desired conclusion is read from (A.25).

\(\square \)
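The NN parametrization (A.21) can be checked to annihilate (A.22) numerically. A minimal sketch, assuming NumPy; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
outer = lambda u, v: np.outer(u, v.conj())        # |u><v|

a1 = np.array([1.0, 0.5j])                        # a linearly independent pair
a2 = np.array([0.2, 1.0 + 0j])                    # (family NN)
mu_p = rng.normal() + 1j * rng.normal()           # mu' in C
l1p, l2p = rng.normal(), rng.normal()             # lambda_1', lambda_2' in R

a1p = mu_p * a1 + 1j * l1p * a2                   # parametrization (A.21)
a2p = -np.conj(mu_p) * a2 + 1j * l2p * a1

C0 = (outer(a1p, a2) + outer(a2, a1p)
      + outer(a2p, a1) + outer(a1, a2p))          # l.h.s. of (A.22)
assert np.allclose(C0, 0)
```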

Proposition A.3

Depending on the cases DD through NN, cf. Lemma 4.4, the equation

$$C_1 = 0 \,,$$

(A.29)

with \( C_1 \) as in (A.16), amounts to

DD:

\( b_1, b_2 \in \mathbb{C}^2 \) (arbitrary);

ND:

For some \( \lambda \in \mathbb{R} \),

$$\bar{\alpha}\, b_1 + b_2 = \mathrm{i}\lambda\, a_1 \,;$$

(A.30)

DN:

Same as ND up to \( a_1 \leftrightarrow a_2 \);

NN:

For some \( \mu \in \mathbb{C} \) and \( \lambda_1, \lambda_2 \in \mathbb{R} \),

$$b_1 = \mu\, a_1 + \mathrm{i}\lambda_1\, a_2 \,, \qquad b_2 = -\bar{\mu}\, a_2 + \mathrm{i}\lambda_2\, a_1 \,.$$

(A.31)

Proof

Equation (A.29) consists of two halves with interchanged indices \( 1 \leftrightarrow 2 \). It therefore suffices to study

$$B_1 A_2^{0\,*} + A_1^0 B_2^* = 0$$

(A.32)

in the various cases, together with its hermitian conjugate. Switching to Dirac notation, Eq. (A.29) reads

$$|b_1\rangle\langle a_2| + |a_2\rangle\langle b_1| + |b_2\rangle\langle a_1| + |a_1\rangle\langle b_2| = 0 \,.$$

(A.33)

The latter is formally equivalent to (A.22) upon the identification \( b_1 \leftrightarrow a_1' \), \( b_2 \leftrightarrow a_2' \). The claims follow by the same reasoning as in Proposition A.2. \(\square \)

Once the proof of Claim A.1 is given, considering Propositions A.2 and A.3 together yields Proposition 4.5 as a corollary.

Proof of Claim A.1 (i)

Let \( \langle \varphi _i | \) be the dual basis of \( | \varphi _i \rangle \), i.e.

$$\langle \varphi_i | \varphi_j \rangle = \delta_{ij} \,.$$

(A.34)

Then,

$$\sum_{i,j} \lambda_{ij}\, |\varphi_i\rangle\langle\varphi_j| = 0$$

(A.35)

implies \( \lambda_{ij} = 0 \) by taking the matrix element between \( \langle \varphi_i | \) and \( |\varphi_j\rangle \).

(ii)

Let \( \psi \ne 0 \) and assume \( |\psi\rangle = \gamma\, |\varphi\rangle \), \( \gamma \in \mathbb{C} \). Then,

$$|\varphi\rangle\langle\psi| + |\psi\rangle\langle\varphi| = \bar{\gamma}\, |\varphi\rangle\langle\varphi| + \gamma\, |\varphi\rangle\langle\varphi| = 0 \ \Leftrightarrow \ \gamma = \mathrm{i}\lambda \,, \ \lambda \in \mathbb{R} \,.$$

(A.36)

Thus, \( |\varphi\rangle = \mathrm{i}\lambda\, |\psi\rangle \) (after redefining \( \lambda \)), implying

$$\begin | \varphi \rangle \langle \psi | + | \psi \rangle \langle \varphi | = 0 \,. \end$$

(A.37)

The other direction follows from point (i).

\(\square \)
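Claim A.1 (ii) is easily illustrated numerically: an imaginary multiple satisfies (A.17), while a multiple with nonzero real part does not. A minimal sketch, assuming NumPy; the sample vectors are ours.

```python
import numpy as np

outer = lambda u, v: np.outer(u, v.conj())   # |u><v|

psi = np.array([1.0, 2.0 - 1.0j])
lam = 0.8                                    # any real lambda
phi = 1j * lam * psi                         # |phi> = i*lambda*|psi>, cf. (A.18)

# (A.17) holds for an imaginary multiple ...
assert np.allclose(outer(phi, psi) + outer(psi, phi), 0)
# ... and fails for a multiple with nonzero real part
phi_bad = (1 + 1j) * psi
assert not np.allclose(outer(phi_bad, psi) + outer(psi, phi_bad), 0)
```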

Proposition 4.6 is a statement about the particle–hole symmetric subsets of families DD–NN. However, by Definition 2.7 and Eq. (2.28), particle–hole symmetry is a property of the orbit \( [\underline{A}] \), cf. (2.16), under point-wise (in \( k_x \)) multiplication by \( \operatorname{GL}(2,\mathbb{C}) \), rather than of the single map \( k_x \mapsto \underline{A}(k_x) \). Equivalently, it is a property of the von Neumann unitary \( k_x \mapsto U(k_x) \), cf. Definition 2.2, once the claimed bijective correspondence between U and \( [\underline{A}] \) is proven.

We thus adopt the following strategy. First, the correspondence \( U \leftrightarrow [\underline{A}] \) is shown. Then, the image of the four families DD–NN under \( \upsilon : [\underline{A}] \mapsto U \), cf. (4.51), is found by explicit calculation. Particle–hole symmetry is imposed on the resulting \( k_x \mapsto U(k_x) \) by

$$U(k_x) = \overline{U(-k_x)} \,,$$

(A.38)

cf. (2.28), completing the proof of Proposition 4.7. The constraints stemming from (A.38) are finally pulled back to \( k_x \mapsto \underline{A}(k_x) \), yielding Proposition 4.6.

Lemma A.4

The map \( \upsilon : \underline{A} \mapsto U \), defined by Eq. (4.51), between boundary conditions \( \underline{A} \) and von Neumann unitaries U, cf. Definition 2.2, is:

(i)

Well-defined as a map on orbits \( [\underline{A}] \), cf. Eq. (2.16);

(ii)

A bijection \( [\underline{A}] \leftrightarrow U \).

This Lemma rests on the following claim.

Claim A.5

Let

$$\underline{A} = (A_1, A_2) \in \operatorname{Mat}_{2 \times 4}(\mathbb{C}) \,,$$

(A.39)

with \( A_i \in \operatorname{Mat}_2(\mathbb{C}) \ (i = 1,2) \) and

$$A_1 A_2^* + A_2 A_1^* = 0 \,.$$

(A.40)

Then any one of \( \operatorname{rank} \underline{A} = 2 \), \( \operatorname{rank}(A_1 + A_2) = 2 \), \( \operatorname{rank}(A_1 - A_2) = 2 \) implies the other two.

Proof

For any matrix M, we have

$$\operatorname{rank} M = \operatorname{rank}(M M^*) \,,$$

(A.41)

both being equal to \( \operatorname{rank} M^* \). Moreover, for any matrix M having two rows, we have

$$\operatorname{rank}(M M^*) = 2 \ \iff \ \det(M M^*) \ne 0 \,,$$

(A.42)

since \( M M^* \) is a square matrix of order 2. We note that

$$\underline{A}\, \underline{A}^* = A_1 A_1^* + A_2 A_2^* \,, \qquad (A_1 \pm A_2)(A_1 \pm A_2)^* = A_1 A_1^* + A_2 A_2^* \,,$$

(A.43)

where the first equation comes from (A.39) and the second one from (A.40).

Applying now the above to \( M = \underline{A} \) and \( M = A_1 + A_2 \), or to \( M = \underline{A} \) and \( M = A_1 - A_2 \), the claim follows. \(\square \)
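An instance of Claim A.5 can be produced numerically: the pair \( A_1 = \mathbb{1} \), \( A_2 = \mathrm{i}H \) with H hermitian satisfies (A.40), and all three ranks equal 2. A minimal sketch, assuming NumPy; the construction is ours, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = X + X.conj().T                     # hermitian

A1 = np.eye(2)                         # a pair satisfying (A.40):
A2 = 1j * H                            # A_1 A_2^* + A_2 A_1^* = -iH + iH = 0
assert np.allclose(A1 @ A2.conj().T + A2 @ A1.conj().T, 0)

Aul = np.hstack([A1, A2])              # underline{A} = (A_1, A_2), 2 x 4
for M in (Aul, A1 + A2, A1 - A2):
    assert np.linalg.matrix_rank(M) == 2
```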

Proof of Lemma A.4 (i)

The map \( \upsilon \) is well-defined on orbits because

$$\upsilon(G \underline{A}) = (G A_1 + G A_2)^{-1}(G A_1 - G A_2) = (A_1 + A_2)^{-1} G^{-1} G (A_1 - A_2) = (A_1 + A_2)^{-1}(A_1 - A_2) = \upsilon(\underline{A}) \,,$$

(A.44)

namely \( \underline{A} \) and \( G\underline{A} \) share the same image through \( \upsilon \), for all \( G: k_x \mapsto G(k_x) \) with \( G(k_x) \) invertible a.e. in \( k_x \).

(ii)

By picking a target space coincident with its range, we make \( \upsilon : [\underline{A}] \mapsto U \) surjective by construction. All that is left to check is injectivity, which holds iff

$$\upsilon(\underline{A}) = U = \tilde{U} = \upsilon(\underline{\tilde{A}}) \ \Rightarrow \ \underline{A}, \underline{\tilde{A}} \in [\underline{A}] \,.$$

(A.45)

The equality \( U = \tilde{U} \) means

$$(A_1 + A_2)^{-1}(A_1 - A_2) = (\tilde{A}_1 + \tilde{A}_2)^{-1}(\tilde{A}_1 - \tilde{A}_2) \,.$$

(A.46)

By assumption of self-adjointness, \( \underline{A}, \underline{\tilde{A}} \) have maximal rank a.e. in \( k_x \). Then, by Claim A.5, so do \( A_1 + A_2 \) and \( \tilde{A}_1 + \tilde{A}_2 \). Therefore, there exists \( G: k_x \mapsto G(k_x) \), \( G(k_x) \) invertible a.e. in \( k_x \), such that

$$(A_1 + A_2) = G\, (\tilde{A}_1 + \tilde{A}_2) \,.$$

(A.47)

Indeed, G is explicitly given by

$$G = (A_1 + A_2)(\tilde{A}_1 + \tilde{A}_2)^{-1} \,.$$

(A.48)

Writing \( A_1+A_2 \) as in (A.47) inside of (A.46) results in

$$(\tilde{A}_1 + \tilde{A}_2)^{-1} G^{-1} (A_1 - A_2) = (\tilde{A}_1 + \tilde{A}_2)^{-1}(\tilde{A}_1 - \tilde{A}_2) \quad \Longleftrightarrow \quad (A_1 - A_2) = G\, (\tilde{A}_1 - \tilde{A}_2) \,.$$

(A.49)

Summing and subtracting Eqs. (A.47), (A.49) yields

$$A_1 = G \tilde{A}_1 \quad \wedge \quad A_2 = G \tilde{A}_2 \,,$$

(A.50)

namely \( \underline{A} = G\, \underline{\tilde{A}} \), or \( \underline{\tilde{A}} = G^{-1} \underline{A} \) with G invertible, as desired.

\(\square \)
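The orbit invariance proven in (A.44) is easy to confirm numerically: \( \upsilon \) returns the same unitary for \( \underline{A} \) and \( G\underline{A} \). A minimal sketch at a fixed fiber, assuming NumPy; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)
cmat = lambda: rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

def upsilon(A1, A2):
    """(A_1 + A_2)^{-1} (A_1 - A_2), cf. (4.51)."""
    return np.linalg.solve(A1 + A2, A1 - A2)

A1, A2, G = cmat(), cmat(), cmat()     # G invertible (generically)
U = upsilon(A1, A2)
UG = upsilon(G @ A1, G @ A2)
assert np.allclose(U, UG)              # upsilon depends only on the orbit
```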

The calculations leading to the proof of Proposition 4.7 are simplified by recalling the following elementary facts of linear algebra, gathered in a claim. We recall the adjugate of a matrix \( M \in \operatorname{Mat}_n(\mathbb{C}) \) for \( n = 2 \):

$$M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \,, \qquad \operatorname{adj}(M) = \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \,.$$

(A.51)

Claim A.6

The operation \( \operatorname{adj} \) enjoys the following properties:

1.

\( M^{-1} = (\det M)^{-1} \operatorname{adj}(M) \), for all \( M \in \operatorname{GL}(n, \mathbb{C}) \);

2.

\( \operatorname{adj}(M + N) = \operatorname{adj}(M) + \operatorname{adj}(N) \), for all \( M, N \in \operatorname{Mat}_2(\mathbb{C}) \);

3.

Let \( M, N \in \operatorname{Mat}_2(\mathbb{C}) \) be given in terms of column vectors \( m_i, n_i \in \mathbb{C}^2 \ (i = 1,2) \), i.e., \( M = (m_1, m_2) \) and \( N = (n_1, n_2) \). Then,

$$\operatorname{adj}(M)\, N - \operatorname{adj}(N)\, M = \begin{pmatrix} -m_1 \wedge n_2 - m_2 \wedge n_1 & -2\, m_2 \wedge n_2 \\ 2\, m_1 \wedge n_1 & m_1 \wedge n_2 + m_2 \wedge n_1 \end{pmatrix} \,,$$

(A.52)

where

$$z \wedge w = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \wedge \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} := \det(z, w) = z_1 w_2 - w_1 z_2 \,,$$

(A.53)

is the cross product \( \wedge : \mathbb{C}^2 \times \mathbb{C}^2 \rightarrow \mathbb{C} \).

Proof

1. is standard, and 2. is immediate for \( n = 2 \), since \( \operatorname{adj}(M) \) is then linear in the entries of M.

3. Upon writing \( m_i = (m_{1i}, m_{2i})^T \) in terms of components, so that \( M = (m_1, m_2) \) has entries \( m_{jk} \) as in (A.51), and likewise for N, the property is shown by direct calculation:

$$\operatorname{adj}(M)\, N = \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix} = \begin{pmatrix} -m_2 \wedge n_1 & -m_2 \wedge n_2 \\ m_1 \wedge n_1 & m_1 \wedge n_2 \end{pmatrix} \,,$$

(A.54)

cf. Eqs. (A.51, A.53).

By interchanging M and N and taking the difference, we obtain (A.52). \(\square \)
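All three properties of Claim A.6 lend themselves to a numerical check with random \( 2 \times 2 \) matrices. A minimal sketch, assuming NumPy; the helper names `adj2` and `wedge` are ours.

```python
import numpy as np

rng = np.random.default_rng(5)

def adj2(M):
    """Adjugate of a 2x2 matrix, cf. (A.51)."""
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]])

wedge = lambda z, w: z[0] * w[1] - w[0] * z[1]    # cross product (A.53)

M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
N = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
m1, m2, n1, n2 = M[:, 0], M[:, 1], N[:, 0], N[:, 1]

rhs = np.array([
    [-wedge(m1, n2) - wedge(m2, n1), -2 * wedge(m2, n2)],
    [2 * wedge(m1, n1),               wedge(m1, n2) + wedge(m2, n1)],
])
assert np.allclose(adj2(M) @ N - adj2(N) @ M, rhs)             # property 3, (A.52)
assert np.allclose(adj2(M) @ M, np.linalg.det(M) * np.eye(2))  # property 1
assert np.allclose(adj2(M + N), adj2(M) + adj2(N))             # property 2
```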

Proof of Proposition 4.7

(i) Using the adjugation map \( \operatorname{adj}(\cdot) \), cf. (A.51), and its properties 1 and 2 in Claim A.6, we notice that

$$\begin{aligned} (A_1 + A_2)^{-1}(A_1 - A_2) &= \frac{1}{\det(A_1 + A_2)} \big( \operatorname{adj}(A_1) + \operatorname{adj}(A_2) \big)(A_1 - A_2) \\ &= \frac{1}{\det(A_1 + A_2)} \big( \operatorname{adj}(A_1) A_1 - \operatorname{adj}(A_1) A_2 + \operatorname{adj}(A_2) A_1 - \operatorname{adj}(A_2) A_2 \big) \\ &= \frac{1}{\det(A_1 + A_2)} \big( (\det A_1 - \det A_2)\, \mathbb{1}_2 + \operatorname{adj}(A_2) A_1 - \operatorname{adj}(A_1) A_2 \big) \,. \end{aligned}$$

(A.55)

The calculation of

$$U = (A_1 + A_2)^{-1}(A_1 - A_2)$$

(A.56)

thus proceeds by explicitly writing out \( A_1, A_2, A_1+A_2 \) and their determinants in cases DD–NN, and applying the formula (A.55). The following expressions

$$\underline{A}(k_x) = (A_1(k_x), \, A_2(k_x)) \,, \quad A_1(k_x) = (a_1, \, a_1' + k_x b_1) \,, \quad A_2(k_x) = (a_2' + k_x b_2, \, a_2) \,,$$

(A.57)

cf. (4.36, 4.38), will come in handy.
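Before turning to the case-by-case computation, the general formula (A.55) can itself be verified numerically for random matrices. A minimal sketch, assuming NumPy; the helper name `adj2` is ours.

```python
import numpy as np

rng = np.random.default_rng(6)

def adj2(M):
    """Adjugate of a 2x2 matrix, cf. (A.51)."""
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]])

A1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

lhs = np.linalg.solve(A1 + A2, A1 - A2)      # (A_1 + A_2)^{-1} (A_1 - A_2)
rhs = ((np.linalg.det(A1) - np.linalg.det(A2)) * np.eye(2)
       + adj2(A2) @ A1 - adj2(A1) @ A2) / np.linalg.det(A1 + A2)
assert np.allclose(lhs, rhs)                 # formula (A.55)
```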

DD:

The starting point is Eq. (A.7). By Lemma 4.4, family DD has \( a_1 = a_2 = 0 \). Self-adjointness is then achieved for all \( a_1', a_2', b_1, b_2 \in \mathbb{C}^2 \), cf. Proposition 4.5. The general element \( \underline{A}: k_x \mapsto \underline{A}(k_x) \) thus has

$$\underline{A}(k_x) = (0, \, a_1' + k_x b_1, \, a_2' + k_x b_2, \, 0) \equiv (A_1(k_x), \, A_2(k_x)) \,.$$

(A.58)

The associated von Neumann unitary \( U = \upsilon(\underline{A}): k_x \mapsto U(k_x) \) is given by

$$U(\underline{A}(k_x)) := (A_1(k_x) + A_2(k_x))^{-1}(A_1(k_x) - A_2(k_x)) \,,$$

(A.59)

cf. (4.51), where explicitly

$$\begin{aligned} A_1(k_x) + A_2(k_x) &= (a_2' + k_x b_2, \, a_1' + k_x b_1) \,, \\ A_1(k_x) - A_2(k_x) &= (-a_2' - k_x b_2, \, a_1' + k_x b_1) \,, \end{aligned}$$

(A.60)

by Eq. (A.58).

In this simple case, there is no need to resort to the general formula (A.55). Indeed, observing that

$$A_1(k_x) - A_2(k_x) = (A_1(k_x) + A_2(k_x)) \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \,,$$

(A.61)

immediately yields

$$U(k_x) = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \equiv J$$

(A.62)

by Eq. (4.51). Equation (A.62) holds for all \( k_x \in \mathbb{R} \) and all \( a_1', a_2', b_1, b_2 \in \mathbb{C}^2 \).
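The DD result (A.62) is easily confirmed numerically: for random data and several fibers, \( U(k_x) = J \) exactly. A minimal sketch, assuming NumPy; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(7)
cvec = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)

a1p, a2p, b1, b2 = (cvec() for _ in range(4))
J = np.diag([-1.0, 1.0])

for kx in (-1.3, 0.0, 2.7):
    A1 = np.column_stack([np.zeros(2), a1p + kx * b1])   # (0, a_1' + k_x b_1)
    A2 = np.column_stack([a2p + kx * b2, np.zeros(2)])   # (a_2' + k_x b_2, 0)
    U = np.linalg.solve(A1 + A2, A1 - A2)
    assert np.allclose(U, J)                             # U(k_x) = J, cf. (A.62)
```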

ND:

Lemma 4.4 and self-adjointness, cf. Proposition 4.5, produce the general form

$$\underline{A}(k_x) = (a_1, \, a_1' + k_x b_1, \, \mathrm{i}(k_x \lambda + \lambda') a_1 - \bar{\alpha}(a_1' + k_x b_1), \, \alpha a_1) \equiv (A_1(k_x), \, A_2(k_x))$$

(A.63)

for \( \underline{A}(k_x) \), cf. (A.7). It follows that

$$\begin{aligned} A_1(k_x) &= (a_1, \ a_1' + k_x b_1) \\ A_2(k_x) &= (\mathrm{i}(k_x \lambda + \lambda') a_1 - \bar{\alpha}(a_1' + k_x b_1), \ \alpha a_1) \\ (A_1 + A_2)(k_x) &= \big( (1 + \mathrm{i}(k_x \lambda + \lambda')) a_1 - \bar{\alpha}(a_1' + k_x b_1), \ \alpha a_1 + (a_1' + k_x b_1) \big) \,, \end{aligned}$$

(A.64)

whence

$$\begin{aligned} \det A_1(k_x) &= \chi \\ \det A_2(k_x) &= |\alpha|^2 \chi \\ \det (A_1 + A_2)(k_x) &= \big( 1 + \mathrm{i}(k_x \lambda + \lambda') + |\alpha|^2 \big) \chi \,, \end{aligned}$$

(A.65)

where

$$\chi := a_1 \wedge (a_1' + k_x b_1) \,,$$

(A.66)

for the scope of this proof. Moreover,

$$\operatorname{adj}(A_2)\, A_1 - \operatorname{adj}(A_1)\, A_2 = \chi \begin{pmatrix} -\mathrm{i}(k_x \lambda + \lambda') & -2\alpha \\ 2\bar{\alpha} & \mathrm{i}(k_x \lambda + \lambda') \end{pmatrix} \,.$$

(A.67)

Eq. (4.53) now follows by (A.55) and singling out J, cf. (4.52), as a separate summand.
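The ND identity (A.67) can be checked numerically from the parametrization (A.64). A minimal sketch, assuming NumPy; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(8)
cvec = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)

def adj2(M):
    """Adjugate of a 2x2 matrix, cf. (A.51)."""
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]])

a1, a1p, b1 = cvec(), cvec(), cvec()
alpha = rng.normal() + 1j * rng.normal()
lam, lamp = rng.normal(), rng.normal()        # lambda, lambda' in R
kx = 0.9

x = a1p + kx * b1
phase = 1j * (kx * lam + lamp)                # i (k_x lambda + lambda')
A1 = np.column_stack([a1, x])                 # cf. (A.64)
A2 = np.column_stack([phase * a1 - np.conj(alpha) * x, alpha * a1])

chi = a1[0] * x[1] - x[0] * a1[1]             # chi = a_1 ^ (a_1' + k_x b_1)
rhs = chi * np.array([[-phase, -2 * alpha], [2 * np.conj(alpha), phase]])
assert np.allclose(adj2(A2) @ A1 - adj2(A1) @ A2, rhs)   # identity (A.67)
```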

NN:

Within this family, Lemma 4.4 and self-adjointness, cf. Proposition 4.5, produce the general form

$$\underline{A}(k_x) = (a_1, \, (\mu' + k_x \mu) a_1 + \mathrm{i}(\lambda_1' + k_x \lambda_1) a_2, \, \mathrm{i}(\lambda_2' + k_x \lambda_2) a_1 - (\bar{\mu}' + k_x \bar{\mu}) a_2, \, a_2) \equiv (A_1(k_x), \, A_2(k_x)) \,.$$

(A.68)

It follows that

$$\begin{aligned} A_1(k_x) &= (a_1, \ (\mu' + k_x \mu) a_1 + \mathrm{i}(\lambda_1' + k_x \lambda_1) a_2) \\ A_2(k_x) &= (\mathrm{i}(\lambda_2' + k_x \lambda_2) a_1 - (\bar{\mu}' + k_x \bar{\mu}) a_2, \ a_2) \\ (A_1 + A_2)(k_x) &= \big( (1 + \mathrm{i}(\lambda_2' + k_x \lambda_2)) a_1 - (\bar{\mu}' + k_x \bar{\mu}) a_2, \ (\mu' + k_x \mu) a_1 + (1 + \mathrm{i}(\lambda_1' + k_x \lambda_1)) a_2 \big) \,, \end{aligned}$$

(A.69)

whence

$$\begin{aligned} \det A_1(k_x) &= \mathrm{i}(\lambda_1' + k_x \lambda_1)\, (a_1 \wedge a_2) \\ \det A_2(k_x) &= \mathrm{i}(\lambda_2' + k_x \lambda_2)\, (a_1 \wedge a_2) \\ \det (A_1 + A_2)(k_x) &= \big[ (1 + \mathrm{i}(\lambda_1' + k_x \lambda_1))(1 + \mathrm{i}(\lambda_2' + k_x \lambda_2)) + (\mu' + k_x \mu)(\bar{\mu}' + k_x \bar{\mu}) \big] (a_1 \wedge a_2) \,. \end{aligned}$$
